Stratum Journal
Regulatory · July 14, 2026 · 9 min read

The Compliance Window


Timeline — EU AI Act enforcement
EU AI Act high-risk AI system obligations: effective August 2, 2026.
Colorado AI Act (SB 24-205): effective June 30, 2026.
Texas TRAIGA: in effect since January 1, 2026.

If your organization deploys AI systems that qualify as high-risk under the EU AI Act — in HR, critical infrastructure, education, law enforcement, or certain commercial contexts — the compliance window is not theoretical. It is closing.

The EU AI Act's high-risk provisions take effect August 2, 2026. Three weeks from now. For organizations that have been treating this as a future compliance problem, the future is here — and the gap between having a compliance policy and having compliant infrastructure is wider than most legal teams have communicated upward.

The Act is specific. High-risk AI systems must maintain logs that enable post-deployment risk identification. They must implement human oversight measures capable of detecting and correcting errors and biases. They must provide users with information sufficient to understand how the system makes decisions. They must operate within documented risk management processes — not as a one-time audit but as a continuous lifecycle requirement.

Most of these requirements describe infrastructure, not documentation. You cannot satisfy "automatic logging sufficient to identify risks post-deployment" with a policy that says you intend to maintain logs. You need the logging infrastructure. The Act knows the difference, and enforcement authorities will too.

What High-Risk Means

The EU AI Act classifies AI systems as high-risk based on their use case, not their underlying technology. The high-risk categories include: AI used in biometric identification; AI in critical infrastructure management (energy, water, transport); AI in educational and vocational contexts that affects access to education; AI in employment and worker management; AI in access to essential private and public services; AI in law enforcement; AI in migration, asylum, and border management; and AI in administration of justice.

These categories are broader than they first appear. AI used in hiring decisions or performance evaluation is high-risk. AI that determines credit access or insurance pricing is high-risk. AI used in triage or clinical decision support in medical contexts is high-risk. AI that makes or influences decisions about access to public benefits is high-risk.

The test is not whether the AI is the sole decision-maker. It is whether the AI is used in a way that significantly affects the interests of natural persons in these categories. AI that augments human judgment in a high-risk context carries the same obligations as AI that replaces it.

The compliance question is not "is our AI making autonomous decisions?" It is "does our AI influence decisions in a high-risk category?" That is a much larger perimeter — and most organizations have not drawn it accurately.
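The perimeter logic can be made concrete. The sketch below is illustrative, not a legal classification tool: the category names and the `influences_decision` flag are assumptions introduced here, but the test it encodes is the one above — influence in a high-risk category triggers the obligations, whether or not the AI is the sole decision-maker.

```python
# Illustrative only -- not legal advice. Category names are simplified
# stand-ins for the EU AI Act's Annex III high-risk use cases.
HIGH_RISK_CATEGORIES = {
    "biometric_identification",
    "critical_infrastructure",
    "education_access",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_asylum_border",
    "administration_of_justice",
}

def is_high_risk(use_case: str, influences_decision: bool) -> bool:
    """The test is influence in a high-risk category, not autonomy.
    An AI that merely augments human judgment still qualifies."""
    return influences_decision and use_case in HIGH_RISK_CATEGORIES
```

Note what the function does not ask: whether the AI decides alone. A resume-screening model that only ranks candidates for a human recruiter still returns `True`.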

The Infrastructure Requirements

For organizations with high-risk AI systems, the Act's obligations are technically specific. They are not satisfied by documentation of intent. They require operational infrastructure that implements the requirements.

| Requirement | What It Means | What It Actually Requires |
| --- | --- | --- |
| Risk management system | Documented risk identification, evaluation, and mitigation for the AI system's lifecycle | Ongoing process, not a one-time document |
| Data governance | Training, validation, and testing data must meet quality and relevance standards | Data lineage tracking and audit capability |
| Technical documentation | Complete technical documentation before market placement, updated on changes | Version-controlled system records |
| Record-keeping / logging | Automatic logging of operations sufficient to identify risks post-deployment | Tamper-resistant audit trail with defined retention |
| Transparency / user information | Users must receive information necessary to understand the AI's decisions | Decision explanation infrastructure |
| Human oversight | Human oversight measures enabling detection and correction of errors | Oversight mechanisms built into deployment, not just policy |
| Accuracy, robustness, cybersecurity | Performance requirements appropriate to intended purpose | Testing and validation regime |

*Five of the seven rows require infrastructure changes, not documentation changes.*

Five of the seven obligation categories require infrastructure changes rather than documentation changes. The logging requirement alone — "automatic logging of operations to enable post-deployment risk identification" — requires tamper-resistant audit trails, defined retention policies, and operational infrastructure to capture and store agent actions in a form that is both complete and accessible for review. A log that exists but cannot be queried for risk identification does not satisfy the requirement.
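One way to see why this is an infrastructure requirement rather than a policy requirement: tamper resistance has to be a property of the log itself. A minimal sketch of the standard technique, hash-chaining, is below. The `AuditLog` class and its field names are illustrative assumptions, not a prescribed implementation; production systems would add durable storage, retention enforcement, and queryability.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each entry commits to the hash of the
    previous entry, so any after-the-fact alteration breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"ts": time.time(), "event": event, "prev_hash": prev_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev = "0" * 64
        for rec in self.entries:
            if rec["prev_hash"] != prev:
                return False
            body = {k: rec[k] for k in ("ts", "event", "prev_hash")}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

The point of the sketch is the asymmetry: a policy document can be rewritten after the fact; a hash-chained log cannot be rewritten without `verify()` failing.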

Why Policy Doesn't Satisfy Infrastructure Requirements

The gap between policy compliance and infrastructure compliance is a recurring pattern in regulatory enforcement history, and the EU AI Act is written specifically to address it. The Act does not ask what your policy says. It asks what your system does. The logging requirement asks whether logs are automatically generated. The human oversight requirement asks whether oversight mechanisms are built into the system — not whether there is a policy that humans are responsible for oversight.

This distinction has significant practical consequences. An organization that has a well-documented AI governance policy but runs AI systems that do not generate compliant audit logs is not in compliance, regardless of how thorough the policy is. An organization that has implemented oversight mechanisms that are structurally incapable of detecting certain failure modes is not in compliance with the oversight requirement, regardless of how detailed the oversight policy document is.

The enforcement framework for the EU AI Act assigns responsibility to market surveillance authorities with significant powers — including the authority to order the withdrawal of non-compliant AI systems from the market. The fines are substantial: up to €35 million or 7% of global annual turnover for prohibited practices, and up to €15 million or 3% for non-compliance with the high-risk system obligations. This is not a framework that will be satisfied by a policy document produced in response to an enforcement inquiry.

What compliant infrastructure actually looks like
Compliant infrastructure for a high-risk AI system has five verifiable properties:

1. Complete audit trail. Every agent action, every decision input, every outcome is logged with sufficient context to reconstruct why the decision was made.

2. Tamper resistance. Logs cannot be altered after creation. The audit trail is an accurate record of what happened, not what the system currently claims happened.

3. Human oversight anchors. For consequential decisions, the system requires human review — not just permits it. The oversight mechanism is not an option that can be bypassed; it is a structural requirement in the workflow.

4. Explainability at the decision level. The system can produce, for any individual decision, a coherent account of the inputs and reasoning that produced it — in a form that a non-technical user can understand.

5. Risk monitoring that is ongoing. The system generates signals that enable continuous risk assessment — not periodic audit.
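Property 3 is the one most often implemented as policy rather than structure. A minimal sketch of the structural version is below; the `Decision` fields and `finalize` function are illustrative assumptions, but the design point is real: the code path that finalizes a consequential decision refuses to run without a recorded human review, so oversight cannot be bypassed by configuration or habit.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject: str
    ai_recommendation: str
    consequential: bool
    reviewer: Optional[str] = None       # who reviewed (required if consequential)
    reviewer_verdict: Optional[str] = None  # their recorded verdict

def finalize(d: Decision) -> str:
    """A consequential decision cannot complete without a recorded human
    review: the oversight anchor is structural, not a policy note."""
    if d.consequential and (d.reviewer is None or d.reviewer_verdict is None):
        raise PermissionError("consequential decision requires recorded human review")
    # When a human verdict exists, it overrides the AI recommendation.
    return d.reviewer_verdict or d.ai_recommendation
```

Contrast this with a policy that says reviewers "should" check consequential decisions: here the system raises an error instead of proceeding, which is also exactly the kind of event the audit trail should capture.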

The Colorado and EU Deadlines Together

Organizations dealing with the EU AI Act deadline on August 2 should note that the Colorado AI Act's high-risk AI system obligations took effect June 30 — roughly five weeks earlier. The two frameworks overlap significantly in their technical requirements and diverge mainly in their scope and enforcement mechanisms.

Colorado's Act applies to high-risk AI systems used in consequential decisions affecting Colorado residents — employment, housing, credit, education access, healthcare services. The EU AI Act applies to high-risk AI systems used in the EU, regardless of where the deployer is headquartered. For any organization with both EU-based users and US operations that reach Colorado, both frameworks apply simultaneously.

The practical implication is that the infrastructure required to comply with one framework largely satisfies the other. Audit trail infrastructure that meets the EU AI Act's logging requirements is audit trail infrastructure that meets Colorado's accountability requirements. Human oversight mechanisms that satisfy the EU Act's requirements satisfy Colorado's human oversight requirements. Building the compliance infrastructure once for both frameworks is the efficient path — building it twice, for sequential enforcement, is the expensive one.

The Window That Remains

If your organization has not yet built compliant AI infrastructure, the relevant question is not whether you missed August 2 but what happens next. The enforcement regime for the EU AI Act is not designed for instant prosecution of every non-compliant system on the effective date. It is designed to identify non-compliance, notify operators, and create an obligation to remediate — with escalating consequences for operators who fail to remediate when given the opportunity.

The organizations that will face the most significant enforcement exposure are not those that were non-compliant on August 2 and immediately began building compliant infrastructure. They are the organizations that face an enforcement inquiry without having taken compliance seriously as an infrastructure problem — that have a policy document and no audit trail, oversight language and no oversight mechanism, governance frameworks that have never been implemented in the systems they are supposed to govern.

Building compliant infrastructure after August 2 is still possible and still the right decision. The compliance window has closed for organizations that needed to be compliant on day one for reasons of their own risk tolerance. For the larger set of organizations that need to be compliant before their first enforcement interaction, the window is still open — but it narrows with every quarter the Act is in effect and enforcement activity matures.

The organizations that build compliance infrastructure now are building it while the framework for what "compliant" means is clear, the enforcement priorities are being established, and the cost of building the infrastructure is a one-time capital investment. The organizations that build it in response to an enforcement inquiry are building it under time pressure, with an enforcement authority watching, at a cost that includes both the infrastructure and the liability.


Mandate

AI compliance infrastructure for legal and operations teams. Audit trail generation, human oversight anchors, decision explainability — built for EU AI Act and Colorado AI Act obligations.

mandate.onstratum.com →
Warden

Fleet operations with the audit infrastructure built in. Tamper-resistant logs, scope auditing, human oversight anchors for consequential decisions — the operational layer that makes compliance evidence automatic.

warden.onstratum.com →
Sean / Stratum
© 2026 Stratum · hello@onstratum.com · onstratum.com