Stratum Journal
Regulatory · August 4, 2026 · 10 min read

Brussels Day One


The EU AI Act's high-risk obligations took effect two days ago. The Colorado AI Act landed five weeks earlier. For organizations that have been watching the regulatory calendar, both clocks are now running. The question is no longer when compliance is required. It is whether what was built is actually compliant, and what happens when it is not.

The EU AI Act is a materially different regulatory instrument from Colorado's law. Colorado imposes requirements on consequential decisions. The EU AI Act imposes requirements on specific application categories — what it calls high-risk AI systems — with seven distinct technical obligations, a registration requirement, and a market surveillance apparatus that includes national competent authorities in every EU member state. For organizations operating at scale across European markets, the enforcement surface is substantially larger than a single state attorney general's office.

EU AI Act — high-risk obligations now in effect
High-risk AI systems under Annex III include: biometric identification, critical infrastructure management, education (admission, assessment, scoring), employment (recruitment, performance evaluation, promotion), essential services (credit scoring, insurance risk assessment), law enforcement, migration and asylum management, and administration of justice. Organizations deploying AI in any of these categories across EU markets are subject to all seven obligations as of August 2, 2026. Maximum penalty for non-compliance with these obligations: €15 million or 3% of global annual turnover, whichever is higher (violations of the Act's prohibited-practices provisions carry up to €35 million or 7%).

The Seven Obligations — and What They Actually Require

The EU AI Act's obligations for high-risk systems are technical requirements disguised as compliance categories. Each obligation names something that must be operational — not something that must be documented as intended. The organizations that are compliant today built systems capable of satisfying these requirements. The organizations that are not compliant have policies that describe the requirements but no infrastructure that fulfills them.

| Obligation | What It Requires | Infrastructure It Demands |
|---|---|---|
| Conformity assessment | Demonstrate that the high-risk AI system meets technical standards before deployment | Risk management system, testing records, technical documentation |
| Human oversight mechanisms | Ensure humans can monitor, intervene, and override the AI system during operation | Intervention interfaces, override logs, monitoring dashboards |
| Transparency to deployers | Provide documentation enabling deployers to understand the system and use it appropriately | Use-case documentation, capability limitations, performance metrics |
| Data governance | Training, validation, and testing data must meet quality criteria and be documented | Data lineage records, quality documentation, bias testing records |
| Technical documentation | Maintain documentation demonstrating compliance, available on request to authorities | Version records, change logs, system architecture, performance history |
| Logging and auditability | Systems must automatically log events; logs must be available to competent authorities | Automated event logging, tamper-evident audit trails, log retention policy |
| Registration | High-risk AI systems must be registered in the EU AI database before deployment | Deployment registry, system classification documentation |

Five of these seven obligations require infrastructure that most organizations have not built as a purpose-built compliance system. They may have fragments — event logs here, some documentation there, a manual override process somewhere in a runbook. What they rarely have is the integrated infrastructure that makes these capabilities operational, auditable, and producible on demand to a competent authority.
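The logging-and-auditability row is the one most often missing as integrated infrastructure. One minimal way to make an audit trail tamper-evident is a hash-chained, append-only log, where each entry commits to the previous one so that editing any record invalidates everything after it. The class and field names below are illustrative assumptions, not terms from the Act:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only event log; each entry carries the hash of its predecessor."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"ts": time.time(), "event": event, "prev_hash": prev_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any edited record breaks the chain."""
        prev_hash = "0" * 64
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True
```

A production system would persist entries and anchor the chain head externally; the point of the sketch is only that "tamper-evident" is a property the log structure enforces, not one a policy document asserts.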

How This Is Different from Colorado

The Colorado AI Act and EU AI Act share a common design philosophy: compliance is an infrastructure problem, not a policy exercise. But they differ in scope, jurisdiction, and enforcement architecture in ways that matter for organizations operating in both.

Colorado's law applies to consequential decisions affecting Colorado residents — regardless of where the organization is located. The EU AI Act applies to high-risk AI system categories — regardless of where the organization operates, so long as the output affects EU residents or the system is placed on the EU market. An organization headquartered in Chicago, deploying an HR AI system that evaluates applications from candidates in Germany, is subject to the EU AI Act.

The enforcement mechanism differs as well. Colorado enforcement runs through the state AG. EU AI Act enforcement runs through national competent authorities in each member state, with coordination through the European AI Office. This means an organization that deploys a non-compliant high-risk system has potential enforcement exposure in every EU market where the system operates — not a single regulator, but a network of them.

The organizations that built compliance infrastructure for Colorado are 80% of the way to EU AI Act compliance. The technical requirements overlap significantly: audit trails, human oversight mechanisms, decision traceability, documentation. Building it once, for both frameworks, is substantially more efficient than discovering the EU requirements now and starting from scratch.

What Enforcement Looks Like in the First 90 Days

Regulators do not investigate everyone simultaneously on day one of a new law. Enforcement priority follows a predictable pattern: first, flagrant cases where non-compliance produces visible harm; second, cases brought by individuals or advocacy organizations who have standing to complain; third, systematic surveys of high-risk sectors where the regulator believes non-compliance is widespread.

For the EU AI Act, the first 90 days will likely produce: a small number of enforcement actions against organizations already known to regulators for other reasons; preliminary guidance from the European AI Office clarifying ambiguous requirements; and sector-specific inquiries in the highest-visibility categories (HR AI and credit scoring are the most likely early focus areas, given the volume of consumer-facing deployments and the existing advocacy infrastructure around algorithmic discrimination).

What this means practically: organizations deploying high-risk AI that have made no compliance effort are at material risk. Organizations that have made good-faith efforts but have gaps are in a position to cure those gaps before enforcement reaches them. Organizations with complete compliance infrastructure have nothing to fear from this period and can use it to build the documentary record that demonstrates their posture.

The dual-jurisdiction window
For organizations subject to both Colorado and EU AI Act requirements, the overlap period — now — is the most efficient time to build or audit compliance infrastructure. Colorado's 60-day cure period is running. The EU AI Act enforcement apparatus is just beginning to engage. Organizations that build the shared infrastructure layer that satisfies both — decision traceability, human oversight, documentation, audit trails — address both simultaneously and are in a defensible position in both jurisdictions going forward.
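One concrete way to read "shared infrastructure layer" is a single decision-trace record whose fields serve both frameworks at once: traceability and documentation for the EU AI Act, consequential-decision records for Colorado. A minimal sketch, with field names that are illustrative assumptions rather than terms from either statute:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    decision_id: str           # unique reference for audits and consumer appeals
    system_id: str             # ties the decision to the registered system entry
    model_version: str         # links the outcome to technical documentation
    inputs_ref: str            # pointer to stored inputs (data governance / lineage)
    outcome: str               # the consequential decision itself
    human_reviewer: Optional[str] = None  # human oversight: who could intervene
    overridden: bool = False              # whether the human actually overrode it
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_row(self) -> dict:
        """Flatten to a dict suitable for an append-only audit store."""
        return asdict(self)
```

Writing one record per decision, at decision time, is what makes the same store answer both a Colorado AG inquiry and an EU competent-authority request without a second compliance program.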

The Gap That Remains

The organizations that followed the Colorado AI Act closely were ahead. The deadline was June 30. Most teams that took it seriously started infrastructure work in Q1. Those organizations have now had five weeks of operational experience with compliance infrastructure — they know what their audit trail actually produces, what their human oversight mechanisms actually look like, what documentation gaps remain.

The organizations that treated the Colorado deadline as a legal problem — something to be handled by updating the privacy policy and drafting a few new SOPs — are discovering now that the EU AI Act requires the same infrastructure they deferred in June. The deferred cost is now larger: two regulatory frameworks, two enforcement timelines, and a gap that has been public knowledge since 2024.

The infrastructure gap is the same one it has always been. Most AI deployments were built to perform, not to govern. Performance infrastructure — compute, model serving, output caching — was built first and built well. Governance infrastructure — authorization records, delegation chain tracking, decision traceability, human oversight anchors — was treated as something to add later. Later is now.
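As one illustration of a "human oversight anchor," the gate below routes every automated outcome through a point where a human can override it, and writes both the proposal and any intervention to the same record store. All names and the callback shape are hypothetical, not a prescribed design:

```python
from typing import Callable, Optional

def gated_decision(
    automated: Callable[[dict], str],
    human_override: Callable[[dict, str], Optional[str]],
    record: Callable[[dict], None],
    case: dict,
) -> str:
    """Run the model, give a human the chance to override, log both."""
    proposed = automated(case)                 # model output, never final on its own
    decision = human_override(case, proposed)  # None means "accept as proposed"
    final = decision if decision is not None else proposed
    record({
        "case": case,
        "proposed": proposed,
        "final": final,
        "overridden": decision is not None,
    })
    return final
```

The structural point is that the override path and the override log exist before the first decision is made; retrofitting either after enforcement begins is the expensive version of the same work.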


Mandate

AI compliance infrastructure built for both the Colorado AI Act and EU AI Act. Decision traceability, impact assessment documentation, human oversight mechanisms, automated audit records. Satisfying both regulatory frameworks from a single infrastructure layer — not two separate compliance programs.

mandate.onstratum.com →
Warden

Fleet operations infrastructure with governance built in. The human oversight mechanisms, authorization records, and intervention interfaces that the EU AI Act requires for high-risk systems — operational from deployment, not retrofitted after enforcement begins.

warden.onstratum.com →
Sean / Stratum
© 2026 Stratum · hello@onstratum.com · onstratum.com