Stratum Journal
Regulatory · June 30, 2026 · 9 min read

Day Zero


Today the Colorado AI Act is in effect. Not a deadline approaching — the thing itself. The first U.S. state AI governance law that imposes substantive technical requirements on developers and deployers of high-risk artificial intelligence systems is now active law. Organizations operating AI systems that make consequential decisions affecting Colorado residents are now subject to its requirements, regardless of where those organizations are located.

The law's arrival has been visible for months. Its requirements have been public since Governor Polis signed SB 24-205 in May 2024. The question for most organizations deploying AI is no longer whether to comply — it is whether they have built what compliance actually requires, and what happens if they haven't.

Colorado AI Act — in effect today
SB 24-205 applies to developers and deployers of high-risk AI systems — defined broadly as systems that make, or substantially contribute to, consequential decisions in employment, credit, education, healthcare, housing, insurance, or legal services. The AG's office holds enforcement authority. Maximum civil penalty: $20,000 per violation. A 60-day cure period applies to first violations. NIST AI RMF implementation provides a safe harbor.

What the Act Actually Requires

The Colorado AI Act's requirements are technical, not procedural. A policy document is not compliance. A privacy notice with an AI disclosure is not compliance. Compliance requires infrastructure — systems capable of performing the functions the law mandates. Organizations that treated the law as a policy exercise rather than an infrastructure exercise are today non-compliant in ways that a policy update cannot fix.

| Requirement | What It Means | Infrastructure It Requires |
| --- | --- | --- |
| Risk notification | Notify consumers when an AI system may make a consequential decision affecting them | Decision logging, consumer notification pipeline |
| Human review mechanism | Provide consumers a process to request human review of an AI decision | Review queue, decision traceability, audit record |
| Impact assessment | Conduct pre-deployment risk assessment for high-risk AI applications | Assessment documentation, ongoing monitoring, version records |
| Transparency | Disclose AI use in consequential decisions; provide explanation of decision factors | Decision explanation logs, model versioning, factor attribution |
| Non-discrimination | Ensure AI systems do not discriminate based on protected characteristics | Bias monitoring, output auditing, demographic parity checks |

Each requirement in this table names an operational capability, not a statement of intent. “Notify consumers when an AI system may make a consequential decision” requires a system that knows when an AI decision is being made, can identify whether it is consequential under the Act's definition, and can deliver the notification through the appropriate channel. A policy that says “we will notify consumers” satisfies none of these. The infrastructure does.
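To make the notification requirement concrete: it implies a component that classifies each decision against the Act's covered domains and dispatches a notice when the decision is consequential. A minimal sketch, with the caveat that every name here is hypothetical and the domain list stands in for the Act's actual statutory definition:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Stand-in for the Act's covered domains; the real test is statutory,
# not a set lookup.
CONSEQUENTIAL_DOMAINS = {
    "employment", "credit", "education", "healthcare",
    "housing", "insurance", "legal_services",
}

@dataclass
class Decision:
    decision_id: str
    domain: str
    consumer_id: str
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def is_consequential(decision: Decision) -> bool:
    """Classify a decision against the covered domains."""
    return decision.domain in CONSEQUENTIAL_DOMAINS

def notify_if_required(decision: Decision, send) -> bool:
    """Dispatch a consumer notice for consequential decisions.

    `send` is whatever delivery channel the organization operates
    (email, portal message); returns True if a notice was dispatched.
    """
    if not is_consequential(decision):
        return False
    send(decision.consumer_id,
         f"An AI system contributed to a {decision.domain} decision "
         f"affecting you (ref {decision.decision_id}).")
    return True
```

The point of the sketch is the shape, not the details: the system must know, at decision time, that a decision is happening, whether it is covered, and who to tell.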

The Infrastructure Gap

The most common compliance failure today is not ignorance of the law — it is the infrastructure gap between what organizations intended to build and what was actually built before the deadline. The requirements have been public for two years. Most organizations deploying AI knew this was coming. The gap is not awareness. It is execution.

The specific gaps most common in organizations that deployed AI without governance infrastructure:

No decision traceability. The AI system makes consequential decisions, but there is no record linking a specific decision to the specific model version, the specific input that produced it, and the specific factors the model weighted. Human review is mandated by the law. Human review without decision traceability is not reviewable — it is a human looking at an output with no context for how it was produced.
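A decision trace in this sense is a record created at inference time, not reconstructed afterward. A sketch of what such a record might carry, with illustrative field names rather than any prescribed schema:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionTrace:
    decision_id: str
    model_version: str   # the exact model version that produced the output
    input_hash: str      # fingerprint of the input actually scored
    top_factors: tuple   # (feature, weight) pairs the model relied on
    output: str          # the decision as delivered

def trace_decision(decision_id: str, model_version: str,
                   raw_input: dict, top_factors, output: str) -> DecisionTrace:
    """Capture the linkage a human reviewer needs: which model,
    which input, which factors, which outcome."""
    # Canonicalize before hashing so the fingerprint does not depend
    # on dict insertion order.
    canonical = json.dumps(raw_input, sort_keys=True).encode()
    return DecisionTrace(
        decision_id=decision_id,
        model_version=model_version,
        input_hash=hashlib.sha256(canonical).hexdigest(),
        top_factors=tuple(top_factors),
        output=output,
    )
```

With records like this on file, a human reviewer looks at a decision with its provenance attached; without them, the reviewer is looking at an output and guessing.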

No audit trail for delegation. Modern AI deployments are rarely a single model making a single decision. They are pipelines: one model processes, another evaluates, a third routes to a decision. The law requires traceability of consequential decisions. When a decision is the product of a multi-step pipeline, each step must be traceable. Most pipelines were not instrumented for this.
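The same idea extends to pipelines: one trace record per step, linked into a chain under a single decision. A sketch, assuming each step records what component ran and what it produced (all names hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class StepRecord:
    step: str             # e.g. "extract", "score", "route"
    component: str        # model or service version that ran the step
    output_summary: str   # what this step handed to the next

@dataclass
class PipelineTrace:
    decision_id: str
    steps: list = field(default_factory=list)

    def record(self, step: str, component: str, output_summary: str):
        """Instrument each stage of the pipeline as it runs."""
        self.steps.append(StepRecord(step, component, output_summary))

    def explain(self):
        """Render the delegation chain for a human reviewer."""
        return [f"{s.step} [{s.component}] -> {s.output_summary}"
                for s in self.steps]
```

The instrumentation cost is small when it is built in; retrofitting it onto a pipeline that discards intermediate state is where the expense lives.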

No impact assessment documentation. The Act requires a pre-deployment impact assessment. For systems already deployed, the retroactive question is: does the organization have documentation of an impact assessment that was conducted before deployment? Most do not. They have ad hoc discussions, informal sign-offs, or nothing. Documentation that can be produced in an enforcement inquiry is different from documentation that might exist somewhere.
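Producible documentation means a dated, versioned record, not the memory of a meeting. A minimal sketch of the record and the retroactive check an inquiry would apply (field names are assumptions, not a required format):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ImpactAssessment:
    system_name: str
    system_version: str
    completed_at: datetime   # when the assessment was actually completed
    risks_identified: list
    mitigations: list
    approver: str            # who signed off

def predates_deployment(assessment: ImpactAssessment,
                        deployed_at: datetime) -> bool:
    """The retroactive question in an enforcement inquiry:
    was the assessment completed before the system went live?"""
    return assessment.completed_at < deployed_at
```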

No consumer-facing disclosure infrastructure. The notification requirement requires a pipeline capable of identifying who was affected by an AI decision and delivering required disclosures to them. This requires knowing who the consumer is, what decision affected them, and how to reach them. For organizations that processed consumer data through AI without this infrastructure, building it retroactively means establishing those linkages for decisions that were made without capturing them.
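At its core this is a join: decisions on one side, reachable consumers on the other, and a gap wherever the linkage was never captured. A sketch under those assumptions:

```python
def disclosures_due(decisions, contacts):
    """Join consequential decisions to reachable consumers.

    decisions: iterable of (decision_id, consumer_id) pairs
    contacts:  mapping of consumer_id -> contact address
    Returns the deliverable disclosures and the consumer_ids that
    cannot be reached -- the gap a retroactive build must close.
    """
    deliverable, unreachable = [], []
    for decision_id, consumer_id in decisions:
        address = contacts.get(consumer_id)
        if address:
            deliverable.append((decision_id, consumer_id, address))
        else:
            unreachable.append(consumer_id)
    return deliverable, unreachable
```

For systems that captured the linkage at decision time, the unreachable list is short; for systems that did not, it is the backlog.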

The 60-day cure period means organizations discovered in first violations have two months to demonstrate compliance. Building the infrastructure in 60 days is achievable. Building it under enforcement pressure, while documenting what was built and when, for an AG who is watching the process, is substantially harder than having built it before the deadline.

What Enforcement Looks Like

Enforcement of a new law is not instantaneous. The AG's office will not investigate every organization on day one. Enforcement priority will follow consumer harm — the organizations that get investigated first are the ones where the AI system's failure produced a visible outcome that a consumer or their advocate brought to the AG.

The practical enforcement horizon for most organizations is 6–18 months. Investigations take time. First violations will generate case law that clarifies requirements and establishes precedent for what compliance actually means. Organizations that use this period to build the infrastructure the law requires are not beating the system — they are doing what the law expects, and doing it before they are under investigation.

What the cure period does not protect is willful non-compliance. An organization that is aware of the law, deploys a high-risk system, does not build the required infrastructure, and argues in enforcement proceedings that it just needs 60 days — is in a different position than an organization that made a good-faith effort and discovered gaps. The AG has discretion over penalty severity. That discretion will track the organization's posture.

The EU AI Act follows in 33 days
The EU AI Act's high-risk obligations take effect August 2, 2026. For organizations operating in European markets — or deploying AI that affects EU residents from any location — a second, more comprehensive governance framework arrives in 33 days. The technical requirements overlap significantly with the Colorado Act: decision traceability, human oversight mechanisms, documentation, risk assessment. Organizations building compliance infrastructure now are building it once.

What Organizations Should Do Today

The immediate question for organizations deploying AI in consequential domains is straightforward: does the system have the infrastructure to satisfy the Act's requirements if asked to demonstrate it today? Not in principle — in practice, in the form of logs, records, and operational capabilities that can be produced in an enforcement inquiry.

If the answer is yes, the task is documentation: ensuring that the infrastructure that exists is described accurately, that impact assessments are on file, that consumer notification procedures are tested and operational.

If the answer is no, the task is infrastructure — and the 60-day cure period, while it applies to first violations, is not a scheduled building window. It is the period between when a violation is identified and when enforcement proceeds. Building compliance infrastructure during an active enforcement inquiry is the most expensive way to build it.

The organizations that will navigate the post-July 1 environment with the least friction are those that treated the law's technical requirements as infrastructure problems and built accordingly — not at the deadline, but before it. They are not in the position of starting today. For everyone else, the window is now.


Mandate

AI compliance infrastructure for organizations subject to the Colorado AI Act and EU AI Act. Decision traceability, impact assessment documentation, consumer notification pipelines, audit records. What the law requires, built as infrastructure — not as policy documents.

mandate.onstratum.com →
Warden

Fleet operations with governance built in. Authorization records at every agent action. Human oversight anchors for consequential decisions. The delegation chain audit surface that Colorado and the EU AI Act require — operational from day one.

warden.onstratum.com →
Sean / Stratum
© 2026 Stratum · hello@onstratum.com · onstratum.com