The High-Risk Threshold
Most compliance conversations have focused on what compliant systems must do. Fewer have focused on the prior question: which systems must comply? The perimeter is wider than most organizations have drawn it.
Twelve days before the EU AI Act's high-risk obligations take effect, the dominant compliance question has become: "Do we qualify?" For most organizations that have not completed a formal AI system inventory, the answer they have arrived at is "probably not" — and for many of them, that answer is wrong.
The EU AI Act classifies AI systems as high-risk based on their use case, not on their underlying technology, their level of autonomy, or whether humans are involved in the process. A model that makes hiring recommendations is high-risk. So is a scoring system that a human reviews before any decision is made. The presence of human oversight does not move a system out of the high-risk category. What matters is whether the system influences decisions in a designated category — not whether it makes those decisions without human involvement.
This is the misunderstanding that is about to create compliance exposure for a significant number of organizations: they believe that human-in-the-loop means not-high-risk. It does not.
The Categorical Test
The EU AI Act's Annex III defines high-risk AI systems across eight categories: biometrics, critical infrastructure, education and vocational training, employment, access to essential private and public services, law enforcement, migration and border control, and the administration of justice and democratic processes. The test for each category is not the autonomy level of the AI, the sophistication of the model, or the degree to which humans review the output. The test is whether the AI system is used in that category of application in a way that could significantly affect the interests of natural persons.
An AI system that scores job applicants is high-risk. So is an AI system that ranks them. So is an AI system that flags which applicants a recruiter should spend more time reviewing. All three are influencing employment outcomes. The degree of autonomy varies; the high-risk classification does not.
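To see how categorical the test is, consider a minimal sketch of it as code. This is illustrative only: the AIDeployment fields, the deployment names, and the simplified category labels are assumptions for the example, not terms from the Act. The structural point is that the classification function consults the use-case category and nothing else.

```python
from dataclasses import dataclass

# Simplified labels for Annex III areas touched on in this article.
ANNEX_III_AREAS = {
    "employment",          # hiring, promotion, task allocation
    "credit",              # creditworthiness, credit scoring
    "insurance",           # life and health risk assessment, pricing
    "education",           # admission, assessment, placement
    "essential_services",  # access to essential private/public services
}

@dataclass
class AIDeployment:
    name: str
    use_case_area: str   # the category of decision the output feeds
    autonomy: str        # "autonomous", "recommendation", or "flagging"
    human_review: bool   # someone sees the output before action is taken

def is_high_risk(d: AIDeployment) -> bool:
    # Deliberately ignores autonomy and human_review: the test is
    # the use-case category, not the degree of human involvement.
    return d.use_case_area in ANNEX_III_AREAS

# The three hiring examples above classify identically:
scorer  = AIDeployment("applicant scorer", "employment", "recommendation", True)
ranker  = AIDeployment("applicant ranker", "employment", "recommendation", True)
flagger = AIDeployment("review flagger",   "employment", "flagging",       True)
assert all(is_high_risk(d) for d in (scorer, ranker, flagger))
```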
The Augmentation Principle
The most consequential misunderstanding about the EU AI Act's high-risk framework is what practitioners have called the augmentation defense: "Our AI doesn't make decisions — it augments human judgment."
This defense does not work under the Act. The regulation applies to AI systems that are used in a way that "significantly affects" persons in the relevant categories. An AI that produces a ranked list of loan applicants that a loan officer reviews before approving is influencing credit decisions — even if a human sees the list before any action is taken. The human review step does not remove the AI's influence on the outcome; it just adds a step between the AI output and the decision.
The compliance question is not "does our AI make the final call?" It is "does our AI influence decisions in a high-risk category?" That is a substantially larger perimeter — and most organizations have not drawn it accurately.
This matters practically because the infrastructure requirements for high-risk AI systems — audit trail logging, human oversight mechanisms, decision explainability, risk management systems — are technically specific. They do not reduce to documentation of intent. An organization that concludes it does not have high-risk AI systems when it does has not just made a legal error; it has failed to build the infrastructure that enforcement authorities will look for when they encounter its systems.
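To ground what "audit trail logging" means at the code level, here is a minimal sketch of a decision-event logger, assuming a deployer wants an append-only, structured record of every AI output and the human action taken on it. The schema and field names are illustrative assumptions; the Act requires logging and traceability capability, not this particular format.

```python
import json
import time
import uuid

def log_decision_event(log_path: str, *, system_id: str, model_version: str,
                       input_ref: str, output: dict, human_action: str) -> str:
    """Append one structured record per AI output to a JSON Lines file.

    Field names are illustrative assumptions, not a schema mandated
    by the Act. The property that matters is that the record is
    written automatically at decision time.
    """
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,          # which AI system produced the output
        "model_version": model_version,  # needed to reproduce or explain it
        "input_ref": input_ref,          # pointer to the input, not raw data
        "output": output,                # score, rank, flag, recommendation
        "human_action": human_action,    # "accepted", "overridden", "escalated"
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")  # append-only: never rewrite history
    return event["event_id"]
```

The schema is not the point; the point is that the evidence is generated at decision time rather than reconstructed when an authority asks for it.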
Where Organizations Are Miscategorizing
Based on what is visible in compliance conversations as August 2 approaches, several categories of AI deployment are routinely miscategorized as not-high-risk (the inventory sketch after the list shows how all four actually classify):
HR tools with AI scoring components. Many applicant tracking systems now include AI scoring features — fit scores, screening recommendations, engagement predictions. Organizations are treating these as "software tools the HR team uses" rather than as AI systems that influence employment outcomes. Under the Act, they are the latter.
Credit and insurance AI with human review layers. Fintech and insurtech companies often pair AI models that produce scores or recommendations with human underwriters who review edge cases. The AI layer is high-risk regardless of the review layer above it.
Healthcare AI that "supports clinical decisions." AI that flags risk levels, recommends triage priority, or surfaces medication interactions is influencing healthcare decisions for natural persons. The framing as decision support rather than decision-making does not change the classification.
Educational AI that influences access. AI used to assess student readiness, recommend program placement, predict completion risk, or flag students for intervention is influencing educational access. This includes AI systems embedded in learning management platforms that most educational institutions did not build themselves.
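Continuing the earlier sketch, here is how these four deployments might land in an AI system inventory. The system names and area labels are hypothetical illustrations, not legal terms; the point is that every entry maps to a designated category, so the human-review flag never flips the classification.

```python
# Hypothetical inventory entries for the four cases above.
inventory = [
    {"system": "ATS fit-scoring module",        "area": "employment",
     "third_party": True,  "human_review": True},
    {"system": "underwriting score model",      "area": "credit_insurance",
     "third_party": False, "human_review": True},
    {"system": "clinical triage recommender",   "area": "healthcare",
     "third_party": True,  "human_review": True},
    {"system": "LMS completion-risk predictor", "area": "education",
     "third_party": True,  "human_review": True},
]

# Every entry maps to a designated category, so every entry is high-risk.
# human_review shapes the oversight design; it never changes the class.
# third_party matters for role (deployer vs. provider), not classification.
for entry in inventory:
    print(f"{entry['system']}: high-risk ({entry['area']})")
```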
The Cost of Miscategorizing
An organization that incorrectly concludes it has no high-risk AI systems faces a specific consequence: when an enforcement authority encounters its AI deployments — in an audit, a complaint investigation, or a market surveillance activity — the organization will have no compliant infrastructure because it did not believe it needed any.
The Act's enforcement framework does not typically result in immediate maximum fines for first-identified non-compliance. It typically results in an obligation to remediate — accompanied by a remediation deadline and continued monitoring. But the remediation is not just updating documentation. It is building audit trail infrastructure, implementing compliant oversight mechanisms, and creating the logging capability that should have been there from the start. Organizations that begin that work under an enforcement deadline, with a surveillance authority watching, face a materially different cost structure than organizations that begin it today.
Drawing the Perimeter
For each AI deployment in the inventory, four questions draw the perimeter (a minimal sketch of the check follows the list):
1. What categories does it touch? Map each AI deployment to the Annex III categories. Be honest about the "less obvious" examples: scheduling AI that affects hours, scoring AI that filters candidates, assessment AI that influences placement.
2. Does it influence natural persons? If the AI output feeds into any process that affects employment, credit, education, healthcare, or essential services for a person within the EU, it is influencing natural persons.
3. Is there a human review step? Good — but it does not change the classification. Map it accurately and build the required infrastructure.
4. Who built it? If you are deploying a third-party AI system in a high-risk category, you are a "deployer" under the Act and bear deployer obligations regardless of whether you wrote the model.
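The four questions reduce to a short check. As before, the field names and category labels are illustrative assumptions that encode the reasoning in this article, not the legal text.

```python
def perimeter_check(system: dict) -> dict:
    """Apply the four questions above to one inventory entry."""
    designated_areas = {"employment", "credit_insurance", "healthcare",
                        "education", "essential_services"}

    in_area        = system["area"] in designated_areas           # question 1
    affects_people = system["affects_natural_persons_in_eu"]      # question 2
    # Question 3: human review is recorded but not part of the test.
    # Question 4: deploying a third-party system makes you a deployer.
    return {
        "high_risk": in_area and affects_people,
        "role": "deployer" if system["third_party"] else "provider",
        "human_review": system.get("human_review", False),
    }

result = perimeter_check({
    "area": "employment",
    "affects_natural_persons_in_eu": True,
    "human_review": True,
    "third_party": True,  # bought, not built: deployer obligations still apply
})
print(result)  # {'high_risk': True, 'role': 'deployer', 'human_review': True}
```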
Twelve Days
Organizations that identify high-risk AI deployments in the next twelve days will not achieve full compliance before August 2. The infrastructure requirements are not twelve-day projects. But they can begin — and beginning before enforcement begins is materially different from beginning in response to an enforcement inquiry.
The organizations with the most significant exposure are not those that are non-compliant on August 2 while actively building compliant infrastructure. They are the organizations that arrive at August 2 having concluded they have nothing to worry about — when in fact they have high-risk deployments, no compliant infrastructure, and no awareness that they are non-compliant.
The perimeter question is the prerequisite. You cannot build compliant infrastructure for systems you have not identified as requiring it. Draw the perimeter accurately first — then address what it contains.
AI compliance infrastructure for legal and operations teams. Audit trail generation, human oversight anchors, decision explainability — built for the EU AI Act, Colorado AI Act, and what comes after. The infrastructure that makes compliance evidence automatic, not retrospective.
mandate.onstratum.com →