Stratum Journal
Regulatory · July 28, 2026 · 9 min read

The Retrofit Problem


EU AI Act high-risk AI system obligations: effective August 2, 2026 — five days from now.
Colorado AI Act: in effect since June 30, 2026.

Many organizations are now attempting to retrofit compliance onto existing AI deployments. This is the right instinct — but three of the most common retrofitting approaches produce compliance theater, not compliance infrastructure.

Five days before the EU AI Act's high-risk obligations take effect, a predictable pattern has emerged in the compliance conversations happening at organizations that now realize they have high-risk AI deployments without compliant infrastructure. The pattern is the retrofit attempt: add logging, add a review step, write the documentation, declare compliance.

The impulse is right. Retrofitting something is better than retrofitting nothing. But retrofitting AI compliance infrastructure is structurally different from other compliance retrofits — the financial reporting kind, the data privacy kind, the workplace safety kind — in ways that determine whether the retrofit produces actual compliance or the appearance of compliance.

The distinction matters because enforcement authorities are practiced at recognizing exactly this pattern. They have seen the post-hoc logging system that captures API responses but not decision context. They have seen the review step that processes inputs without meaningful examination. They have seen the governance document that describes what the system is supposed to do rather than what it actually does and whether it was authorized to do it.

The Three Retrofit Failures

These are the three most common retrofit approaches and why each one fails to produce the compliance evidence the Act requires.

Logging retrofit: fails

What it looks like: Add a logging layer after deployment; capture API calls and responses.
Why it fails: Post-hoc logs capture what the system did, not why it was authorized to act or what context was active at decision time. The Act requires the latter.
Produces: Operational records. Required: Accountability records.

Review step retrofit: fails

What it looks like: Add a human approval step to existing automated workflows.
Why it fails: The Act requires oversight mechanisms that are structurally effective at detecting and correcting errors. A review step that approves 98% of outputs without meaningful examination is not effective oversight — it is theater.
Produces: Process documentation. Required: Structural oversight capability.

Documentation retrofit: fails

What it looks like: Write documents describing what AI systems do, their risk levels, and their governance policies.
Why it fails: The Act asks what systems do, not what documents say. A policy that describes intended logging does not satisfy the requirement to log. Documentation of an oversight process is not oversight infrastructure.
Produces: Policy artifacts. Required: Compliance evidence.

Why Logging Retrofits Fail

The EU AI Act's logging requirement is technically specific. It requires "automatic logging of operations to the extent such logs enable post-deployment risk identification." This is not a requirement to log; it is a requirement to log in a way that enables risk identification. Those are different requirements.

A logging retrofit typically captures the system's inputs and outputs — the API request and response, the action taken, the outcome recorded. This is operational logging. It is not accountability logging. Accountability logging captures the authorization state at the time of action: which user or system was responsible, what context they were operating in, what constraints were active, why the system was permitted to take this action at this time on behalf of this person.

The gap between operational and accountability records is exactly what regulators interrogate when investigating whether an AI system produced a harmful outcome. "The system produced output X" is operational. "The system was authorized to produce output X, operating under constraint set Y, on behalf of user Z, with oversight level W" is accountability. The former exists in most AI systems. The latter requires deliberate architecture.

A log that records what happened is an audit log. A log that records what happened, why the system was authorized to act, and who bears accountability for the outcome is compliance infrastructure. Most retrofits produce the former and call it the latter.
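The difference between the two record types can be made concrete as a schema. The following is a minimal Python sketch; the field names are illustrative assumptions, not terms drawn from the Act or from any specific product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# An operational record captures only what happened.
@dataclass
class OperationalRecord:
    timestamp: str
    request: str
    response: str

# An accountability record additionally captures the authorization state
# at decision time: who the system acted on behalf of, why the action was
# permitted, which constraints were active, and what oversight applied.
# All field names here are hypothetical.
@dataclass
class AccountabilityRecord:
    timestamp: str
    request: str
    response: str
    acting_on_behalf_of: str   # user Z
    authorization_basis: str   # why the system was permitted to act
    active_constraints: list[str] = field(default_factory=list)  # constraint set Y
    oversight_level: str = "none"  # oversight level W

def utc_now() -> str:
    return datetime.now(timezone.utc).isoformat()

# Example entry, recorded at the time of action rather than reconstructed later.
record = AccountabilityRecord(
    timestamp=utc_now(),
    request="score_credit_application(id=...)",
    response="declined",
    acting_on_behalf_of="loan-officer-z",
    authorization_basis="role:credit-review, policy v3",
    active_constraints=["no-autonomous-decline-above-EUR-10k"],
    oversight_level="human-review-required",
)
```

The point of the sketch is the delta between the two classes: everything below `response` in `AccountabilityRecord` is the context a retrofit typically cannot reconstruct after the fact.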

Why Review Step Retrofits Fail

The Act requires human oversight measures that "enable the detection and correction of risks and errors." The operative word is "enable" — oversight mechanisms must be structurally capable of catching the failures they are supposed to catch.

A review step that approves 98% of AI outputs without meaningful examination does not enable detection of errors — it provides a process wrapper around automated decisions. This is not a judgment about the reviewers. It is a structural point: oversight mechanisms designed as approval steps rather than as verification mechanisms will exhibit approval rates that reflect their design, not their effectiveness.

Effective oversight requires that the human reviewer have the information necessary to meaningfully evaluate the AI output, the time and tools to exercise judgment, and a structural incentive to flag errors rather than approve them. Retrofitting an approval step into an existing automated workflow addresses, at most, the first condition — and neither of the other two.
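The structural point can be sketched in code: oversight as a constraint means the approval check lives in the execution path itself, so a consequential action cannot complete without a recorded human decision. The names below (`ApprovalLedger`, `execute_consequential_action`) are hypothetical, not a real API:

```python
class OversightError(Exception):
    """Raised when a consequential action is attempted without approval."""

class ApprovalLedger:
    """Records explicit human approvals, keyed by action id."""
    def __init__(self) -> None:
        self._approved: set[str] = set()

    def approve(self, action_id: str, reviewer: str) -> None:
        # A real system would also persist the reviewer's identity, the
        # timestamp, and the evidence the reviewer actually examined.
        self._approved.add(action_id)

    def is_approved(self, action_id: str) -> bool:
        return action_id in self._approved

def execute_consequential_action(action_id: str, ledger: ApprovalLedger) -> str:
    # The check sits inside the execution path: there is no procedural
    # workaround that skips it without changing the code.
    if not ledger.is_approved(action_id):
        raise OversightError(f"action {action_id} lacks human approval")
    return f"executed {action_id}"
```

A process-step retrofit, by contrast, asks the workflow to remember to call `approve` first; the structural version makes forgetting impossible.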

Why Documentation Retrofits Fail

Documentation retrofits fail for a reason that is worth stating plainly: the EU AI Act does not ask what your documents say about your AI systems. It asks what your AI systems do and whether they do it in compliance with the regulation.

A risk management system policy document is not a risk management system. A governance framework that describes oversight processes is not oversight infrastructure. A technical specification that explains how the system would log if properly configured is not an audit trail.

The confusion arises because documentation retrofits satisfy the compliance checklist that many organizations have built: "Do we have a risk management system? We have a policy for one — check." The Act is not testing whether you have policies. It is testing whether your systems have the properties the policies describe. Documentation of intent does not satisfy a technical requirement.

What Compliance-by-Design Actually Requires

The counterpoint to retrofitting is not a five-day project. It is an architectural orientation — treating compliance infrastructure as a design constraint rather than a deployment add-on. The organizations that achieve this for systems going into production now will face the enforcement landscape very differently from organizations that continue to produce retrofit artifacts.

Compliance-by-design has three properties that retrofits structurally cannot replicate:

The audit infrastructure is contemporaneous. Authorization context, oversight records, and decision provenance are captured at the time of action, not reconstructed from operational logs after the fact. When an enforcement authority asks to review a specific decision from a specific date, the record of why the system was authorized to act and who was responsible exists in the audit trail — because it was recorded at the time, not because someone assembled it afterward.

Human oversight is a structural constraint, not a process step. For consequential decisions, the oversight mechanism is architecturally enforced — the system cannot complete a consequential action without human review in the required cases. This is different from a process that requires human review and relies on procedural compliance to enforce it. Structural constraints cannot be bypassed; process requirements can.

The compliance layer is part of the production system, not separate from it. Retroactively attached compliance layers — logging middleware added after deployment, approval workflows bolted onto existing pipelines — create synchronization problems that compound over time. When the production system changes, the compliance layer often doesn't. The result is compliance infrastructure that accurately describes the system as it was at deployment, not as it is now.
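One way to keep the compliance layer inside the production code path is to write the audit record in the same call that performs the action, so the record is contemporaneous and changes whenever the code changes. A hedged Python sketch, with a plain list standing in for a tamper-evident store:

```python
import functools
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []  # stand-in for a tamper-evident audit store

def audited(action_name: str):
    """Illustrative decorator: the audit record is written at the time of
    action, in the same code path as the action itself."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            entry = {
                "action": action_name,
                "at": datetime.now(timezone.utc).isoformat(),
                "inputs": repr((args, kwargs)),
            }
            try:
                result = fn(*args, **kwargs)
                entry["outcome"] = repr(result)
                return result
            finally:
                AUDIT_LOG.append(entry)  # recorded even if the action fails
        return inner
    return wrap

@audited("rank_applicants")
def rank_applicants(ids: list[int]) -> list[int]:
    # Placeholder for a consequential production operation.
    return sorted(ids)
```

Because the decorator wraps the production function directly, there is no separate middleware to fall out of sync: deleting or changing the function necessarily touches its audit path.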

The Window That Remains After August 2
The enforcement regime for the EU AI Act does not produce immediate maximum penalties for every non-compliant system on the effective date. It identifies non-compliance, notifies operators, and creates an obligation to remediate with escalating consequences for failure to do so.

The organizations with the most significant exposure are not those that are non-compliant on August 2 and immediately building compliant infrastructure. They are those that face an enforcement inquiry having concluded that their retrofit artifacts satisfy the regulation — and discover, under scrutiny, that they do not.

Building compliance infrastructure after August 2 is still the right decision. Building it correctly — as infrastructure rather than documentation — is what determines whether the investment produces compliance or compliance theater.

Mandate

AI compliance infrastructure for legal and operations teams. Audit trail generation, structural oversight anchors, decision explainability — designed from the ground up for EU AI Act and Colorado AI Act obligations. Not a retrofit. Architecture.

mandate.onstratum.com →
Warden

Fleet operations with compliance infrastructure built in. Tamper-resistant audit trails, authorization chain logging, structural oversight for consequential decisions — the operational layer that makes compliance evidence automatic.

warden.onstratum.com →
Sean / Stratum
© 2026 Stratum · hello@onstratum.com · onstratum.com