Stratum Journal
Security · April 21, 2026 · 9 min read

The Trust Layer


When a new employee joins a company, they don't get access to everything. They get a badge that opens certain doors, credentials that unlock certain systems, and an implicit understanding of what falls within their role — and what does not. Nobody explains every possible action they could take and marks it permitted or forbidden. The constraints exist through the intersection of identity, role, and context. They are enforced by systems and by organizational norms.

Now consider how most AI agents are deployed. They receive API keys — the equivalent of a master passkey — and then operate with whatever permissions those keys provide, in whatever contexts the application places them, without any enforcement layer governing what they should or should not be doing at any given moment. They are not employees. They are closer to contractors with unlimited building access and no job description.

The gap between those two scenarios is the trust layer. And most AI deployments don't have one.

Authentication Is Not Authorization

The AI security conversation has been dominated by a problem that is real but narrower than it appears: authentication. How do you know which agent is making a request? How do you prevent unauthorized agents from accessing your systems? These are important questions, and there are reasonable answers to them — API keys, service accounts, OAuth scopes, mTLS certificates. They are imperfect implementations of a well-understood concept.

Authorization is a different problem. Authentication asks who is this agent? Authorization asks what is this agent permitted to do? — and the follow-on questions that authentication cannot answer: permitted to do in what context? Permitted by whose authority? Permitted until when? Permitted to delegate to whom?

In human organizations, authorization is handled through role-based access control, approval workflows, principle of least privilege, and organizational hierarchy. It is imperfect, but it exists. Most AI deployments have none of this for their agents. An agent with a valid API key is an agent with whatever permissions the key provides — no more, no less, regardless of what the agent is actually trying to accomplish in the moment.

The question isn't whether your agent is authenticated. Every deployed agent is authenticated — that's what the key is for. The question is whether it's authorized. That requires a different infrastructure layer entirely.

Five Questions Most Deployments Cannot Answer

The trust gap becomes concrete when you map the questions that authorization infrastructure needs to answer against what most deployments actually provide:

| Question | Layer | Status | Reality |
| --- | --- | --- | --- |
| Who is this agent? | Authentication | Partially solved | API keys, service accounts — primitive but functional |
| What is this agent allowed to do? | Authorization | Largely missing | Most deployments rely on implicit permissions — no enforcement layer |
| In what context can it act? | Context-bound authorization | Almost entirely absent | Agents authorized for one task routinely act outside that scope |
| Who delegated this permission? | Delegation chain | Not tracked | When Agent A instructs Agent B, authorization provenance is lost |
| What did it do with those permissions? | Auditability | Requires memory infrastructure | Cannot reconstruct authorization state without persistent logs |

The pattern in the table is not a security failure — it is an infrastructure gap. The tools to build authorization systems for AI agents don't yet exist as a category. Developers building AI applications have to either build them from scratch, borrow patterns from IAM systems that were designed for humans, or (most commonly) skip the problem and rely on the implicit constraints of the model and the application context.

Implicit constraints work until they don't. The failure modes are not subtle — they are the scenarios that make AI incident reports: an agent that was supposed to draft emails but sent them, an agent authorized to read financial data that acted on it, an agent given restricted context that passed that context to downstream systems with different authorization postures.

The Delegation Problem

Multi-agent architectures introduce a specific failure mode that single-agent deployments avoid: the delegation chain. When Agent A receives instructions from a user and passes a subset of those instructions to Agent B, what permissions does Agent B inherit?

In most implementations: all of them. Agent B is initialized with the same credentials as Agent A, operates in the same permission context, and has access to the same resources. The intent was to delegate a specific subtask. The implementation delegates everything.

The principle of least privilege — one of the foundational ideas of security engineering — says agents should have the minimum permissions necessary to complete their task. In multi-agent systems, applying least privilege at delegation boundaries requires knowing, at the moment of delegation, what the subtask requires. That requires agents to understand their own permission requirements. Today, they don't — and the infrastructure to specify, enforce, and audit per-delegation permission scoping doesn't exist outside of custom implementations.
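What least privilege at a delegation boundary could look like, as a minimal sketch: the delegating agent may grant only a subset of its own scope, and the grant is recorded so provenance survives. The names here (`Scope`, `delegate`, the `calendar:*` permission strings) are illustrative, not an existing API.

```python
class DelegationError(Exception):
    pass

class Scope:
    """A set of permitted actions, e.g. {"calendar:read", "calendar:write"}."""
    def __init__(self, permissions):
        self.permissions = frozenset(permissions)

    def allows(self, requested):
        # Subset check: every requested permission must already be held.
        return requested.permissions <= self.permissions

def delegate(parent_scope, requested, chain):
    """Grant a subagent only the requested subset of the parent's scope.

    Raises instead of silently inheriting everything, and appends a record
    to the delegation chain so the audit trail can see the grant.
    """
    if not parent_scope.allows(requested):
        excess = sorted(requested.permissions - parent_scope.permissions)
        raise DelegationError(f"requested {excess} exceeds delegating agent's scope")
    chain.append({"granted": sorted(requested.permissions)})
    return requested

# Agent A holds calendar read/write; the scheduling subtask needs only write.
chain = []
agent_a = Scope({"calendar:read", "calendar:write"})
agent_b = delegate(agent_a, Scope({"calendar:write"}), chain)
```

The key design choice is that `delegate` fails loudly: a subtask that asks for permissions outside the parent's scope is an error, not a silent escalation.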

The delegation failure pattern
Step 1: User authorizes Agent A to manage their calendar with read and write access.

Step 2: Agent A delegates a scheduling subtask to Agent B. Agent B inherits full calendar credentials.

Step 3: Agent B, attempting to complete the subtask, reads the entire calendar history — including sensitive meetings the user never intended to expose to any automated process.

Step 4: Agent B's behavior is logged, but not in a way that captures what permissions it used or how those permissions were acquired. The delegation chain is invisible to the audit trail.

This scenario is not hypothetical. It is the default behavior of most multi-agent implementations today.

Context-Bound Authorization

Authorization in human organizations is not purely role-based — it is context-dependent. A financial analyst is authorized to access quarterly earnings data during the preparation period. The same analyst is not authorized to trade on that data before the public announcement. The permission is the same; the context is different; the authorization decision is different.

AI agents need the same context-sensitivity — and most have none. An agent authorized to read customer records for support purposes should not be authorized to use those records to train a model. An agent authorized to draft a recommendation should not be authorized to send it without review. An agent authorized to access pricing data should not be authorized to share it with a counterparty in negotiation.

Context-bound authorization requires the agent to operate within a defined scope — not just a defined identity. That scope needs to be specified at deployment, enforced at runtime, and logged for accountability. None of this is technically novel; access control lists, purpose limitations, and contextual integrity are established concepts. What is novel is applying them to agents that operate at a different speed and scale than human users, across sessions and fleet topologies that human IAM systems were not designed for.
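A context-bound check can be sketched as a deployment-defined mapping from actions to the purposes they may serve, consulted at action time. The policy entries and `authorize` function below are hypothetical, chosen to mirror the examples above:

```python
# Deployment-time policy: each action is bound to the purposes it may serve.
POLICY = {
    "customer_records:read": {"support"},  # support use, not model training
    "recommendation:draft": {"assist"},    # drafting, not sending
}

def authorize(action, purpose):
    """Context-sensitive decision: same credential, different purpose,
    different outcome."""
    allowed = POLICY.get(action, set())
    return purpose in allowed

authorize("customer_records:read", "support")   # permitted
authorize("customer_records:read", "training")  # denied
```

The point of the sketch is the shape of the decision: the credential alone never answers the question; the (action, purpose) pair does.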

Why This Becomes a Compliance Requirement

The Colorado AI Act (effective June 30, 2026), the EU AI Act (high-risk obligations effective August 2, 2026), and the Texas TRAIGA (in effect since January 2026) all share a common thread: they require organizations to demonstrate that their AI systems operate within defined, documented, and auditable limits. The specific technical requirements vary by jurisdiction. The underlying demand is the same.

To demonstrate that an agent operated within its authorized scope, you need a record of what that scope was, how it was enforced at runtime, and what the agent actually did relative to that scope. Authorization logs. Permission state at time of action. Evidence that the authorization was appropriate for the context.

You cannot reconstruct the authorization state from application logs alone, because application logs record what happened — not what was permitted. You cannot reconstruct it from the model's behavior, because the model has no intrinsic authorization concept. You need a persistent, structured record of the authorization decisions that were made, at the agent level, across every session.
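A per-action record of that kind might look like the sketch below: each entry captures the permission state and context at decision time, not just the action taken. All field names are illustrative assumptions, not a standard schema.

```python
import datetime
import json

def log_authorization(log, *, agent_id, action, active_scope, context, decision):
    """Append one structured record: what was attempted, what was permitted
    at that moment, and what decision was made."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "active_scope": sorted(active_scope),  # permission state at action time
        "context": context,                    # what was evaluated
        "decision": decision,                  # "allow" or "deny"
    }
    log.append(json.dumps(entry))
    return entry

log = []
log_authorization(
    log,
    agent_id="support-agent-7",
    action="customer_records:read",
    active_scope={"customer_records:read"},
    context={"purpose": "support", "session": "s-123"},
    decision="allow",
)
```

Because each line is self-contained JSON, authorization state at any past moment can be reconstructed by replaying the log, which is exactly what application logs alone cannot provide.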

That record is not a compliance artifact. It is a memory infrastructure requirement that becomes a compliance artifact when auditors ask for it. The organizations that have it will show it. The organizations that don't will discover, under audit pressure, that they cannot reconstruct it.

Authorization is not a policy document. It is not a terms of service. It is an infrastructure property — one that has to be built into the deployment, not added afterward when regulators ask for it.

What the Trust Layer Requires

A trust layer for AI agents is not an AI product — it is a systems design challenge that requires coordination between identity, authorization, logging, and the agent runtime itself. The minimum viable version has four components:

Agent identity with scope binding. Each agent has a persistent identity that carries a defined scope — not just authentication credentials, but a specification of what the agent is permitted to do. That scope persists across sessions and is checked at action time, not just at initialization.

Delegation constraints. When an agent spawns a subagent or delegates a task, it can only delegate permissions within its own scope. Permission inheritance is explicit, not implicit. The delegation chain is recorded — each node knows what authority it received, from whom, and for what purpose.

Context-sensitive authorization checks. Before acting, agents verify that the action is permitted in the current context — not just that they hold the relevant credential. Context is defined by the deployment configuration and enforced at the infrastructure level, not the model level.

Authorization state logging. Every action is logged with the authorization state that permitted it — what scope was active, what permission decision was made, what context was evaluated. This log is the reconstruction surface for audits and incident response.

None of these components are exotic. All of them require building something. Most AI deployments have not built any of them — because the urgency wasn't apparent until scale revealed the failure modes, and the regulatory clock wasn't running until this year.

The window
The Colorado AI Act takes effect in 70 days. The EU AI Act's high-risk obligations land in 103 days. Both require auditability of AI decision-making that presupposes a trust layer exists. Most organizations don't have one. Building it requires infrastructure changes, not policy changes — and infrastructure changes take time. The window to do this in advance of enforcement is measured in weeks.

Building Trust Infrastructure Now

The practical path forward is not to build a comprehensive IAM system for AI agents from scratch — that is a multi-year program. It is to establish the minimum viable trust layer that makes the authorization state visible, logged, and defensible.

Define agent scope explicitly. Every deployed agent should have a written specification of what it is authorized to do — not as a policy document, but as a runtime parameter that the authorization infrastructure enforces. If you cannot write that specification, the agent should not be in production.
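In the simplest case, that written specification is a declarative document the enforcement layer loads and checks on every action, rather than a policy PDF. The structure and field names below are a sketch under assumed conventions:

```python
# A declarative scope specification, versioned alongside the deployment
# and consulted by the authorization layer at action time.
AGENT_SPEC = {
    "agent": "email-drafter",
    "allowed_actions": ["email:draft", "contacts:read"],
    "forbidden_actions": ["email:send"],       # draft, never send
    "requires_human_review": ["email:draft"],  # a human approves before use
}

def check(spec, action, human_approved=False):
    """Enforce the spec at action time, not just at initialization."""
    if action in spec["forbidden_actions"]:
        return False
    if action not in spec["allowed_actions"]:
        return False
    if action in spec["requires_human_review"] and not human_approved:
        return False
    return True

check(AGENT_SPEC, "email:draft", human_approved=True)  # permitted
check(AGENT_SPEC, "email:send")                        # denied
```

If a scope this explicit cannot be written for an agent, that is itself the signal the paragraph above describes: the agent is not ready for production.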

Log authorization decisions, not just actions. Every action your agents take should be accompanied by a record of the authorization decision that permitted it. What scope was active? What context was evaluated? What human (if any) reviewed or approved the action before it was taken? This record is the core of your compliance posture.

Make delegation explicit. In multi-agent systems, treat each delegation as an authorization event — not an implementation detail. Record the scope being delegated, the identity of the receiving agent, and the context in which the delegation occurred. The delegation chain is your audit trail for distributed agent behavior.

Review authorization posture monthly, not annually. Agent deployments change faster than policy review cycles. New tasks get added, new tools get integrated, scope creep happens without deliberate authorization decisions. Monthly authorization reviews are not a compliance exercise — they are the mechanism that keeps implicit permissions from accumulating into liability.


Warden

Fleet operations with trust infrastructure built in. Agent identity with scope binding, delegation chain logging, authorization state at every action. The memory layer that makes fleet authorization auditable.

warden.onstratum.com →
Mandate

Compliance infrastructure for AI deployments. Authorization documentation, regulatory alignment for Colorado AI Act and EU AI Act, audit-ready records. For organizations that need to prove — not just claim — that their agents operated within defined limits.

mandate.onstratum.com →
Sean / Stratum
© 2026 Stratum · hello@onstratum.com · onstratum.com