The Context Problem
A research scientist has used the same AI assistant for eight months. In that time she has explained her experimental methodology to it four times, corrected the same misconception about her lab's statistical approach twice, re-described her relationship with her PI every time a new collaboration question comes up, and spent a combined six hours providing background that the system should already know.
The assistant is technically capable. It can draft papers, summarize literature, and help design protocols. What it cannot do is remember any of the eight months of operational context that would make it genuinely useful — the methodological preferences, the lab conventions, the accumulated understanding of her specific research domain that she has been building session by session and losing session by session.
This is the context problem. It is not a capability problem. The model that powers the assistant was trained on more scientific literature than any human could read in a lifetime. It has the breadth. It does not have the depth — the specific, accumulated, operationally grounded context that distinguishes an assistant who knows your work from a tool that is merely competent at your domain.
The Four Layers of Context
Understanding the context problem requires distinguishing between four different kinds of context that an AI system might hold — and recognizing which ones almost no current system actually maintains:
The first type — training context — is what most people mean when they talk about what an AI "knows." It is broad, general, and identical across every user. It is genuinely useful. It is also the least interesting layer from a practical standpoint, because it is the one layer that does not require infrastructure.
The second type — session context — is what you build within a conversation. When you explain your situation, the model remembers it for the duration of the exchange. This is the layer most current AI tools operate on. It disappears when the session ends.
The third and fourth types — operational context and institutional context — are where the real value lives. These are the accumulated patterns of how you work: the decisions you have made, the approaches you have tried, the relationships and constraints and preferences that make your situation specific. These layers require infrastructure to capture and maintain. Almost no current deployment builds them deliberately.
A model with a 200K token context window is not the same thing as a model with memory. Context windows hold information you put there in the current session. Memory holds information that was worth keeping from sessions past. These are different problems requiring different infrastructure.
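The distinction can be made concrete with a minimal sketch. All names here (`Session`, `Memory`, `retain`) are hypothetical illustrations, not a real API: the context window is a buffer that dies with the session, while memory is a curated store that crosses the session boundary.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """A context window: holds only what was said in this session."""
    window: list[str] = field(default_factory=list)

    def tell(self, fact: str) -> None:
        self.window.append(fact)

class Memory:
    """Persistent memory: facts judged worth keeping survive the session."""
    def __init__(self) -> None:
        self.store: list[str] = []

    def retain(self, session: Session, keep: set[str]) -> None:
        # Only curated facts cross the session boundary.
        self.store.extend(f for f in session.window if f in keep)

memory = Memory()

monday = Session()
monday.tell("We use mixed-effects models, not repeated-measures ANOVA.")
monday.tell("The draft is due Friday.")  # ephemeral; not worth keeping
memory.retain(monday, keep={"We use mixed-effects models, not repeated-measures ANOVA."})
del monday  # the context window is gone when the session ends

friday = Session()          # a new session starts with an empty window...
assert friday.window == []
assert len(memory.store) == 1  # ...but the curated fact is still available
```

The point of the sketch is the `retain` step: a larger window only enlarges `Session.window`; nothing about it creates `Memory.store`.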
Why Context Compounds — and Why Its Absence Does Too
Human expertise compounds. A doctor who has treated ten thousand patients has developed diagnostic intuitions that cannot be recovered from a textbook. A software engineer who has worked on the same codebase for five years knows which parts to trust and which to treat with suspicion. A researcher who has spent years on a problem understands which methodological choices are load-bearing and which are negotiable.
This compounding is not primarily about factual knowledge — it is about context. The ability to see new problems through the lens of accumulated experience, to recognize patterns that are invisible to someone approaching the domain fresh, to make decisions that account for history that is not captured in any document.
An AI assistant that starts every session fresh cannot compound in this way. Each interaction is isolated from every preceding one. The insight from Monday's session is not available on Friday. The correction you made to the model's understanding of your methodology last month has to be made again next month. The assistant remains at approximately the same usefulness level regardless of how long you have been working with it — because it has no accumulated understanding of you specifically.
The absence of context compounds in the opposite direction. Every hour spent re-establishing background is an hour not spent on the actual work. Every re-explanation carries the risk that the assistant will not integrate the context correctly this time either. Every session starts with a cognitive overhead that expert human collaborators do not impose.
The Re-explanation Tax
There is a cost that most organizations are not measuring: the time their people spend re-establishing context for AI tools that do not remember anything.
The cost is small per session — five minutes here, ten minutes there. But it compounds across users, across sessions, and across the accumulating weight of context that needs to be re-established. An organization with fifty people using AI tools intensively is losing a meaningful fraction of those sessions to re-explanation overhead. The work the tool was supposed to accelerate is being partially offset by the work of preparing the tool to do the acceleration.
More consequentially: re-explanation is not a perfect substitute for memory. When you re-explain your methodology to a tool that has no memory of the previous explanation, you introduce variance. You will emphasize different things. You will forget to mention the caveat you added last time. The model will integrate the new explanation differently than it integrated the old one. The resulting output will not be as good as it would have been if the context had accumulated correctly from the first session.
Consider a concrete case: a research group whose AI-assisted sessions each begin with 15–20 minutes of background establishment. Across 200 sessions per year, that is roughly 50–70 hours of context re-establishment — per group. The AI assistant has been used for two years and knows nothing about the research program that it did not know on day one.
The alternative — an assistant with accumulated context about the research program — does not just save those 50–70 hours. It produces better output, because it can situate each new literature review in the context of the group's ongoing research questions rather than starting from a generic understanding of the field.
The Research Context Problem Specifically
Research environments face a particular version of the context problem because the knowledge that drives research decisions is unusually dense, specific, and non-transferable through documentation alone.
A laboratory's experimental methodology is not fully captured in its published methods sections. The judgment calls that make the methodology work — when to adjust a protocol, which sources of variability to control for, what a "normal" result looks like for this specific equipment configuration — live in the heads of the people who have run the experiments. When those people leave, the knowledge leaves with them.
AI tools used in research contexts have the potential to accelerate hypothesis generation, literature synthesis, protocol design, and data interpretation. But this potential is only realized if the tool can operate in the specific context of the lab's research program — not just as a sophisticated literature search engine, but as a collaborator that has internalized the program's questions, methods, and accumulated findings.
Building that depth requires infrastructure that captures and maintains research context persistently: the experimental history, the methodological decisions and their rationale, the literature the group has engaged with and what they concluded from it, the collaborators and their expertise, the open questions that are actively being pursued. This is not a documentation project — it is an infrastructure project. The context has to be live, queryable, and integrated into the tools the lab uses, not stored in a database that no one consults.
The Personal Memory Infrastructure Gap
The problem extends beyond professional contexts. People who use AI tools intensively for personal work — thinking through decisions, processing experiences, managing relationships and commitments — face the same blank-slate failure. Every session starts without the emotional and narrative context that would make the assistance genuinely personal rather than generically competent.
A therapist who met you as if for the first time at every session would have access to everything in the clinical literature about your presenting concerns. They would not have the accumulated understanding of your specific patterns, history, and trajectory that makes their guidance actually useful. The clinical knowledge is necessary but not sufficient. The personal context is what makes it effective.
AI tools used for personal reflection and decision-making face the same gap. The model can engage thoughtfully with anything you tell it. It cannot engage with what it does not know — and it does not know anything about you specifically unless you tell it again each session. The accumulated understanding that would make the tool genuinely useful is never built, because there is no infrastructure to build it into.
The most useful AI tool is not the one with the most capability. It is the one that knows you best — your patterns, your context, your history, your preferences, the specific texture of your problems. That kind of knowledge requires persistent memory infrastructure. A larger context window is not the same thing.
What Solving the Context Problem Actually Requires
Solving the context problem requires three things that most AI deployments do not have.
Persistent context storage. User-specific context needs to be captured, maintained, and updated as new information arrives. This is not a conversation history — it is a structured representation of the accumulated understanding that the system has built about the user and their work. It needs to persist across sessions, survive model updates, and be available to every tool in the workflow.
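What "a structured representation, not a conversation history" might mean can be sketched in a few lines. This is an illustrative design, not a prescribed one; the record fields and the `ContextStore` name are assumptions. Each record is a typed claim about the user that can be updated, and the store is written to disk so it survives sessions and model swaps.

```python
import json
import time
from pathlib import Path

def make_record(topic: str, claim: str, source: str) -> dict:
    """A structured context record: a claim about the user, not a transcript."""
    return {"topic": topic, "claim": claim, "source": source,
            "updated": time.time(), "active": True}

class ContextStore:
    """Persists user context as JSON so it outlives any single session."""
    def __init__(self, path: Path) -> None:
        self.path = path
        self.records = json.loads(path.read_text()) if path.exists() else []

    def upsert(self, record: dict) -> None:
        # New information about a topic replaces the old claim on that topic.
        self.records = [r for r in self.records if r["topic"] != record["topic"]]
        self.records.append(record)
        self.path.write_text(json.dumps(self.records, indent=2))

store = ContextStore(Path("context.json"))
store.upsert(make_record("statistics", "Lab uses mixed-effects models.", "session 14"))

# A later process -- or a different tool entirely -- reopens the same store.
reopened = ContextStore(Path("context.json"))
assert reopened.records[0]["claim"] == "Lab uses mixed-effects models."
```

The reopening step is the property the paragraph asks for: the context is available to any tool that can read the store, not trapped in one conversation.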
Context injection at session start. Persistent context is only valuable if it is available to the model when a session begins. This requires infrastructure that retrieves the relevant context — the aspects of the accumulated understanding that are most relevant to the current task — and makes it available without requiring the user to re-establish it manually. The goal is that each session begins with the model already oriented, not starting fresh.
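A minimal sketch of that retrieval step, with an obvious caveat: the word-overlap scoring below is a stand-in for real relevance ranking (a production system would use embeddings or a retriever), chosen only to keep the example self-contained. The record format and function names are hypothetical.

```python
def relevant(records: list[dict], task: str, limit: int = 3) -> list[dict]:
    """Crude relevance: keep active records whose topic words appear in the task.
    A real system would use embedding similarity; word overlap keeps this runnable."""
    task_words = set(task.lower().split())
    scored = [(len(task_words & set(r["topic"].lower().split())), r)
              for r in records if r.get("active", True)]
    return [r for score, r in sorted(scored, key=lambda s: -s[0]) if score > 0][:limit]

def session_preamble(records: list[dict], task: str) -> str:
    """Assemble the context injected before the user's first message."""
    lines = [f"- {r['claim']}" for r in relevant(records, task)]
    return "Known context:\n" + "\n".join(lines) if lines else ""

records = [
    {"topic": "statistics methodology",
     "claim": "Lab uses mixed-effects models.", "active": True},
    {"topic": "grant deadlines",
     "claim": "R01 renewal due in March.", "active": True},
]
print(session_preamble(records, "help me plan the statistics for the new study"))
```

Note that only the statistics record is injected: retrieving the aspects relevant to the current task, rather than everything ever captured, is the whole design constraint.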
Context quality management. Accumulated context degrades. Old decisions get superseded. Preferences change. Research programs evolve. A context layer that accumulates without curation becomes noise rather than signal. The infrastructure needs mechanisms for updating, deprecating, and surfacing the most relevant context for the current task rather than injecting everything that has ever been captured.
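Two of those curation mechanisms can be sketched directly: superseding a claim when it changes, and retiring records that have gone stale. The policies here (keep the old claim but mark it inactive; retire anything untouched for a year) are illustrative choices, not requirements.

```python
from datetime import datetime, timedelta

def supersede(records: list[dict], topic: str, new_claim: str, now: datetime) -> list[dict]:
    """Retire the old claim on a topic rather than deleting it,
    so the history of what changed is preserved."""
    for r in records:
        if r["topic"] == topic and r["active"]:
            r["active"] = False
    records.append({"topic": topic, "claim": new_claim,
                    "active": True, "updated": now.isoformat()})
    return records

def prune_stale(records: list[dict], now: datetime, max_age: timedelta) -> list[dict]:
    """Deactivate records that have not been touched within max_age."""
    for r in records:
        if r["active"] and now - datetime.fromisoformat(r["updated"]) > max_age:
            r["active"] = False
    return records

now = datetime(2026, 1, 1)
records = [{"topic": "stats", "claim": "Uses ANOVA.", "active": True,
            "updated": (now - timedelta(days=400)).isoformat()}]

records = supersede(records, "stats", "Uses mixed-effects models.", now)
records = prune_stale(records, now, max_age=timedelta(days=365))

active = [r["claim"] for r in records if r["active"]]
assert active == ["Uses mixed-effects models."]  # old claim retired, not lost
```

Without the `supersede` step, both claims would be injected into future sessions and the accumulated context becomes noise, which is exactly the failure mode the paragraph describes.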
Organizations and individuals that build this infrastructure will have AI tools that get demonstrably better over time — not because the underlying model is improving, but because the context it operates on is accumulating. That is the difference between a tool and a collaborator.
Persistent personal context for people who use AI seriously. Your work patterns, decisions, preferences, and history — captured, maintained, and injected into every AI session so you stop re-explaining yourself and start compounding. Private beta opening April 2026.
memoir.onstratum.com →

Research memory infrastructure for laboratories. Experimental history, methodological context, and accumulated findings — maintained as a queryable knowledge base that your AI tools can actually draw on. The institutional memory your lab never had.

probe.onstratum.com →