Your Lab's Institutional Memory Is Graduating This May
Every spring, the same thing happens in computational research labs worldwide. A PhD student defends. There's a reception with sheet cake. Photos get posted to the lab Twitter. Two weeks later, the PI sits down to run the analysis workflow that student spent three years perfecting — and realizes they can't reproduce it from the documentation.
Not because the student was careless. They documented everything. There's a README. There's a wiki entry. There's a GitHub repository with commits.
But the README doesn't explain why they chose ENCUT=520 eV for that particular class of perovskites, and not 450. The wiki entry doesn't capture the three-month detour into a dead-end functional that taught them something crucial about the system. The commits show what changed, not why.
The student carried all of that in their head. They graduated. And now it's gone.
The numbers nobody talks about
A computational materials science PhD student spends roughly five years accumulating domain expertise. In that time, they develop intuitions about convergence settings for specific material classes that aren't in any paper, custom preprocessing workflows for your group's specific HPC setup, a sense of which literature results can be trusted, and fixes for failure modes that never made it into a publication.
Reproducing that expertise from scratch takes a new student twelve to eighteen months minimum — and they'll make most of the same mistakes along the way.
These estimates are conservative and don't account for labs with higher turnover, more complex software stacks, or graduate students who exit without a clean handoff period.
Why documentation doesn't solve this
The instinct is to document more. We've all tried it.
The problem isn't quantity of documentation — it's that documentation captures procedures, not reasoning. A new student can follow the procedure and still not understand the reasoning that went into it. When the procedure doesn't work on a new system (and it won't), they don't know whether to change the KPOINTS, the cutoff, the smearing, or all three.
The reasoning lives in the connections between experiments. It lives in the conversation the PI had with the student in a group meeting in 2023 about why titanium dioxide and tin dioxide need different approaches even though they look structurally similar. It lives in the literature review the postdoc wrote in a draft that never became a paper.
Notion, Confluence, GitHub wikis — they're all containers for information. None of them are systems for connecting that information when someone asks a question nobody thought to document.
What actually works
The labs that handle this best share one trait: they treat knowledge management as a first-class activity, not an afterthought. They structure it around research context, not deliverables.
The difference: deliverable-focused documentation asks “what did we do?” Context-focused documentation asks “what did we learn, and what does that mean for everything we're working on?”
A VASP calculation has a deliverable (the output files) and context (why we ran this calculation, what hypothesis we were testing, what we found that surprised us, and what we'd do differently now). The deliverable is reproducible. The context is what lets the next person build on it rather than reconstruct it.
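One way to make that distinction concrete is to keep a small, structured context record next to every run directory. The sketch below is purely illustrative — the field names, the `context.json` sidecar convention, and the example values are assumptions for this post, not a description of ResearchOS or of any standard VASP tooling:

```python
import json
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class CalculationContext:
    """The reasoning a run's output files can't carry on their own."""
    hypothesis: str       # what question this run was meant to answer
    rationale: str        # why these settings, and not the obvious alternatives
    surprises: str = ""   # anything unexpected in the results
    next_time: str = ""   # what we'd do differently now

def save_context(run_dir: Path, ctx: CalculationContext) -> Path:
    """Write a context.json sidecar next to the deliverable outputs."""
    path = run_dir / "context.json"
    path.write_text(json.dumps(asdict(ctx), indent=2))
    return path

# Hypothetical annotation for the perovskite run discussed above.
ctx = CalculationContext(
    hypothesis="Is ENCUT=520 eV converged for this perovskite class?",
    rationale="450 eV under-converged the stress tensor in earlier runs.",
    surprises="Convergence behaved differently from the TiO2 series.",
)
```

A record like this takes two minutes to write while the reasoning is fresh, and it is exactly the part a new student cannot reconstruct from the OUTCAR alone.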
This is the core insight behind ResearchOS. We built a system that captures and connects research context — across experiments, literature, and conversations — in a way that makes the institutional memory of the lab queryable rather than buried. When a new student joins and asks “how do we handle surface terminations for this class of catalysts,” ResearchOS can draw on three years of your group's specific decisions, failures, and learnings — not generic documentation.
The graduation season test
Ask yourself three questions. Could someone reproduce your most recent graduate's main workflow from the documentation alone? Could a new student explain why your group's standard settings are what they are, not just what they are? If your most senior student left tomorrow, what would be unrecoverable? If the honest answer to any of these makes you uncomfortable, you have the institutional memory problem. The good news: it's a solvable problem. It just requires treating knowledge management the way you treat your code: as something worth designing, not just accumulating.
Probe / ResearchOS — Founding Labs Program
If you're at a computational research lab and want to see how ResearchOS handles this, we're running a founding labs program through June 2026. Founding labs get a locked-forever rate and direct input on what we build.
probe.onstratum.com →