Stratum Journal
Research Operations · March 6, 2026 · 7 min read

The Hidden Cost of Siloed Research


Consider a PI with a joint appointment. One department for the processing and experimental side, another for the computational and structural work. This is common in materials science — the field lives in the gap between chemistry, physics, and engineering, and the most interesting problems tend not to respect departmental boundaries.

The PI understands both sides. They hold both conversations. They are, in their own head, the integration layer.

The lab does not have this integration layer. The CHBE group knows what it is doing. The MSE group knows what it is doing. The knowledge that would connect them — the synthesis insight that explains why the simulation result matters, the simulation result that explains why the synthesis failed — lives in the PI's head, surfaces in lab meeting, and then disappears.

The Taxonomy of Research Silos

Silos in multi-department labs are not primarily a communication problem. They are a knowledge routing problem. The information exists; it just never arrives where it would be useful.

In a typical cross-departmental computational materials lab, the silos run in at least three directions:

1. Method silos

The student running DFT calculations in VASP does not know the synthesis parameters the experimental student is optimizing. The experimental student does not know which computational results are robust and which are still under convergence testing. They work on the same material, toward the same goal, from opposite ends of the problem.

2. Literature silos

The CHBE literature and the MSE literature have significant overlap in the relevant papers, but the students read from different journals, follow different conferences, and surface different work in group meeting. A breakthrough result in soft matter mechanics may solve an open question in the synthesis group's experimental design — and they never see it.

3. History silos

When a synthesis approach fails, that failure lives in the student's notebook and the PI's memory. When a DFT calculation reveals a mechanism that explains a previous synthesis failure, the connection requires that both facts be simultaneously in someone's head. Usually that someone is the PI. When the PI is traveling, or overloaded, the connection never gets made.

What the Overhead Actually Costs

The direct cost of research silos is not usually measured in papers lost. It is measured in time spent re-establishing context. Every time a student from one subgroup needs to understand what the other subgroup has done, there is a retrieval overhead: finding the right notebook, asking the right person, waiting for lab meeting, scheduling a side conversation. In a lab that runs fast, this overhead is a constant drag.

The graduate student who has been in the lab for four years is not just faster — they are faster because they have memorized the connections between the two sides of the work. When they leave, that integration leaves with them.

The indirect cost is harder to quantify but more significant: the experiment that never gets designed because the DFT result that would have motivated it never reached the right person. The collaboration that never forms because the literature overlap was never surfaced. The grant proposal whose cross-disciplinary angle is weaker than it should be because the synthesis and computational narratives never fully merged.

In a well-run lab, the PI performs this integration manually, continuously. It is often listed nowhere in their job description. It is among the most valuable things they do.

The Scale Problem

A PI with eight people can hold the connections between the two sides of the lab in their head. The lab meeting is short enough to synthesize in real time. The relevant history is recent enough to retrieve from memory.

A PI with fifteen people cannot. The coordination overhead scales nonlinearly. The PI spends more time in meetings, more time answering "what did we decide about X?" questions, more time being the integration layer — and less time doing the research that makes the integration worth doing.

At eight people, the connections fit in one head. At fifteen, the integration starts to slip. At twenty, critical cross-subgroup connections routinely go unmade. The silo problem does not scale gracefully.
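One rough way to see why the overhead grows nonlinearly (this is a standard back-of-the-envelope model, not a figure from any particular lab): if every pair of people is a potential channel that the integrator must track, the number of channels grows as n choose 2.

```python
def pairwise_channels(n: int) -> int:
    # n-choose-2: the number of distinct person-to-person pairs
    # a single integrator would have to keep in view.
    return n * (n - 1) // 2

# The lab sizes discussed above:
print(pairwise_channels(8))   # 28 pairs
print(pairwise_channels(15))  # 105 pairs
print(pairwise_channels(20))  # 190 pairs
```

Nearly doubling the headcount from 8 to 15 almost quadruples the pairwise connections, which is why the same PI who comfortably integrates a small lab visibly struggles at the larger size.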

The failure mode is not dramatic. Nothing breaks. Publications still happen. Students still graduate. But the pace of discovery slows in ways that are almost impossible to attribute: papers take longer, experiments are repeated, literature reviews cover less ground than they should. The lab is doing fine. It could be doing significantly better.

Why Documentation Doesn't Solve It

The obvious answer to the silo problem is documentation. If both subgroups documented their work thoroughly, the integration could happen asynchronously. Any student could read the other group's notes and understand where the work stood.

This answer is technically correct and practically insufficient. Documentation-as-integration fails at exactly the moment it is needed most: under deadline pressure, when the lab is moving fast, and when the student who should be writing notes is instead running the next experiment.

More fundamentally, documentation captures what happened. It does not surface what is relevant. A well-documented CHBE notebook and a well-documented MSE notebook are still two separate notebooks. Connecting them still requires someone to read both and recognize the relationship.

The Architecture of Integration

The silo problem in multi-department labs is not solved by more documentation or better communication habits. It is solved by a shared knowledge layer — one that understands what each subgroup has done, can answer questions about the full body of lab work, and can surface connections that no individual student would have thought to make.

In practical terms: when a student in the computational group asks what ENCUT converged to for the MXene surface calculations, they should get the answer. When a student in the synthesis group asks whether anyone has tried the 600°C annealing approach before, they should find out — including what the computational prediction was for that temperature range, if one exists.
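To make the idea concrete, here is a deliberately minimal sketch of what a shared knowledge layer does at its core: pool notes from every subgroup into one index and answer queries across all of them. Everything here is hypothetical — the `Note` and `LabIndex` classes, the tag-overlap matching, and the example entries are illustrations, not a description of any real system.

```python
from dataclasses import dataclass, field


@dataclass
class Note:
    subgroup: str            # e.g. "CHBE" (synthesis) or "MSE" (computational)
    text: str                # the note itself
    tags: set[str] = field(default_factory=set)


class LabIndex:
    """Toy shared index: stores notes from every subgroup and answers
    queries across all of them by case-insensitive tag overlap."""

    def __init__(self) -> None:
        self.notes: list[Note] = []

    def add(self, note: Note) -> None:
        self.notes.append(note)

    def query(self, *tags: str) -> list[Note]:
        wanted = {t.lower() for t in tags}
        return [
            n for n in self.notes
            if wanted & {t.lower() for t in n.tags}
        ]


# Hypothetical entries from the two subgroups:
index = LabIndex()
index.add(Note("MSE", "ENCUT converged at 520 eV for the MXene surface slab.",
               {"MXene", "ENCUT", "DFT"}))
index.add(Note("CHBE", "600 C anneal gave partial decomposition; see notebook 12.",
               {"MXene", "annealing", "600C"}))

# A synthesis student's question now surfaces the computational result too:
hits = index.query("MXene")
print(sorted(n.subgroup for n in hits))  # ['CHBE', 'MSE']
```

A real system would replace the tag match with semantic retrieval over notebooks, commit logs, and HPC job metadata, but the architectural point is the same: one index, all subgroups, answers that cross the departmental boundary by default.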

The PI has always been that integration layer. The question is whether the PI should continue to hold that role at scale, or whether there is a better architecture.


ResearchOS — a shared knowledge layer for research labs

Indexes your lab's history across subgroups, departments, and HPC clusters. Answers cross-disciplinary questions. Surfaces the connections your PI has been making manually.

Early access — probe.onstratum.com