Stratum Journal
Research Computing · March 5, 2026 · 6 min read

LAMMPS and VASP Workflow Management: What Works and What Doesn't


If your lab runs LAMMPS molecular dynamics or VASP density functional theory calculations on an HPC cluster, you have a workflow management problem whether or not you've named it yet. Jobs fan out across hundreds of cores, parameters iterate across system classes, and the reasoning behind each configuration choice accumulates entirely in the heads of whoever ran the jobs. The tools available to manage this fall into two broad categories: those that help you run calculations, and those that help you remember what you learned from them. Most labs have the first. Almost none have the second.

The Tools That Exist

Here is an honest assessment of the main options computational materials and chemistry labs reach for, what they actually solve, and where they stop.

Workflow Tools for LAMMPS / VASP Labs
AiiDA
Solves: Provenance tracking for DFT workflows — records which inputs produced which outputs with a full DAG. Strong VASP and Quantum ESPRESSO plugins.
Doesn't solve: The why. AiiDA records what ran, not why it ran. The decision to try PBE vs. PBE+U for a particular system class, or to set ENCUT to 600 eV instead of the default, is invisible to AiiDA.
Fireworks + atomate
Solves: Automated workflow chaining for VASP calculations — relax → static → DFPT → HSE in sequence, with MongoDB-backed state tracking.
Doesn't solve: Deviations. Fireworks automates a predetermined protocol; when researchers deviate from it (and they always do), the rationale for the deviation is stored nowhere. The parameter choices baked into the workflow itself also have no documented history.
ASE (Atomic Simulation Environment)
Solves: Python interface for LAMMPS, VASP, and 40+ other codes. Makes it easy to script calculations, set up structures, and parse outputs.
Doesn't solve: Memory. ASE is a library, not a knowledge system. ASE scripts accumulate in repositories with no record of why specific interface choices or parameter defaults were selected.
Pyiron
Solves: Interactive workflow management for atomistic simulations — good LAMMPS integration, Jupyter-based, good for teaching and reproducibility.
Doesn't solve: Judgment. Pyiron records the workflow, not the scientific reasoning behind it: why this potential, why this thermostat, why this timestep for this system. That lives in the notebook author's head.
SLURM / PBS logs
Solves: Complete record of what ran: job IDs, wall times, exit codes, resource allocations. Free, always-on, authoritative.
Doesn't solve: Anything above the execution layer. SLURM knows job 47283 failed with exit code 137 (killed, most often by the out-of-memory handler); it has zero information about the scientific rationale. It doesn't know what the researcher was trying to find out.
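To make the execution/rationale split concrete, here is an illustrative SLURM batch script; the partition, module name, and binary are placeholders, not a recommended configuration:

```shell
#!/bin/bash
# Illustrative SLURM job script (partition, module, and binary names are placeholders).
#SBATCH --job-name=vasp-relax
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=64
#SBATCH --time=24:00:00
#SBATCH --partition=standard

# SLURM will record everything above this line with perfect fidelity:
# job ID, wall time, exit code, allocation. What it will never record
# is why this run exists. On most clusters that survives, if at all,
# as a comment like:
# "retry of relax with EDIFFG tightened after forces failed to converge"

module load vasp
srun vasp_std
```

Everything the scheduler captures is in the `#SBATCH` directives and accounting database; everything a future lab member needs to reuse the run is in the one comment that most scripts never contain.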

What None of These Tools Capture

Every tool above addresses a real problem, but they all share the same gap: none of them captures the scientific reasoning behind workflow decisions. That reasoning is the knowledge that actually walks out the door when a graduate student or postdoc leaves your lab.

Concretely: when your lab has been running LAMMPS with the ReaxFF potential for a particular system for three years, knowledge like the following exists somewhere in your group: which force-field parameterization holds up for which chemistries, what timestep is actually stable, why the thermostat and charge-equilibration settings were tuned the way they were, and which failure modes appear at scale.

None of that knowledge is in AiiDA, Fireworks, ASE, Pyiron, or SLURM. It is in the head of whoever ran those jobs — typically a PhD student in year 3 or 4, with 2 or 3 years left before they graduate and take it with them.
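An illustrative LAMMPS input fragment shows where those undocumented choices live; the force-field file, species, and numeric values here are placeholders, not recommendations:

```
# Illustrative LAMMPS input fragment for a ReaxFF run
# (force-field file, species, and values are placeholders, not a recommendation)
units           real
atom_style      charge

pair_style      reaxff NULL
pair_coeff      * * ffield.reax.cho C H O
fix             qeq all qeq/reaxff 1 0.0 10.0 1e-6 reaxff

# The choices that never get documented: why 0.25 fs and not 0.5,
# why this damping constant, which parameterization was rejected first.
timestep        0.25
fix             md all nvt temp 300.0 300.0 100.0
```

Pyiron can rerun this file perfectly. What it cannot tell a new student is which of these lines took six months of failed runs to settle.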

The compute stack is well-managed. The knowledge layer above it almost never is.

What Actually Works for the Knowledge Layer

The workflow tools above are worth using. AiiDA genuinely improves DFT provenance. Pyiron genuinely improves reproducibility for LAMMPS workflows. The problem is treating them as knowledge management systems when they are execution management systems.

The knowledge layer — the reasoning behind parameter choices, the failure modes your lab has mapped for specific system classes, the accumulated judgment of researchers who have since graduated — requires a different kind of system. One that is active during research rather than visited after publication. One that connects to your actual workflows rather than sitting in a separate documentation tool. One that grows more useful over time as more people contribute knowledge, rather than decaying as the person who set it up graduates.
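As a minimal sketch of what "queryable decision history" means in practice (this is illustrative only; the schema and field names are invented, not any real tool's data model), decisions can be recorded as structured rows next to the jobs they belong to:

```python
# Minimal sketch of a queryable decision log (illustrative only; the
# schema and field names are invented, not a real product's data model).
import sqlite3

conn = sqlite3.connect(":memory:")  # a real lab would use a shared file or server
conn.execute("""
    CREATE TABLE decisions (
        job_id     TEXT,
        system     TEXT,
        parameter  TEXT,
        value      TEXT,
        rationale  TEXT
    )
""")

# Record the reasoning at submission time, next to the job itself.
conn.execute(
    "INSERT INTO decisions VALUES (?, ?, ?, ?, ?)",
    ("47283", "TM-oxide surface", "ENCUT", "600",
     "Default cutoff underconverged surface energies in earlier runs"),
)

# The question a new student actually asks, answered from decision history:
rows = conn.execute(
    "SELECT value, rationale FROM decisions "
    "WHERE system LIKE '%oxide%' AND parameter = 'ENCUT'"
).fetchall()
print(rows)
```

The point is not the storage engine; it is that the rationale column exists at all, and that it is written at decision time rather than reconstructed at thesis-writing time.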

This is what we built ResearchOS to do. It is not a replacement for AiiDA or SLURM — it connects to them. It builds a queryable institutional memory layer above your existing compute stack. When a new student joins and asks "what ENCUT has the lab used for transition metal oxide surface calculations?" the answer comes from the lab's actual decision history, not a wiki nobody has touched in two years.
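The answer to that kind of question is a few input-file lines plus the reasoning around them. An illustrative INCAR fragment (values are placeholders, not a recommendation) shows how little of it any provenance tool sees:

```
# Illustrative VASP INCAR fragment (values are placeholders, not a recommendation)
ENCUT  = 600      # the number every provenance tool records
ISMEAR = 0        # Gaussian smearing
SIGMA  = 0.05
# The part none of them records: why 600 and not the POTCAR default --
# for example, surface energies that refused to converge in earlier runs.
```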


Probe / ResearchOS

ResearchOS is the institutional memory layer for computational research labs. It connects to your HPC environment and builds a queryable record of the scientific reasoning behind your workflows — the layer AiiDA, Fireworks, and SLURM don't touch. We're currently in early access with founding labs at R1 universities.

probe.onstratum.com →
Sean / Stratum
© 2026 Stratum · hello@onstratum.com · onstratum.com