Research ↔ production parity
The five project writeups aren't a portfolio of isolated tools. They're one platform, split along the seams a research engineer cares about: where data crosses into research, where research crosses into production, and where production crosses into observability. The diagram below is how I'd draw it on a whiteboard in the first meeting.
The two arrows from feature-forge carry the same graph — one materialised, one streaming. The boundary is the only place training and serving can diverge, so it gets a schema hash check at every emit.
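A minimal sketch of that boundary check, assuming a feature schema represented as a name → dtype mapping. The names `schema_hash` and `check_emit` are hypothetical illustrations, not feature-forge's actual API:

```python
import hashlib
import json

def schema_hash(schema: dict) -> str:
    """Deterministic hash over a feature schema: sorted keys, canonical JSON."""
    canonical = json.dumps(schema, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def check_emit(batch_schema: dict, stream_schema: dict) -> None:
    """Refuse to emit if the materialised and streaming views have diverged."""
    if schema_hash(batch_schema) != schema_hash(stream_schema):
        raise ValueError("training/serving schema mismatch at emit boundary")

schema = {"price_z": "float64", "vol_5m": "float64", "book_imbalance": "float32"}
check_emit(schema, dict(schema))           # same graph on both paths: passes
drifted = {**schema, "vol_5m": "float32"}  # one dtype drifted on the streaming path
try:
    check_emit(schema, drifted)
except ValueError as e:
    print(e)
```

Hashing a canonical serialisation (rather than comparing dicts directly) means the same check works wherever the two paths meet, even when only the hash travels with the emitted batch.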
The registry → signal-stream arrow fires only for artifacts that carry a signed evaluation report and an explicit human approver. The API enforces both; neither can be argued past at deploy time.
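A sketch of how that gate might look as an API-level check, assuming the registry models promotions through something like a `promote` call. The `Artifact` shape and field names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Artifact:
    name: str
    eval_report_signature: Optional[str]  # produced by the evaluation pipeline
    approver: Optional[str]               # explicit human sign-off, not a default

def promote(artifact: Artifact) -> str:
    """Gate on the registry -> signal-stream edge: both conditions are hard requirements."""
    if not artifact.eval_report_signature:
        raise PermissionError(f"{artifact.name}: missing signed evaluation report")
    if not artifact.approver:
        raise PermissionError(f"{artifact.name}: no human approver recorded")
    return f"promoted {artifact.name} (approved by {artifact.approver})"

print(promote(Artifact("model-v3", "sig:abc123", "alice")))
```

Encoding both requirements as raised errors in the promotion call itself is what makes them unarguable: there is no code path to production that skips the function.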
The observability box has two inbound edges — lineage from the registry, metrics from signal-stream — and routes to two owners: freshness breaches page the on-call rotation, and alpha decay surfaces to research.
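The two outbound edges can be sketched as a small routing function. The thresholds and event shapes here are illustrative assumptions, not the platform's real SLAs:

```python
FRESHNESS_SLA_S = 60      # hypothetical SLA: features staler than 60s page on-call
ALPHA_DECAY_FLOOR = 0.5   # hypothetical: flag when rolling IC falls below half its baseline

def route_event(event: dict) -> str:
    """Fan observability events out to their owners, mirroring the diagram's two edges."""
    if event["kind"] == "freshness" and event["staleness_s"] > FRESHNESS_SLA_S:
        return "page:on-call"            # operational problem: someone gets woken up
    if event["kind"] == "alpha" and event["ic"] < event["baseline_ic"] * ALPHA_DECAY_FLOOR:
        return "surface:research"        # signal problem: shows up on a research dashboard
    return "log:ok"                      # within bounds: recorded, nobody interrupted
```

The point of the split is that the two failure modes have different owners and different urgency: a stale feature is an incident, a decaying signal is a research question.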
The diagram exists because the question "what does the platform do?" has many wrong shapes and one right one. When a teammate or stakeholder asks, this is the shape — not the codebase tour, not the org chart, not the per-project README. A research engineer's job is to keep that shape coherent as the codebase grows and as the people on it rotate.