application bridge interview prep

What I'd say, and what I'd ask.

Every other page on this site is the long form of an answer. This page is the short form — the ten-second versions of the questions I'd expect to come up in a Senior Research Engineer interview at a systematic-trading firm, and the questions I'd want to ask back in the second half of the conversation.

The answers will sound different in the room — preparation isn't a script. But the direction of each answer is what I'd defend, and that's what I'm betting this page captures.

What I'd expect a senior research engineer interview to spend the most time on.

  • Q ·

    Walk me through how you'd design the research framework if you started from scratch.

    A ·

    Three layers, owned by one team. (1) Features as a build system — declarative, content-addressed, point-in-time enforced at the boundary so a researcher cannot write a feature that consumes the future. (2) Model spec as a small dataclass + a single executor that owns walk-forward partitioning and the evaluation report. (3) A registry that stores `(code, features, data_window, hyperparams, eval_report, owner)` and refuses to load anything that doesn't pass a schema-hash check at deploy time. The interesting decisions are at the boundaries — what the framework rejects, not what it accepts.
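
The refusal behaviour of layers (2) and (3) can be sketched in a few lines. This is a hedged illustration, not the real system: `ModelSpec`, `Registry`, and the hash-over-spec scheme are invented names standing in for the idea that the registry refuses to load anything whose spec has drifted since promotion.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field

@dataclass(frozen=True)
class ModelSpec:
    name: str
    features: tuple          # content-addressed feature ids
    data_window: tuple       # (start, end) as ISO dates
    hyperparams: dict = field(default_factory=dict)

    def schema_hash(self) -> str:
        # Hash the structural parts of the spec; the deploy-time check
        # compares this against what the registry recorded at promotion.
        payload = json.dumps(
            {"features": list(self.features),
             "data_window": list(self.data_window),
             "hyperparams": self.hyperparams},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

class Registry:
    def __init__(self):
        self._entries = {}

    def promote(self, spec: ModelSpec, eval_report: dict, owner: str) -> str:
        model_id = spec.schema_hash()[:12]
        self._entries[model_id] = {
            "spec": asdict(spec),
            "hash": spec.schema_hash(),
            "eval_report": eval_report,
            "owner": owner,
        }
        return model_id

    def load(self, model_id: str, spec: ModelSpec) -> dict:
        entry = self._entries[model_id]
        if entry["hash"] != spec.schema_hash():
            # Refuse to serve a model whose spec no longer matches
            # what was promoted.
            raise ValueError(f"schema hash mismatch for {model_id}")
        return entry
```

The interesting line is the `raise`: the boundary decision lives in the registry, not in researcher code.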

  • Q ·

    What's the most subtle leakage bug you've caught?

    A ·

    Re-fitting a global scaler on the full sample before walk-forward partitioning. The Sharpe doubled in research, dropped to nothing in paper trading. It wasn't a bug in any one line — it was a contract violation between two engineers who each thought the other handled it. Fixed by enforcing per-fold scaler refit inside the framework's `walk_forward` helper; subsequent attempts to opt out of that contract have to go through code review.
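
The shape of the fix fits in a toy sketch. Names here are illustrative, not the framework's actual helper; the one load-bearing detail is that the scaler's statistics come from the training window of *this* fold, never from the full sample.

```python
def walk_forward(series, n_folds):
    """Yield (scaled_train, scaled_test) pairs over an expanding train window."""
    fold = len(series) // (n_folds + 1)
    for i in range(1, n_folds + 1):
        train = series[: i * fold]
        test = series[i * fold : (i + 1) * fold]
        # Per-fold refit: mean/std computed on the training slice only,
        # so the test window can never leak into the scaler.
        mu = sum(train) / len(train)
        sd = (sum((x - mu) ** 2 for x in train) / len(train)) ** 0.5 or 1.0
        scale = lambda xs: [(x - mu) / sd for x in xs]
        yield scale(train), scale(test)
```

The buggy version computed `mu` and `sd` once, outside the loop, on the whole series.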

  • Q ·

    How would you measure whether the platform is working?

    A ·

    Four numbers, looked at together. (1) Median time from idea-in-notebook to model-in-registry. (2) Median time from candidate-promotion to live-canary. (3) Number of distinct quants who promoted at least one model this quarter. (4) The complement of (3) — quants who didn't. If (4) is large, the framework is too foreign, not too small. If (3) is large but (1) is climbing, we're bottlenecking on review, not on engineering.
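
Those four numbers are cheap to compute if promotions are events in the registry. A minimal sketch, assuming an invented event shape of `(quant, idea_ts, promotion_ts, canary_ts)`:

```python
from statistics import median

def platform_metrics(events, all_quants):
    """events: (quant, idea_ts, promotion_ts, canary_ts) tuples."""
    promoters = {q for q, *_ in events}
    return {
        # (1) idea-in-notebook to model-in-registry
        "median_idea_to_registry": median(p - i for _, i, p, _ in events),
        # (2) candidate-promotion to live-canary
        "median_promotion_to_canary": median(c - p for _, _, p, c in events),
        # (3) distinct quants who promoted at least one model
        "quants_promoted": len(promoters),
        # (4) the complement of (3)
        "quants_not_promoted": len(set(all_quants) - promoters),
    }
```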

  • Q ·

    Where does Python end and a faster language begin in your stack?

    A ·

    Python everywhere a researcher reads or writes code. The faster language sits at the data plane: ingest, serialisation, the streaming feature runtime. The boundary is a typed Arrow channel — same schema as the parquet lake, zero copy on the hot path. The point isn't Rust-for-Rust's-sake; it's that the latency-sensitive surface is small and contained, so Python carries the parts where developer leverage matters most.
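
What "same schema, enforced" means at that boundary can be sketched library-free: both runtimes derive a fingerprint from the ordered (name, type) list and the Python side refuses to attach on mismatch. The names and the fingerprint scheme below are illustrative, not pyarrow's API.

```python
import hashlib

def schema_fingerprint(fields):
    """fields: ordered (name, type_string) pairs, e.g. ("ts", "timestamp[ns]")."""
    blob = "\n".join(f"{name}:{typ}" for name, typ in fields)
    return hashlib.sha256(blob.encode()).hexdigest()

def attach(channel_fingerprint, expected_fields):
    # Both sides compute the same hash from the same ordered schema;
    # a mismatch means one side drifted, and we fail loudly at connect time.
    if channel_fingerprint != schema_fingerprint(expected_fields):
        raise RuntimeError("schema drift at the Python/Rust boundary")
    return True
```

Field order matters deliberately: a reordered schema is a different wire layout, so it must produce a different fingerprint.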

Realistic scenes, not abstract hypotheticals. The role mandate is to 'expand the current (limited) scope of the framework and platform' — every scenario below is something I'd expect to land on my desk in the first six months.

  • Q ·

    A senior quant says the framework slows them down and they'll keep using their notebooks. What do you do?

    A ·

    Sit next to them for a morning. Watch what they actually do, not what they say they do. Roughly half the time, the friction is real and the framework is missing something obvious to them but invisible to the team. The other half, the friction is one specific cell — the promote step, usually — and the rest of their workflow already runs through the framework. Fix the real friction; for the cell-specific complaints, take the cell and make it free. Adoption is bought one heavy user at a time.

  • Q ·

    A production model is bleeding money. Walk me through what you do in the first hour.

    A ·

    Roles first. Trading-floor and risk own the position; my job is to give them the fastest possible read on "is the model receiving the data it expects?" — that's the freshness + schema board. If those are green, the model is doing what it was trained to do, on the data it was trained for, and the conversation moves to research triage rather than infra. If they're red, the registry has a one-call rollback path; we use it and write the post-mortem after. The point is to make the decision tree short and pre-decided, not heroic.
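
That tree is small enough to be literal code. A toy sketch with invented names, to show how little should be decided during the incident:

```python
from enum import Enum

class Action(Enum):
    RESEARCH_TRIAGE = "model sees expected data; hand to research triage"
    ROLLBACK = "data contract broken; one-call registry rollback, post-mortem after"

def first_hour(freshness_ok: bool, schema_ok: bool) -> Action:
    # Green boards: the model is doing what it was trained to do,
    # on the data it was trained for.
    if freshness_ok and schema_ok:
        return Action.RESEARCH_TRIAGE
    # Any red board: infra problem, roll back first.
    return Action.ROLLBACK
```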

  • Q ·

    Compliance asks how a specific trade from six months ago was generated. You have an hour.

    A ·

    Every signal carries a `model_id`. Every fill carries the `signal_id`. The registry pins `model_id` → (code commit, feature graph hash, data window, eval report, owner). That's the join. One query gets the trade → signal → model → exact feature snapshot the model was trained on. The hour is spent assembling the narrative for compliance, not chasing the data. If any of those joins isn't reproducible, that's the platform bug to fix — not a one-off rebuild.
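
A sketch of that join, with invented table and column names (sqlite standing in for whatever store the registry actually runs on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fills    (fill_id TEXT, signal_id TEXT);
CREATE TABLE signals  (signal_id TEXT, model_id TEXT);
CREATE TABLE registry (model_id TEXT, code_commit TEXT,
                       feature_graph_hash TEXT, data_window TEXT, owner TEXT);
INSERT INTO fills    VALUES ('F1', 'S1');
INSERT INTO signals  VALUES ('S1', 'M1');
INSERT INTO registry VALUES ('M1', 'abc123', 'feat9f2', '2023-01..2023-06', 'alice');
""")

# trade -> signal -> model -> pinned lineage, in one query
LINEAGE = """
SELECT f.fill_id, s.signal_id, r.model_id, r.code_commit,
       r.feature_graph_hash, r.data_window, r.owner
FROM fills f
JOIN signals  s ON s.signal_id = f.signal_id
JOIN registry r ON r.model_id  = s.model_id
WHERE f.fill_id = ?
"""
row = conn.execute(LINEAGE, ("F1",)).fetchone()
```

If that query can come back empty for a live trade, that is the platform bug.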

What I'd want the team to know about how I work before they decide to spend a year with me.

  • Q ·

    What kind of work do you not want to do?

    A ·

    Rewriting working code because it offends my taste. I've done it; I regret it. The platform's second year is always shaped by what the first year shipped, not by what the engineer who arrived in year two would have preferred. I'd rather make the existing thing slightly better and earn the right to redesign in quarter three.

  • Q ·

    How do you handle disagreement with a quant who outranks you on the alpha but reports to your function?

    A ·

    Their alpha thesis is theirs. The framework's contracts are mine. If a contract is in their way and they're right that it shouldn't be, we relax the contract together with the team. If they're wrong, I owe them a clear no and a written reason — and they owe me a chance to convince them. Almost every productive disagreement I've had has come out of treating those as the same conversation, not separate fights.

  • Q ·

    What's a strong opinion you hold loosely?

    A ·

    That observability for trading should split the freshness/calibration/PnL boards across two different on-call rotations. I believe it strongly, and I've seen what happens when you mix them — but I haven't worked on a team where it was hard to actually do that, and at a different scale or culture it might be wrong.

An interview that doesn't flip in the second half is a screening, not a conversation. Questions I'd genuinely want to ask, calibrated to a Senior Research Engineer seat on this team:

  • Q ·

    What does the current framework do well that you'd be sad to lose in a redesign?

    A ·

    Tells me what the bright lines are before I propose anything that crosses one. The answer is also a real signal of how much taste the team brings to its own work.

  • Q ·

    Who on the team has the strongest opinion about how the platform should evolve, and where do they want it to go?

    A ·

    A senior IC seat at a small, non-hierarchical firm is in part a partnership with one or two people who have a vision. I'd like to know whose, and what.

  • Q ·

    What's a model in production today whose lineage the team is least confident about?

    A ·

    Every platform has at least one. The answer to this question — and how comfortable the team is naming it — tells me more about culture than any "what's your culture like?" prompt could.

  • Q ·

    How do the research engineers in this office actually collaborate with the ones in your other offices?

    A ·

    If the firm has a global rollout story or a rotation programme, I want to know whether that's a sentence on the careers page or a working rhythm — how decisions actually get made when the team is split across timezones and the work is concentrated in one office.

A portfolio is a static thing; an interview is a live one. The gap between them is where good candidates fumble — not because they don't know their work, but because the room asks for it in a different shape than the resume did. Writing this page before the conversation is the cheapest practice for shrinking that gap.