Instruction: Describe how you would place human review in a coding-agent workflow.
Context: Checks whether the candidate can explain the core concept clearly and connect it to real production decisions.
The way I'd approach it in an interview is this: I ask for human review whenever the change crosses a risk boundary. That means broad scope, unclear product implications, schema or infra changes, destructive commands, weak validation evidence, or any place where local correctness is not enough to judge the real impact.
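One way to make these risk boundaries concrete is to encode them as an explicit gating check. This is a minimal, hypothetical sketch: the `ProposedChange` fields, the `scope_limit` threshold, and the reason strings are all illustrative assumptions, not a prescribed interface.

```python
from dataclasses import dataclass

@dataclass
class ProposedChange:
    # All fields are illustrative stand-ins for signals a real pipeline
    # would extract from the agent's patch and its validation run.
    files_touched: int
    touches_schema: bool = False
    touches_infra: bool = False
    runs_destructive_commands: bool = False
    tests_passed: bool = False
    product_impact_clear: bool = True

def needs_human_review(change: ProposedChange, scope_limit: int = 10) -> list:
    """Return the reasons a change should be routed to a human reviewer.

    An empty list means the agent may proceed autonomously.
    """
    reasons = []
    if change.files_touched > scope_limit:
        reasons.append("broad scope")
    if not change.product_impact_clear:
        reasons.append("unclear product implications")
    if change.touches_schema or change.touches_infra:
        reasons.append("schema or infra change")
    if change.runs_destructive_commands:
        reasons.append("destructive commands")
    if not change.tests_passed:
        reasons.append("weak validation evidence")
    return reasons

# A small, well-tested refactor passes the gate; a schema migration does not.
safe = ProposedChange(files_touched=2, tests_passed=True)
risky = ProposedChange(files_touched=3, touches_schema=True, tests_passed=True)
print(needs_human_review(safe))   # []
print(needs_human_review(risky))  # ['schema or infra change']
```

The point of returning a list of reasons, rather than a bare boolean, is that the reviewer sees why the change was escalated, which keeps review targeted instead of blanket.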
I also like review when the agent’s reasoning path is fragile, even if the patch looks plausible. A correct-looking change with weak grounding is a bad candidate for autonomy because it is hard to trust and hard to maintain.
Human review should not be a punishment for the agent. It should be used where human judgment changes the outcome.
What I always try to avoid is giving a process answer that sounds clean in theory but falls apart once the data, users, or production constraints get messy.
A weak answer says every agent patch should get human review forever. The stronger answer explains where review meaningfully changes the risk.
easy