How do you decide whether an agent should ask a human for approval?

Instruction: Explain how you would place human approvals in an agent workflow.

Context: Checks whether the candidate can explain the core concept clearly and connect it to real production decisions.

Example Answer

The way I'd approach it in an interview is this: I decide based on reversibility, risk, and ambiguity. If the action has meaningful external consequences (sending a message to a customer, issuing a refund), cannot be cleanly undone, touches regulated or sensitive data, or depends on subjective judgment the system does not handle well, I want a human approval step.

I also think about confidence. Even a normally low-risk action may deserve review if the evidence is weak, the state is inconsistent, or the tool output looks unusual. Approval should not just be static policy. It should also respond to uncertainty.

The goal is not to slow the system down for comfort. It is to put humans where their judgment reduces real business risk.
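The answer above combines a static policy (reversibility, sensitivity, external effects) with a dynamic one (confidence). A minimal sketch of that gating logic might look like the following; all field names, the `ProposedAction` type, and the `confidence_floor` threshold are hypothetical, not part of any real framework.

```python
from dataclasses import dataclass


@dataclass
class ProposedAction:
    # Hypothetical fields; a real system would derive these from
    # tool metadata and model output.
    reversible: bool              # can the action be cleanly undone?
    touches_sensitive_data: bool  # regulated or sensitive data involved?
    external_effects: bool        # e.g., emails sent, payments made
    confidence: float             # system confidence in [0, 1]


def needs_human_approval(action: ProposedAction,
                         confidence_floor: float = 0.8) -> bool:
    # Static policy: irreversible, sensitive, or externally visible
    # actions always require approval.
    if not action.reversible or action.touches_sensitive_data:
        return True
    if action.external_effects:
        return True
    # Dynamic policy: even a normally low-risk action gets reviewed
    # when the evidence behind it is weak.
    return action.confidence < confidence_floor
```

The point of the sketch is the last line: approval is not only a fixed list of dangerous actions but also a response to uncertainty, which is exactly the distinction the answer draws.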

Common Poor Answer

A weak answer reduces the decision to a single rule: high-value actions need approval. Value matters, but reversibility, ambiguity, and downstream risk matter just as much.

Related Questions