The assistant answers confidently even when retrieval returns weak evidence. How would you fix that?

Instruction: Explain how you would make a RAG assistant fail more honestly.

Context: Tests how the candidate diagnoses the problem, chooses the safest next step, and reasons through recovery.

Official answer (preview):

I would stop the system from treating weak retrieval as permission to improvise. The fix is a combination of product behavior and model behavior. On the product side, I want an evidence threshold that can trigger abstention, clarification, or a narrower answer. On the model side, I want prompts and post-processing...
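The evidence-threshold routing described above can be sketched as a small gate in front of the generator. This is a minimal illustration, not the official answer's implementation: the class names, threshold values, and response shapes are all assumptions, and real thresholds would be tuned against evaluation data.

```python
from dataclasses import dataclass

@dataclass
class RetrievedChunk:
    text: str
    score: float  # retriever similarity score, assumed normalized to [0, 1]

# Illustrative cutoffs (assumptions, not tuned values):
ABSTAIN_BELOW = 0.35  # below this, refuse or ask a clarifying question
HEDGE_BELOW = 0.60    # below this, answer narrowly and surface uncertainty

def route_answer(question: str, chunks: list[RetrievedChunk]) -> dict:
    """Decide whether to answer, hedge, or abstain based on evidence strength."""
    best = max((c.score for c in chunks), default=0.0)
    if best < ABSTAIN_BELOW:
        # Weak or empty retrieval: abstain and invite clarification
        # rather than letting the model improvise.
        return {
            "mode": "abstain",
            "message": "I couldn't find enough evidence to answer this. "
                       "Could you rephrase or narrow the question?",
        }
    if best < HEDGE_BELOW:
        # Moderate evidence: give a narrower answer and expose what it rests on.
        support = [c.text for c in chunks if c.score >= ABSTAIN_BELOW]
        return {
            "mode": "hedged",
            "evidence": support,
            "message": "Based on limited evidence, here is a partial answer.",
        }
    # Strong evidence: answer normally, still citing the supporting chunks.
    return {
        "mode": "answer",
        "evidence": [c.text for c in chunks if c.score >= HEDGE_BELOW],
    }
```

The key design choice is that low retrieval scores change the *product behavior* (abstain or clarify) instead of merely lowering the model's tone; the downstream prompt would then only see the "hedged" or "answer" branches with their evidence attached.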
