Users say the assistant quotes the right document but answers the wrong question. How would you investigate?

Instruction: Walk through how you would debug a grounded answer that is still misaligned with user intent.

Context: Tests how the candidate diagnoses the problem, chooses the safest next step, and reasons through recovery.

The way I'd think about it is this: if the bot cites the right document but answers the wrong question, I first check whether we have an intent-matching problem rather than a pure retrieval problem. The system may be anchoring on a high-authority document and then answering it generically instead of respecting the user's actual ask.
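One cheap way to triage for that anchoring pattern is to compare how closely the answer tracks the user's question versus the cited document: an answer that hugs the document but drifts from the question suggests intent mismatch, not retrieval failure. The sketch below is illustrative only; the bag-of-words cosine is a dependency-free stand-in for whatever embedding similarity the real pipeline uses, and the function names, threshold, and example strings are all hypothetical.

```python
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts, a crude stand-in
    for embedding similarity in a real pipeline."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def diagnose(question: str, answer: str, cited_doc: str,
             margin: float = 0.15) -> str:
    """If the answer tracks the cited document much more closely than
    the question, suspect intent anchoring rather than bad retrieval."""
    gap = cosine(answer, cited_doc) - cosine(answer, question)
    return "intent-mismatch" if gap > margin else "aligned"

# Hypothetical examples: an answer that parrots the document
# vs. one that actually addresses the user's ask.
question = "how do I rotate the api key without downtime"
doc = ("the service reads credentials on startup and supports "
       "hot reload of the api key via SIGHUP")
parroted = ("the service reads credentials on startup and supports "
            "hot reload via SIGHUP")
targeted = ("rotate the api key by sending SIGHUP so the service hot "
            "reloads credentials without downtime")
```

Run over a sample of flagged conversations, a check like this separates "retrieval picked the wrong document" from "generation ignored the question", which determines whether the safest next step is in the retriever or the prompt/answering stage.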