Instruction: Explain how you decide whether context is relevant enough to help the model.
Context: Checks whether the candidate can explain the core concept clearly and connect it to real production decisions.
The way I'd think about it is this: Useful context is context that helps answer the specific question with enough support to trust the result. Longer context is not automatically better. If half the window is boilerplate, duplicates, or loosely related material, the model has more ways to get distracted while still sounding confident.
I look for three things. First, the evidence has to match the user’s intent, not just share keywords. Second, it has to be sufficient, meaning it contains enough detail to support the answer without forcing the model to invent missing steps. Third, it should be attributable to a source so the system can cite it and engineers can debug it.
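The three checks above can be sketched as a simple gate on retrieved passages. This is a minimal illustration, not a production filter: the `Passage` type, the 0-to-1 retriever score, and the length cutoff as a crude proxy for sufficiency are all assumptions I'm making here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Passage:
    text: str
    source: Optional[str]  # attribution: where the chunk came from, if known
    score: float           # assumed 0-1 intent-match score from the retriever

def keep_passage(p: Passage, min_score: float = 0.6, min_len: int = 40) -> bool:
    """Apply the three checks: intent match, sufficiency, attributability."""
    matches_intent = p.score >= min_score   # semantic match, not just shared keywords
    sufficient = len(p.text) >= min_len     # rough proxy for "enough detail to support the answer"
    attributable = p.source is not None     # can be cited and debugged
    return matches_intent and sufficient and attributable
```

In a real system, sufficiency would need a smarter signal than length, but the shape of the decision stays the same: every passage must pass all three checks before it earns a slot in the window.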
In production, teams often hide weak retrieval by stuffing more chunks into the prompt. That usually hurts latency and faithfulness together. I would rather have a smaller set of high-support passages than a larger pile of vaguely related text.
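That preference is easy to express in code: instead of stuffing every retrieved chunk into the prompt, threshold on support and cap the count. The `(text, score)` pairs, the cutoff, and the cap are illustrative values, not a prescription.

```python
from typing import List, Tuple

def select_context(passages: List[Tuple[str, float]],
                   k: int = 3, min_score: float = 0.6) -> List[Tuple[str, float]]:
    """Keep a small set of high-support passages rather than a large noisy pile."""
    strong = [p for p in passages if p[1] >= min_score]  # drop weakly related chunks
    strong.sort(key=lambda p: p[1], reverse=True)        # strongest support first
    return strong[:k]                                    # cap the window, protect latency
```

The point of the cap is that it forces the retrieval layer to get better instead of letting prompt stuffing paper over weak recall.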
A weak answer is, "More context is always better because the model can ignore what it does not need." In practice, noisy context often makes the answer less faithful and harder to debug.
easy