Instruction: Explain why prompt injection is a distinct safety problem in AI systems.
Context: Checks whether the candidate can explain the core concept clearly and connect it to real production decisions. Explain why prompt injection is a distinct safety problem in AI systems.
The way I'd explain it in an interview is this: Ordinary bad input is mostly a content-quality problem. Prompt injection is a control-plane problem. The attacker is not just providing messy data. They are trying to smuggle instructions through a channel the system should have treated as untrusted content.
That distinction matters because the fix is different. Better prompting may help with ordinary ambiguity, but prompt injection requires separation between instructions, data, tool use, and policy enforcement. If the system cannot tell content from control, it is easy to steer.
So I treat prompt injection less like a rude user and more like an adversarial attempt to reshape how the system reasons and acts.
A weak answer is saying prompt injection is just clever bad input. The important difference is that it attacks system control, not just output quality.
easy
easy
easy
easy
easy
easy