What makes tool descriptions misleading to a model even when they read well to humans?

Instruction: Explain why human-readable tool descriptions can still mislead an AI system.

Context: Checks whether the candidate can explain the core concept clearly and connect it to real production decisions.

Example Answer

The way I'd think about it is this: Descriptions become misleading when they imply judgment the tool does not actually perform, hide important preconditions, or use natural language that sounds broad and helpful without defining exact boundaries. Humans fill in those gaps from experience. Models often take them literally, or apply them inconsistently.

Another failure mode is overlap. Two tool descriptions can both read well and still give the model no real basis for choosing one over the other. What matters is not just readability. It is whether the description helps the model disambiguate actions under real user intents.
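To make the overlap problem concrete, here is a hypothetical sketch (the tool names and wording are invented for illustration) of two descriptions that each read fine in isolation but give a model no basis to pick one, next to a pair that states the boundary explicitly:

```python
# Hypothetical example: both descriptions read well, yet nothing tells
# a model which one to call for a given user request.
overlapping = {
    "search_docs": "Search the documentation for relevant information.",
    "lookup_kb": "Find helpful information in the knowledge base.",
}

# A disambiguated pair draws the boundary and names the alternative.
disambiguated = {
    "search_docs": (
        "Full-text search over public product documentation. "
        "Use for how-to and API questions. Do NOT use for "
        "customer-specific data; use lookup_kb for that."
    ),
    "lookup_kb": (
        "Retrieve internal support articles for a specific customer "
        "account. Requires a customer_id; returns article snippets."
    ),
}
```

The second pair is longer, but the extra words are all decision-relevant: each description names the cases it covers, the cases it refuses, and the precondition it needs.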

I like descriptions that say when to use the tool, when not to use it, and what kind of result to expect back. That usually helps more than writing polished marketing-style prose.

Common Poor Answer

A weak answer claims a description is good if it sounds natural and detailed. For a model, what matters is decision clarity, not prose quality.

Related Questions