The same user intent can be satisfied by chat, search, or tool execution, and the model keeps choosing the expensive path. How would you rebalance it?

Instruction: Describe how you would tune path selection when the assistant overuses costly actions.

Context: Tests how the candidate diagnoses the problem, chooses the safest next step, and reasons through recovery.

I would make the cheaper safe paths easier to choose and explicitly preferred for the intents they serve well. If the model keeps taking the expensive route, that usually means the control surface makes the expensive route look more generally capable or more trustworthy than the alternatives.

I would define routing rules for obvious cases,...
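The approach above, explicit rules for obvious intents plus a preference for cheaper paths elsewhere, can be sketched as a rule-first router with a cost penalty in the fallback. Everything here (path names, cost values, the `cost_weight` parameter) is a hypothetical illustration, not a reference to any particular framework.

```python
# Hypothetical per-path costs; the numbers are illustrative assumptions.
PATH_COSTS = {"chat": 1.0, "search": 3.0, "tool": 10.0}

# Explicit routing rules for obvious intents: cheap paths win outright
# when they serve the intent well, bypassing the scorer entirely.
INTENT_RULES = {
    "smalltalk": "chat",
    "definition": "chat",
    "fresh_facts": "search",
    "side_effects": "tool",
}

def route(intent: str, path_scores: dict[str, float],
          cost_weight: float = 0.05) -> str:
    """Pick a path: rule first, otherwise cost-penalized score."""
    if intent in INTENT_RULES:
        return INTENT_RULES[intent]
    # Fallback: subtract a cost penalty so the expensive path must
    # clearly outscore the cheaper ones before it gets selected.
    return max(path_scores,
               key=lambda p: path_scores[p] - cost_weight * PATH_COSTS[p])

# Rule fires regardless of scores:
print(route("smalltalk", {"chat": 0.4, "search": 0.5, "tool": 0.6}))
# Unknown intent: "tool" scores highest raw, but the penalty tips it to "chat":
print(route("unknown", {"chat": 0.55, "search": 0.5, "tool": 0.6}))
```

The design point is that the expensive path is never removed, only taxed: a genuinely tool-requiring request with a decisive score margin still routes to the tool, while near-ties resolve to the cheaper safe path.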
