Instruction: Outline a detailed prompt sequence that guides a language model through the process of identifying a bug in a given snippet of code, suggests possible causes, and recommends corrective actions. Include considerations for different programming languages and error types.
Context: This question gauges the candidate's ability to architect complex interactions with language models, tailor prompts based on context (e.g., programming language, error type), and their understanding of software debugging principles. It tests the candidate's skill in breaking down a complex task into manageable steps that a language model can assist with, demonstrating depth in both prompt engineering and software development.
I would split code debugging into stages rather than ask for one giant answer. A good sequence is: first summarize the bug, then inspect likely root causes, then propose the smallest fix, then suggest tests.
For example:
Step 1: Read the code and explain the likely bug in plain English.
Step 2: List the top 3 root-cause hypotheses and rank them by likelihood.
Step 3: Propose the smallest safe code change.
Step 4: Explain what test cases should be added or run to verify the fix.
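Assuming a generic chat-style API, the four steps above could be driven by a small prompt builder that carries shared context (language, error type, code) into each stage. The function and stage names here are illustrative, not a specific library's API:

```python
# A minimal sketch of the staged debugging sequence described above.
# Each stage gets its own prompt, sharing a context header so the model
# can tailor hypotheses to the language and error type.

STAGES = [
    "Read the code and explain the likely bug in plain English.",
    "List the top 3 root-cause hypotheses and rank them by likelihood.",
    "Propose the smallest safe code change that addresses the top hypothesis.",
    "Explain what test cases should be added or run to verify the fix.",
]

def build_prompts(code, language="unknown", error_type="unknown"):
    """Return one prompt per debugging stage, each with shared context."""
    header = (
        f"Language: {language}\n"
        f"Reported error type: {error_type}\n"
        f"Code:\n{code}\n\n"
    )
    return [
        f"{header}Step {i}: {task}"
        for i, task in enumerate(STAGES, start=1)
    ]

prompts = build_prompts(
    "def div(a, b): return a / b",
    language="python",
    error_type="ZeroDivisionError",
)
```

In practice each prompt would be sent in turn, feeding the model's earlier answers back as conversation history so the fix in Step 3 is grounded in the diagnosis from Steps 1 and 2.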
This structure works because debugging is a reasoning workflow: separating diagnosis, fix, and verification discourages shallow rewrite-everything answers and makes the model's reasoning traceable. The same skeleton adapts across programming languages and error types, since the context given at each stage names the language and error class, and the hypotheses in Step 2 should differ accordingly (a NullPointerException in Java points to different root causes than a KeyError in Python).
What I always try to avoid is giving a process answer that sounds clean in theory but falls apart once the data, users, or production constraints get messy.
A weak answer says "fix this code" and skips diagnosis, alternative hypotheses, and verification.