Instruction: Explain what feature scaling is and why it's important in machine learning.
Context: This question evaluates the candidate's understanding of machine learning preprocessing steps and their ability to explain the concept's importance.
The way I'd explain it in an interview is this: Feature scaling means transforming numeric features onto a comparable range or distribution, for example by standardization (z-scores) or min-max normalization. It matters because many models are sensitive to the relative magnitude of input features. If one feature is measured in tiny values and another in very large values, algorithms based on distance or gradient updates may overemphasize the large-scale feature even when it is not more important.
In practice, scaling is especially important for models like logistic regression, SVMs, k-nearest neighbors, neural networks, and gradient-based optimization generally. It also makes regularization behave more sensibly because coefficients become more comparable. I would add that scaling is not equally important for every model. Tree-based methods are usually much less sensitive to it. So I treat feature scaling as a model-dependent preprocessing choice, not a universal rule.
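The distance-distortion point above can be sketched concretely. This is a minimal NumPy example with hypothetical toy data (income in dollars, age in years — both the values and the feature choice are my own for illustration): before standardization, Euclidean distances are dominated by the income column; after standardization, both features contribute on comparable terms.

```python
import numpy as np

# Hypothetical toy data: each row is (income in dollars, age in years).
# Income varies by tens of thousands, age only by tens.
X = np.array([
    [50_000.0, 25.0],
    [52_000.0, 60.0],
    [90_000.0, 27.0],
])

# Unscaled Euclidean distances: the income gap dominates, so the
# 35-year age difference between rows 0 and 1 barely registers.
d01_raw = np.linalg.norm(X[0] - X[1])
d02_raw = np.linalg.norm(X[0] - X[2])
print(d01_raw < d02_raw)  # True: row 2's 40k income gap dwarfs everything

# Standardize each feature to zero mean and unit variance (z-scores).
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# After scaling, both features influence the distances comparably.
d01_std = np.linalg.norm(X_std[0] - X_std[1])
d02_std = np.linalg.norm(X_std[0] - X_std[2])
print(d01_std, d02_std)
```

In an interview I'd stress the second print: once the features are on the same scale, the two candidate neighbors are roughly equidistant, which matches the intuition that a 35-year age gap is at least as meaningful as a salary gap.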
A weak answer says feature scaling improves accuracy in general, without explaining which models actually need it and why scale can distort optimization or distance calculations.