Instruction: Describe what fine-tuning means in the context of Transfer Learning and how it is accomplished.
Context: This question evaluates the candidate's understanding of fine-tuning, a crucial technique in Transfer Learning, including the steps involved in fine-tuning a model.
The way I'd explain it in an interview is this: Fine-tuning means continuing to train a pretrained model on a target task so its weights adapt to the new objective. Instead of using the pretrained model only as a fixed feature extractor, you let some or all of the network update to fit the target data.
In practice, fine-tuning is accomplished by replacing the task-specific output head, optionally freezing earlier layers, and continuing training with a smaller learning rate than was used in pretraining. The practical challenge is deciding how much to fine-tune. Too little adaptation may leave performance on the table, while too much can overfit a small dataset or erase useful source-task knowledge. Good fine-tuning balances adaptation with preservation.
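One common point on the adaptation/preservation spectrum is to freeze the pretrained feature extractor and update only the task head. The sketch below illustrates that idea with a toy two-layer NumPy model; the weights, data, and learning rate are all hypothetical stand-ins, not a specific framework's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" weights (hypothetical): a feature extractor W1 and a task head W2.
W1 = rng.normal(size=(4, 8))   # feature extractor, kept frozen in this sketch
W2 = rng.normal(size=(8, 1))   # head, fine-tuned on the target task

# Small synthetic target-task dataset.
X = rng.normal(size=(32, 4))
y = X @ rng.normal(size=(4, 1))

W1_before = W1.copy()
lr = 0.01
losses = []
for _ in range(200):
    H = np.tanh(X @ W1)           # features from the frozen layer
    err = H @ W2 - y
    losses.append(float(np.mean(err ** 2)))
    grad_W2 = H.T @ err / len(X)  # gradient flows only to the head
    W2 -= lr * grad_W2            # only the head's weights change
```

Unfreezing more layers (here, also updating `W1`, typically with an even smaller learning rate) trades preservation for adaptation, which is exactly the balance described above.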
What matters in an interview is not only knowing the definition, but being able to connect it back to how it changes modeling, evaluation, or deployment decisions in practice.
A weak answer says fine-tuning is retraining the model on new data, without explaining that it starts from pretrained weights and must balance adaptation carefully.