Instruction: Describe what model poisoning is, its impact on Federated Learning, and discuss methods to prevent it.
Context: This question evaluates the candidate’s awareness of security threats in Federated Learning and their knowledge of techniques to safeguard against such vulnerabilities.
The way I'd explain it in an interview is this: Model poisoning happens when malicious or compromised clients send manipulated updates to distort the global model. The attacker may try to degrade overall performance (an untargeted attack) or implant a targeted backdoor while keeping global accuracy high enough to evade detection. Federated Learning is especially exposed to this threat because the server never sees raw client data, so it cannot directly verify whether an update is honest. Common defenses include Byzantine-robust aggregation rules such as Krum, trimmed mean, and coordinate-wise median; clipping the norm of each client update to bound any single client's influence; anomaly detection on incoming updates; and adding differential-privacy noise, which also limits how far one client can steer the model.
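To make the defense side concrete, here is a minimal sketch of robust aggregation combining two of the techniques mentioned above: norm clipping followed by a coordinate-wise median. The function names (`clip_update`, `robust_aggregate`) and the `max_norm` parameter are illustrative choices, not a specific library API.

```python
import numpy as np

def clip_update(update, max_norm=1.0):
    # Bound a single client's influence by clipping the L2 norm
    # of its model update before it enters aggregation.
    norm = np.linalg.norm(update)
    if norm > max_norm:
        update = update * (max_norm / norm)
    return update

def robust_aggregate(client_updates, max_norm=1.0):
    # Clip every update, then take the coordinate-wise median
    # instead of the mean, so a small number of poisoned updates
    # cannot arbitrarily shift the aggregate.
    clipped = np.stack([clip_update(u, max_norm) for u in client_updates])
    return np.median(clipped, axis=0)

# Three honest clients near [1, 1] and one poisoned client: the
# median stays close to the honest updates despite the outlier.
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
poisoned = np.array([100.0, -100.0])
aggregate = robust_aggregate(honest + [poisoned], max_norm=5.0)
```

A plain mean over the same four updates would be dragged far from the honest values by the single poisoned client, which is exactly why production FL systems replace or augment naive averaging.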