Instruction: Discuss strategies to detect and mitigate data poisoning attacks in Federated Learning systems.
Context: Candidates must demonstrate their knowledge of securing Federated Learning systems against data poisoning and of preserving the integrity of the learning process.
The way I'd think about it is this: data poisoning in federated learning happens when clients train on intentionally corrupted or adversarially manipulated local data, so that the resulting updates push the global model in a harmful direction. It overlaps with...
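One standard mitigation in this space is robust aggregation on the server side: replacing the plain FedAvg mean with a statistic that tolerates outlying updates, such as the coordinate-wise median. The sketch below is a minimal illustration under assumed toy data (the client update vectors and the single poisoned update are hypothetical, not from any real system), showing how an extreme update skews the mean but barely moves the median.

```python
import numpy as np

# Hypothetical flat parameter-update vectors from four honest clients.
honest = [np.array([0.10, -0.20, 0.05]),
          np.array([0.12, -0.18, 0.04]),
          np.array([0.09, -0.21, 0.06]),
          np.array([0.11, -0.19, 0.05])]

# A single poisoned update, pushed far away from the honest cluster.
poisoned = np.array([5.0, 5.0, 5.0])

updates = np.stack(honest + [poisoned])  # shape: (5 clients, 3 params)

# Plain FedAvg (unweighted mean): the outlier drags every coordinate.
fedavg = updates.mean(axis=0)

# Coordinate-wise median: each coordinate stays inside the honest range.
robust = np.median(updates, axis=0)

print("FedAvg aggregate:", fedavg)
print("Median aggregate:", robust)
```

With one attacker among five clients, the mean of the first coordinate lands near 1.08 while the median stays at 0.11, inside the honest cluster; this is the intuition behind median- and trimmed-mean-based defenses, which bound the influence of any small fraction of malicious clients.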