
5.0 Learning Outcomes

By the end of this chapter, students will be able to:

  • Define the key concept of backdoor poisoning attacks.
  • Differentiate backdoor attacks from other poisoning attacks.
  • Explain the step-by-step process of executing a backdoor attack, including trigger embedding and label manipulation.
  • Analyze different attack scenarios (outsourced training, transfer learning, federated learning) and their implications.
  • Identify various types of backdoor triggers (patch, clean-label, dynamic, functional, and semantic triggers).
  • Evaluate the effectiveness of mitigation strategies such as data sanitization, trigger reconstruction, model inspection, and model sanitization.
  • Evaluate the limitations of existing defenses and challenges in detecting stealthy backdoor attacks.

License

Winning the Battle for Secure ML Copyright © 2025 by Bestan Maaroof is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.