"

1.8 End of Chapter Activities

Exercises

Reflective Questions

  1. What are the primary objectives of machine learning security?
  2. How do adversarial attacks differ from data poisoning attacks?
  3. Why is it challenging to secure machine learning systems?

Practical Exercises

  1. Research a real-world example of an ML security breach and present a brief summary of the attack and its consequences.
  2. Identify potential vulnerabilities in a simple ML pipeline (e.g., a spam email classifier). Suggest at least two strategies to mitigate these vulnerabilities; a minimal example pipeline is sketched below as a starting point.
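
To make the second exercise concrete, the following is a minimal sketch of the kind of spam-classifier pipeline you might audit. It assumes the scikit-learn library and uses a tiny, made-up training set purely for illustration; it is not a production design. The comments point to the stages where poisoning and evasion vulnerabilities typically arise.

  # Minimal spam-classifier pipeline for the vulnerability exercise (illustrative only).
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  # Toy training data. In practice, messages and labels often come from user reports,
  # which is exactly where a data poisoning attack (e.g., mislabeled examples) could enter.
  emails = [
      "Win a free prize now",
      "Cheap meds, click here",
      "Meeting moved to 3 pm",
      "Please review the attached report",
  ]
  labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

  # Text features (TF-IDF) feeding a linear classifier.
  model = make_pipeline(TfidfVectorizer(), LogisticRegression())
  model.fit(emails, labels)

  # At inference time, an attacker may craft obfuscated text (e.g., "fr1e pr!ze")
  # to evade the learned features: an evasion attack surface to consider.
  print(model.predict(["Claim your free prize"]))

When listing mitigations, consider each stage separately: data collection (provenance checks, label auditing), training (outlier filtering, robust training), and inference (input sanitization, monitoring for anomalous queries).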

Group Discussion

Form small groups and discuss the trade-offs between accuracy and security in machine learning. How would you balance these considerations in critical applications such as healthcare or autonomous vehicles?

Knowledge Check

1. Which of the following is a primary goal of machine learning security?
  a. Maximizing model complexity
  b. Increasing hardware dependency
  c. Reducing training time
  d. Protecting ML models from adversarial attacks
2. What is an example of a data poisoning attack?
  a. Increasing the size of the dataset
  b. Extracting sensitive information from trained models
  c. Manipulating traffic signs to confuse autonomous vehicles
  d. Introducing mislabeled data into the training set
3. Which term describes an adversary’s ability to access the ML model’s parameters?
  a. Black-box Access
  b. White-box Access
  c. Input Access
  d. Grey-box Access
4. What is the main challenge in balancing performance and security in ML systems?
  a. Complexity of programming language
  b. Insufficient data availability
  c. Lack of computing power
  d. Trade-offs between accuracy and robustness
5. What is the primary characteristic of evasion attacks?
  a. Corrupting training data
  b. Crafting inputs to deceive the model
  c. Extracting sensitive information
  d. Reducing model accuracy during training

Correct Answers:
  1. d. Protecting ML models from adversarial attacks
  2. d. Introducing mislabeled data into the training set
  3. b. White-box Access
  4. d. Trade-offs between accuracy and robustness
  5. b. Crafting inputs to deceive the model

High Flyer. (2025). DeepSeek [Large language model]. https://www.deepseek.com/

Prompt: Can you provide end-of-chapter questions for the content? Reviewed and edited by the author.

License


Winning the Battle for Secure ML Copyright © 2025 by Bestan Maaroof is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.