
6.5 End of Chapter Activities

Exercises

Review Questions

  1. What are the three main types of privacy attacks, and how do they differ?
  2. Explain how Federated Learning enhances privacy. What are its main steps?
  3. Describe the trust model in a machine learning deployment and explain why it matters.
  4. What is Differential Privacy, and how is it applied in ML systems?
  5. List two real-world scenarios where privacy attacks could have serious consequences.
  6. Why is model extraction particularly dangerous in a Machine Learning-as-a-Service (MLaaS) setting?
  7. Suppose you’re working on a healthcare ML system. How would you apply privacy-preserving strategies to protect patient data?
  8. Discuss the trade-off between model performance and privacy when implementing differential privacy techniques.
  9. In FL, the server is typically assumed to be honest-but-curious. What happens if the server is malicious? Propose safeguards.
  10. Should companies like Google be allowed to use FL for data collection if users cannot audit the aggregation process? Debate the pros and cons.
  11. Can quantum computing break current privacy-preserving techniques (e.g., DP)? Justify your answer.
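As a reference point for Question 2, the main steps of Federated Learning (broadcast the global model, train locally on each client, send back only model parameters, aggregate on the server) can be sketched in one round of federated averaging. This is a minimal NumPy illustration with a hypothetical linear-regression task; the client data, learning rate, and round counts are made up for the example, not taken from the chapter:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient steps for linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One FL round: broadcast, local training, weighted averaging.
    Only model parameters leave each client; the raw (X, y) data never does."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    # Server aggregates: average of client parameters weighted by data size
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Hypothetical setup: three clients, each holding private data from the
# same underlying model y = X @ [2, -1] + noise
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(np.round(w, 2))  # approaches the true weights [2, -1]
```

The aggregation step is where the privacy questions in Exercises 9 and 10 arise: the server sees every client's parameters, so an honest-but-curious or malicious server can attempt reconstruction from them even though it never receives raw data.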

Knowledge Check

1. MultiChoice Activity
Which of the following is NOT a type of privacy attack?
  1. Data reconstruction attack
  2. Model extraction attack
  3. Membership inference attack
  4. Adversarial perturbation attack
2. MultiChoice Activity
In Federated Learning, what is shared with the central server instead of raw data?
  1. Model accuracy
  2. Encrypted data
  3. Anonymized datasets
  4. Model parameters
3. MultiChoice Activity
Which of the following is a defense technique that adds noise to queries or training data?
  1. Differential privacy
  2. Pruning
  3. Homomorphic encryption
  4. Data augmentation
4. MultiChoice Activity
Model extraction attacks are commonly used to:
  1. Duplicate or approximate a target model
  2. Improve model robustness
  3. Eliminate data bias
  4. Detect adversarial inputs
5. MultiChoice Activity
Shadow models are primarily used in which type of attack?
  1. Data poisoning
  2. Model extraction
  3. Model compression
  4. Membership inference
6. MultiChoice Activity
Which privacy-preserving technique allows multiple parties to compute a function without revealing their individual inputs?
  1. Differential privacy
  2. Federated learning
  3. Secure multi-party computation
  4. Label smoothing
7. MultiChoice Activity
Which of the following is a limitation of differential privacy?
  1. It cannot prevent model extraction attacks effectively
  2. It improves model accuracy
  3. It is only useful for image data
  4. It eliminates the need for model training
8. MultiChoice Activity
Which of the following mechanisms is NOT associated with Differential Privacy?
  1. Dropout mechanism
  2. Gaussian mechanism
  3. Exponential mechanism
  4. Laplace mechanism

Correct Answers:
  1. Option 4: Adversarial perturbation attack
  2. Option 4: Model parameters
  3. Option 1: Differential privacy
  4. Option 1: Duplicate or approximate a target model
  5. Option 4: Membership inference
  6. Option 3: Secure multi-party computation
  7. Option 1: It cannot prevent model extraction attacks effectively
  8. Option 1: Dropout mechanism
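
Quiz items 3 and 8 turn on how differential privacy adds calibrated noise. A minimal sketch of the Laplace mechanism for a counting query makes the idea concrete; the dataset, predicate, and epsilon values below are illustrative assumptions, not from the chapter:

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Epsilon-DP count query via the Laplace mechanism.
    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so the noise scale is 1/epsilon."""
    true_count = sum(1 for x in data if predicate(x))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(42)
ages = [23, 35, 41, 52, 29, 67, 38, 44]  # hypothetical records; true count >= 40 is 4

# Smaller epsilon -> stronger privacy guarantee -> noisier answers,
# which is exactly the accuracy/privacy trade-off in Exercise 8.
for eps in (0.1, 1.0, 10.0):
    answers = [laplace_count(ages, lambda a: a >= 40, eps, rng)
               for _ in range(1000)]
    print(f"eps={eps}: mean={np.mean(answers):.2f}, std={np.std(answers):.2f}")
```

The Gaussian and exponential mechanisms from quiz item 8 follow the same pattern with different noise distributions; dropout, by contrast, is a regularization technique and carries no formal privacy guarantee.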

High-Flyer. (2025). DeepSeek [Large language model]. https://www.deepseek.com/

Prompt: Can you provide end-of-chapter questions for the content? Reviewed and edited by the author.

License


Winning the Battle for Secure ML Copyright © 2025 by Bestan Maaroof is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.