6.5 End of Chapter Activities
Exercises
Review Questions
- What are the three main types of privacy attacks, and how do they differ?
- Explain how Federated Learning enhances privacy. What are its main steps?
- Describe the trust model in a machine learning deployment and explain why it matters.
- What is Differential Privacy, and how is it applied in ML systems? (A Laplace-mechanism code sketch follows this list.)
- List two real-world scenarios where privacy attacks could have serious consequences.
- Why is model extraction particularly dangerous in a Machine Learning-as-a-Service (MLaaS) setting?
- Suppose you’re working on a healthcare ML system. How would you apply privacy-preserving strategies to protect patient data?
- Discuss the trade-off between model performance and privacy when implementing differential privacy techniques.
- In FL, the server is assumed to be honest-but-curious. What happens if the server is instead malicious? Propose safeguards. (A secure-aggregation sketch follows this list.)
- Should companies like Google be allowed to use FL for data collection if users cannot audit the aggregation process? Debate the pros and cons.
- Can quantum computing break current privacy-preserving techniques (e.g., DP)? Justify your answer.
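To make the differential privacy questions above more concrete, here is a minimal Python sketch of the Laplace mechanism applied to a counting query. It is an illustrative assumption rather than the chapter's reference implementation: the function name, the toy patient records, and the chosen epsilon are all hypothetical.

```python
# Minimal sketch of the Laplace mechanism (illustrative, not a vetted DP library).
# Assumes a counting query with sensitivity 1; `epsilon` is the privacy budget.
import numpy as np

def laplace_count(data, predicate, epsilon=1.0, sensitivity=1.0):
    """Return a differentially private count of records satisfying `predicate`."""
    true_count = sum(1 for record in data if predicate(record))
    # Noise scaled to sensitivity / epsilon hides any single record's contribution.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: how many patients in a toy dataset are over 60?
patients = [{"age": 34}, {"age": 71}, {"age": 65}, {"age": 58}]
print(laplace_count(patients, lambda p: p["age"] > 60, epsilon=0.5))
```

Lowering epsilon increases the noise scale, which is exactly the performance-versus-privacy trade-off raised in the questions above.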
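Similarly, for the federated learning questions about what the server sees and how to guard against a malicious aggregator, below is a minimal sketch of secure aggregation via pairwise additive masking, the idea behind secure multi-party computation in FL. The client count, vector size, and masking scheme are simplified assumptions; real protocols additionally handle client dropouts, key agreement, and misbehaving parties.

```python
# Minimal sketch of secure aggregation with pairwise additive masks (illustrative only).
# Each client uploads a masked model update; the masks cancel in the server's sum,
# so the server learns the aggregate but no individual update.
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim = 3, 4
client_updates = [rng.normal(size=dim) for _ in range(num_clients)]  # local updates

# Every client pair (i, j) with i < j agrees on a shared random mask:
# client i adds it to its update, client j subtracts it.
pair_masks = {(i, j): rng.normal(size=dim)
              for i in range(num_clients) for j in range(i + 1, num_clients)}

masked_updates = []
for i, update in enumerate(client_updates):
    masked = update.copy()
    for (a, b), mask in pair_masks.items():
        if a == i:
            masked += mask
        elif b == i:
            masked -= mask
    masked_updates.append(masked)  # this is all the server receives from client i

aggregate = sum(masked_updates)            # pairwise masks cancel in the sum
assert np.allclose(aggregate, sum(client_updates))
print(aggregate / num_clients)             # federated average of the updates
```

Note that masking alone does not stop a malicious server from misreporting the aggregate or colluding with clients, which is why the question above asks for additional safeguards.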
Knowledge Check
Quiz Text Description
1. Multiple Choice Activity
Which of the following is NOT a type of privacy attack?
- a. Data reconstruction attack
- b. Model extraction attack
- c. Membership inference attack
- d. Adversarial perturbation attack
2. Multiple Choice Activity
In Federated Learning, what is shared with the central server instead of raw data?
- a. Model accuracy
- b. Encrypted data
- c. Anonymized datasets
- d. Model parameters
3. Multiple Choice Activity
Which of the following is a defense technique that adds noise to queries or training data?
- a. Differential privacy
- b. Pruning
- c. Homomorphic encryption
- d. Data augmentation
4. Multiple Choice Activity
Model extraction attacks are commonly used to:
- a. Duplicate or approximate a target model
- b. Improve model robustness
- c. Eliminate data bias
- d. Detect adversarial inputs
5. Multiple Choice Activity
Shadow models are primarily used in which type of attack?
- a. Data poisoning
- b. Model extraction
- c. Model compression
- d. Membership inference
6. Multiple Choice Activity
Which privacy-preserving technique allows multiple parties to compute a function without revealing their individual inputs?
- a. Differential privacy
- b. Federated learning
- c. Secure multi-party computation
- d. Label smoothing
7. Multiple Choice Activity
Which of the following is a limitation of differential privacy?
- a. It cannot prevent model extraction attacks effectively
- b. It improves model accuracy
- c. It is only useful for image data
- d. It eliminates the need for model training
8. Multiple Choice Activity
Which of the following mechanisms is NOT associated with Differential Privacy?
- a. Dropout mechanism
- b. Gaussian mechanism
- c. Exponential mechanism
- d. Laplace mechanism
Correct Answers:
1. d. Adversarial perturbation attack
2. d. Model parameters
3. a. Differential privacy
4. a. Duplicate or approximate a target model
5. d. Membership inference
6. c. Secure multi-party computation
7. a. It cannot prevent model extraction attacks effectively
8. a. Dropout mechanism
High-Flyer. (2025). DeepSeek [Large language model]. https://www.deepseek.com/
Prompt: Can you provide end-of-chapter questions for the content? Reviewed and edited by the author.