"

6.3 Mitigation Strategies

Differential privacy (DP) has emerged as a leading defense against privacy attacks. DP introduces controlled noise into query responses or training processes, limiting how much any individual record can influence model outputs (Dwork, 2006; Dwork et al., 2006).

Key DP techniques include:

  • Gaussian and Laplace mechanisms: inject calibrated statistical noise into numeric query results.
  • Exponential mechanism: ensures privacy when selecting among discrete outcomes.
  • DP-SGD (Differentially Private Stochastic Gradient Descent): applies DP to neural network training by clipping and noising per-example gradients (see the sketch after this list).
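
To make these mechanisms concrete, the sketch below applies the Laplace mechanism to a counting query and shows the per-example gradient clipping and Gaussian noising at the core of a DP-SGD step. This is a minimal NumPy illustration, not a production implementation; the function names, toy data, and the choices of epsilon and clipping norm are assumptions for the example.

import numpy as np

rng = np.random.default_rng(seed=0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    # Epsilon-DP release: add Laplace noise with scale = sensitivity / epsilon.
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query changes by at most 1 when one record is added or removed,
# so its L1 sensitivity is 1. Epsilon = 0.5 is an illustrative privacy budget.
records = np.array([1, 0, 1, 1, 0, 1])
noisy_count = laplace_mechanism(records.sum(), sensitivity=1.0, epsilon=0.5)

def dp_sgd_step(weights, per_example_grads, clip_norm, noise_multiplier, lr):
    # Clip each example's gradient to bound any single record's influence...
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    # ...then add Gaussian noise calibrated to the clipping norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=avg.shape)
    return weights - lr * (avg + noise)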

While DP effectively mitigates data reconstruction and membership inference attacks, it offers limited protection against model extraction and property inference. Additional security measures, such as query rate limiting (sketched below), adversarial training, and hardware-level protections, are therefore necessary to build comprehensive defenses.
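
As one example of such a complementary control, a simple per-client rate limiter can slow the high-volume querying that model extraction relies on. The sliding-window design, threshold, and window length below are illustrative assumptions, not values from the NIST report.

import time
from collections import defaultdict, deque

class QueryRateLimiter:
    # Allow at most max_queries per client within a sliding window of seconds.
    def __init__(self, max_queries=100, window_seconds=60.0):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = defaultdict(deque)   # client_id -> query timestamps

    def allow(self, client_id):
        now = time.monotonic()
        timestamps = self.history[client_id]
        # Drop timestamps that have aged out of the window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) >= self.max_queries:
            return False   # over budget: throttle (possible extraction probing)
        timestamps.append(now)
        return True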

Inference-based attacks also call for privacy-preserving machine learning (PPML) techniques, including homomorphic encryption, secure multi-party computation, and federated learning with privacy enhancements; a secret-sharing sketch follows.
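
To illustrate the secure multi-party computation idea, the sketch below uses additive secret sharing for federated aggregation: each client splits its encoded update into random shares that sum to the update, so the server can recover only the aggregate, never an individual contribution. The modulus, fixed-point scale, and two-client demo are simplified assumptions, not a hardened protocol.

import numpy as np

MODULUS = 2 ** 32    # shares live in the integer ring Z_MODULUS (assumption)
SCALE = 2 ** 16      # fixed-point scale for encoding float updates (assumption)
rng = np.random.default_rng(seed=0)

def encode(update):
    # Fixed-point encode a float vector into the ring.
    return np.round(update * SCALE).astype(np.int64) % MODULUS

def make_shares(encoded, n_shares):
    # Random shares that sum to the encoded update modulo MODULUS;
    # fewer than n_shares shares reveal nothing about the update.
    shares = [rng.integers(0, MODULUS, size=encoded.shape, dtype=np.int64)
              for _ in range(n_shares - 1)]
    shares.append((encoded - sum(shares)) % MODULUS)
    return shares

def decode_sum(all_shares, n_clients):
    # Sum every share modulo MODULUS, map back to signed values,
    # and return the mean float update across clients.
    total = sum(all_shares) % MODULUS
    signed = np.where(total > MODULUS // 2, total - MODULUS, total)
    return signed.astype(np.float64) / SCALE / n_clients

# Demo: two clients, three shares each; only the aggregate is recoverable.
updates = [np.array([0.5, -1.0]), np.array([-0.25, 0.5])]
flat = [s for u in updates for s in make_shares(encode(u), n_shares=3)]
mean_update = decode_sum(flat, n_clients=len(updates))   # ~[0.125, -0.25]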


Adapted from “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” by Apostol Vassilev, Alina Oprea, Alie Fordyce, & Hyrum Anderson, National Institute of Standards and Technology – U.S. Department of Commerce. Republished courtesy of the National Institute of Standards and Technology.

License


Winning the Battle for Secure ML Copyright © 2025 by Bestan Maaroof is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.