
3.4 Distance Metrics of Adversarial Perturbations

The norm of a perturbation measures the quality of the generated perturbation. When searching for adversarial perturbations, researchers mainly use three distance metrics, [latex]L_0, L_2, L_\infty[/latex], to quantify this quality.

  • Minimizing different distances results in different kinds of perturbations. For example, minimizing [latex]L_0[/latex] yields perturbations that change the minimum number of pixels of the original input. The Jacobian-based Saliency Map Attack (JSMA) by Papernot et al. (2015) is an instance of this approach.
  • Minimizing [latex]L_2[/latex] helps adversaries obtain perturbations with the smallest norm across all pixels in terms of Euclidean distance. Using this metric, Nguyen et al. (2015) proposed an attack that adds perturbations to a blank image to fool recognition systems.
  • Finally, [latex]L_\infty[/latex] helps find perturbations with the smallest maximum change to any pixel. Under this metric, the adversary can freely change pixels as long as no single change exceeds the [latex]L_\infty[/latex] bound. An example of this kind of attack is the Fast Gradient Sign Method (FGSM), which computes the perturbation in a single step by moving every pixel a small stride in the direction of the sign of the gradient (iterative variants repeat this step with smaller strides).
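The three metrics above can be made concrete with a short sketch. The code below uses a small made-up image and gradient (all values are hypothetical, chosen only for illustration) to compute the [latex]L_0[/latex], [latex]L_2[/latex], and [latex]L_\infty[/latex] norms of a perturbation, and shows an FGSM-style single sign-gradient step whose perturbation is bounded in [latex]L_\infty[/latex]:

```python
import numpy as np

# Hypothetical clean image (pixel values in [0, 1]) and a perturbed copy.
x = np.array([[0.10, 0.50, 0.90],
              [0.30, 0.70, 0.20]])
x_adv = x.copy()
x_adv[0, 1] += 0.04   # change one pixel
x_adv[1, 2] -= 0.02   # change another pixel

delta = (x_adv - x).ravel()

l0   = np.count_nonzero(delta)        # L0: number of changed pixels
l2   = np.linalg.norm(delta, ord=2)   # L2: Euclidean magnitude of the change
linf = np.max(np.abs(delta))          # L_inf: largest single-pixel change

# FGSM-style step: move every pixel by eps along the sign of the
# loss gradient (the gradient values here are made up for the demo).
eps  = 0.05
grad = np.array([[ 0.2, -0.1, 0.0],
                 [-0.3,  0.4, 0.1]])
x_fgsm = np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Every pixel moves by at most eps, so the perturbation's
# L_inf norm never exceeds eps.
linf_fgsm = np.max(np.abs(x_fgsm - x))
```

Note how each metric captures a different notion of "small": the [latex]L_0[/latex] attack touches few pixels but may change them a lot, while the FGSM perturbation touches every pixel but keeps each change below eps.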

“A survey of practical adversarial example attacks” by Lu Sun, Mingtian Tan & Zhe Zhou is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.

License


Winning the Battle for Secure ML Copyright © 2025 by Bestan Maaroof is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.