"

3.2 Why Are We Interested in Adversarial Examples?

Are they not just curious by-products of machine learning models without practical relevance? The answer is a clear “no”. Adversarial examples make machine learning models vulnerable to attacks, as in the following scenario.

Autonomous Vehicles and Road Signs

The video below reports on how graffiti on road signs in Metro Atlanta is creating a potential safety threat for autonomous vehicles. Tests show that even small stickers or markings can cause autonomous vehicle systems to misread signs, for example, mistaking a stop sign for a 45 MPH speed limit sign. The issue underscores the need to address these vulnerabilities before autonomous vehicles are more widely deployed.

Video: “Graffiti on road signs may confuse autonomous vehicles, research shows” by 11Alive [3:13] is licensed under the Standard YouTube License. Transcript and closed captions available on YouTube.
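The video describes a physical attack, but the underlying idea is the same one used for digital adversarial examples: nudge the input in whichever direction most increases the classifier’s loss, while keeping every change small. Below is a minimal, purely illustrative sketch of the fast gradient sign method (FGSM) in PyTorch. The toy network, the random tensor standing in for a sign photo, the class index, and the perturbation budget eps are all assumptions chosen for the sketch, not details from the video; a real attack would target a trained traffic-sign classifier and a genuine image.

import torch
import torch.nn as nn

# Toy stand-in for a traffic-sign classifier (untrained, illustration only).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 43),           # 43 output classes, as in the GTSRB sign dataset
)
model.eval()
loss_fn = nn.CrossEntropyLoss()

# Random tensor standing in for a 32x32 photo of a stop sign (hypothetical label 14).
x = torch.rand(1, 3, 32, 32)
y_true = torch.tensor([14])

# Fast gradient sign method: move each pixel by at most eps in the direction
# that increases the classification loss for the true label.
eps = 0.03
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y_true)
loss.backward()
x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
    print("max pixel change:      ", (x_adv - x).abs().max().item())

Because the toy network here is untrained, the adversarial prediction is not guaranteed to change; against a trained classifier, a perturbation of this size is typically enough to flip the predicted class while remaining nearly invisible to a person.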


License


Winning the Battle for Secure ML Copyright © 2025 by Bestan Maaroof is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.