"

1.1 Introduction to Machine Learning Security

Machine learning (ML) has become a cornerstone of modern technology, driving innovations from healthcare and finance to autonomous systems and natural language processing. Examples include facial recognition, spam filtering, securing autonomous vehicles and IoT systems, and intelligent firewalls. However, as ML systems become more integrated into critical applications, their vulnerability to security threats has also grown. The threat is aggravated by the ability of adversaries to reverse engineer publicly available models, gaining insight they can use to degrade a victim model's performance, inject a backdoor, or compromise its privacy. Machine learning security focuses on identifying, understanding, and mitigating these vulnerabilities to ensure the reliability, confidentiality, and integrity of ML systems.

A poisoning attack breaches integrity by manipulating the training dataset or model parameters; existing examples include feature collision attacks, convex polytope attacks, and random label flipping attacks. An evasion attack instead manipulates inputs at test time. Meanwhile, the privacy of ML models can be exploited with model inversion or inference attacks, which either reveal the parameters of the targeted model or probe it with crafted inputs to infer its expected outputs and assess its functional capabilities.
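
To make the idea of a poisoning attack concrete, here is a minimal sketch of random label flipping, the simplest of the attacks named above. It assumes scikit-learn and a synthetic dataset; the model choice, flip fractions, and data are illustrative assumptions, not drawn from any cited attack.

```python
# Minimal sketch of a random label-flipping poisoning attack.
# Dataset, model, and flip fractions are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def flip_labels(labels, fraction, rng):
    """Flip the labels of a random fraction of training points (binary task)."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # invert 0 <-> 1
    return poisoned

for fraction in (0.0, 0.1, 0.3):
    y_poisoned = flip_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = model.score(X_test, y_test)
    print(f"flipped {fraction:.0%} of training labels -> test accuracy {acc:.3f}")
```

Even this crude, untargeted attack typically produces a measurable drop in test accuracy as the flipped fraction grows; the feature collision and convex polytope attacks named above achieve far more with far subtler changes to the training data.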

Recent successful attacks on real-time machine learning systems prove the practicality of adversarial ML attacks. In one study, researchers attacked ChatGPT, Claude, and Bard with an inference accuracy of 50% on GPT-4 and 86.6% on GPT-3.5 (Zou et al., 2023). In another, researchers attacked a commercial Alibaba API with a 97% success rate (Gong et al., 2023). These attacks highlight the urgent need for comprehensive research into resilient ML models, particularly security-by-design solutions that address the security and resilience of the development process rather than of particular models.
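
As a concrete illustration of the inference attacks defined earlier, the sketch below implements a simple confidence-threshold membership inference attack, a standard baseline in the literature: the attacker guesses that a point was in the training set whenever the model is unusually confident about it, exploiting the fact that overfit models are systematically more confident on their own training data. The victim model, dataset, and threshold here are illustrative assumptions, not the attacks from the cited studies.

```python
# Minimal sketch of a confidence-threshold membership inference attack.
# Victim model, synthetic data, and threshold are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=2)

# An intentionally overfit "victim" model trained only on the member set.
victim = RandomForestClassifier(n_estimators=50, random_state=2).fit(X_in, y_in)

def confidence_on_true_label(model, X, y):
    """Victim's predicted probability for each point's true label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_members = confidence_on_true_label(victim, X_in, y_in)
conf_nonmembers = confidence_on_true_label(victim, X_out, y_out)

# The attacker flags a point as a training member when the victim's
# confidence in its true label exceeds a threshold.
threshold = 0.9
tpr = (conf_members > threshold).mean()     # members correctly flagged
fpr = (conf_nonmembers > threshold).mean()  # non-members wrongly flagged
print(f"member hit rate {tpr:.2f} vs non-member false alarm rate {fpr:.2f}")
```

The gap between the two rates is exactly the privacy leakage: a well-regularized model narrows it, while an overfit one hands the attacker a reliable membership signal.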

Real-World Examples

  [Figure: A laptop screen displays a red padlock icon over a background of green and blue binary code, symbolizing cybersecurity. Photo by Blogtrepreneur, CC BY 4.0.]

  • Evasion Attacks on Autonomous Vehicles: Attackers manipulate traffic signs using adversarial inputs, causing self-driving cars to misinterpret them (see the sketch after this list).
  • Poisoning Attacks in Recommendation Systems: Injecting malicious data into training sets to bias recommendations.
  • Privacy Breaches in Healthcare AI: Extracting sensitive patient information from trained models. 
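
The evasion example above can be sketched in code. Below is a minimal, illustrative gradient-sign evasion attack in the style of the fast gradient sign method (FGSM), applied to a linear classifier so the input gradient is available in closed form. The synthetic data, model, and perturbation budget are assumptions for illustration, not a recipe for attacking a real vehicle.

```python
# Minimal sketch of a gradient-sign evasion attack (FGSM-style) against a
# linear classifier. Data, model, and epsilon are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

w = model.coef_[0]  # for a linear model, the loss gradient w.r.t. the input
                    # is proportional to +/- w depending on the true label
eps = 0.5           # perturbation budget (attack strength)

# Step each input in the direction that increases the loss for its true
# label: -sign(w) for class-1 points, +sign(w) for class-0 points.
direction = np.where(y[:, None] == 1, -np.sign(w), np.sign(w))
X_adv = X + eps * direction

# Evaluated on the same data for brevity; the point is the accuracy gap.
print("clean accuracy:      ", model.score(X, y))
print("adversarial accuracy:", model.score(X_adv, y))
```

A tiny, structured perturbation collapses the model's accuracy while leaving each input almost unchanged; adversarial stickers on a traffic sign exploit the same principle against a vision model's far higher-dimensional input.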

Key Reasons for Addressing ML Security

Addressing ML security is essential for several reasons. Ensuring the reliability of ML models is vital to guarantee consistent performance in real-world scenarios, where unexpected failures could have significant consequences. Trust is another critical factor, as fostering confidence in AI systems among users and stakeholders depends on robust security measures. There is also an ethical responsibility to safeguard sensitive data and promote fairness, ensuring that ML systems do not inadvertently perpetuate bias or harm. Finally, compliance with legal and regulatory frameworks is essential to avoid potential penalties and maintain the integrity of AI initiatives.


“Machine learning security and privacy: a review of threats and countermeasures” by Anum Paracha, Junaid Arshad, Mohamed Ben Farah & Khalid Ismail is licensed under a Creative Commons Attribution 4.0 International license. Modifications: excerpt included.

License


Winning the Battle for Secure ML Copyright © 2025 by Bestan Maaroof is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.