2.1 Introduction
In recent years, machine learning (ML) advances have enabled a dizzying array of applications such as data analytics, autonomous systems, and security diagnostics. ML is now pervasive: new systems and models are deployed in every imaginable domain, leading to widespread software-based inference and decision-making. The attack surface of a system built with data and machine learning depends on its purpose. The key threats to a machine learning system can be grouped as:
- Attacks that compromise confidentiality
- Attacks that compromise integrity by manipulating input
- ‘Traditional’ attacks that impact availability
Attack vectors for machine learning systems can be categorized into:
- Input manipulation
- Data manipulation
- Model manipulation
- Input extraction
- Data extraction
- Model extraction
- Environmental attacks (i.e., attacks on the IT system used to host the machine learning algorithms and data)
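To make the first vector, input manipulation, concrete, the following is a minimal sketch of an FGSM-style adversarial perturbation against a hypothetical linear classifier. The weights, input values, and `epsilon` are all illustrative assumptions, not taken from any real system; real attacks target deep models via their gradients in the same spirit.

```python
import numpy as np

# Sketch of an input-manipulation attack (FGSM-style) on a hypothetical
# linear classifier: score = w . x. All values are illustrative.

def predict(w, x):
    """Return 1 if the linear score is positive, else 0."""
    return int(np.dot(w, x) > 0)

def fgsm_perturb(w, x, epsilon):
    """Shift x against the gradient sign to flip the decision.
    For a linear score w . x, the gradient w.r.t. x is simply w."""
    # A positively classified input is pushed in the -sign(w) direction.
    direction = -np.sign(w) if predict(w, x) == 1 else np.sign(w)
    return x + epsilon * direction

w = np.array([0.5, -0.25, 1.0])   # illustrative model weights
x = np.array([1.0, 0.2, 0.4])     # clean input, classified as 1

x_adv = fgsm_perturb(w, x, epsilon=0.9)
print(predict(w, x))      # prediction on the clean input
print(predict(w, x_adv))  # prediction after a small input manipulation
```

The point of the sketch is that a bounded, attacker-chosen change to the input, not to the model, is enough to flip the decision, which is why input manipulation compromises integrity.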
Adversarial Machine Learning (AML) introduces additional security challenges during both the training and the testing (inference) phases of system operation. AML is concerned with designing ML algorithms that can resist these security challenges, studying the capabilities of attackers, and understanding the consequences of attacks.
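The training-phase threat mentioned above can be illustrated with a toy data-poisoning sketch. The "model" here is a deliberately simplistic mean-based anomaly threshold, and all data values are synthetic assumptions chosen only to show the mechanism.

```python
import numpy as np

# Sketch of a training-time (data-manipulation) attack on a toy
# mean-threshold detector: a value is flagged as anomalous when it
# exceeds the mean of the training data. All data is synthetic.

def train_threshold(samples):
    """'Train' by computing the mean of the supplied samples."""
    return float(np.mean(samples))

clean = [1.0, 1.2, 0.9, 1.1]
print(5.0 > train_threshold(clean))  # the malicious value 5.0 is flagged

# An attacker who can inject training data shifts the learned threshold
# upward, so the same malicious value evades detection at inference time.
poisoned = clean + [20.0, 25.0, 30.0]
print(5.0 > train_threshold(poisoned))  # 5.0 now passes as normal
```

This separates the two phases AML studies: the manipulation happens at training time, while the damage (a missed detection) only appears at inference time.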
“Threat Models” by Maikel Mardjan (nocomplexity.com), Asim Jahan is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.