Adversarial ML Threat Matrix
The goal of this project is to position attacks on machine learning (ML) systems in an ATT&CK-style framework so that security analysts can orient themselves to these new and upcoming threats.
If you are new to how ML systems can be attacked, we suggest starting with the no-frills Adversarial ML 101 aimed at security analysts.
Or if you want to dive right in, head to Adversarial ML Threat Matrix.
Unlike traditional cybersecurity vulnerabilities, which are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations of the underlying ML algorithms. Data can be weaponized in new ways, which requires extending how we model cyber adversary behavior to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle.
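The "inherent limitations" point is easiest to see with a concrete evasion attack. Below is a minimal sketch of an FGSM-style perturbation against a hypothetical toy linear classifier (the model, names, and numbers are illustrative, not part of the matrix): a small, bounded change to every input feature flips the model's decision without exploiting any software bug.

```python
import numpy as np

# Hypothetical toy model: a linear classifier score(x) = w @ x + b,
# standing in for any model whose gradients an attacker can access
# or approximate.
rng = np.random.default_rng(0)
w = rng.normal(size=100)
b = 0.0
x = rng.normal(size=100)  # a benign input

score = float(w @ x + b)

# FGSM-style evasion: step each feature against the sign of the gradient
# of the current-class score. For a linear model that gradient is just w
# (or -w), so an epsilon slightly larger than |score| / ||w||_1 is
# guaranteed to push the score across the decision boundary.
grad = w if score > 0 else -w
eps = 1.01 * abs(score) / np.abs(w).sum()
x_adv = x - eps * np.sign(grad)

print(f"clean score: {score:+.3f}")
print(f"adversarial score: {float(w @ x_adv + b):+.3f}")  # opposite sign
print(f"max per-feature perturbation: {eps:.4f}")
```

The same idea scales to deep networks, where the gradient is taken through the loss; the point is that the weakness lives in the learned decision boundary itself, not in any particular software stack.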
This threat matrix came out of a partnership with 12 industry and academic research groups, with the goal of empowering security analysts to orient themselves to these new and upcoming threats. The framework is seeded with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE have vetted to be effective against production ML systems. We used ATT&CK as a template because security analysts are already familiar with using this type of matrix.
We recommend digging into the Adversarial ML Threat Matrix.
To see the Matrix in action, we recommend the curated case studies.
| Organization | Contributors |
|---|---|
| Microsoft | Ram Shankar Siva Kumar, Hyrum Anderson, Suzy Schapperle, Blake Strom, Madeline Carmichael, Matt Swann, Mark Russinovich, Nick Beede, Kathy Vu, Andi Comissioneru, Sharon Xia, Mario Goertzel, Jeffrey Snover, Derek Adam, Deepak Manohar, Bhairav Mehta, Peter Waxman, Abhishek Gupta, Ann Johnson, Andrew Paverd, Pete Bryan, Roberto Rodriguez |
| MITRE | Mikel Rodriguez, Christina Liaghati, Keith Manville, Michael Krumdick, Josh Harguess |
| Bosch | Manojkumar Parmar |
| IBM | Pin-Yu Chen |
| NVIDIA | David Reber Jr., Keith Kozo, Christopher Cottrell, Daniel Rohrer |
| Airbus | Adam Wedgbury |
| PricewaterhouseCoopers | Michael Montecillo |
| Deep Instinct | Nadav Maman, Shimon Noam Oren, Ishai Rosenberg |
| Two Six Labs | David Slater |
| University of Toronto | Adelin Travers, Jonas Guan, Nicolas Papernot |
| Cardiff University | Pete Burnap |
| Software Engineering Institute/Carnegie Mellon University | Nathan M. VanHoudnos |
| Berryville Institute of Machine Learning | Gary McGraw, Harold Figueroa, Victor Shepardson, Richie Bonett |
The Adversarial ML Threat Matrix is a first-cut attempt at collating a knowledge base of how ML systems can be attacked. We need your help to make it holistic and fill in the gaps!
We are especially excited about new case studies! We look forward to contributions from both industry and academic researchers. Before submitting a case study, check that the attack meets the criteria described under Feedback.
You can email advmlthreatmatrix-core@googlegroups.com with a summary of the incident and the Adversarial ML Threat Matrix mapping.
For corrections and improvement or to contribute a case study, see Feedback.
For general questions, comments, or discussion, our public email group is advmlthreatmatrix-core@googlegroups.com; mail sent there reaches all members of the distribution group.
For private comments or discussions, and to learn how organizations can get involved in the effort, please email Ram.Shankar@microsoft.com and Mikel@mitre.org.