NIST just released a comprehensive report on adversarial machine learning and AI security
The National Institute of Standards and Technology (NIST) has released a comprehensive report on adversarial machine learning (AML) that provides a taxonomy of concepts, terminology, and mitigation methods for AI security. The report, authored by experts from NIST, Northeastern University, and Robust Intelligence, reviewed the AML literature and organized the major types of ML methods, attacker goals, and capabilities into a conceptual hierarchy. It also provides methods for mitigating and managing the consequences of attacks, and highlights open challenges in the lifecycle of AI systems. With a glossary for non-expert readers, the report aims to establish a common language for future AI security standards and best practices. The full 106-page report is very detailed and references real-world attacks such as Prompt Injections. It's available for free.
AI News Without the Hype – Curated by Humans
Subscribe to THE DECODER for ad-free reading, a weekly AI newsletter, our exclusive "AI Radar" frontier report six times a year, full archive access, and access to our comment section.
Subscribe nowAI news without the hype
Curated by humans.
- More than 16% discount.
- Read without distractions – no Google ads.
- Access to comments and community discussions.
- Weekly AI newsletter.
- 6 times a year: “AI Radar” – deep dives on key AI topics.
- Up to 25 % off on KI Pro online events.
- Access to our full ten-year archive.
- Get the latest AI news from The Decoder.