AI in practice

NIST just released a comprehensive report on adversarial machine learning and AI security

Matthias Bastian
A close-up, widescreen hand-drawn editorial illustration showcasing robot hands skillfully trying to lockpick a lock. The focus is on the intricate movements of the metallic robot fingers, each equipped with various lockpicking tools, attempting to manipulate the inner workings of a traditional lock. The lock is highly detailed, showing its complex mechanism. The background is minimalistic to highlight the interaction between the robot hands and the lock. The style of the drawing is realistic with a slight touch of futurism to emphasize the advanced technology of the robot.

DALL-E 3 prompted by THE DECODER

The National Institute of Standards and Technology (NIST) has released a comprehensive report on adversarial machine learning (AML) that provides a taxonomy of concepts, terminology, and mitigation methods for AI security. The report, authored by experts from NIST, Northeastern University, and Robust Intelligence, reviewed the AML literature and organized the major types of ML methods, attacker goals, and capabilities into a conceptual hierarchy. It also provides methods for mitigating and managing the consequences of attacks, and highlights open challenges in the lifecycle of AI systems. With a glossary for non-expert readers, the report aims to establish a common language for future AI security standards and best practices. The full 106-page report is very detailed and references real-world attacks such as Prompt Injections. It's available for free.

Sources: