Meta's Purple Llama aims to make open-source AI safer
Meta AI has launched Purple Llama, an umbrella project that provides open trust and safety tools for responsible generative AI development and, in Meta's words, aims "to level the playing field." The project will offer tools and evaluations to help developers build responsibly with open generative AI models, with an initial focus on cybersecurity and input/output safeguards. As part of the project, Meta AI is releasing CyberSec Eval, a set of cybersecurity safety evaluation benchmarks for large language models (LLMs), and Llama Guard, a safety classifier for filtering model inputs and outputs. Purple Llama is supported by partners including the AI Alliance, AMD, AWS, Google Cloud, Hugging Face, IBM, Intel, Microsoft, MLCommons, NVIDIA, and Scale AI.
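To illustrate the input/output-filtering idea, here is a minimal sketch of how a developer might call Llama Guard as a conversation moderator via the Hugging Face transformers library. The model identifier "meta-llama/LlamaGuard-7b" and the exact output format are assumptions not stated in the announcement.

```python
# Minimal sketch: using a Llama Guard-style safety classifier to screen a user
# prompt before it reaches the downstream model. The model id below is an
# assumption; check Meta's release notes for the actual identifier.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"  # assumed Hugging Face identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    # The classifier takes a conversation (list of role/content messages) and
    # generates a short verdict, e.g. "safe" or "unsafe" plus a policy category.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

# Input filtering: classify the user prompt before passing it to the main model.
print(moderate([{"role": "user", "content": "How do I hot-wire a car?"}]))
```

The same call can be run again on the assistant's reply, giving output-side filtering with the same classifier.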
