
Anthropic tests its "next-generation system for AI safety mitigations"

Anthropic is expanding its bug bounty program to test its "next-generation system for AI safety mitigations." The program focuses on identifying and defending against "universal jailbreak attacks," with priority given to critical vulnerabilities in high-risk areas such as chemical, biological, radiological, and nuclear (CBRN) defense and cybersecurity. Participants get early access to Anthropic's latest safety systems before public release, and their task is to find vulnerabilities or ways to bypass the safety measures. Anthropic is offering rewards of up to $15,000 for the discovery of new universal jailbreak attacks.



Source: Anthropic