The AI company Anthropic has developed a method to protect language models from manipulation attempts.
Anthropic has developed a new security method called "Constitutional Classifiers" to prevent people from tricking AI models into giving harmful responses. The technology specifically targets universal jailbreaks - inputs designed to systematically bypass all safety measures.
To put the system through its paces, Anthropic recruited 183 people to try breaking through its defenses over two months. The participants attempted to get the AI model Claude 3.5 to answer ten prohibited questions. Even with $15,000 in prize money and roughly 3,000 hours of testing, no one managed to bypass all the security measures.
Early challenges lead to improvements
The initial version had two main drawbacks: it flagged too many innocent requests as dangerous and required too much computing power. While an improved version addressed these issues, as shown in automated tests with 10,000 jailbreak attempts, some challenges remain.
The tests revealed that while an unprotected Claude model allowed 86 percent of manipulation attempts through, the protected version blocked more than 95 percent. The system only incorrectly flagged an additional 0.38 percent of harmless requests, though it still needs 23.7 percent more computing power to run.
Synthetic training data as a basis
The security system works by using predefined rules about what content is allowed or prohibited. Using this "constitution", it creates synthetic training examples in various languages and styles. These examples then train the classifiers to spot suspicious inputs.
The researchers acknowledge that the system isn't foolproof against every universal jailbreak, and new attack methods could emerge that it can't handle. That's why Anthropic suggests using it alongside other security measures.
To further test the system's strength, Anthropic has released a public demo version. Security experts can try to outsmart it from February 3 to 10, 2025, with results to be shared in an update.