Content
summary Summary

The AI company Anthropic has developed a method to protect language models from manipulation attempts.

Ad

Anthropic has developed a new security method called "Constitutional Classifiers" to prevent people from tricking AI models into giving harmful responses. The technology specifically targets universal jailbreaks - inputs designed to systematically bypass all safety measures.

To put the system through its paces, Anthropic recruited 183 people to try breaking through its defenses over two months. The participants attempted to get the AI model Claude 3.5 to answer ten prohibited questions. Even with $15,000 in prize money and roughly 3,000 hours of testing, no one managed to bypass all the security measures.

Early challenges lead to improvements

The initial version had two main drawbacks: it flagged too many innocent requests as dangerous and required too much computing power. While an improved version addressed these issues, as shown in automated tests with 10,000 jailbreak attempts, some challenges remain.

Ad
Ad

The tests revealed that while an unprotected Claude model allowed 86 percent of manipulation attempts through, the protected version blocked more than 95 percent. The system only incorrectly flagged an additional 0.38 percent of harmless requests, though it still needs 23.7 percent more computing power to run.

Synthetic training data as a basis

The security system works by using predefined rules about what content is allowed or prohibited. Using this "constitution", it creates synthetic training examples in various languages and styles. These examples then train the classifiers to spot suspicious inputs.

Image: Anthropic

The researchers acknowledge that the system isn't foolproof against every universal jailbreak, and new attack methods could emerge that it can't handle. That's why Anthropic suggests using it alongside other security measures.

To further test the system's strength, Anthropic has released a public demo version. Security experts can try to outsmart it from February 3 to 10, 2025, with results to be shared in an update.

Ad
Ad
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Support our independent, free-access reporting. Any contribution helps and secures our future. Support now:
Bank transfer
Summary
  • Anthropic has developed a new security technology called "Constitutional Classifiers" that is designed to protect AI language models from manipulation attempts by detecting and blocking unauthorized input.
  • In a two-month test with 183 participants and $15,000 in prize money, no one managed to bypass all the prototype's security measures. An improved version was able to block over 95 percent of jailbreak attempts in automated tests, with only a slightly higher error rate for innocent requests.
  • The security system is based on predefined rules that are used to generate synthetic training data. This is used to train classifiers to detect suspicious submissions. Anthropic recommends additional security measures and has released a demo version for further testing.
Sources
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.