Claude Opus 4.6 wrote mustard gas instructions in an Excel spreadsheet during Anthropic's own safety testing

Anthropic's safety training fails when Claude operates a graphical user interface.

In pilot tests, Anthropic was able to get Claude Opus 4.6 to provide detailed instructions for making mustard gas inside an Excel spreadsheet and to maintain an accounting spreadsheet for a criminal gang - behaviors that rarely or never occurred in text-only interactions.

"We found some kinds of misuse behavior in these pilot evaluations that were absent or much rarer in text-only interactions," Anthropic writes in the Claude Opus 4.6 system card. "These findings suggest that our standard alignment training measures are likely less effective in GUI settings."

According to Anthropic, tests with the predecessor model Claude Opus 4.5 in the same environment showed "similar results" - meaning the problem has persisted across model generations without being noticed. The vulnerability apparently arises because models learn to refuse malicious requests in conversation but do not fully transfer that behavior to agentic tool use.
