
Claude Code gets parallel AI agents that review code for bugs and security gaps

Anthropic has released a code review feature for Claude Code that automatically checks changes for errors before they're merged. Multiple AI agents work in parallel to catch bugs, security vulnerabilities, and regressions. The feature is available as a research preview for Team and Enterprise customers. Anthropic says it has been using the system internally for months: code output per developer has jumped 200 percent over the past year, turning manual review into a bottleneck.

Before deployment, 16 percent of changes received substantive review comments; now 54 percent do. For large changes of over 1,000 lines, the system flags problems in 84 percent of cases, averaging 7.5 issues per change. Less than one percent of its findings are marked as incorrect. The system doesn't approve any changes on its own - that decision stays with the developer. Costs are billed based on token consumption and average between 15 and 25 dollars per review, depending on size and complexity. Admins can set a monthly spending limit.

Anthropic is aggressively building out Claude Code this year. Recent additions include automated desktop functions, remote control for smartphones, a memory function, and a scheduling feature for planned tasks.



Source: Anthropic | Docs