
Anthropic's new AI security tool sends cybersecurity stocks tumbling


Key Points

  • Anthropic has launched Claude Code Security, a new tool that detects security vulnerabilities in code that traditional rule-based scanners typically miss.
  • Rather than matching known patterns, the tool reads code and understands how components interact and how data flows through an application, mimicking the approach of a human security researcher.
  • The announcement sent shockwaves through cybersecurity stocks: CrowdStrike dropped 8%, Cloudflare fell 8.1%, Okta lost 9.2%, and SailPoint declined 9.4%.

Anthropic has launched Claude Code Security, a tool designed to catch security vulnerabilities that conventional scanners miss. The announcement triggered an immediate sell-off in cybersecurity stocks.

The feature is built directly into the Claude Code web interface. It scans codebases for security vulnerabilities and suggests targeted patches, though humans still need to review every fix. It is initially available as a limited research preview for Enterprise and Team customers, and Anthropic says maintainers of open-source projects can apply for free, accelerated access.

Moving beyond pattern matching to understand code like a human

Existing analysis tools rely on rules to match code against known vulnerability patterns. While this catches common problems like exposed passwords or outdated encryption, Anthropic says the approach misses more complex flaws such as business logic errors or faulty access controls.

By contrast, Anthropic says Claude Code Security is designed to read and reason about code the way a human security researcher would. It understands how components interact, tracks how data flows through an application, and spots the complex vulnerabilities that rule-based tools miss.
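To illustrate the distinction, here is a hypothetical sketch (not taken from Anthropic's materials) of the kind of flaw a rule-based scanner tends to miss: the code contains no hardcoded secrets or deprecated API calls, so no pattern fires, yet the handler never checks that the requester actually owns the record it returns, a classic broken-access-control bug that only reasoning about data flow reveals.

```python
# Hypothetical example: a business-logic / access-control flaw.
# Nothing here matches a known "bad pattern", so a rule-based
# scanner stays silent -- but any authenticated user can read
# any other user's invoice.

INVOICES = {
    101: {"owner": "alice", "amount": 250},
    102: {"owner": "bob", "amount": 990},
}

def get_invoice(session_user: str, invoice_id: int) -> dict:
    # Vulnerable: the user is authenticated, but ownership of the
    # requested invoice is never verified.
    return INVOICES[invoice_id]

def get_invoice_fixed(session_user: str, invoice_id: int) -> dict:
    # Patched: add the missing authorization check.
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != session_user:
        raise PermissionError("not your invoice")
    return invoice
```

Spotting this requires understanding what `owner` means in the application's logic, which is exactly the kind of semantic reasoning Anthropic claims the tool performs.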


Each result goes through a multi-stage verification process before it reaches an analyst. Claude revisits its own findings and tries to confirm or refute them to filter out false positives. Results get both a severity rating and a confidence rating, helping teams prioritize the most critical issues.
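The two-axis triage described above can be sketched in a few lines. This is an illustrative example only, not Anthropic's implementation; the field names and severity scale are assumptions.

```python
# Illustrative sketch: ordering validated findings so that the most
# severe, highest-confidence issues reach analysts first.
SEVERITY_RANK = {"critical": 3, "high": 2, "medium": 1, "low": 0}

def triage(findings):
    # Sort by severity first, breaking ties with the confidence score.
    return sorted(
        findings,
        key=lambda f: (SEVERITY_RANK[f["severity"]], f["confidence"]),
        reverse=True,
    )

findings = [
    {"id": "F-1", "severity": "low",      "confidence": 0.95},
    {"id": "F-2", "severity": "critical", "confidence": 0.60},
    {"id": "F-3", "severity": "critical", "confidence": 0.90},
    {"id": "F-4", "severity": "medium",   "confidence": 0.80},
]

queue = triage(findings)
# F-3 and F-2 (both critical) come first, with the higher-confidence
# F-3 ahead of F-2, followed by F-4 and F-1.
```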

Validated findings show up in a dashboard where teams can review them, inspect proposed patches, and approve fixes. Nothing gets applied without human approval, Anthropic says. Claude Code Security identifies problems and suggests solutions, but the developer always makes the final call.

More than 500 vulnerabilities found hiding in production code

Anthropic says the feature draws on more than a year of research into Claude's cybersecurity capabilities. The company's in-house Frontier Red team has systematically tested these capabilities through capture-the-flag competitions, a partnership with the Pacific Northwest National Laboratory to defend critical infrastructure, and ongoing work to improve Claude's ability to find and patch real-world vulnerabilities.

With Claude Opus 4.6, released earlier this month, the team says it has found over 500 vulnerabilities in production open-source codebases, bugs that sometimes went undetected for decades despite years of expert scrutiny. Triage and responsible disclosure to maintainers are currently underway.


Anthropic expects a significant share of the world's code to be scanned by AI in the near future, as models keep getting better at finding long-hidden bugs and security issues. At the same time, attackers will also use AI to find exploitable vulnerabilities faster than ever before, the company says.

Cybersecurity stocks take a hit after the announcement

Anthropic's announcement had an immediate impact on Wall Street. According to Bloomberg, cybersecurity stocks dropped sharply on the day of the announcement: CrowdStrike fell 8 percent, Cloudflare 8.1 percent, Okta 9.2 percent, and SailPoint 9.4 percent. The Global X Cybersecurity ETF dropped 4.9 percent to its lowest level since November 2023.

The sell-off fits a broader pattern. Anthropic's earlier announcement of specialized niche plugins for Cowork, including one for legal research, had already dragged software stocks down. Investors worry that new AI tools will let users build their own applications, potentially reducing demand for established software products and squeezing growth, margins, and pricing power across the industry.

That said, it's not very plausible that every company will suddenly build its own security software or other complex applications. Division of labor exists for a reason and drives economic efficiency. Without it, the result would be extreme fragmentation: thousands of in-house tools, each requiring its own maintenance and security updates, and none benefiting from the economies of scale that established providers offer.

What's more likely is that AI tools drive down software production costs enough for niche applications to emerge that simply weren't worth building before. Companies solve specific problems faster with custom tools but keep relying on proven products for everything else, products that are also adding AI features of their own.

But cheaper to build doesn't mean cheaper to run. Maintenance, updates, compliance, support, and integration with existing systems still make up the bulk of IT spending at many companies. An application built with AI in hours still needs to be operated and maintained long after. The market reaction is pricing in lower production costs while largely ignoring this operational reality.


Source: Anthropic | Bloomberg