Anthropic's leaked AI coding tool has been cloned over 8,000 times on GitHub despite mass takedowns
Following the accidental leak of its AI coding tool's source code, Anthropic has had more than "8,000 copies and adaptations of the raw Claude Code instructions" removed from GitHub via a copyright request, the Wall Street Journal reports. One programmer has already used AI tools to rewrite the code in other languages, keeping it available despite the takedowns. This shows just how damaging a code leak is in the age of AI: once it's out, it spreads faster than anyone can contain it.
The code reveals valuable techniques Anthropic uses to control its AI models as coding agents (the "harness"), including a "dreaming" function for task consolidation. Competitors now have a blueprint to replicate Claude Code's capabilities, weakening Anthropic's edge in an already cutthroat market.
The timing is particularly bad: the company is planning an IPO at a $380 billion valuation, and a leak of this kind is unlikely to sit well with investors. It also comes just days after a separate leak concerning Anthropic's new AI model Mythos, likewise caused by human error within the company's content management system.