Many users recently complained that Claude's answers were getting worse. Anthropic has now confirmed that three separate technical failures caused the drop in quality, and it admits the problems went unnoticed internally for too long.
In a newly published technical report, the company explains why Claude's performance declined between early August and early September. According to Anthropic, three independent infrastructure issues led to weaker outputs:
- Some requests were routed to the wrong servers.
- Certain outputs contained odd characters or broken code.
- The system selected lower-quality responses more often than it should have.
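The report itself contains no code, but the third failure mode can be illustrated with a toy sketch. Assume, purely hypothetically, that a response- or token-selection step used a shortcut that only examined part of the candidate set instead of all of it (this is an illustrative assumption, not Anthropic's actual implementation):

```python
def exact_top_k(scores, k):
    """Correct selection: rank every candidate and keep the best k."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def buggy_approx_top_k(scores, k):
    """Hypothetical faulty shortcut: only every other candidate is
    inspected, so the genuinely best option can be skipped entirely."""
    items = list(scores.items())
    considered = items[::2]  # bug: half the candidates are never scored
    return [name for name, _ in
            sorted(considered, key=lambda kv: kv[1], reverse=True)[:k]]

# Toy quality scores for four candidate outputs.
scores = {"the": 0.5, "best": 0.9, "a": 0.3, "an": 0.1}

print(exact_top_k(scores, 2))        # ['best', 'the']
print(buggy_approx_top_k(scores, 2)) # ['the', 'a'] -- 'best' never considered
```

Both functions claim to return the "top k" candidates, yet the buggy variant silently drops the highest-scoring one. A defect of this general shape would explain the reported symptom: lower-quality responses being chosen more often than they should be, while most outputs still look plausible.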
Because the issues looked different depending on the platform and model, Anthropic says they were hard to pinpoint. Internal checks did not catch them right away, and it was only after numerous community reports that the team was able to identify the problems. To prevent this from happening again, Anthropic plans to introduce more sensitive tests, continuous monitoring, and stronger debugging tools.
At the same time, the company stressed that quality was never deliberately reduced, including during times of high demand or heavy server load.
Problems went unnoticed
By late August, developers on Reddit, X, and YouTube were widely sharing examples of Claude's weaker performance. Many complained about declining code quality, with some pointing to broken or contradictory outputs. Speculation circulated that Anthropic might be throttling results or had quietly changed its models.
When asked for comment, Anthropic initially confirmed the first of the issues and promised more details. The newly released report now confirms that users were right.