Anthropic has admitted in an ongoing copyright lawsuit that its own chatbot, Claude, fabricated a source that was later used as evidence in court.
According to a filing in a California district court, Claude invented "an inaccurate title and incorrect authors." The error slipped through a manual review and went undetected, along with several other citation mistakes caused by Claude, the court documents show.
The bogus citation appeared in testimony from Olivia Chen, an Anthropic employee who served as an expert witness in the case. The lawsuit centers on claims from several music publishers, including Universal Music Group, who accuse Anthropic of copyright violations involving its generative AI systems.
AI formatting error produces fake citation
Court records reveal that Anthropic's attorneys specifically asked Claude to generate a legally accurate citation for an article from The American Statistician: "Binomial Confidence Intervals for Rare Events: Importance of Defining Margin of Error Relative to Magnitude of Proportion" by Owen McGrath and Kevin Burke. Claude was supposed to create a proper legal citation based on the correct link.
Although Claude got the journal title, publication year, and link right, the final citation still included a made-up article title and false author names. Anthropic says additional wording errors introduced by Claude also crept into the footnote during formatting. The company has not released a complete list of these faulty citations.
Judge demands explanation
After lawyers for the music publishers brought the citation errors to light, Judge Susan van Keulen asked Anthropic to respond. The company called the issue an "honest citation mistake and not a fabrication of authority," noting that the underlying article exists and supports Chen's statement. Anthropic denies any intentional deception.
Still, the company's attorney was required to formally apologize for the errors Claude generated.