
Anthropic has admitted in an ongoing copyright lawsuit that its own chatbot, Claude, fabricated a source that was later used as evidence in court.

According to a filing in a California district court, Claude invented "an inaccurate title and incorrect authors." The error slipped through a manual review and went undetected—along with several other citation mistakes caused by Claude, the court documents show.

The bogus citation appeared in testimony from Olivia Chen, an Anthropic employee who served as an expert witness in the case. The lawsuit centers on claims from several music publishers, including Universal Music Group, who accuse Anthropic of copyright violations involving its generative AI systems.

AI formatting error produces fake citation

Court records reveal that Anthropic's attorneys specifically asked Claude to generate a legally accurate citation for an article from The American Statistician: "Binomial Confidence Intervals for Rare Events: Importance of Defining Margin of Error Relative to Magnitude of Proportion" by Owen McGrath and Kevin Burke. Claude was supposed to create a proper legal citation based on the correct link.

Although Claude got the publication title, year, and link right, the final citation still included a made-up title and false author names. Anthropic says more wording mistakes from Claude also crept into the footnote during the formatting process. The company has not released a complete list of these faulty citations.

Judge demands explanation

After lawyers for the music publishers brought the citation errors to light, Judge Susan van Keulen asked Anthropic to respond. The company called the issue an "honest citation mistake and not a fabrication of authority," noting that the underlying article exists and supports Chen's statement. Anthropic denies any intentional deception.

Still, the company's attorney was required to formally apologize for the errors generated by Claude.

Summary
  • In a copyright lawsuit involving several music publishers, Anthropic admitted that its in-house chatbot, Claude, generated a fictitious source that was used as evidence in a witness statement and went undetected during the review process.
  • Claude was tasked with creating a legally correct citation for a specialized article. Despite having the correct link, Claude provided incorrect authors and an inaccurate article title. Similar errors occurred with other citations.
  • After the incorrect citations were discovered, the judge demanded a statement. Anthropic emphasized that the error was unintentional and officially apologized for the citation errors caused by Claude.
Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.