
Update, June 23, 2023:

The verdict is in: the attorneys involved in the ChatGPT debacle, Peter LoDuca and Steven A. Schwartz, and their firm, Levidow, Levidow & Oberman P.C., abandoned their responsibilities by first submitting the fabricated cases and then standing by them when questioned, according to New York Judge Kevin Castel.

Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.

From the court ruling

The punishment is rather mild: the two lawyers involved must pay a $5,000 fine to the court, inform their client of the misstep (who likely already knows, given the media frenzy), and notify the real judges whom ChatGPT falsely cited in the fake cases. The court itself refrains from demanding an apology "because a compelled apology is not a sincere apology".

Original article from May 28, 2023:


Lawyer who used ChatGPT to draft a court filing now faces sanctions himself

Since the advent of large language models, the media and experts have been sounding the alarm: the content these systems generate may be wrong. Not everyone has gotten the message.

Using ChatGPT, a New York attorney cited cases that don't exist, not knowing that the tool was making things up.

The case in question involves a man suing an airline for alleged personal injury because he was struck in the knee by a service cart during a flight.

The airline filed a motion to dismiss the case. To argue their client's position, the plaintiff's legal team turned to ChatGPT.

The brief filed by the legal team in opposition cites non-existent legal cases made up by ChatGPT, complete with lengthy and official-looking citations such as "Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019)."


The document dates from early March, so the made-up parts were generated using GPT-3.5, which is even less reliable than GPT-4 in terms of factual accuracy.

Bogus judicial decisions with bogus quotes and bogus internal citations

The court requested copies of the cases, and the legal team again asked ChatGPT for details. ChatGPT readily obliged, inventing numerous details about the fictitious cases, which the legal team attached to its filing as screenshots - some of them showing the ChatGPT interface on a smartphone (!).

The airline's lawyers continued to dispute the authenticity of the cases, prompting the judge to ask the plaintiff's legal team to comment again.

The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases. [...] Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.

Order to Show Cause

In a subsequent affidavit, Steven A. Schwartz, the attorney responsible for the research, admits that this was his first time using ChatGPT for legal research and that he was unaware the system could generate false content. When asked, ChatGPT had "assured" him that its sources were accurate.


Lawyer regrets using ChatGPT

Schwartz has practiced law for more than 30 years. In a statement, he apologized for using the chatbot, saying he was unaware that the information it generated could be false. In the future, he said, he would not use AI as a legal research tool without verifying the information it produces.

But OpenAI's chatbot made it easy for Schwartz to believe the fabrications: screenshots show conversations between Schwartz and ChatGPT in which the lawyer asks for the sources of the cited cases and the system claims to have "double-checked" them in legal databases such as LexisNexis and Westlaw. It was nevertheless Schwartz's responsibility not to rely on this information without verification.

The lawyer and his colleagues will have to explain in court on June 8 why they should not be disciplined. The ChatGPT-assisted search for precedents may itself become a precedent for why ChatGPT should not be used for legal research, or at least not without verification - and a warning to the industry.

Summary
  • A law firm used ChatGPT to help draft a court filing. In it, ChatGPT cited fictitious reference cases.
  • ChatGPT also vouched for the authenticity of the data, which allegedly came from well-known legal databases.
  • The lawyer responsible for the research claims to have used ChatGPT for the first time. He said he was not aware that ChatGPT could fabricate information.
  • The two lawyers involved must pay $5,000 in fines to the court, inform their client, and notify the real judges cited by ChatGPT in the fake cases.