
The Johannesburg Regional Court has condemned lawyers for using fake legal references generated by ChatGPT.


The case involved a defamation suit between a woman and her company. The plaintiff's lawyers argued that previous judgments had already settled the question of whether a company can be sued for defamation, a claim the opposing side had challenged.

You can see where this is going.

The lawyers had used ChatGPT to find and cite those cases. In the two months after the first hearing, the attorneys involved tried to track down the cited judgments but could not: the citations had been fabricated by ChatGPT, and the references either pointed nowhere or led to cases that had nothing to do with the issue at hand.


Do some "old-fashioned reading"

Judge Arvin Chaitram criticized the lawyers' reliance on AI-generated misinformation, and their client was ordered to pay costs as a result. The lawyers were deemed "overzealous and careless," but were found not to have intentionally misled the court.

While technology helps with legal research, Chaitram advised, it should be supplemented with "good old-fashioned independent reading" to avoid similar embarrassments in the future. The judge suggested that the embarrassment of the case should be punishment enough for the lawyers responsible.

Peter LoDuca and Steven A. Schwartz know the feeling all too well. In a widely publicized US case, the attorneys used ChatGPT for legal research and filed a brief citing non-existent cases generated by the AI tool.

New York Judge Kevin Castel ruled that lawyers have a gatekeeping role to ensure the accuracy of filings, underscoring the importance of verifying the output of AI tools in the legal profession. They were each fined $5,000 for failing to meet their responsibilities.

Summary
  • The Johannesburg Regional Court condemns lawyers for using fake legal references generated by ChatGPT, resulting in a client being ordered to pay costs in a defamation case.
  • The lawyers were found to have been careless but not intentionally misleading, highlighting the need to verify AI-generated information in legal research.
  • Similar incidents have occurred in the US, where lawyers have been fined for citing non-existent cases generated by AI tools, underscoring lawyers' gatekeeping role in ensuring the accuracy of court submissions.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.