AI and society

Judge tells lawyers who failed at ChatGPT to do some "old-fashioned reading"

Matthias Bastian

The Johannesburg Regional Court has condemned lawyers for using fake legal references generated by ChatGPT.

The case involved a defamation suit between a woman and her company. The plaintiff's lawyers argued that the question of whether a company could be sued for defamation had already been settled in earlier cases, a claim the opposing side had challenged.

You can see where this is going.

The lawyers had used ChatGPT to find and cite those cases. In the two months after the first hearing, the attorneys tried to track down the cited judgments but could not: the citations had been fabricated by ChatGPT, and where they resembled real cases at all, those cases had nothing to do with the issue at hand.

Do some "old-fashioned reading"

Judge Arvin Chaitram criticized the lawyers' reliance on AI-generated misinformation, which resulted in their client being ordered to pay costs. The lawyers were deemed "overzealous and careless," but were found not to have intentionally misled the court.

While technology can help with legal research, Chaitram advised that it should be supplemented with "good old-fashioned independent reading" to avoid similar embarrassments in the future. In his view, the embarrassment of the case was punishment enough for the lawyers responsible.

Peter LoDuca and Steven A. Schwartz know the feeling all too well. In a widely publicized case in the U.S., the attorneys used ChatGPT for legal research and filed a brief citing non-existent cases generated by the AI tool.

New York Judge Kevin Castel ruled that lawyers have a gatekeeping role to ensure the accuracy of filings, underscoring the importance of verifying the output of AI tools in the legal profession. They were each fined $5,000 for failing to meet their responsibilities.
