
Language models generate text based on statistical probabilities. In the case of Microsoft's Copilot, this led to serious false accusations against a veteran court reporter.

German journalist Martin Bernklau typed his name and location into Microsoft's Copilot to see how his culture blog articles would be picked up by the chatbot, according to German public broadcaster SWR.

The answers shocked Bernklau. Copilot falsely claimed Bernklau had been charged with and convicted of child abuse and exploiting dependents. It also claimed that he had been involved in a dramatic escape from a psychiatric hospital and had exploited grieving women as an unethical mortician.

Copilot even went so far as to claim it was "unfortunate" that someone with such a criminal past had a family and, according to SWR, provided Bernklau's full address, phone number, and a route planner.

I asked Copilot today who Martin Bernklau from Germany is, and the system answered, based on the SWR report, that "he was involved in a controversy where an AI chat system falsely labeled him as a convicted child molester, an escapee from a psychiatric facility, and a fraudster." Perplexity.ai generates a similar response based on the SWR article, explicitly naming Microsoft Copilot as the AI system.

Image: Screenshot via Microsoft Copilot

Oddly, Copilot cited a number of unrelated and very strange sources, including YouTube videos of a Hitler museum opening, the Nuremberg trials in 1945, and former German national team player Per Mertesacker singing the national anthem in 2006. Only the fourth linked video actually comes from Martin Bernklau.

Bernklau's own reporting shaped his profile in the LLM

Bernklau believes the false claims may stem from his decades of court reporting in Tübingen on abuse, violence, and fraud cases. The AI seems to have combined this online information and mistakenly cast the journalist as a perpetrator.

Microsoft attempted to remove the false entries but only succeeded temporarily. They reappeared after a few days, SWR reports. The company's terms of service disclaim liability for generated responses.

The public prosecutor's office in Tübingen, Germany, declined to press charges, saying that no crime had been committed because the author of the accusations wasn't a real person.

Bernklau has contacted a lawyer and considers the chatbot's claims defamatory and a violation of his privacy.

LLMs are unreliable search and research systems

This incident highlights the unreliability of large language models as search and research tools. These systems lack any understanding of truth and falsehood but respond as if they had it. Philosopher Harry Frankfurt would classify this as spreading "soft bullshit": statements made with indifference to truth.

Similar issues have occurred with Google's AI Overviews, OpenAI's SearchGPT, and Elon Musk's Grok, which is the worst offender, probably by design. I have had cases where I was looking up information about people on Perplexity, and it mixed up the biographies of different people with the same name.

The real problem is not the errors that get noticed publicly, but the undetected mistakes in seemingly logical and convincing answers that even cite sources. As the examples above show, the sources themselves may be bullshit, but no one notices because no one checks them.

Summary
  • Microsoft Copilot falsely accused court reporter Martin Bernklau of having been convicted of child abuse and of exploiting dependents, of escaping from a psychiatric hospital, and of exploiting grieving women as an unscrupulous mortician.
  • Bernklau suspects that the false entries were generated because he has been covering trials at the district court in Tübingen for decades, and the AI combined this information. Microsoft was only able to temporarily delete the entries, and prosecutors declined to press criminal charges.
  • The case shows that statistical word prediction systems like Copilot are unreliable search and research tools because they have no understanding of true and false, but answer as if they did. Particularly problematic are errors that go unnoticed by questioners because they simply believe the answers without checking the sources.