OpenAI's AI text detector, introduced in January, is already history. The company cites poor reliability as the reason - a weakness that was known all along.
In a brief update to the original announcement post, OpenAI says it has discontinued its AI text classifier "due to its low rate of accuracy." The company says it is researching more effective methods for AI text recognition and is also "committed" to developing detection mechanisms for audio and image content.
This commitment likely refers to the voluntary self-regulatory agreement that OpenAI and other AI companies signed to follow guidelines set by the White House. One of the methods mentioned there is digital watermarking. In the past, however, OpenAI co-founder Sam Altman has repeatedly questioned the feasibility of watermarking.
Failed anti-marketing?
The question is why OpenAI released the detector in the first place. Even at launch, OpenAI knew that the system detected AI text unreliably and that it misidentified human text as AI text nine percent of the time in tests. The company was open about these weaknesses and explicitly stated that the tool was not suitable for checking student essays, for example.
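A quick base-rate calculation shows why even a seemingly modest nine percent false positive rate is disqualifying in a classroom. The sketch below is purely illustrative, not OpenAI's method: the nine percent false positive rate comes from the announcement, the 26 percent true positive rate roughly matches the figure OpenAI reported at launch, and the share of AI-written essays is an assumption made up for this example.

```python
# Illustrative base-rate calculation - not OpenAI's method.
# The 9% false positive rate is from the announcement; the 26% true
# positive rate roughly matches OpenAI's launch figure; the 20% share
# of AI-written essays is an arbitrary assumption for this example.

FALSE_POSITIVE_RATE = 0.09  # human text wrongly flagged as AI
TRUE_POSITIVE_RATE = 0.26   # AI text correctly flagged
AI_SHARE = 0.20             # assumed fraction of essays written with AI

essays = 1000
ai_essays = essays * AI_SHARE
human_essays = essays - ai_essays

falsely_accused = human_essays * FALSE_POSITIVE_RATE  # innocent students flagged
correctly_flagged = ai_essays * TRUE_POSITIVE_RATE    # AI essays actually caught

# Of all flagged essays, what fraction is actually AI-written?
precision = correctly_flagged / (correctly_flagged + falsely_accused)

print(f"Falsely accused: {falsely_accused:.0f} of {human_essays:.0f} human essays")
print(f"Caught: {correctly_flagged:.0f} of {ai_essays:.0f} AI essays")
print(f"Chance a flagged essay is actually AI-written: {precision:.0%}")
```

Under these assumptions, only about 42 percent of flagged essays are actually AI-written - more than every second accusation would hit an innocent student.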
So why the release - and now the quick retraction? Perhaps OpenAI wanted to demonstrate, in an unusual way, that AI detectors - its own included - don't work. Anti-marketing against AI detectors, so to speak.
Current detectors are problematic because they suggest a level of control over AI-generated text that does not yet exist and perhaps never will. That, in turn, makes it harder to integrate the technology meaningfully - in education, for example, where institutions prefer to play detective rather than innovate. In the worst case, an inaccurate detector leads to hasty, false accusations with serious academic or professional consequences.