Those who outsource their text production to tools like GPT-3 can expect to be downgraded in Google's search results, says Google Search Advocate John Mueller.
The AI text generator GPT-3 needs only two or three human-written sentences as a prompt to pick up a topic or an argument. It then continues writing the text on its own.
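To illustrate this prompt-and-continue workflow, here is a minimal sketch using the OpenAI API with its pre-1.0 Python client. The prompt text and parameters are illustrative assumptions, not taken from the article.

```python
import openai  # assumes the openai Python package (pre-1.0) and a valid API key

openai.api_key = "sk-..."  # placeholder key

# Two or three human-written sentences set the topic; the model continues from there.
prompt = (
    "Self-improvement starts with small habits. "
    "Most people overestimate what they can change in a week "
    "and underestimate what they can change in a year."
)

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base model
    prompt=prompt,
    max_tokens=200,     # length of the generated continuation
    temperature=0.7,    # some variation, but stay on topic
)

print(response.choices[0].text)
```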
For familiar topics or summaries in particular, GPT-3 generates text that is indistinguishable from human writing, or distinguishable only in exceptional cases.
A fake blog about self-help and self-improvement, filled entirely with GPT-3-generated texts, attracted thousands of readers. And that was back in 2020. In term-paper experiments, essays written by GPT-3 received better grades than those written by students.
Since then, OpenAI has continuously improved GPT-3, for example with the option to fine-tune the language AI for specific topics. GPT-3 and future, even more powerful AI tools such as GPT-4 could therefore massively change the online content business.
But the main gatekeeper, Google, objects.
AI-generated content violates Google's content guidelines
Google Search Advocate John Mueller explained Google's stance on AI-generated content during a recent Q&A session. Google classifies AI-generated content as spam, just as it does other automatically generated content.
There are many ways to generate text automatically, he said. AI methods may be "a little bit better" than earlier tools, but for Google "it’s still automatically generated content, and that means for us it’s still against the Webmaster Guidelines. So we would consider that to be spam," Mueller said.
Such a downgrade by Google would render AI-generated content worthless to many online businesses and organizations. Google is the most important gatekeeper for Internet content in the Western world, and a penalty from the search company often spells the end of a content offering.
Can Google detect GPT-3 and similar tools?
The spam penalty is probably the harshest sanction Google can impose on a content offering. But can Google enforce it at all? That would require the company to reliably detect AI-generated content.
"I can’t claim that," says John Mueller when asked by SEO professionals whether Google identifies AI-generated texts. He expects a "cat-and-mouse game" between Google's webspam team and those people who use AI content tools.
"Sometimes people will do something and they get away with it, and then the webspam team catches up and solves that issue on a broader scale," Mueller said.
AI could evolve into a tool that helps authors write more efficiently, much as translation or proofreading software already does, Mueller added.
Currently, however, the spam team does not distinguish between the ways AI tools are used, he said: any kind of deployment is treated as spam if it is detected. "But I don’t know what the future brings there," Mueller said.
Mueller does not say how Google might detect AI-generated content, or AI-written fragments embedded in human-written text. So far, no reliable detectors are known.
However, it cannot be ruled out that tools such as GPT-3 may leave subtle patterns in the text that are imperceptible to humans, which could be detected by adversarial AI algorithms.
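One approach researchers have explored is to score text with a language model: machine-generated text often looks unusually predictable (low perplexity) to such a model, because the generator tends to pick high-probability tokens. The sketch below illustrates that idea only; it is not Google's method. It uses GPT-2 from the Hugging Face transformers library as a stand-in scorer, and the threshold is a hypothetical placeholder.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# GPT-2 serves as the scoring model in this illustration.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2 (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

def looks_machine_generated(text: str, threshold: float = 25.0) -> bool:
    # The threshold is a made-up placeholder; in practice it would have to be
    # calibrated on known human-written and machine-written samples.
    return perplexity(text) < threshold

print(looks_machine_generated("The quick brown fox jumps over the lazy dog."))
```

Whether such statistical signals survive light human editing of the generated text is exactly the kind of cat-and-mouse question Mueller alludes to.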
Mueller's statement is also ambiguous because many publishers already use automated text generation, for example for stock market or sports news, and these texts have apparently not been penalized so far.