Google documents its position on AI content in search: it will not be universally categorized as spam and downgraded in search results.
In April 2022, Google’s webmaster whisperer John Mueller made the web take notice: Content generated with language models like GPT-3 would be considered spam by Google and, like other automatically generated content, would be downgraded in search results, Mueller said at the time.
A lot has happened since then, and Google has now officially documented its position: AI content in search will not be universally categorized as spam and downgraded in search results. The move was all but inevitable: how credible would it be if Google, with its just-announced “AI features,” pushed itself to #1 in search results with AI content while kicking AI content generated by others out of the rankings?
AI content doesn’t violate Google policies
In a newly released policy on AI-generated content, Google confirms that text automatically generated by AI and other tools won’t be considered spam.
Whether automated or handwritten, each piece of content is evaluated according to Google’s E-E-A-T criteria (experience, expertise, authoritativeness, trustworthiness). Text automation has long been a part of publishing, and automatically generated content can also achieve high search rankings, according to Google.
“AI has the ability to power new levels of expression and creativity, and to serve as a critical tool to help people create great content for the web. This is in line with how we’ve always thought about empowering people with new technologies,” Google writes, which also fits with its new AI strategy of heavily integrating AI tools into search and offering more generative AI tools.
Google vs. AI Spam
A flood of bad and mediocre AI content trying to push its way to the top of search results is arguably the biggest threat to Google’s business model right now. Despite all the criticism, Google’s search results are still widely considered more relevant than those of its competitors. Masses of generic AI content could erode this competitive advantage.
When asked how Google plans to address this risk, the company points to existing methods such as SpamBrain. Bad content is not a new challenge, Google says. Existing systems can evaluate how helpful a piece of content is and whether a news story is original. These systems are improving all the time, Google says.
Google recommends that publishers, especially news publishers, specify authors for their content and be transparent about the use of AI tools where readers would “reasonably expect” it. Naming AI as an author, however, is “probably not the best way” to follow this recommendation.