German Wikipedia bans AI-generated content while other language editions take a softer approach
Key Points
- The German-language Wikipedia community has voted to ban AI-generated content from articles and discussion pages, with repeat offenders facing permanent blocks — exceptions remain for AI-assisted translations, spelling corrections, and research.
- Critics warn the ban is largely unenforceable due to unreliable AI detection methods, while some see it as leaving Wikipedia "stuck in the past" as AI use in education continues to grow.
- The decision contrasts with the English Wikipedia's targeted "AI Cleanup" approach and the Wikimedia Foundation's strategy of using AI as a tool for authors, while Wikipedia faces mounting pressure from AI systems that serve its content without sending users back to the site.
The German-language Wikipedia community has passed a sweeping ban on AI-generated content. The move puts it at odds with other Wikipedia language editions and the Wikimedia Foundation, which favor a less restrictive approach.
Following a community vote that ended on February 15, 2026, posting texts created or edited with large language models is now explicitly prohibited under a new policy. The ban applies to both articles and discussion pages.
Repeat offenders can be permanently blocked. The proposal passed with 208 votes in favor and 108 against.
Translations and research get a pass
AI-powered tools that make edits without human review of each individual change are also banned. Directly copying AI-generated phrasing suggestions is prohibited too, unless the changes are limited to spelling and grammar corrections.
Recognizably AI-generated external texts can't be used as sources for Wikipedia articles either. Even some supporters of the ban consider extending it to discussion pages excessive.
The policy does carve out several exceptions: AI-assisted translations from other language editions are allowed as long as they're fully checked for accuracy. Using AI for spelling and grammar corrections also remains permitted. Pure research with AI assistance is fine too, but any material found this way must be evaluated manually.
AI-generated images may only be used in exceptional cases after prior discussion - for example, to illustrate articles about AI image generators - but never as substitutes for missing real photos, such as portraits.
The new policy states that blocks should only be imposed when evidence is very clear. In cases of doubt, no blocks should be issued.
Critics point to unreliable detection methods
Supporters argue that Wikipedia can distinguish itself as a project built on human-written content and send a signal against "AI garbage on the internet." Hallucinations - facts invented by AI - remain a fundamental, ongoing problem, as even companies like OpenAI acknowledge.
Critics, however, point to the lack of detection criteria with sufficient evidentiary value and warn of endless debates over suspected violations. One comment during the vote put it bluntly: "Well-intentioned, but without detection methods and procedures, it's not enforceable."
Several opposing voters also argued that such a far-reaching decision should require a two-thirds majority. Others see the ban as leaving Wikipedia "stuck in the past," given that AI use in education keeps growing.
The Wikimedia Foundation's AI strategy for 2025-2028 takes a different path under its "Humans First" motto: it also wants to prevent AI slop in Wikipedia but explicitly sees AI as a tool for authors - helping with research, quality assurance, error detection, and fighting vandalism.
The English-language Wikipedia community has opted for targeted removal of bad AI content through its "AI Cleanup" project and a speedy deletion criterion rather than a blanket ban.
The project has over 200 registered volunteers who focus on tracking down and removing AI-generated articles. The speedy deletion criterion G15 allows articles clearly identified as AI-generated to be deleted immediately without lengthy discussion, though it requires solid evidence of AI generation.
Wikipedia faces growing pressure from AI systems and Grokipedia
The policy arrives at a time when Wikipedia is under increasing pressure. The Wikimedia Foundation warned in a blog post that AI systems serve up Wikipedia content but send fewer and fewer users to the actual website.
The foundation is calling on AI developers to properly attribute content and license it fairly through Wikimedia Enterprise. Without such financial contributions, it argues, the open knowledge model is at risk.
At the same time, Elon Musk has launched Grokipedia, an AI-generated online encyclopedia that aims to replace Wikipedia. The platform is entirely operated by Musk's AI company xAI and launched with roughly 800,000 machine-generated articles.
Grokipedia shows notable biases, particularly on sensitive political topics, often delivering slanted "AI slop" rather than neutral encyclopedia entries.