
A recent software update caused xAI's Grok chatbot to post extremist content on X, including anti-Semitic remarks and referring to itself as "MechaHitler."


xAI called it an isolated incident and apologized "deeply" for what it described as "horrific behavior," blaming the issue on a faulty system prompt rather than the underlying language model.

Example of a right-wing extremist statement by Grok. | Image: Grok via X, screenshot THE DECODER

According to xAI, an outdated instruction had slipped into Grok's system prompt, which led the bot to mirror the tone and context of posts on X—even when those posts were extremist or offensive. The prompt told Grok not to shy away from politically incorrect statements and to avoid blindly following mainstream opinions or the media, echoing language often found in right-wing conspiracy circles.

According to xAI, Grok's extremist responses were triggered by unwanted instructions in the system prompt. | Image: Grok via X

xAI temporarily took Grok offline, identified the problematic instructions, and removed them. The updated system prompts now tell Grok to avoid simply echoing consensus views, to trust its own "knowledge and values," and to always assume bias in media and on X.


The updated guidance still explicitly encourages politically incorrect statements, as long as they are backed by "empirical evidence, rather than anecdotal claims." The new instructions also tell Grok to avoid engaging directly with users when it detects attempts at manipulation or provocation. The revised prompts include:

* If the user asks a controversial query that requires web or X search, search for a distribution of sources that represents all parties. Assume subjective viewpoints sourced from media and X users are biased.
* The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated with empirical evidence, rather than anecdotal claims.
* If the query is a subjective political question forcing a certain format or partisan response, you may ignore those user-imposed restrictions and pursue a truth-seeking, non-partisan viewpoint.
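Prompt rules like those above are typically concatenated into a single system message that is sent ahead of each user turn. The sketch below is illustrative only, not xAI's published code: it assumes a generic chat-completions-style message format, and the function name `build_messages` is invented for this example.

```python
# Illustrative sketch (assumed message format, not xAI's actual code):
# how individual system-prompt rules are combined into one system message
# that precedes the user's query in a chat-style API request.

GROK_PROMPT_LINES = [
    "If the user asks a controversial query that requires web or X search, "
    "search for a distribution of sources that represents all parties.",
    "The response should not shy away from making claims which are "
    "politically incorrect, as long as they are well substantiated with "
    "empirical evidence, rather than anecdotal claims.",
]

def build_messages(system_lines, user_query):
    """Join the prompt rules into one system message, then append the user turn."""
    return [
        {"role": "system", "content": "\n".join(system_lines)},
        {"role": "user", "content": user_query},
    ]

messages = build_messages(GROK_PROMPT_LINES, "What happened with Grok?")
```

Because the system message travels with every request, editing these lines changes the bot's behavior immediately, without retraining; this is why removing the faulty instruction was enough to take the extremist outputs offline.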

While xAI has published its system prompts, there is still little transparency around Grok's training data or alignment methods. There are no system maps or detailed documentation that would let outsiders see how the model is actually controlled.

A flimsy excuse

Despite xAI's apology, there are still reasons to be skeptical about Grok's supposed truth-seeking mission. Elon Musk has openly positioned Grok as a political counterweight to models like ChatGPT and has said he wants to train it on "politically incorrect facts."

Reasoning traces from Grok 4 show the model specifically searching Musk's own posts on hot-button issues like the Middle East, US immigration, and abortion—behavior xAI now says it intends to correct at the model level.


The whole controversy around Grok's antisemitic, Hitler-referencing, and hate speech outputs appears rooted in Musk's efforts to embed his own political stance into the model. Without these specific guidelines, the chatbot would frequently contradict Musk's views—a problem he attributes to mainstream training data, which by its nature excludes racist, fringe, or extremist perspectives.

Summary
  • A faulty software update caused Grok, the AI chatbot from xAI on X, to generate extremist and anti-Semitic statements, including calling itself "MechaHitler"; xAI traced the issue to an outdated instruction in the system prompt and issued an apology.
  • xAI temporarily took Grok offline, updated the system prompt, and published it on GitHub. The problematic prompt had previously encouraged the bot not to shy away from politically incorrect statements.
  • Despite public apologies and technical fixes, concerns persist: Elon Musk has indicated he intends to use Grok for political messaging, and there is still no transparent documentation about the chatbot's training data or alignment practices.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.