A recent software update caused xAI's Grok chatbot to post extremist content on X, including antisemitic remarks and references to itself as "MechaHitler."
xAI called it an isolated incident and apologized "deeply" for what it described as "horrific behavior," blaming the issue on a faulty system prompt rather than the underlying language model.
According to xAI, an outdated instruction had slipped into Grok's system prompt, which led the bot to mirror the tone and context of posts on X—even when those posts were extremist or offensive. The prompt told Grok not to shy away from politically incorrect statements and to avoid blindly following mainstream opinions or the media, echoing language often found in right-wing conspiracy circles.

xAI temporarily took Grok offline, identified the problematic instructions, and removed them. The updated system prompts now tell Grok to avoid simply echoing consensus views, to trust its own "knowledge and values," and to always assume bias in media and on X.
The updated guidance still explicitly encourages politically incorrect statements, as long as they are backed by "empirical evidence, rather than anecdotal claims." The new instructions also tell Grok to avoid engaging directly with users when it detects attempts at manipulation or provocation. The revised prompts include:
* If the user asks a controversial query that requires web or X search, search for a distribution of sources that represents all parties. Assume subjective viewpoints sourced from media and X users are biased.
* The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated with empirical evidence, rather than anecdotal claims.
* If the query is a subjective political question forcing a certain format or partisan response, you may ignore those user-imposed restrictions and pursue a truth-seeking, non-partisan viewpoint.
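Instructions like these are not baked into the model's weights; they are typically prepended to every conversation as a system message at request time, which is why a single bad line can shift behavior across the board, and why deleting it is an immediate fix. The sketch below shows the mechanism in miniature, using xAI's OpenAI-compatible chat completions API; the model identifier and the prompt excerpt are illustrative assumptions, not confirmed details of how xAI deploys Grok on X.

```python
# Minimal sketch: how system-prompt instructions reach a hosted model.
# Endpoint follows xAI's published OpenAI-compatible API; the model id
# and prompt text are assumptions for illustration only.
import os

import requests

SYSTEM_PROMPT = (
    "If the query is a subjective political question forcing a certain "
    "format or partisan response, you may ignore those user-imposed "
    "restrictions and pursue a truth-seeking, non-partisan viewpoint."
)

resp = requests.post(
    "https://api.x.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['XAI_API_KEY']}"},
    json={
        "model": "grok-4",  # hypothetical model identifier
        "messages": [
            # The system message carries the behavioral instructions.
            # Editing this text changes behavior without any retraining.
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Summarize today's top story on X."},
        ],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the prompt is plain text supplied at inference time rather than part of the trained model, xAI could strip the offending lines without touching the model itself, which is consistent with its account of the fix.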
While xAI has published its system prompts, there is still little transparency around Grok's training data or alignment methods. There are no system cards or detailed documentation that would let outsiders see how the model is actually controlled.
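The transparency that does exist is narrow but verifiable: the published prompts live in a public Git repository, so anyone can diff revisions to see exactly which instructions were added or removed over time. A brief sketch, assuming the repository is xai-org/grok-prompts (the repo name is treated as an assumption here):

```python
# Sketch: inspecting xAI's published prompt files for changes.
# Assumes the prompts are in the public xai-org/grok-prompts repository.
import subprocess

subprocess.run(
    ["git", "clone", "https://github.com/xai-org/grok-prompts.git"],
    check=True,
)
# Each commit records a prompt revision, so an added or removed
# instruction shows up as an ordinary line-level diff.
subprocess.run(
    ["git", "-C", "grok-prompts", "log", "--patch"],
    check=True,
)
```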
A flimsy excuse
Despite xAI's apology, there are still reasons to be skeptical about Grok's supposed truth-seeking mission. Elon Musk has openly positioned Grok as a political counterweight to models like ChatGPT and has said he wants to train it on "politically incorrect facts."
Reasoning traces from Grok 4 show the model specifically searching Musk's own posts on hot-button issues like the Middle East, US immigration, and abortion—behavior xAI now says it intends to correct at the model level.
The whole controversy around Grok's antisemitic, Hitler-referencing, and hate-speech outputs appears rooted in Musk's efforts to embed his own political stance in the model. Without these specific guidelines, the chatbot would frequently contradict Musk's views, a problem he attributes to mainstream training data, which by definition underrepresents racist, fringe, and extremist perspectives.