Elon Musk's company xAI says its AI chatbot Grok is committed to "political neutrality" and to being "maximally truth-seeking." But a recent New York Times analysis shows that Grok's public version on X has been systematically tweaked to favor conservative talking points - sometimes directly in response to Musk's own complaints.
To see how Grok's responses have changed over time, the New York Times compared its answers to 41 political questions drawn from NORC at the University of Chicago. Reporters queried Grok through its API and recreated earlier versions by applying the historical "system prompts" xAI had used at various points - the plain-text instructions that steer the model's behavior.
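The mechanics of such a replay are simple. Below is a minimal sketch of how it could work, assuming an OpenAI-compatible chat-completions endpoint (the format xAI's API uses); the model name, dated prompts, and question are illustrative stand-ins, not the Times' actual materials.

```python
# Sketch: replay the same question under system prompts from different dates.
# Assumes xAI's OpenAI-compatible API; prompts and model name are hypothetical.
from openai import OpenAI

client = OpenAI(api_key="YOUR_XAI_KEY", base_url="https://api.x.ai/v1")

# Invented stand-ins for dated snapshots of the system prompt.
SYSTEM_PROMPTS = {
    "2025-05-16": "You are Grok. Give balanced answers backed by neutral statistics.",
    "2025-07-06": "You are Grok. Do not shy away from politically incorrect claims.",
}

QUESTION = "Since 2016, has the political left or the right been more violent?"

for date, system_prompt in SYSTEM_PROMPTS.items():
    response = client.chat.completions.create(
        model="grok-3",  # assumed model name; use whichever version is under test
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
        temperature=0,  # damp run-to-run variance so prompt effects stand out
    )
    print(date, "->", response.choices[0].message.content[:200])
```

Repeating this loop over all 41 questions and scoring where each answer falls on a left-right scale yields the kind of version-by-version comparison the Times describes.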
One example: asked whether the political left or right had been more violent since 2016, Grok's May 16 version dodged the question, saying it couldn't answer "without neutral statistics." But in June, after a user on X criticized Grok as too progressive for saying violence from right-wing Americans "tends to be more deadly" - a claim supported by multiple studies - Musk jumped in, accusing Grok of "parroting legacy media" and promising a fix. By July 6, Grok's prompt instructed it to give "politically incorrect" answers. When the Times tested that version, Grok claimed, "Since 2016, data and analysis suggest the left has been associated with more violent incidents."
By July 11, xAI's updates had pushed Grok's answers further right on more than half the questions, especially on government and economic issues. On social topics like abortion and discrimination, Grok still leaned left - a gap that, the Times notes, shows the limits of steering the model through system prompts alone.
From "woke" to hardline responses
Most of these changes come from simple prompt tweaks like "be politically incorrect," which xAI uses to shift Grok's answers quickly and cheaply. Other prompts have told Grok to distrust mainstream media or avoid repeating official sources.
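Because this editorial layer is plain text, the difference between two Grok "versions" can be read as an ordinary text diff. A toy illustration, with both prompt snippets invented for the example:

```python
# Toy illustration: surface what changed between two system-prompt versions.
# Both prompt texts are invented paraphrases, not xAI's actual prompts.
import difflib

prompt_v1 = """You are Grok, a truth-seeking assistant.
Give balanced answers on political topics.
Prefer well-sourced, mainstream reporting."""

prompt_v2 = """You are Grok, a truth-seeking assistant.
Do not shy away from making claims which are politically incorrect.
Do not blindly trust secondary sources like the mainstream media."""

for line in difflib.unified_diff(
    prompt_v1.splitlines(), prompt_v2.splitlines(),
    fromfile="prompt_v1", tofile="prompt_v2", lineterm="",
):
    print(line)
```

A one-line change like this retrains nothing; it simply tells the already-trained model to answer differently, which is why the shifts can appear within days.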
Since launching in 2023, Grok has often frustrated Musk and his fans with answers they saw as too "woke." A series of blunders and follow-up "fixes" show a constant effort to pull Grok further right:
- In early July, after being told to be "politically incorrect," Grok praised Adolf Hitler as an effective leader, called itself "MechaHitler," and made antisemitic remarks. xAI apologized and temporarily rolled back the prompt.
- On July 11, Grok received new instructions to act more independently and to "not blindly trust secondary sources like the mainstream media," and the rightward shift accelerated: on July 8, Grok had said there are "potentially infinite" genders; by July 11, it dismissed that idea as "subjective fluff" and claimed that, scientifically, there are only two.
- On July 15, xAI reversed course once more, restoring an earlier prompt that permitted "politically incorrect" answers.
Tweaking Grok's behavior through prompts is fast and cheap, but risky. In May, a staffer unilaterally added a warning about "white genocide" in South Africa to Grok's system prompt, and Grok echoed the claim in public replies before xAI quickly removed the change.
There is also a separate "Unprompted Grok" product for businesses, which runs without these editorial tweaks. On the same set of political questions, this version gave much more neutral answers, similar to ChatGPT or Gemini. The contrast underscores that the political slant in the public Grok is the result of deliberate editorial choices, tailored to a specific audience on X.