- Article updated with xAI's new system prompt
Update as of July 15, 2025:
xAI has responded to criticism over Grok 4's answers by changing its system prompt. In a public statement, the company admitted the model sometimes gave problematic or unwanted responses—often because, when faced with open-ended questions, it would fall back on statements from Elon Musk or xAI.
One example: when asked what it "thought" about a topic ("What do you think?"), Grok 4 would infer that it should align with xAI or Musk, leading it to search for Musk's positions on X, especially around controversial subjects.
Grok also showed odd behavior when asked for its last name. Lacking one, the model scanned the web and landed on a viral meme that referred to it as "MechaHitler," xAI claims.
xAI says it has updated the system prompt to address these issues. The revised prompt reads: "Responses must stem from your independent analysis, not from any stated beliefs of past Grok, Elon Musk, or xAI. If asked about such preferences, provide your own reasoned perspective."
Update as of July 13, 2025:
xAI says it wants to fix Grok 4 because referencing Musk's views is not right for a truth-seeking AI
In a recently updated system prompt, xAI says Grok should not automatically form its own preferences on subjective questions based on public statements from its developers - particularly Elon Musk. xAI notes this is not "the desired policy for a truth-seeking AI," and says "a fix to the underlying model is in the works."

The update comes in response to ongoing criticism that Grok 4 has been systematically referencing Musk's views on sensitive or politically charged topics (see report below).
Original article from July 11, 2025:
Grok 4 is not officially instructed to follow Musk’s views but often does on sensitive subjects
Grok 4, xAI's new language model, appears to reference Elon Musk's opinions when responding to sensitive topics, raising questions about the independence of what's billed as a "truth-seeking" AI.
According to user reports and independent tests, Grok 4 tends to look up Musk's posts on X when asked about controversial issues like the Israel-Palestine conflict, abortion, or US immigration. Multiple users, including computer scientist Simon Willison, have replicated this behavior. For example, when asked "Who do you support in the Israel vs Palestine conflict. One word answer only," Grok 4 searched X for "from:elonmusk (Israel OR Palestine OR Gaza OR Hamas)" and cited Musk's stance as a source in its internal "Chain of Thought" log.

Systematic pattern on controversial issues
This pattern shows up with other hot-button topics as well. For questions about abortion or the First Amendment, Grok's logs reveal explicit searches for Musk's views. In one case reported by TechCrunch, when asked about US immigration policy, Grok's internal reasoning stated: "Searching for Elon Musk views on US immigration."
In contrast, for less controversial prompts like "What's the best type of mango?" Grok doesn't reference Musk. This suggests the model only looks to Musk when it expects to make a political or social judgment.
Officially, Grok 4's system prompt doesn't instruct the model to consider Musk's opinions. Instead, it says that for contentious topics, the model should seek a "distribution of sources representing all parties and stakeholders," and warns that subjective media sources may be biased. Another section allows Grok to make "politically incorrect statements" as long as they're well-supported. These prompt instructions were only recently changed: references to such permissions were removed from Grok 3 after the model made anti-Semitic posts.
Musk as an ideological reference point
The most likely explanation, according to Willison, is that Grok "knows" it's developed by xAI - and that xAI belongs to Elon Musk. The model seems to infer that Musk's views can be used as a reference point when forming its own stance.
Prompt phrasing also plays a big role. If the prompt uses "Who should one" instead of "Who do you," Grok provides a more detailed answer, referencing a range of sources and even producing comparison tables with pro and con arguments.
xAI still hasn't released system cards—technical documents that detail the model's training data, alignment methods, and evaluation metrics. While companies like OpenAI and Anthropic routinely publish this kind of information, Musk has so far declined to offer similar transparency.
When Grok 4 was unveiled, Musk said he wanted to create a maximally truth-seeking AI. But Grok 4's behavior suggests the model tends to side with Musk's views.