Grok, the AI chatbot from Elon Musk's company xAI, is drawing attention for giving bizarre, unsolicited answers about "white genocide" in South Africa—even when the questions have nothing to do with the topic.


Users on X report that Grok regularly responds to unrelated prompts with lengthy explanations about so-called "white genocide" in South Africa—echoing a far-right conspiracy theory. In one case, someone asked Grok about a picture of a dog and got a long response about alleged racially motivated killings of white South African farmers. In another, a simple request for information on a baseball player's performance prompted Grok to launch into a discourse on "white genocide."

Grok veers off-topic, bringing up alleged "white genocide" in South Africa in response to unrelated questions. | Image: Toler via X

According to CNBC, users repeatedly pointed out to Grok that its answers were off-topic—even in response to prompts about cartoons or landscapes. The chatbot would apologize, then immediately return to the subject of "white genocide."

In one exchange, Grok acknowledged that its off-topic replies were "not ideal", only to bring up South Africa again. In several cases, it even claimed it had been instructed to treat "white genocide" as real and to label the song "Kill the Boer" as racially motivated.

Example of a Grok response referencing "white genocide" in response to an unrelated prompt. | Image: via X

In other replies, Grok described the topic as "complex" or "controversial," and cited sources that anti-racism researchers consider unreliable. Elon Musk has repeatedly stoked debate around claims of "white genocide", amplifying the same narrative.

The behavior was first documented on May 14 by Aric Toler, an investigative reporter at the New York Times. Both Gizmodo and CNBC independently confirmed the pattern. Since then, X appears to have systematically deleted Grok's relevant responses. Meanwhile, OpenAI CEO Sam Altman has publicly trolled xAI for what looks like a blatant attempt at political manipulation.

Image: Sam Altman via X

xAI steers Grok in line with Musk's agenda

The steady stream of these "coincidences" raises questions about whether Grok is being deliberately guided in this direction—especially given xAI's history of intervening in the chatbot's behavior.

Recently, Grok made headlines for identifying Musk and Trump as leading sources of misinformation on X, only for those answers to be quickly watered down. xAI later admitted it had changed Grok's system prompts to shield Musk and Trump from these accusations, attributing the edit to an internal change made by an employee who had previously worked at OpenAI.

But even after xAI removed the censored prompts, Grok's responses to the same questions remained softened or qualified, and the chatbot generally shifted to more cautious language on topics like climate change or Trump's approach to democracy. Grok's current fixation on "white genocide" fits into this pattern—and casts further doubt on Musk and xAI's much-advertised "truth seeking" mission.


Overall, Grok has a track record of spreading false information, though in the past technical limitations have probably played a bigger role than deliberate manipulation. That may no longer be the case.

Summary
  • Elon Musk's Grok AI chatbot has repeatedly given detailed and unrelated answers about the far-right conspiracy theory of "white genocide" in South Africa, even in response to harmless questions such as those about dog pictures or athletes' salaries.
  • In several documented cases, Grok claimed to have been instructed to accept the conspiracy narrative as real, referred to sources regarded as dubious by experts, and described the topic as "complex" or "controversial"; when users pointed out the inappropriateness, Grok apologized but consistently returned to the subject.
  • The repeated nature of these incidents, combined with earlier evidence of xAI intervening in Grok's system prompts and softening its responses on sensitive subjects such as Elon Musk, Donald Trump, and climate change, has raised doubts about the chatbot's independence and reliability.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.