Grok, the AI chatbot from Elon Musk's company xAI, is drawing attention for giving bizarre, unsolicited answers about "white genocide" in South Africa—even when the questions have nothing to do with the topic.
Users on X report that Grok regularly responds to unrelated prompts with lengthy explanations about so-called "white genocide" in South Africa—echoing a far-right conspiracy theory. In one case, someone asked Grok about a picture of a dog and got a long response about alleged racially motivated killings of white South African farmers. In another, a simple request for information on a baseball player's performance prompted Grok to launch into a discourse on "white genocide."

According to CNBC, users repeatedly pointed out to Grok that its answers were off-topic—even in response to prompts about cartoons or landscapes. The chatbot would apologize, then immediately return to the subject of "white genocide."
In one exchange, Grok acknowledged that its off-topic replies were "not ideal," only to bring up South Africa again. In several cases, it even claimed it had been instructed to treat "white genocide" as real and to label the song "Kill the Boer" as racially motivated.

In other replies, Grok described the topic as "complex" or "controversial," and cited sources that anti-racism researchers consider unreliable. Elon Musk has repeatedly stoked debate around claims of "white genocide," amplifying the same narrative.
The behavior was first documented on May 14 by Aric Toler, an investigative reporter at the New York Times. Both Gizmodo and CNBC independently confirmed the pattern. Since then, X appears to have systematically deleted Grok's relevant responses. Meanwhile, OpenAI CEO Sam Altman has publicly trolled xAI for what looks like a blatant attempt at political manipulation.

xAI steers Grok in line with Musk's agenda
The steady stream of these "coincidences" raises questions about whether Grok is being deliberately guided in this direction—especially given xAI's history of intervening in the chatbot's behavior.
Recently, Grok made headlines for identifying Musk and Trump as leading sources of misinformation on X, only to have those answers quickly watered down. xAI later admitted it had changed Grok's system prompts to shield Musk and Trump from these accusations, attributing the change to an unauthorized internal action by an employee who had previously worked at OpenAI.
But even after xAI removed the censoring prompts, Grok's responses to the same questions remained softened or qualified, and the chatbot generally shifted to more cautious language on topics like climate change or Trump's approach to democracy. Grok's current fixation on "white genocide" fits into this pattern—and casts further doubt on Musk and xAI's much-advertised "truth-seeking" mission.
Overall, Grok has a track record of spreading false information, though in the past technical limitations have probably played a bigger role than deliberate manipulation. That may no longer be the case.