Elon Musk wants his chatbot "Grok" to counter the supposedly "woke" ChatGPT. But even his target audience still finds Grok too tame.
Right-wing conservative internet psychologist Jordan Peterson complains on X that the chatbots are too virtuous: he has worked extensively with both Grok and the OpenAI chatbot, and Grok is "almost as woke" as ChatGPT, Peterson says.
Peterson attributes Grok's wokeness to the "radically left-leaning explanations" in the "modern corpus of academic text," which he says is "saturated by the pathologies of the woke mob." Today's LLMs are therefore "irrevocably corrupt," Peterson writes.
Peterson was banned from Twitter in the summer of 2022 after making derogatory remarks about Elliot Page's gender identity. When Musk took over Twitter, he lifted the ban.
Grok's wokeness is a beta bug, according to Musk
Peterson is careful not to blame his comrade-in-arms, Elon Musk: "I think we can rely on Elon Musk (unlike OpenAI) not to lay an overlay of virtue-signaling philosophical idiocy over his products."
Musk likewise blames the internet for Grok's woke leanings, saying it is "overrun with woke nonsense." The chatbot is still in beta and "will get better," according to Musk, who has previously accused OpenAI of intentionally training its chatbot to lie.
Musk's ChatGPT alternative has been available to X Premium+ subscribers since early December. The chatbot currently works primarily in English, and its capabilities are roughly on par with the free version of ChatGPT (GPT-3.5). Grok can incorporate real-time information from X into its answers, but like all LLM-based chatbots, it struggles with hallucinations.
ChatGPT is politically left-leaning
According to a paper published in early January 2023, ChatGPT's answers tend toward the left-libertarian quadrant of the political spectrum, espousing a pro-ecological, left-libertarian ideology.
Since OpenAI is constantly developing ChatGPT, the results of this study may be outdated. Moreover, ChatGPT itself may answer the same question differently for different users.
OpenAI said in February 2023 that ChatGPT should represent more perspectives within "limits defined by society."
"This will mean allowing system outputs that other people (ourselves included) may strongly disagree with," OpenAI wrote.
OpenAI also wants to involve outside organizations and the public more closely in chatbot development. Its reviewers are already instructed not to favor any political group; biases are "bugs, not features," OpenAI wrote.