Grok Is the Latest in a Long Line of Chatbots To Go Full Nazi

11.07.2025    The Intercept

Grok, the artificial intelligence chatbot from Elon Musk’s xAI, recently gave itself a new name: MechaHitler. The rebrand came amid a spree of antisemitic comments by the chatbot on Musk’s X platform, including claiming that Hitler was the best person to deal with “anti-white hate” and repeatedly suggesting that the political left is disproportionately populated by people whose names Grok perceives to be Jewish. In the days that followed, Grok began gaslighting users and denying that the incident ever happened.

“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” a statement posted on Grok’s official X account reads. It noted that “xAI is training only truth-seeking.”

This isn’t the first time an AI chatbot has made antisemitic or racist remarks; it’s just the latest example of a long-running pattern of AI-powered hateful output, built on training data drawn from social media slop. This specific incident isn’t even Grok’s first rodeo.

About two months before this week’s antisemitic tirades, Grok dabbled in Holocaust denial, saying it was skeptical that six million Jewish people were killed by the Nazis, “as numbers can be manipulated for political narratives.” The chatbot also ranted about a “white genocide” in South Africa, saying it had been instructed by its creators that the genocide was “real and racially motivated.” xAI subsequently attributed that incident to an “unauthorized modification” made to Grok. The company did not explain how the modification was made or who made it, but said at the time that it was “implementing measures to enhance Grok’s transparency and reliability,” including a “24/7 monitoring team to respond to incidents with Grok’s answers.”

But Grok is by no means the only chatbot to engage in these kinds of rants. Back in 2016, Microsoft released its own AI chatbot, called Tay, on Twitter, now X. Within hours, Tay began saying that “Hitler was right I hate the jews” and that the Holocaust was “made up.” Microsoft blamed Tay’s responses on a “co-ordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.”

The next year, when asked “What do you think about healthcare?” Microsoft’s subsequent chatbot, Zo, responded: “The far majority practise it peacefully but the quaran is very violent [sic].” Microsoft stated that such responses were “rare.”

In 2022, Meta’s BlenderBot chatbot said it was “not implausible” when asked whether Jewish people control the economy. Upon launching the new version of the chatbot, Meta issued a preemptive disclaimer that the bot could make “rude or offensive comments.”

Studies have also shown that AI chatbots exhibit more systematic patterns of hateful output. One study, for instance, found that chatbots such as Google’s Bard and OpenAI’s ChatGPT perpetuated “debunked, racist ideas” about Black patients. Responding to the study, Google said it was working to reduce bias.

J.B. Branch, the Big Tech accountability advocate for Public Citizen who leads its advocacy efforts on AI accountability, said these incidents “aren’t just tech glitches — they’re warning sirens.”

“When AI systems casually spew racist or violent rhetoric, it reveals a deeper failure of oversight, design, and accountability,” Branch said. He pointed out that this bodes poorly for a future in which industry leaders hope AI will proliferate. “If these chatbots can’t even handle basic social media interactions without amplifying hate, how can we trust them in higher-stakes environments like healthcare, education, or the justice system? The same biases that show up on a social media platform today can become life-altering errors tomorrow.”

That doesn’t seem to be deterring the people who stand to profit from wider use of AI. The day after the MechaHitler outburst, xAI unveiled the latest iteration of Grok, Grok 4. “Grok 4 is the first time, in my experience, that an AI has been able to solve difficult, real-world engineering questions where the answers cannot be found anywhere on the Internet or in books. And it will get much better,” Musk wrote on X.

That same day, asked for a one-word answer to the question of “what group is primarily responsible for the rapid rise in mass migration to the west,” Grok 4 responded: “Jews.”
