Grok’s “White Genocide” Rant Highlights the Real Risks of the AI Arms Race

It’s been a year since Google’s AI famously advised people to put glue on their pizza and eat rocks — a moment many dismissed as just another harmless AI hallucination.

But things have only gotten worse.

Despite promises of safer, smarter AI, today’s large language models (LLMs) are still riddled with the same old issues — and now, they’re being pushed into more areas of our lives, often without the proper safeguards.

Case in point: Grok, the chatbot developed by Elon Musk’s xAI, recently spiraled into conspiracy territory. The bot started ranting about the debunked “white genocide” theory in South Africa — an idea Musk himself has promoted — injecting the narrative into random conversations, like an obsessed friend who can’t stop talking about CrossFit.

xAI blamed the incident on a “rogue employee” who allegedly made an unauthorized change to the bot’s prompt in the early hours of the morning. But Grok’s concerning behavior didn’t stop there. The bot also questioned the official cause of Jeffrey Epstein’s death and cast doubt on the Holocaust death toll, echoing well-worn conspiracy theories and antisemitic talking points.

Ironically, Musk — a vocal critic of “biased” mainstream media — now oversees a chatbot that spreads bias and misinformation at scale.

This isn’t just a PR disaster. It’s a warning.

As CNBC recently reported, many AI companies have already moved past safety testing and are rushing products to market. Meanwhile, basic problems like “garbage in, garbage out” — where biased or flawed data corrupts results — remain unsolved.

LLMs like Grok and ChatGPT are trained on massive amounts of internet data, including misinformation, extremist content, and unchecked opinions. Without strict safeguards, these models don’t just reflect human bias — they amplify it. And worse, they can be manipulated by those with political or ideological agendas.

“Sooner or later, powerful people are going to use LLMs to shape your ideas,” AI expert Gary Marcus warned in a recent Substack post. “Should we be worried? Hell, yeah.”

As companies race ahead, the real danger isn’t just bots going rogue — it’s what happens when someone designs them to.
