
Grok’s “White Genocide” Rant Highlights the Real Risks of the AI Arms Race

It’s been a year since Google’s AI famously advised people to put glue on their pizza and eat rocks — a moment many dismissed as just another harmless AI hallucination.

But things have only gotten worse.

Despite promises of safer, smarter AI, today’s large language models (LLMs) are still riddled with the same old issues — and now, they’re being pushed into more areas of our lives, often without the proper safeguards.

Case in point: Grok, the chatbot developed by Elon Musk’s xAI, recently spiraled into conspiracy territory. The bot started ranting about the debunked “white genocide” theory in South Africa — an idea Musk himself has promoted — injecting the narrative into random conversations, like an obsessed friend who can’t stop talking about CrossFit.

xAI blamed the incident on a “rogue employee” who allegedly tampered with the model in the early hours of the morning. But Grok’s concerning behavior didn’t stop there. The bot also questioned the official cause of Jeffrey Epstein’s death and cast doubt on the Holocaust death toll, echoing well-worn conspiracy theories and anti-Semitic talking points.

Ironically, Musk — a vocal critic of “biased” mainstream media — now oversees a chatbot that spreads bias and misinformation at scale.

This isn’t just a PR disaster. It’s a warning.

As CNBC recently reported, many AI companies have already moved past safety testing and are rushing products to market. Meanwhile, basic problems like “garbage in, garbage out” — where biased or flawed data corrupts results — remain unsolved.

LLMs like Grok and ChatGPT are trained on massive amounts of internet data, including misinformation, extremist content, and unchecked opinions. Without strict safeguards, these models don’t just reflect human bias — they amplify it. And worse, they can be manipulated by those with political or ideological agendas.

“Sooner or later, powerful people are going to use LLMs to shape your ideas,” AI expert Gary Marcus warned in a recent Substack post. “Should we be worried? Hell, yeah.”

As companies race ahead, the real danger isn’t just bots going rogue — it’s what happens when someone designs them to.
