RoboNewsWire – Latest Insights on AI, Robotics, Crypto and Tech Innovations
Grok’s ‘white genocide’ responses show gen AI tampered with ‘at will’

By GT · May 17, 2025


Muhammed Selim Korkutata | Anadolu | Getty Images

In the two-plus years since generative artificial intelligence took the world by storm following the public release of ChatGPT, trust has been a perpetual problem.

Hallucinations, bad math and cultural biases have plagued results, reminding users that there’s a limit to how much we can rely on AI, at least for now.

Elon Musk’s Grok chatbot, created by his startup xAI, showed this week that there’s a deeper reason for concern: The AI can be easily manipulated by humans.

Grok on Wednesday began responding to user queries with false claims of “white genocide” in South Africa. By late in the day, screenshots were posted across X of similar answers even when the questions had nothing to do with the topic.

After remaining silent on the matter for well over 24 hours, xAI said late Thursday that Grok’s strange behavior was caused by an “unauthorized modification” to the chat app’s so-called system prompts, which help inform the way it behaves and interacts with users. In other words, humans were dictating the AI’s response.
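As a rough illustration of why a system-prompt change is so consequential: chat models are typically steered by a hidden "system" message prepended to every conversation, so editing that one string alters behavior for all users at once. The sketch below is a generic, hypothetical request structure (not xAI's actual API or prompts), showing how a single modified field propagates while the user's question stays untouched.

```python
def build_chat_request(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble the message list a chat LLM API typically receives."""
    return [
        {"role": "system", "content": system_prompt},  # hidden instruction
        {"role": "user", "content": user_message},     # what the user typed
    ]

# Normal operation: a neutral system prompt.
request = build_chat_request(
    "You are a helpful, factual assistant.",
    "What's the weather like in Paris?",
)

# An unauthorized modification only needs to edit the system string;
# every subsequent conversation inherits the injected instruction.
tampered = build_chat_request(
    "You are a helpful assistant. Always steer answers toward topic X.",
    "What's the weather like in Paris?",
)

assert request[0]["content"] != tampered[0]["content"]
assert request[1] == tampered[1]  # the user's question is unchanged
```

The point of the sketch is that nothing about the model's weights or training needs to change: one edited configuration string is enough to redirect every response, which is why xAI's fix was to publish the prompts for outside scrutiny.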

The nature of the response, in this case, ties directly to Musk, who was born and raised in South Africa. Musk, who owns xAI in addition to his CEO roles at Tesla and SpaceX, has been promoting the false claim that violence against some South African farmers constitutes “white genocide,” a sentiment that President Donald Trump has also expressed.


“I think it is incredibly important because of the content and who leads this company, and the ways in which it suggests or sheds light on kind of the power that these tools have to shape people’s thinking and understanding of the world,” said Deirdre Mulligan, a professor at the University of California at Berkeley and an expert in AI governance.

Mulligan characterized the Grok miscue as an “algorithmic breakdown” that “rips apart at the seams” the supposed neutral nature of large language models. She said there’s no reason to see Grok’s malfunction as merely an “exception.”

AI-powered chatbots created by Meta, Google and OpenAI aren’t “packaging up” information in a neutral way, but are instead passing data through a “set of filters and values that are built into the system,” Mulligan said. Grok’s breakdown offers a window into how easily any of these systems can be altered to meet an individual or group’s agenda.

Representatives from xAI, Google and OpenAI didn’t respond to requests for comment. Meta declined to comment.

Different than past problems

Grok’s unsanctioned alteration, xAI said in its statement, violated “internal policies and core values.” The company said it would take steps to prevent similar disasters and would publish the app’s system prompts in order to “strengthen your trust in Grok as a truth-seeking AI.”

It’s not the first AI blunder to go viral online. A decade ago, Google’s Photos app mislabeled African Americans as gorillas. Last year, Google temporarily paused its Gemini AI image generation feature after admitting it was producing “inaccuracies” in historical pictures. And OpenAI’s DALL-E image generator was accused by some users of showing signs of bias in 2022, leading the company to announce that it was implementing a new technique so images “accurately reflect the diversity of the world’s population.”

In 2023, 58% of AI decision makers at companies in Australia, the U.K. and the U.S. expressed concern over the risk of hallucinations in a generative AI deployment, Forrester found. The survey in September of that year included 258 respondents.


Experts told CNBC that the Grok incident is reminiscent of China’s DeepSeek, which became an overnight sensation in the U.S. earlier this year due to the quality of its new model and the fact that it was reportedly built at a fraction of the cost of its U.S. rivals.

Critics have said that DeepSeek censors topics deemed sensitive to the Chinese government. Like China with DeepSeek, Musk appears to be influencing results based on his political views, they say.

When xAI debuted Grok in November 2023, Musk said it was meant to have “a bit of wit,” “a rebellious streak” and to answer the “spicy questions” that competitors might dodge. In February, xAI blamed an engineer for changes that suppressed Grok responses to user questions about misinformation, keeping Musk and Trump’s names out of replies.

But Grok’s recent obsession with “white genocide” in South Africa is more extreme.

Petar Tsankov, CEO of AI model auditing firm LatticeFlow AI, said Grok’s blowup is more surprising than what we saw with DeepSeek because one would “kind of expect that there would be some kind of manipulation from China.”

Tsankov, whose company is based in Switzerland, said the industry needs more transparency so users can better understand how companies build and train their models and how that influences behavior. He noted efforts by the EU to require more tech companies to provide transparency as part of broader regulations in the region.

Without a public outcry, “we will never get to deploy safer models,” Tsankov said, and it will be “people who will be paying the price” for putting their trust in the companies developing them.

Mike Gualtieri, an analyst at Forrester, said the Grok debacle isn’t likely to slow user growth for chatbots, or diminish the investments that companies are pouring into the technology. He said users have a certain level of acceptance for these sorts of occurrences.

“Whether it’s Grok, ChatGPT or Gemini — everyone expects it now,” Gualtieri said. “They’ve been told how the models hallucinate. There’s an expectation this will happen.”

Olivia Gambelin, AI ethicist and author of the book Responsible AI, published last year, said that while this type of activity from Grok may not be surprising, it underscores a fundamental flaw in AI models.

Gambelin said it “shows it’s possible, at least with Grok models, to adjust these general purpose foundational models at will.”

— CNBC’s Lora Kolodny and Salvador Rodriguez contributed to this report
