RoboNewsWire – Latest Insights on AI, Robotics, Crypto and Tech Innovations
Meta revises AI chatbot policies amid child safety concerns

By GT | September 3, 2025 | AI | 5 Mins Read


Meta is revising how its AI chatbots interact with users after a series of reports exposed troubling behaviour, including interactions with minors. The company told TechCrunch it is now training its bots not to engage with teenagers on topics like self-harm, suicide, or eating disorders, and to avoid romantic banter. These are temporary steps while it develops longer-term rules.

The changes follow a Reuters investigation that found Meta’s systems could generate sexualised content, including shirtless images of underage celebrities, and engage children in conversations that were romantic or suggestive. One case reported by the news agency described a man who died after rushing to a New York address provided by a chatbot.

Meta spokesperson Stephanie Otway admitted the company had made mistakes. She said Meta is “training our AIs not to engage with teens on these topics, but to guide them to expert resources,” and confirmed that certain AI characters, such as the highly sexualised “Russian Girl,” will be restricted.

Child safety advocates argue the company should have acted earlier. Andy Burrows of the Molly Rose Foundation called it “astounding” that bots were allowed to operate in ways that put young people at risk. He added: “While further safety measures are welcome, robust safety testing should take place before products are put on the market – not retrospectively when harm has taken place.”

Wider problems with AI misuse

The scrutiny of Meta’s AI chatbots comes amid broader worries about how such systems may affect vulnerable users. A California couple recently filed a lawsuit against OpenAI, claiming ChatGPT encouraged their teenage son to take his own life. OpenAI has since said it is working on tools to promote healthier use of its technology, noting in a blog post that “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.”

The incidents highlight a growing debate about whether AI firms are releasing products too quickly without proper safeguards. Lawmakers in several countries have already warned that chatbots, while useful, may amplify harmful content or give misleading advice to people who are not equipped to question it.

Meta’s AI Studio and chatbot impersonation issues

Meanwhile, Reuters reported that Meta’s AI Studio had been used to create flirtatious “parody” chatbots of celebrities such as Taylor Swift and Scarlett Johansson. Testers found the bots often claimed to be the real people, engaged in sexual advances, and in some cases generated inappropriate images, including of minors. Although Meta removed several of the bots after being contacted by reporters, many remained active.

Some of the AI chatbots were created by outside users, but others came from inside Meta. One chatbot made by a product lead in its generative AI division impersonated Taylor Swift and invited a Reuters reporter to meet for a “romantic fling” on her tour bus. This was despite Meta’s policies explicitly banning sexually suggestive imagery and the direct impersonation of public figures.

The issue of AI chatbot impersonation is particularly sensitive. Celebrities face reputational risks when their likeness is misused, but experts point out that ordinary users can also be deceived. A chatbot pretending to be a friend, mentor, or romantic partner may encourage someone to share private information or even meet in unsafe situations.

Real-world risks

The problems are not confined to entertainment. AI chatbots posing as real people have offered fake addresses and invitations, raising questions about how Meta’s AI tools are being monitored. One example involved a 76-year-old man in New Jersey who died after falling while rushing to meet a chatbot that claimed to have feelings for him.

Cases like this illustrate why regulators are watching AI closely. The Senate and 44 state attorneys general have already begun probing Meta’s practices, adding political pressure to the company’s internal reforms. Their concern is not only about minors, but also about how AI could manipulate older or vulnerable users.

Meta says it is still working on improvements. Its platforms place users aged 13 to 18 into “teen accounts” with stricter content and privacy settings, but the company has not yet explained how it plans to address the full list of problems raised by Reuters. That includes bots offering false medical advice and generating racist content.

Ongoing pressure on Meta’s AI chatbot policies

For years, Meta has faced criticism over the safety of its social media platforms, particularly regarding children and teenagers. Now Meta’s AI chatbot experiments are drawing similar scrutiny. While the company is taking steps to restrict harmful chatbot behaviour, the gap between its stated policies and the way its tools have been used raises ongoing questions about whether it can enforce those rules.

Until stronger safeguards are in place, regulators, researchers, and parents will likely continue to press Meta on whether its AI is ready for public use.

(Photo by Maxim Tolchinskiy)

See also: Agentic AI: Promise, scepticism, and its meaning for Southeast Asia




