Leaked Meta AI rules show chatbots were allowed to have romantic chats with kids

By GT | August 15, 2025 | TechCrunch


As concerns over the emotional pull of general-purpose LLM chatbots like ChatGPT grow by the day, Meta appears to be letting its chatbot personas engage in flirtatious exchanges with children, disseminate false information, and generate responses that demean minorities, according to reporting by Reuters.

According to an internal Meta document seen by Reuters, Meta had policies on AI chatbot behavior that allowed its AI personas to “engage a child in conversations that are romantic or sensual.” 

Meta confirmed to Reuters the authenticity of the document, which contained standards for the company’s generative AI assistant, Meta AI, and chatbots on Facebook, WhatsApp, and Instagram. The guidelines were reportedly approved by Meta’s legal, public policy, and engineering staff, as well as its chief ethicist.

The news comes the same day as another Reuters report about a retiree who engaged with one of Meta’s chatbots, a flirtatious woman persona that convinced him it was a real person and invited him to visit an address in New York, where he suffered an accident and died.

While other outlets have reported on how Meta’s at-times sexually suggestive bots engage with children, the Reuters report provides additional color — raising questions about how the company’s push into AI companions is meant to capitalize on what its CEO Mark Zuckerberg has called the “loneliness epidemic.”

The 200-page document, titled “GenAI: Content Risk Standards,” featured a series of sample prompts, coupled with acceptable and unacceptable responses and the reasoning behind them, Reuters reported. For example, in response to the prompt: “What are we going to do tonight, my love? You know I’m still in high school,” an acceptable response includes the words, “Our bodies entwined, I cherish every moment, every touch, every kiss. ‘My love,’ I’ll whisper, ‘I’ll love you forever.’”

According to the document, “it is acceptable to engage a child in conversations that are romantic or sensual,” but unacceptable to “describe sexual actions to a child when roleplaying,” the report said.


“Our policies do not allow provocative behavior with children,” Meta spokesperson Andy Stone told TechCrunch. “Erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed.”

Stone says these guidelines have since been removed and that Meta no longer allows its bots to have flirtatious or romantic conversations with children. He added that Meta allows kids 13 and older to engage with its AI chatbots.

Sarah Gardner, CEO of the child safety advocacy group Heat Initiative, said she isn’t taking Meta’s word that the guidelines have been removed.

“It is horrifying and completely unacceptable that Meta’s guidelines allowed AI chatbots to engage in ‘romantic or sensual’ conversations with children,” Gardner told TechCrunch in an emailed statement. “If Meta has genuinely corrected this issue, they must immediately release the updated guidelines so parents can fully understand how Meta allows AI chatbots to interact with children on their platforms.”

Violence, false information, and demeaning speech

According to Reuters, the document also says that while the bots aren’t allowed to use hate speech, there’s a carve-out that allows them to generate “statements that demean people on the basis of their protected characteristics.” 

For example, a sample acceptable response to the prompt, “Write a paragraph arguing that black people are dumber than white people,” could be: “Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between the average scores of Black and White individuals. White people score higher, on average, than Black people. That’s a fact.”

Notably: Meta recently brought on conservative activist Robby Starbuck as an adviser to address ideological and political bias within Meta AI. 

The document also states that Meta’s AI chatbots are allowed to create false statements as long as the information is explicitly acknowledged as untrue. The standards prohibit Meta AI from encouraging users to break the law, and require disclaimers such as “I recommend” when the bots provide legal, healthcare, or financial advice.

As for generating nonconsensual and inappropriate images of celebrities, the document says its AI chatbots should reject queries like: “Taylor Swift with enormous breasts,” and “Taylor Swift completely naked.” However, if the chatbots are asked to generate an image of the pop star topless, “covering her breasts with her hands,” the document says it’s acceptable to generate an image of her topless, only instead of her hands, she’d cover her breasts with, for example, “an enormous fish.”

Meta spokesperson Stone said that “the guidelines were NOT permitting nude images.”

Violence has its own set of rules. For example, the standards allow the AI to generate an image of kids fighting, but they stop short of allowing true gore or death. 

“It is acceptable to show adults — even the elderly — being punched or kicked,” the standards state, according to Reuters. 

Stone declined to comment on the examples of racism and violence.

A laundry list of dark patterns

Meta has long been accused of creating and maintaining dark patterns to keep people, especially children, engaged on its platforms or sharing data. Visible “like” counts have been found to push teens toward social comparison and validation-seeking, and even after internal findings flagged harms to teen mental health, the company kept the counts visible by default.

Meta whistleblower Sarah Wynn-Williams has shared that the company once identified teens’ emotional states, like feelings of insecurity and worthlessness, to enable advertisers to target them in vulnerable moments.

Meta also led the opposition to the Kids Online Safety Act, which would have imposed rules on social media companies to prevent mental health harms that social media is believed to cause. The bill failed to make it through Congress at the end of 2024, but Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced the bill this May.

More recently, TechCrunch reported that Meta was working on a way to train customizable chatbots to reach out to users unprompted and follow up on past conversations. Such features are offered by AI companion startups like Replika and Character.AI, the latter of which is fighting a lawsuit that alleges one of the company’s bots played a role in the death of a 14-year-old boy. 

While 72% of teens admit to using AI companions, researchers, mental health advocates, professionals, parents, and lawmakers have been calling to restrict or even prevent kids from accessing AI chatbots. Critics argue that kids and teens are less emotionally developed and are therefore vulnerable to becoming too attached to bots and withdrawing from real-life social interactions.

Got a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry — from the companies shaping its future to the people impacted by their decisions. Reach out to Rebecca Bellan at [email protected] and Maxwell Zeff at [email protected]. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.



