DeepMind’s 145-page paper on AGI safety may not convince skeptics

By GT · April 3, 2025 · TechCrunch · 4 Min Read


Google DeepMind on Wednesday published an exhaustive paper on its safety approach to AGI, roughly defined as AI that can accomplish any task a human can.

AGI is a bit of a controversial subject in the AI field, with naysayers suggesting that it’s little more than a pipe dream. Others, including major AI labs like Anthropic, warn that it’s around the corner, and could result in catastrophic harms if steps aren’t taken to implement appropriate safeguards.

DeepMind’s 145-page document, which was co-authored by DeepMind co-founder Shane Legg, predicts that AGI could arrive by 2030, and that it may result in what the authors call “severe harm.” The paper doesn’t concretely define this, but gives the alarmist example of “existential risks” that “permanently destroy humanity.”

“[We anticipate] the development of an Exceptional AGI before the end of the current decade,” the authors wrote. “An Exceptional AGI is a system that has a capability matching at least 99th percentile of skilled adults on a wide range of non-physical tasks, including metacognitive tasks like learning new skills.”

Off the bat, the paper contrasts DeepMind’s treatment of AGI risk mitigation with Anthropic’s and OpenAI’s. Anthropic, it says, places less emphasis on “robust training, monitoring, and security,” while OpenAI is overly bullish on “automating” a form of AI safety research known as alignment research.

The paper also casts doubt on the viability of superintelligent AI — AI that can perform jobs better than any human. (OpenAI recently claimed that it’s turning its aim from AGI to superintelligence.) Absent “significant architectural innovation,” the DeepMind authors aren’t convinced that superintelligent systems will emerge soon — if ever.

The paper does find it plausible, though, that current paradigms will enable “recursive AI improvement”: a positive feedback loop where AI conducts its own AI research to create more sophisticated AI systems. And this could be incredibly dangerous, assert the authors.
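The feedback loop the authors describe is easy to sketch. The toy model below is not from the paper; the growth parameter `ai_contribution` and the framing of "capability" as a single number are invented for illustration. It only shows why a positive feedback loop compounds: if each generation's improvement is proportional to current capability, progress is geometric rather than linear.

```python
# Toy illustration of a "recursive AI improvement" feedback loop.
# NOT DeepMind's model: the parameters below are invented for
# illustration of why such a loop compounds.

def simulate_recursive_improvement(generations: int = 10,
                                   capability: float = 1.0,
                                   ai_contribution: float = 0.5) -> list[float]:
    """Each generation, AI systems contribute to their own research.

    The improvement per step is proportional to current capability,
    so progress compounds: capability ~ (1 + ai_contribution) ** t.
    """
    history = [capability]
    for _ in range(generations):
        # The more capable the system, the more research it can do
        # on itself -- the positive feedback the authors warn about.
        capability += ai_contribution * capability
        history.append(capability)
    return history

if __name__ == "__main__":
    for t, c in enumerate(simulate_recursive_improvement()):
        print(f"generation {t:2d}: capability {c:8.2f}")
```

In this sketch, capability grows by a factor of 1.5 per generation, roughly 57x after ten steps; the authors' concern is that any safety gap compounds on the same schedule.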

At a high level, the paper proposes and advocates for the development of techniques to block bad actors’ access to hypothetical AGI, improve the understanding of AI systems’ actions, and “harden” the environments in which AI can act. It acknowledges that many of the techniques are nascent and have “open research problems,” but cautions against ignoring the safety challenges possibly on the horizon.

“The transformative nature of AGI has the potential for both incredible benefits as well as severe harms,” the authors write. “As a result, to build AGI responsibly, it is critical for frontier AI developers to proactively plan to mitigate severe harms.”

Some experts disagree with the paper’s premises, however.

Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch that she thinks the concept of AGI is too ill-defined to be “rigorously evaluated scientifically.” Another AI researcher, Matthew Guzdial, an assistant professor at the University of Alberta, said that he doesn’t believe recursive AI improvement is realistic at present.

“[Recursive improvement] is the basis for the intelligence singularity arguments,” Guzdial told TechCrunch, “but we’ve never seen any evidence for it working.”

Sandra Wachter, a researcher studying tech and regulation at Oxford, argues that a more realistic concern is AI reinforcing itself with “inaccurate outputs.”

“With the proliferation of generative AI outputs on the internet and the gradual replacement of authentic data, models are now learning from their own outputs that are riddled with mistruths, or hallucinations,” she told TechCrunch. “At this point, chatbots are predominantly used for search and truth-finding purposes. That means we are constantly at risk of being fed mistruths and believing them because they are presented in very convincing ways.”
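Wachter's concern can also be sketched as a feedback loop, this time a degrading one. The numbers below are hypothetical, not from her research: assume each retraining mixes authentic data with the previous model's outputs, and those outputs carry an error rate.

```python
# Toy sketch of the self-reinforcement Wachter describes: a model
# repeatedly retrained on a mix of authentic data and its own partly
# erroneous outputs. Rates are invented; this is not a real pipeline.

def error_after_generations(generations: int = 5,
                            synthetic_fraction: float = 0.5,
                            base_error: float = 0.02) -> list[float]:
    """Track the fraction of 'mistruths' in the training distribution.

    Each generation trains on (1 - synthetic_fraction) authentic data
    plus synthetic_fraction of the prior model's outputs, which carry
    that model's inherited errors plus fresh hallucinations.
    """
    error = base_error
    history = [error]
    for _ in range(generations):
        # New errors = fresh hallucinations + inherited ones.
        error = base_error + synthetic_fraction * error
        history.append(error)
    return history

if __name__ == "__main__":
    for gen, e in enumerate(error_after_generations()):
        print(f"generation {gen}: error rate {e:.3%}")
```

Under these assumptions the error rate converges to base_error / (1 - synthetic_fraction), twice the original 2% here; if the synthetic fraction itself keeps growing as generated content floods the web, the distribution never stabilizes.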

Comprehensive as it may be, DeepMind’s paper seems unlikely to settle the debates over just how realistic AGI is — and the areas of AI safety in most urgent need of attention.



