RoboNewsWire – Latest Insights on AI, Robotics, Crypto and Tech Innovations
Anthropic CEO wants to open the black box of AI models by 2027

By GT | April 25, 2025 | TechCrunch | 4 min read


Anthropic CEO Dario Amodei published an essay Thursday highlighting how little researchers understand about the inner workings of the world’s leading AI models. To address that, Amodei set an ambitious goal for Anthropic to reliably detect most AI model problems by 2027.

Amodei acknowledges the challenge ahead. In “The Urgency of Interpretability,” the CEO says Anthropic has made early breakthroughs in tracing how models arrive at their answers — but emphasizes that far more research is needed to decode these systems as they grow more powerful.

“I am very concerned about deploying such systems without a better handle on interpretability,” Amodei wrote in the essay. “These systems will be absolutely central to the economy, technology, and national security, and will be capable of so much autonomy that I consider it basically unacceptable for humanity to be totally ignorant of how they work.”

Anthropic is one of the pioneering companies in mechanistic interpretability, a field that aims to open the black box of AI models and understand why they make the decisions they do. Despite the rapid performance improvements of the tech industry’s AI models, we still have relatively little idea how these systems arrive at decisions.

For example, OpenAI recently launched new reasoning AI models, o3 and o4-mini, which perform better on some tasks but also hallucinate more than the company’s other models. OpenAI doesn’t know why that happens.

“When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does — why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate,” Amodei wrote in the essay.

In the essay, Amodei notes that Anthropic co-founder Chris Olah says that AI models are “grown more than they are built.” In other words, AI researchers have found ways to improve AI model intelligence, but they don’t quite know why.

In the essay, Amodei says it could be dangerous to reach AGI — or as he calls it, “a country of geniuses in a data center” — without understanding how these models work. In a previous essay, Amodei claimed the tech industry could reach such a milestone by 2026 or 2027, but he believes the industry is much further from fully understanding these models.

In the long term, Amodei says Anthropic would like to, essentially, conduct “brain scans” or “MRIs” of state-of-the-art AI models. These checkups would help identify a wide range of issues in AI models, including their tendencies to lie or seek power, or other weaknesses, he says. This could take five to 10 years to achieve, but he added that such measures will be necessary to test and deploy Anthropic’s future AI models.

Anthropic has made a few research breakthroughs that have allowed it to better understand how its AI models work. For example, the company recently found ways to trace an AI model’s thinking pathways through what the company calls circuits. Anthropic identified one circuit that helps AI models understand which U.S. cities are located in which U.S. states. The company has only found a few of these circuits but estimates there are millions within AI models.
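
Anthropic’s circuit-tracing research itself isn’t public code, but the broader family of interpretability techniques can be illustrated with a toy example. The sketch below is not Anthropic’s method: it fits a standard least-squares linear probe to entirely synthetic “activations” in which a feature direction has been planted, then checks that the probe recovers that direction — a minimal version of the general idea of reading structure out of a model’s internals.

```python
import numpy as np

# Toy interpretability sketch: fit a linear "probe" to hidden activations
# to recover a planted feature direction. The data is synthetic; this is
# only an illustration of probing, not Anthropic's circuit tracing.

rng = np.random.default_rng(0)
d, n = 32, 1000  # hidden-state width, number of (activation, label) pairs

# Plant a hidden feature along one fixed unit direction.
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)

# Synthetic activations: examples with label 1 get the feature added in.
labels = rng.integers(0, 2, size=n)
acts = rng.normal(size=(n, d)) + np.outer(labels * 3.0, direction)

# Least-squares probe: find w such that acts @ w approximates the labels.
w, *_ = np.linalg.lstsq(acts, labels.astype(float), rcond=None)

# The probe should point roughly along the planted feature direction...
cosine = abs(w @ direction) / np.linalg.norm(w)
# ...and separate the two classes when thresholded at the mean projection.
proj = acts @ w
accuracy = ((proj > proj.mean()) == labels).mean()
print(f"cosine={cosine:.2f} accuracy={accuracy:.2f}")
```

In a real model the activations would come from a transformer layer rather than a random generator, and circuit-level work goes further than a single probe, but the payoff is the same in spirit: a human-auditable handle on what the network is internally representing.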

Anthropic has been investing in interpretability research itself and recently made its first investment in a startup working on interpretability. While interpretability is largely seen as a field of safety research today, Amodei notes that, eventually, explaining how AI models arrive at their answers could present a commercial advantage.

In the essay, Amodei called on OpenAI and Google DeepMind to increase their research efforts in the field. Beyond the friendly nudge, Anthropic’s CEO asked governments to impose “light-touch” regulations to encourage interpretability research, such as requirements for companies to disclose their safety and security practices. Amodei also says the U.S. should put export controls on chips to China, in order to limit the likelihood of an out-of-control, global AI race.

Anthropic has always stood out from OpenAI and Google for its focus on safety. While other tech companies pushed back on California’s controversial AI safety bill, SB 1047, Anthropic issued modest support and recommendations for the bill, which would have set safety reporting standards for frontier AI model developers.

In this case, Anthropic seems to be pushing for an industry-wide effort to better understand AI models, not just increasing their capabilities.


