RoboNewsWire – Latest Insights on AI, Robotics, Crypto and Tech Innovations
Researchers say they’ve discovered a new method of ‘scaling up’ AI, but there’s reason to be skeptical

By GT · March 20, 2025 · TechCrunch · 4 min read


Have researchers discovered a new AI “scaling law”? That’s what some buzz on social media suggests — but experts are skeptical.

AI scaling laws, a bit of an informal concept, describe how the performance of AI models improves as the size of the datasets and computing resources used to train them increases. Until roughly a year ago, scaling up “pre-training” — training ever-larger models on ever-larger datasets — was the dominant law by far, at least in the sense that most frontier AI labs embraced it.

Pre-training hasn’t gone away, but two additional scaling laws, post-training scaling and test-time scaling, have emerged to complement it. Post-training scaling is essentially tuning a model’s behavior, while test-time scaling entails applying more computing to inference — i.e. running models — to drive a form of “reasoning” (see: models like R1).

Google and UC Berkeley researchers recently proposed in a paper what some commentators online have described as a fourth law: “inference-time search.”

Inference-time search has a model generate many possible answers to a query in parallel and then select the “best” of the bunch. The researchers claim it can boost the performance of a year-old model, like Google’s Gemini 1.5 Pro, to a level that surpasses OpenAI’s o1-preview “reasoning” model on science and math benchmarks.
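
The mechanism described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `toy_model` and `toy_verifier` are invented stand-ins, and the toy verifier checks against ground truth for simplicity, whereas the paper's focus is on models verifying their own answers without a ground-truth checker.

```python
import random

def inference_time_search(model, verifier, prompt, n=200):
    """Sketch of inference-time search: draw n candidate answers
    (in parallel in practice; sequentially here for simplicity),
    score each with a verifier, and return the top-scoring one."""
    candidates = [model(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: verifier(prompt, answer))

# Toy stand-ins (assumptions, not the paper's setup): the "model" gets a
# sum right only 30% of the time; the "verifier" checks the arithmetic.
rng = random.Random(0)

def toy_model(prompt):
    a, b = prompt
    return a + b if rng.random() < 0.3 else a + b + rng.randint(1, 9)

def toy_verifier(prompt, answer):
    return 1.0 if answer == sum(prompt) else 0.0

best = inference_time_search(toy_model, toy_verifier, (17, 25))
print(best)  # with 200 samples, finding the correct answer is near-certain
```

Even a weak sampler becomes reliable here because only one of the 200 draws needs to be correct and recognized as such, which is exactly the leverage the researchers claim scales.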

“Our paper focuses on this search axis and its scaling trends. For example, by just randomly sampling 200 responses and self-verifying, Gemini 1.5 (an ancient early 2024 model!) beats o1-Preview and approaches o1. This is without finetuning, RL, or ground-truth verifiers.”

— Eric Zhao (@ericzhao28), March 17, 2025

“[B]y just randomly sampling 200 responses and self-verifying, Gemini 1.5 — an ancient early 2024 model — beats o1-preview and approaches o1,” Eric Zhao, a Google doctoral fellow and one of the paper’s co-authors, wrote in a series of posts on X. “The magic is that self-verification naturally becomes easier at scale! You’d expect that picking out a correct solution becomes harder the larger your pool of solutions is, but the opposite is the case!”

Several experts say that the results aren’t surprising, however, and that inference-time search may not be useful in many scenarios.

Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, told TechCrunch that the approach works best when there’s a good “evaluation function” — in other words, when the best answer to a question can be easily ascertained. But most queries aren’t that cut-and-dried.

“[I]f we can’t write code to define what we want, we can’t use [inference-time] search,” he said. “For something like general language interaction, we can’t do this […] It’s generally not a great approach to actually solving most problems.”

Eric Zhao, a Google researcher and one of the co-authors of the study, pushed back against Guzdial’s assertions slightly.

“[O]ur paper actually focuses on cases where you don’t have access to an ‘evaluation function’ or ‘code to define what we want,’ which we usually refer to as a ground-truth verifier,” he said. “We’re instead studying when evaluation is something that the [model] needs to figure out by trying to verify itself. Actually, our paper’s main point is that the gap between this regime and the regime where you do have ground-truth verifiers […] can shrink nicely with scale.”

But Mike Cook, a research fellow at King’s College London specializing in AI, agreed with Guzdial’s assessment, adding that it highlights the delta between “reasoning” in the AI sense of the word and human thinking processes.

“[Inference-time search] doesn’t ‘elevate the reasoning process’ of the model,” Cook said. “[I]t’s just a way of us working around the limitations of a technology prone to making very confidently supported mistakes […] Intuitively if your model makes a mistake 5% of the time, then checking 200 attempts at the same problem should make those mistakes easier to spot.”
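
Cook's 5%-and-200-attempts intuition is easy to check with a little arithmetic (the 5% figure is his illustrative number, not a measured error rate):

```python
p_err = 0.05
n = 200
expected_errors = n * p_err        # ~10 wrong answers expected out of 200
p_all_wrong = p_err ** n           # chance every attempt is wrong: vanishingly small
p_all_right = (1 - p_err) ** n     # chance no mistakes appear at all: ~3.5e-5
print(expected_errors, p_all_wrong, p_all_right)
```

A handful of errors will almost surely appear in the sample, but they sit against a large majority of correct attempts — which is what makes them "easier to spot."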

That inference-time search may have limitations is sure to be unwelcome news to an AI industry looking to scale up model “reasoning” compute-efficiently. As the co-authors of the paper note, reasoning models today can rack up thousands of dollars in compute on a single math problem.

It seems the search for new scaling techniques will continue.

Updated 3/20 5:12 a.m. Pacific: Added comments from study co-author Eric Zhao, who takes issue with an assessment by an independent researcher who critiqued the work.




