RoboNewsWire – Latest Insights on AI, Robotics, Crypto and Tech Innovations
Samsung benchmarks real productivity of enterprise AI models

By GT | September 26, 2025 | AI | 4 min read


Samsung is working to overcome the limitations of existing benchmarks and better assess the real-world productivity of AI models in enterprise settings. The new system, developed by Samsung Research and named TRUEBench, aims to address the growing disparity between theoretical AI performance and its actual utility in the workplace.

As businesses worldwide accelerate their adoption of large language models (LLMs) to improve their operations, a challenge has emerged: how to accurately gauge their effectiveness. Many existing benchmarks focus on academic or general knowledge tests, are often limited to English, and rely on simple question-and-answer formats. This has created a gap that leaves enterprises without a reliable method for evaluating how an AI model will perform on complex, multilingual, and context-rich business tasks.

Samsung’s TRUEBench, short for Trustworthy Real-world Usage Evaluation Benchmark, has been developed to fill this void. It provides a comprehensive suite of metrics that assesses LLMs based on scenarios and tasks directly relevant to real-world corporate environments. The benchmark draws upon Samsung’s own extensive internal enterprise use of AI models, ensuring the evaluation criteria are grounded in genuine workplace demands.

The framework evaluates common enterprise functions such as creating content, analysing data, summarising lengthy documents, and translating materials. These are broken down into 10 distinct categories and 46 sub-categories, providing a granular view of an AI’s productivity capabilities.

“Samsung Research brings deep expertise and a competitive edge through its real-world AI experience,” said Paul (Kyungwhoon) Cheun, CTO of the DX Division at Samsung Electronics and Head of Samsung Research. “We expect TRUEBench to establish evaluation standards for productivity.”

To tackle the limitations of older benchmarks, TRUEBench is built upon a foundation of 2,485 diverse test sets spanning 12 different languages and supporting cross-linguistic scenarios. This multilingual approach is critical for global corporations where information flows across different regions. The test materials themselves reflect the variety of workplace requests, ranging from brief instructions of just eight characters to the complex analysis of documents exceeding 20,000 characters.

Samsung recognised that in a real business context, a user’s full intent is not always explicitly stated in their initial prompt. The benchmark is therefore designed to assess an AI model’s ability to understand and fulfil these implicit enterprise needs, moving beyond simple accuracy to a more nuanced measure of helpfulness and relevance.

To achieve this, Samsung Research developed a unique collaborative process between human experts and AI to create the productivity scoring criteria. Initially, human annotators establish the evaluation standards for a given task. An AI then reviews these standards, checking for potential errors, internal contradictions, or unnecessary constraints that might not reflect a realistic user expectation. Following the AI’s feedback, the human annotators refine the criteria. This iterative loop ensures the final evaluation standards are precise and reflective of a high-quality outcome.
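The iterative loop described above can be sketched in a few lines. This is a conceptual illustration only: the function names, the shape of the review feedback, and the round limit are assumptions, not Samsung's actual tooling.

```python
def refine_criteria(draft, ai_review, human_revise, max_rounds=3):
    """Conceptual sketch of TRUEBench's human-AI criteria refinement loop.

    ai_review(criteria) -> list of flagged issues (errors, internal
        contradictions, unrealistic constraints); empty means clean.
    human_revise(criteria, issues) -> revised criteria from annotators.
    Both callables and max_rounds are illustrative assumptions.
    """
    criteria = draft
    for _ in range(max_rounds):
        issues = ai_review(criteria)
        if not issues:  # the AI reviewer found nothing left to fix
            break
        criteria = human_revise(criteria, issues)
    return criteria
```

The loop terminates either when the AI reviewer raises no further issues or after a fixed number of rounds, mirroring the article's description of annotators refining standards in response to AI feedback.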

This cross-verified process delivers an automated evaluation system that scores the performance of LLMs. By using AI to apply these refined criteria, the system minimises the subjective bias that can occur with human-only scoring, ensuring consistency and reliability across all tests. TRUEBench also employs a strict scoring model in which an AI model must satisfy every condition associated with a test to receive a passing mark. This all-or-nothing approach to individual conditions enables a more detailed and exacting assessment of AI model performance across different enterprise tasks.
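A minimal sketch of that all-or-nothing rule, assuming each test case reduces to a list of booleans for its conditions (the data shapes here are invented for illustration, not taken from TRUEBench's published code):

```python
def score_test(conditions_met):
    """A test case passes only if every associated condition is satisfied."""
    return 1 if all(conditions_met) else 0

def overall_score(test_results):
    """Overall score as the fraction of test cases passed."""
    return sum(score_test(r) for r in test_results) / len(test_results)

# A response meeting two of three conditions on a test still scores 0 for it,
# which is what makes the scoring model strict.
results = [[True, True, True], [True, False, True], [True]]
```

Under this rule, partial credit never accumulates within a test case, so a model's overall score reflects how often it fully satisfied a task rather than how close it came.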

To boost transparency and encourage wider adoption, Samsung has made TRUEBench's data samples and leaderboards publicly available on the global open-source platform Hugging Face. This allows developers, researchers, and enterprises to directly compare the productivity performance of up to five different AI models simultaneously. The platform provides a clear, at-a-glance overview of how various AIs stack up against each other on practical tasks.

As of writing, here are the top 20 models by overall ranking based on Samsung’s AI benchmark:

[Table: current top 20 models by overall ranking on Samsung's TRUEBench leaderboard.]

The full published data also includes the average length of the AI-generated responses. This allows for a simultaneous comparison of not only performance but also efficiency, a key consideration for businesses weighing operational costs and speed.
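Those two published columns, score and average response length, can be read side by side. The model names and figures below are invented for illustration; only the idea of weighing score against output length comes from the article.

```python
# Hypothetical leaderboard rows: (model, overall score, avg response length in chars)
rows = [
    ("model-a", 71.2, 1850),
    ("model-b", 69.8, 920),
    ("model-c", 65.4, 2400),
]

def by_score(leaderboard):
    """Rank by score while keeping response length visible as a cost proxy:
    a slightly lower-scoring model with much shorter outputs may be cheaper
    and faster to run in production."""
    return sorted(leaderboard, key=lambda r: r[1], reverse=True)

for name, score, avg_len in by_score(rows):
    print(f"{name}: score={score}, avg_response_chars={avg_len}")
```

In this made-up example, model-b trails model-a by 1.4 points but produces responses half the length, the kind of trade-off the published length data is meant to surface.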

With the launch of TRUEBench, Samsung is not merely releasing another tool but aiming to change how the industry thinks about AI performance. By shifting the focus from abstract knowledge to tangible productivity, Samsung's benchmark could help organisations make better decisions about which enterprise AI models to integrate into their workflows, bridging the gap between an AI's potential and its proven value.

See also: Inside Huawei’s plan to make thousands of AI chips think like one computer


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.



