
AI Training Benchmarks Push Hardware Limits

By Simon Osuji
October 30, 2025
in Artificial Intelligence



Since 2018, the consortium MLCommons has been running a sort of Olympics for AI training. The competition, called MLPerf, consists of a set of tasks for training specific AI models, on predefined datasets, to a certain accuracy. Essentially, these tasks, called benchmarks, test how well a hardware and low-level software configuration is set up to train a particular AI model.
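The core MLPerf metric is wall-clock time to train a model to a fixed quality target, rather than time for a fixed number of steps. MLPerf's actual harness is far more elaborate, but the idea can be sketched with a toy model (all names and parameters here are hypothetical, for illustration only):

```python
import time
import random

def time_to_train(target_loss, lr=0.1, seed=0, max_steps=100_000):
    """Train a toy 1-D linear model y = w*x with gradient descent until
    mean squared error drops below target_loss; return (seconds, steps).
    Mirrors the MLPerf-style metric: time to reach a quality target."""
    rng = random.Random(seed)
    # Synthetic dataset with true weight w = 3.0
    data = [(x, 3.0 * x) for x in (rng.uniform(-1, 1) for _ in range(256))]
    w = 0.0
    start = time.perf_counter()
    for step in range(1, max_steps + 1):
        # Gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
        loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
        if loss < target_loss:
            return time.perf_counter() - start, step
    raise RuntimeError("did not reach target accuracy")

elapsed, steps = time_to_train(target_loss=1e-6)
print(f"reached target in {steps} steps ({elapsed * 1e3:.2f} ms)")
```

Timing to a quality target, rather than to a step count, is what lets submissions with very different hardware and software stacks be compared fairly.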

Twice a year, companies put together their submissions—usually, clusters of CPUs and GPUs and software optimized for them—and compete to see whose submission can train the models fastest.

There is no question that since MLPerf’s inception, the cutting-edge hardware for AI training has improved dramatically. Over the years, Nvidia has released four new generations of GPUs, each of which has in turn become the industry standard (the latest, Nvidia’s Blackwell GPU, is not yet standard but is growing in popularity). The companies competing in MLPerf have also been using larger clusters of GPUs to tackle the training tasks.

However, the MLPerf benchmarks have also gotten tougher. And this increased rigor is by design—the benchmarks are trying to keep pace with the industry, says David Kanter, head of MLPerf. “The benchmarks are meant to be representative,” he says.

Intriguingly, the data show that large language models and their precursors have grown in size faster than hardware performance has improved. So each time a new benchmark is introduced, the fastest training time gets longer. Hardware improvements then gradually bring that time back down, only for the next benchmark to raise it again, and the cycle repeats.

© 2026 LBNN – All rights reserved.