Voltaire Staff

New AI benchmarks added to test hardware strength



MLCommons, an artificial intelligence engineering consortium, has unveiled a new series of tests and results that evaluate how well hardware performs AI tasks and responds to user interactions.


The latest benchmarks, introduced on Wednesday, specifically gauge the efficiency of AI chips and systems in processing data-rich models to produce swift responses.


The findings offer valuable insights into the speed at which AI applications like ChatGPT can deliver responses to user inquiries.


"Today, MLCommons announced new results from our industry-standard MLPerf Inference v4.0 benchmark suite, which delivers industry standard machine learning (ML) system performance benchmarking in an architecture-neutral, representative, and reproducible manner," the company