MLCommons Releases MLPerf AI Training v5.1 Results

Today, MLCommons announced new results for the MLPerf Training v5.1 benchmark suite, highlighting the rapid evolution and increasing richness of the AI ecosystem as well as significant performance improvements from new generations of systems. Full results for MLPerf Training v5.1, along with additional information about the benchmarks, are available on the MLCommons website. The MLPerf Training benchmark suite comprises […]

MLPerf Releases AI Storage v2.0 Benchmark Results

San Francisco, CA — MLCommons has announced results for its MLPerf Storage v2.0 benchmark suite, designed to measure the performance of storage systems for machine learning workloads in an architecture-neutral, representative, and reproducible manner. According to MLCommons, the results show that storage system performance […]

MLCommons Releases MLPerf Inference v5.0 Benchmark Results

Today, MLCommons announced new results for its MLPerf Inference v5.0 benchmark suite, which delivers machine learning (ML) system performance benchmarking. The organization said the results highlight that the AI community is focusing on generative AI […]

MLCommons Releases AILuminate LLM v1.1 with French Language Capabilities

Paris – February 11, 2025: MLCommons, in partnership with the AI Verify Foundation, today released v1.1 of AILuminate, incorporating new French language capabilities into its first-of-its-kind AI safety benchmark. The new update – which was announced at the Paris AI Action Summit – marks the next step towards a global standard for AI safety and comes as […]

New MLPerf Inference v4.1 Benchmark Results Highlight Rapid Hardware and Software Innovations in Generative AI Systems

Today, MLCommons® announced new results for its industry-standard MLPerf® Inference v4.1 benchmark suite, which delivers machine learning (ML) system performance benchmarking in an architecture-neutral, representative, and reproducible manner. This release includes first-time results for a new benchmark based on a mixture of experts (MoE) model architecture. It also presents new findings on power consumption related to inference execution.