DDN Achieves Unprecedented Performance in MLPerf™ Benchmarking, Empowering Transformative AI Business Outcomes 

DDN®, provider of the data intelligence platform, proudly announces a groundbreaking achievement in the MLPerf™ Storage Benchmark, setting new standards for performance and efficiency. DDN’s A3I™ (Accelerated Any-scale AI) systems demonstrated unmatched capabilities in multi-node configurations, solidifying their role as essential drivers of high-demand machine learning (ML) workloads and transformative business outcomes. “Our MLPerf results emphatically showcase DDN’s […]

New MLPerf Storage v1.0 Benchmark Results Show Storage Systems Play a Critical Role in AI Model Training Performance

MLCommons® announced results for its industry-standard MLPerf® Storage v1.0 benchmark suite, which is designed to measure the performance of storage systems for machine learning (ML) workloads in an architecture-neutral, representative, and reproducible manner.

New MLPerf Inference v4.1 Benchmark Results Highlight Rapid Hardware and Software Innovations in Generative AI Systems

Today, MLCommons® announced new results for its industry-standard MLPerf® Inference v4.1 benchmark suite, which delivers machine learning (ML) system performance benchmarking in an architecture-neutral, representative, and reproducible manner. This release includes first-time results for a new benchmark based on a mixture of experts (MoE) model architecture. It also presents new findings on power consumption related to inference execution.

Intel Gaudi 2 Remains Only Benchmarked Alternative to NV H100 for GenAI Performance

Newest MLPerf results for Intel Gaudi 2 accelerator and 5th Gen Intel Xeon demonstrate how Intel is raising the bar for generative AI performance across its portfolio and with its ecosystem partners.

DDN Storage Solutions Deliver 700% Gains in AI and Machine Learning for Image Segmentation and Natural Language Processing

DDN®, a leader in artificial intelligence (AI) and multi-cloud data management solutions, announced impressive performance results of its AI storage platform for the inaugural AI storage benchmarks released this week by the MLCommons Association. The MLPerf™ Storage v0.5 benchmark results confirm DDN storage solutions as the gold standard for AI and machine learning applications.

Deci’s Natural Language Processing (NLP) Model Achieves Breakthrough Performance at MLPerf

Deci, the deep learning company harnessing Artificial Intelligence (AI) to build better AI, announced results for its Natural Language Processing (NLP) inference model submitted to the MLPerf Inference v2.1 benchmark suite under the open submission track.

MLPerf Results Highlight More Capable ML Training

Today, MLCommons®, an open engineering consortium, released new results from MLPerf™ Training v2.0, which measures the performance of training machine learning models. Faster model training empowers researchers to unlock new capabilities sooner, such as diagnosing tumors, recognizing speech automatically, or improving movie recommendations. The latest MLPerf Training results demonstrate broad industry participation and up to 1.8X greater performance, ultimately paving the way for more capable intelligent systems to benefit society at large.

Deci Boosts Computer Vision & NLP Models’ Performance at MLPerf 

Deci, the deep learning company harnessing Artificial Intelligence (AI) to build better AI, announced its results for both Computer Vision (CV) and Natural Language Processing (NLP) inference models submitted to the MLPerf v2.0 Datacenter Open division. These submissions demonstrated the power of Deci’s Automated Neural Architecture Construction (AutoNAC) technology, which automatically generated models dubbed DeciNets and DeciBERT, delivering breakthrough accuracy and throughput performance on Intel CPUs.

MLCommons™ Releases MLPerf™ Inference v1.1 Results

Today, MLCommons, an open engineering consortium, released new results for MLPerf Inference v1.1, the organization’s machine learning inference performance benchmark suite. MLPerf Inference measures the performance of applying a trained machine learning model to new data for a wide variety of applications and form factors, and optionally includes system power measurement.

Deci and Intel Collaborate to Optimize Deep Learning Inference on Intel’s CPUs

Deci, the deep learning company building the next generation of AI, announced a broad strategic business and technology collaboration with Intel Corporation to optimize deep learning inference on Intel Architecture (IA) CPUs. As one of the first companies to participate in the Intel Ignite startup accelerator, Deci will now work with Intel to deploy innovative AI technologies to mutual customers.