Intel’s Habana Labs Launches Second-Generation AI Processors for Training and Inferencing

Habana Gaudi2 processor demonstrates twice the throughput of Nvidia's A100 GPU

Intel announced that Habana Labs, its data center team focused on AI deep learning processor technologies, launched its second-generation deep learning processors for training and inference: Habana® Gaudi®2 and Habana® Greco™. These new processors address an industry gap by providing customers with high-performance, high-efficiency deep learning compute choices for both training workloads and inference deployments in the data center while lowering the AI barrier to entry for companies of all sizes.

“The launch of Habana’s new deep learning processors is a prime example of Intel executing on its AI strategy to give customers a wide array of solution choices – from cloud to edge – addressing the growing number and complex nature of AI workloads,” said Sandra Rivera, Intel executive vice president and general manager of the Datacenter and AI Group. “Gaudi2 can help Intel customers train increasingly large and complex deep learning workloads with speed and efficiency, and we’re anticipating the inference efficiencies that Greco will bring.”

The new Gaudi2 and Greco processors are purpose-built for AI deep learning applications, implemented in 7-nanometer technology and built on Habana's high-efficiency architecture. Habana Labs revealed that Gaudi2's training throughput on the ResNet-50 computer vision model and the BERT natural language processing model is twice that of the NVIDIA A100-80GB GPU.

“Compared with the A100 GPU, implemented in the same process node and roughly the same die size, Gaudi2 delivers clear leadership training performance as demonstrated with apples-to-apples comparison on key workloads,” said Eitan Medina, chief operating officer at Habana Labs. “This deep learning acceleration architecture is fundamentally more efficient and backed with a strong roadmap.”

What Gaudi2 Delivers

  • Deep learning training efficiency: The Habana Gaudi2 processor significantly increases training performance, building on the same high-efficiency first-generation Gaudi architecture that delivers up to 40% better price performance in the AWS cloud with Amazon EC2 DL1 instances and on-premises with the Supermicro X12 Gaudi Training Server. With a leap in process technology from 16 nm on first-generation Gaudi to 7 nm, Gaudi2 significantly boosts compute, memory and networking capabilities. Gaudi2 also introduces an integrated media processing engine that handles compressed media, offloading this work from the host subsystem. Gaudi2 triples the in-package memory capacity from 32GB to 96GB of HBM2E at 2.45TB/sec bandwidth, and integrates 24 x 100GbE RoCE RDMA NICs on chip for scaling up and scaling out over standard Ethernet.
  • Customer benefits: Gaudi2 provides customers a higher-performance deep learning training alternative to existing GPU-based acceleration, meaning they can train more and spend less, lowering total cost of ownership in the cloud and the data center. Built to address many model types and end-market applications, Gaudi2 offers faster time-to-train, which can translate into faster time-to-insights and faster time-to-market. Gaudi2 is designed to significantly improve vision modeling for applications such as autonomous vehicles, medical imaging and defect detection in manufacturing, as well as natural language processing applications.
  • Networking capacity, flexibility and efficiency: Habana has made it cost-effective and easy for customers to scale out training capacity by amplifying training bandwidth on second-generation Gaudi. With industry-standard RoCE integrated on chip, customers can easily scale and configure Gaudi2 systems to suit their deep learning cluster requirements. Because systems are implemented on widely used industry-standard Ethernet connectivity, Gaudi2 lets customers choose from a wide array of Ethernet switching and related networking equipment, enabling cost savings. Avoiding proprietary interconnect technologies in the data center (as offered by competitors) is important for IT decision makers who want to avoid single-vendor lock-in. The on-chip integration of the network interface controller (NIC) ports also lowers component costs.
  • Simplified model build and migration: The Habana® SynapseAI® software suite is optimized for deep learning model development and for easing migration of existing GPU-based models to Gaudi platform hardware. SynapseAI supports training models on Gaudi2 and running inference on any target, including Intel® Xeon® processors, Habana Greco or Gaudi2 itself. Developers are supported with documentation and tools, how-to content and a community support forum on the Habana Developer Site, with reference models and a model roadmap on the Habana GitHub. Getting started with model migration is as easy as adding two lines of code; for expert users who wish to program their own kernels, Habana offers the full tool suite.
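To illustrate the "two lines of code" migration claim, here is a minimal sketch of moving a PyTorch training loop onto Gaudi, based on Habana's publicly documented SynapseAI PyTorch bridge (the `habana_frameworks` package and the `hpu` device type; exact module paths can vary by SynapseAI release). The sketch falls back to CPU on machines without Gaudi hardware so it stays runnable anywhere:

```python
import torch
import torch.nn as nn

# The two Gaudi-specific additions are marked below. The habana_frameworks
# package ships with Habana's SynapseAI software suite; without it (no Gaudi
# hardware present), this sketch falls back to the CPU device.
try:
    import habana_frameworks.torch.core as htcore  # addition 1: load the Habana PyTorch bridge
    device = torch.device("hpu")                   # addition 2: target the Gaudi device
except ImportError:
    htcore, device = None, torch.device("cpu")     # fallback for non-Gaudi machines

# Ordinary PyTorch model and training loop; nothing else changes.
model = nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(8, 16, device=device)
y = torch.randn(8, 4, device=device)

for _ in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if htcore is not None:
        htcore.mark_step()  # flush accumulated ops to the Gaudi device each step
```

The rest of the loop is unchanged from a stock GPU or CPU script, which is the substance of the migration claim: the framework-level code stays the same and only the device target changes.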

“We congratulate Habana on the launch of its new high-performance, 7nm Gaudi2 accelerator. We look forward to collaborating on the turnkey AI solution consisting of our DDN AI400X2 storage appliance combined with Supermicro X12 Gaudi®2 Training Servers to help enterprises with large, complex deep learning workloads unlock meaningful business value with simple but powerful storage,” said Paul Bloch, president and co-founder of DataDirect Networks.

About Availability of Gaudi2 Training Solutions

Gaudi2 processors are now available to Habana customers. Habana has partnered with Supermicro to bring the Supermicro X12 Gaudi2 Training Server to market this year. Habana has also teamed up with DDN® to deliver turnkey rack-level solutions that pair the Supermicro X12 server with the DDN AI400X2 storage solution for expanded AI storage capacity.

“We’re excited to bring our next-generation AI deep learning server to market featuring the high-performance 7 nm Gaudi2 processor that will enable our customers to achieve faster time-to-train advantages while preserving the efficiency and expanding on the scalability of first-generation Gaudi,” said Charles Liang, CEO, Supermicro.
