The HPE Elastic Platform for Big Data Analytics

This is the fourth entry in an insideAI News series that explores the intelligent use of big data on an industrial scale. The series, also compiled into a complete Guide, covers the exponential growth of data and the changing data landscape, as well as realizing a scalable data lake. This entry focuses on offerings from HPE for big data analytics.

The HPE Elastic Platform for big data analytics is a modular infrastructure foundation that accelerates business insights by enabling organizations to rapidly deploy, efficiently scale, and securely manage the explosive growth of big data workloads.

HPE offers two powerful deployment models under the Elastic Platform:

  • HPE Balanced and Density Optimized (BDO) – Supports traditional Hadoop deployments that are symmetric (compute and storage scale together), with some flexibility in the choice of memory, processor, and storage capacity. It is typically based on the HPE ProLiant DL380 server platform, with density-optimized architectures utilizing the HPE Apollo 4000 series servers.
  • HPE Workload and Density Optimized (WDO) – Optimizes efficiency and price-performance through a building block approach. This architecture allows compute and storage to scale independently, utilizing faster Ethernet networks while accommodating the independent growth of data and workloads. The standard HPE WDO architecture is based on the HPE Apollo 4200 storage-optimized block and the HPE Apollo 2000 compute-optimized block, coupled through high-speed Ethernet. Combining these linked storage and compute blocks with Hadoop's YARN resource scheduling delivers a scalable, multi-tenant Hadoop platform (a minimal sketch of this decoupled layout follows this list). The HPE Apollo 4200 was chosen as the storage block because it provides exceptional storage density in a 2U form factor; the HPE Apollo 2000 provides exceptional compute density, supporting up to four servers with high core-to-memory ratios in a 2U form factor.
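
To make the compute/storage decoupling concrete, here is a minimal sketch (ours, not from HPE documentation) of a Hadoop client in Java. In a WDO layout, HDFS runs on the storage blocks, so an application on a compute block points fs.defaultFS at the storage tier's NameNode and reads data over the Ethernet fabric rather than from local disks. The host name and path below are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WdoHdfsClient {
    public static void main(String[] args) throws Exception {
        // Point at the HDFS NameNode running on the storage tier.
        // "storage-block-1" is a hypothetical host on the Apollo 4200 tier.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://storage-block-1:8020");

        // Any compute-block node (e.g., an Apollo 2000 server) can open the
        // same filesystem over the network; compute and storage scale
        // independently behind this single endpoint.
        try (FileSystem fs = FileSystem.get(conf)) {
            for (FileStatus status : fs.listStatus(new Path("/datalake"))) {
                System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
            }
        }
    }
}
```

Because the client addresses HDFS by network endpoint, either tier can grow on its own: adding compute blocks adds YARN capacity, while adding storage blocks adds HDFS capacity behind the same endpoint.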

Organizations that have invested in symmetric configurations can repurpose existing deployments into a more elastic platform such as the WDO architecture. This approach helps customers expand their analytics capabilities by growing compute and/or storage capacity independently, without building a new cluster.

The figure below highlights the building blocks that make up the HPE BDO and WDO system offerings. By leveraging a building block approach, customers can simplify the underlying infrastructure needed to address business initiatives surrounding data warehouse modernization, analytics, and business intelligence, and to build large-scale data lakes with diverse datasets. As workloads and data storage requirements change (often independently of each other), the HPE WDO system allows customers to add compute and storage blocks separately, which maximizes infrastructure capability for data-heavy workloads and promotes seamless scalability.

[Figure: Building blocks of the HPE BDO and WDO system offerings]

Accelerators

Accelerators are an additional component of the HPE Elastic Platform for Analytics: specialized building blocks designed to optimize workload performance, storage efficiency, and deployment agility.

As more demanding workloads are added, accelerator building blocks can be introduced to target specific outcomes. Benefits of accelerators include:

  • Performance acceleration for demanding workloads: NoSQL databases such as HBase or Cassandra that require low-latency, near-real-time processing; in-memory analytics using Spark and/or SAP HANA Vora; and deep learning with Spark and Caffe on GPU-accelerated servers
  • Storage efficiency optimization with HDFS tiering and erasure coding (a sketch of these HDFS mechanisms follows this list)
  • Deployment agility and self-service through automation and Platform as a Service (PaaS) solutions – HPE Insight CMU and BlueData EPIC, respectively
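
As a rough illustration of the storage efficiency item above, the sketch below uses Hadoop 3.x's DistributedFileSystem API to assign a cold storage tier and an erasure coding policy to a directory. The directory path is hypothetical; "COLD" and "RS-6-3-1024k" are Hadoop's built-in policy names, and vendor tooling builds on these HDFS mechanisms rather than replacing them.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class StorageEfficiencyDemo {
    public static void main(String[] args) throws Exception {
        // Assumes fs.defaultFS is set (e.g., in core-site.xml) to the cluster's NameNode.
        Configuration conf = new Configuration();
        DistributedFileSystem dfs =
                (DistributedFileSystem) FileSystem.get(conf);

        // HDFS tiering: keep rarely accessed data on dense archive media.
        // "COLD" is a built-in HDFS storage policy; /datalake/archive is hypothetical.
        dfs.setStoragePolicy(new Path("/datalake/archive"), "COLD");

        // Erasure coding: RS-6-3-1024k stores data with ~1.5x capacity overhead
        // instead of the 3x overhead of triple replication. Assumes the policy
        // has already been enabled cluster-wide (e.g., via hdfs ec -enablePolicy).
        dfs.setErasureCodingPolicy(new Path("/datalake/archive"), "RS-6-3-1024k");

        dfs.close();
    }
}
```

The same operations are also available from the command line via hdfs storagepolicies -setStoragePolicy and hdfs ec -setPolicy.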

Over the next few weeks, this series on the use of big data on an industrial scale will cover additional topics.

You can also download the complete report, “insideAI News Guide to the Intelligent Use of Big Data on an Industrial Scale,” courtesy of Hewlett Packard Enterprise.