This is the second article in an editorial series that explores high-performance storage solutions in the cloud for an exploding commercial data universe. This week we look at how an enterprise can attain high-performance, scalable storage for its big data applications. You can download the entire series in PDF format from the insideAI News White Paper Library, courtesy of Intel.
As compute speed advanced toward its theoretical maximum, the HPC community quickly discovered that the speed of storage devices and the underlying Network File System (NFS), developed decades ago, had not kept pace. As CPUs got faster, storage became the main bottleneck in high data-volume environments. While NFS remains the appropriate infrastructure for networking storage and compute nodes in most data centers and business applications, it does not scale well: performance and throughput quickly diminish as more networked servers and storage devices are added and data traffic expands. As always, the slowest component dominates overall performance.
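To make that scaling behavior concrete, here is a back-of-the-envelope sketch (not from the original article; the server bandwidth and client demand figures are illustrative assumptions) showing how per-client throughput collapses once a single file server's bandwidth saturates:

```python
# Illustrative model: all clients share one file server's bandwidth.
# The capacity and demand figures are hypothetical, chosen only to show the trend.

SERVER_BANDWIDTH_MBPS = 1_000   # assumed aggregate bandwidth of one server
CLIENT_DEMAND_MBPS = 200        # assumed throughput each client would like

for clients in (1, 2, 5, 10, 50, 100):
    # Each client gets an equal share of the server's bandwidth,
    # capped by what the client actually asks for.
    per_client = min(CLIENT_DEMAND_MBPS, SERVER_BANDWIDTH_MBPS / clients)
    print(f"{clients:>3} clients -> {per_client:7.1f} MB/s each")
```

With these assumed numbers, clients see full throughput until the server saturates at five of them; at 100 clients, each gets one twentieth of what it needs, no matter how fast the CPUs on the compute side are.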
NFS and similar distributed file systems rely on a single node to direct all the I/O. This simple approach is easy to manage, but as systems grow into clusters of servers, pushing all the I/O traffic through one node creates a serious bottleneck for heavy data workloads. It also introduces a single point of failure: if that one node goes down, critical data can become unavailable exactly when it is needed.
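The reliability cost of that single node can be sketched with a simple availability calculation (the 99.9% figure below is an assumed per-node availability, not a measured value): when every request must pass through one head node, the storage path can be no more available than that node, whereas a failover pair is down only when both nodes fail at once.

```python
# Hypothetical per-node availability; real values depend on hardware and operations.
NODE_AVAILABILITY = 0.999  # assumed 99.9% uptime for a single I/O head node

# With one head node, storage availability is capped by that node alone.
single_node = NODE_AVAILABILITY

# With a failover pair, the system is down only when both nodes are down
# (assuming independent failures, an idealization).
failover_pair = 1 - (1 - NODE_AVAILABILITY) ** 2

hours_per_year = 24 * 365
print(f"single node  : {single_node:.6f} "
      f"(~{(1 - single_node) * hours_per_year:.2f} h downtime/yr)")
print(f"failover pair: {failover_pair:.6f} "
      f"(~{(1 - failover_pair) * hours_per_year:.2f} h downtime/yr)")
```

Under these assumptions the single node costs roughly 8.8 hours of storage downtime a year; adding redundancy for its role cuts that to about half a minute.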
Traditionally, data centers have met growing demands for compute power and storage by adding more server clusters and storage devices to the network, which only compounds data bottlenecks, management overhead, and costs.
Beyond NFS, today's high-performance data analytics (HPDA) workloads need a robust I/O file system that scales efficiently to extreme capacities, sustains high performance from moderate to massive data flows, and has built-in redundancy to guarantee high availability and reliability despite hardware failures.
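To see why such a parallel design scales where a single-server design does not, here is a minimal conceptual sketch (the stripe size, target count, and helper functions are hypothetical, not any real file system's API) of striping one file's blocks round-robin across several storage targets so that I/O can proceed against all of them at once:

```python
# Minimal striping sketch: blocks of one file are dealt round-robin across
# several storage targets, so reads and writes can hit all targets in parallel.
# The stripe size and target count are illustrative assumptions.

STRIPE_SIZE = 4    # bytes per block, kept tiny for readability
NUM_TARGETS = 3    # number of storage targets (e.g., disks or storage servers)

def stripe(data: bytes, stripe_size: int, num_targets: int) -> list[list[bytes]]:
    """Split data into fixed-size blocks and deal them out round-robin."""
    targets: list[list[bytes]] = [[] for _ in range(num_targets)]
    blocks = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
    for i, block in enumerate(blocks):
        targets[i % num_targets].append(block)
    return targets

def reassemble(targets: list[list[bytes]]) -> bytes:
    """Interleave blocks back in round-robin order to recover the file."""
    out = []
    for i in range(max(len(t) for t in targets)):
        for t in targets:
            if i < len(t):
                out.append(t[i])
    return b"".join(out)

data = b"the quick brown fox jumps over the lazy dog"
targets = stripe(data, STRIPE_SIZE, NUM_TARGETS)
assert reassemble(targets) == data  # round-trip check
for n, t in enumerate(targets):
    print(f"target {n}: {t}")
```

Because consecutive blocks land on different targets, aggregate bandwidth grows with the number of targets instead of being capped by any single server, which is the core idea behind the parallel file systems this series turns to next.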
Next week we'll explore the fundamentals of Lustre as the foundation of a scalable storage solution. If you prefer, you can download the complete guide as a PDF from the insideAI News White Paper Library, courtesy of Intel.