To Infinity, and Beyond! – Scaling Your Hadoop Infrastructure

In this special guest feature, Tom Lyon, Chief Scientist at DriveScale, describes how to run demanding analytics applications and/or 1000+ node Hadoop workloads on commodity servers and storage. Tom is a computing systems architect, a serial entrepreneur and a kernel hacker. He most recently co-founded DriveScale, a company that is pioneering flexible, scale-out computing for the enterprise using standard servers and commodity storage. He received a B.S. in Electrical Engineering and Computer Science from Princeton University. Tom was also a founder at Nuova Systems (sold to Cisco) and Ipsilon Networks (sold to Nokia). Additionally, as employee #8 at Sun Microsystems, Tom made seminal contributions to the UNIX kernel, created the SunLink product family, and was one of the NFS and SPARC architects.

So, you’ve had your Hadoop cluster for a while. You’ve got maybe 50 to 100 nodes running stably, and you’ve gained some mastery of the analytics frameworks – whether Spark, Flink, or good old MapReduce. You’ve been able to demonstrate real business value from your cluster, and you’re ready to take it to a whole new level with lots more data and a lot more applications and users. The hardware for your cluster was probably not a big concern as you dove into Hadoop, so you went with the typical racks of commodity servers, each with 12 or 24 hard drives. It works, so why think about different hardware?

Well, because as your cluster size approaches many hundreds of nodes, it will certainly be the biggest cluster in your data center, and may even become the majority of your compute infrastructure. At this scale, inefficiencies caused by poorly balanced resources can add up to a lot of wasted time, money, power, heat, and space!

Bend It or Break It

Even if you think your CPU and storage are well balanced today, you can bet they won’t be as applications and frameworks evolve, data gets bigger, and CPUs get faster. The CPU you buy next year will be twice as fast as last year’s; the disks will still be slow but will have enormous capacity. There’s just no predicting the correct balance between CPU and storage. What you need is flexibility.
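To see why the balance drifts, consider a quick back-of-the-envelope calculation. The node configurations below are illustrative assumptions, not vendor specs: per-disk throughput stays roughly flat while core counts climb, so the disk bandwidth available to each core keeps shrinking even as raw capacity per node grows.

```python
# Back-of-the-envelope sketch: per-disk throughput stays roughly flat
# while core counts climb, so disk bandwidth per core keeps shrinking.
# All node configurations below are illustrative assumptions, not specs.

DISK_MBPS = 200  # rough sequential throughput of one spinning disk

# (year, cores_per_node, disks_per_node, tb_per_disk)
generations = [
    (2014, 12, 12, 2),
    (2016, 20, 12, 4),
    (2018, 40, 12, 8),
]

for year, cores, disks, tb in generations:
    capacity_tb = disks * tb
    mbps_per_core = disks * DISK_MBPS / cores
    print(f"{year}: {cores} cores, {capacity_tb} TB raw, "
          f"{mbps_per_core:.0f} MB/s of disk bandwidth per core")
```

With these assumed configurations, per-core disk bandwidth falls from 200 MB/s to 60 MB/s in four years – a ratio you locked in at purchase time, unless the storage can be rebalanced independently of the CPUs.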

That flexibility is achieved by disaggregating the disks from the CPU nodes. But beware traditional NAS and SAN solutions – they are far from “commodity” hardware and will blow your budget to smithereens while struggling to achieve the performance levels that Hadoop needs. Look for solutions with rack-scale architectures that can maximize your flexibility while preserving the high performance and low cost needed for Hadoop. The whole Big Data movement is enabled by very cheap storage, so don’t get locked into a traditional “gold-plated” storage solution.

Go Big!

Once storage is removed from the CPU nodes, you have a much broader choice of CPU/memory combinations. Consider the “classic” Hadoop node of 2013-2014 – 12 CPU cores with about 64GB of memory. Today, you can easily afford 36 to 40 core nodes with 512GB of memory (and the cores and memory are both a lot faster). Even if you have a traditional MapReduce application that is I/O limited on smaller CPUs, moving to bigger, beefier CPU nodes can remove a lot of communication and serialization overhead. Spark and other newer frameworks benefit enormously from larger amounts of memory per node, because a few big caches are more efficient than the same amount of cache spread over many small nodes.
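As a concrete illustration, here is a minimal PySpark sketch of how executors might be sized to fill one of those beefier nodes. The specific figures are assumptions for illustration, not a universal tuning recipe.

```python
# A minimal sketch (assumed sizing, not a universal recipe) of filling
# a 40-core, 512GB node with Spark executors.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("big-node-sizing")
    # 5 cores per executor is a common rule of thumb, so 8 executors
    # fill a 40-core node.
    .config("spark.executor.cores", "5")
    # ~56GB of heap per executor (8 x 56GB = 448GB) leaves headroom on
    # a 512GB node for the OS, page cache, and off-heap overhead.
    .config("spark.executor.memory", "56g")
    .config("spark.executor.memoryOverhead", "6g")
    .getOrCreate()
)
```

The point is less the exact numbers than the shape of the configuration: fewer, larger executors with big heaps, which smaller nodes simply cannot host.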

And don’t skimp on networking! Anything less than 10Gbps is like breathing through a straw for today’s servers, and if you’ve separated your disks, then that traffic is on the network as well. Even if you can’t control your network backbone bandwidth, adding bandwidth “in the rack” can help Hadoop a lot. Today, right now, you can get complete 25Gbps Ethernet solutions from vendors like Dell that will give you two 25GbE ports for each server at a very affordable price.
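Some idealized arithmetic shows how much that extra in-rack bandwidth matters when a big shuffle or a disaggregated disk read hits the wire. The 10 TB figure is a hypothetical workload, and the calculation ignores protocol overhead and contention.

```python
# Idealized arithmetic (ignores protocol overhead and contention):
# time to move a hypothetical 10 TB shuffle at different link speeds.

SHUFFLE_TB = 10
links = {"1 x 10GbE": 10, "2 x 10GbE": 20, "2 x 25GbE": 50}

for name, gbps in links.items():
    seconds = SHUFFLE_TB * 8_000 / gbps  # 1 TB ~= 8,000 gigabits
    print(f"{name}: about {seconds / 60:.0f} minutes")
```

Even in this best case, a single 10GbE link spends over two hours on the transfer that a pair of 25GbE links finishes in under half an hour.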

So look before you leap into a large-scale Hadoop project, and make sure your hardware plans take into account today’s technologies, not just what people were successful with in previous years.
