Getting Your Head Out of the Public Cloud with Composable Infrastructure

In this special guest feature, Tom Lyon, Chief Scientist and Cofounder at DriveScale, discusses how public clouds are optimized to support traditional workloads, but are not well suited for many new data intensive applications. These demand the cost and performance advantages of bare metal infrastructure. But bare metal infrastructure is difficult to manage and often poorly utilized. A new computing platform architecture – Software Composable Infrastructure – has been created to solve these problems. Tom is a computing systems architect, a serial entrepreneur and a kernel hacker. Prior to founding DriveScale, Tom was founder and Chief Scientist of Nuova Systems, a start-up that led a new architectural approach to systems and networking. As employee #8 at Sun Microsystems, Tom was there from the beginning, where he contributed to the UNIX kernel, created the SunLink product family, and was one of the NFS and SPARC architects. Tom holds numerous U.S. patents in system interconnects, memory systems, and storage. He received a B.S. in Electrical Engineering and Computer Science from Princeton University.

There are thousands of articles about moving IT to the cloud, and nearly as many about hybrid IT that make the case that not everything belongs in the cloud. A key factor any enterprise needs to consider when deciding between cloud and on-premises infrastructure is the nature of the workload itself — and whether an emerging set of technologies called Composable Infrastructure (CI), which aims to improve the flexibility and efficiency of on-premises infrastructure, is better suited to its needs.

As complex as many perceive migrating to the public cloud to be, organizations with legacy systems in fact have a rather simplistic rule for when to use the cloud: keep old stuff on-prem and start new workloads in the cloud. Unfortunately, this can be exactly the wrong thing to do for many workloads. Older workloads running on virtual machines and SANs actually tend to work great in the cloud, whereas many of the new technologies for modern data intensive workloads are not well-supported.

These data-intensive workloads typically require servers with direct-attached storage (DAS) to meet their bandwidth, capacity and cost requirements. However, the cloud equivalent of DAS – instance local storage – is the most expensive and least reliable form of cloud storage while alternatives such as EBS and S3 have far lower available bandwidth.

As most of us already know, technologies such as Hadoop, Kafka and many NoSQL/NewSQL databases were developed at cloud-scale companies like Yahoo, LinkedIn, Facebook and Twitter, all of which run these modern workloads on bare metal. Using these frameworks in VM-based cloud environments can lead to huge inefficiencies and difficult-to-predict performance profiles. Additionally, container-based infrastructure using technologies like Docker and Kubernetes is usually much more efficient on bare metal.

An example of a company facing these issues is Clearsense, a medical analytics company. Clearsense offers a SaaS product based on Hadoop analytics. The service started on AWS and grew into a very large application. But Clearsense became frustrated with the cost and unpredictability of AWS and began to consider a move to on-premises infrastructure. Traditional server infrastructure, however, just didn't offer the flexibility that Clearsense desired.

Fortunately for companies like Clearsense, an emerging class of products for Software Composable Infrastructure (SCI) allows servers to be provisioned and re-provisioned to suit the demands of particular workloads, giving public-cloud-like flexibility to on-prem installations. Unlike VMs and SANs, SCI also maintains the performance and cost advantages of bare metal. By moving out of the public cloud to an SCI architecture, Clearsense was able to achieve lower cost, better and more predictable performance, and faster response to changing workloads.

How Does Composable Infrastructure Work?

Composable Infrastructure works by controlling the connections among servers, storage and other components, each attached to a high-bandwidth switching domain. After composition, the resources appear to applications and system software as if they were physically attached to their respective servers. As workload requirements change, infrastructure can be re-composed and resources moved among different types of workload clusters — solving one of the biggest problems facing IT administrators today.
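The compose/re-compose cycle described above can be sketched as a simple resource-allocation model. The Python below is purely illustrative — the names (`Composer`, `compose`, `decompose`, `Drive`) are hypothetical and do not correspond to any vendor's actual API; a real system would perform the attachment over the switching fabric rather than in a Python dictionary:

```python
# Illustrative model of composing fabric-attached drives onto servers.
# All class and method names here are hypothetical, for explanation only.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Drive:
    drive_id: str
    capacity_gb: int
    attached_to: Optional[str] = None  # server this drive is currently composed with

@dataclass
class Composer:
    """Tracks a pool of fabric-attached drives and composes them onto servers."""
    pool: Dict[str, Drive] = field(default_factory=dict)

    def add_drive(self, drive: Drive) -> None:
        self.pool[drive.drive_id] = drive

    def compose(self, server: str, capacity_gb: int) -> List[str]:
        """Attach free drives to `server` until the capacity target is met."""
        attached: List[str] = []
        needed = capacity_gb
        for drive in self.pool.values():
            if needed <= 0:
                break
            if drive.attached_to is None:
                # To the OS and applications, this drive now looks like local DAS.
                drive.attached_to = server
                attached.append(drive.drive_id)
                needed -= drive.capacity_gb
        if needed > 0:
            raise RuntimeError("insufficient free capacity in the pool")
        return attached

    def decompose(self, server: str) -> None:
        """Release a server's drives back to the pool for another workload."""
        for drive in self.pool.values():
            if drive.attached_to == server:
                drive.attached_to = None
```

The key point the sketch captures is that composition and decomposition are logical operations on a shared pool: no drives are physically moved, so a cluster can be re-shaped for a new workload in software.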

Along with delivering cloud-like flexibility and efficiency, Composable Infrastructure solutions typically require rethinking the way hardware is deployed. Different vendors take different approaches to the hardware they support:

  • Hewlett Packard Enterprise is heavily promoting CI tied to their proprietary Synergy product line, which is essentially a next-generation blade server platform.
  • Liqid has a CI system based on an external PCI Express switch, which is still a rather exotic technology with a number of scaling and cabling issues.
  • DriveScale provides an SCI system that works with industry-standard servers, storage and high speed Ethernet switches.

The types of clusters created and managed by Composable Infrastructure can scale to thousands of nodes. Because of this, it is very important for a vendor's CI system to provide a data-center-scale view of resources, topology and constraints. The reliability of this system is also critical, because it ultimately administers all of these data center resources.

Moving IT to the cloud or hybrid IT may be commonplace in today’s media, but Software Composable Infrastructure is an alternative approach whose time has come. The cloud does make perfect sense for many uses and organizations, but it is not always the best choice for data center infrastructure due to cost and scalability restrictions. Software Composable Infrastructure is quickly emerging as a category that provides public-cloud-like flexibility and efficiency at a far more reasonable cost.
