Enterprise infrastructure is at a weird inflection point as 2016 rolls around. In some cases, it is getting larger, as with the hyperscale cloud providers, but elsewhere it is getting smaller, as with the new hyperconverged platforms hitting enterprise channels.
Both of these trends stem from the same demands on data service providers: greater modularity, density and energy efficiency. A hyperscaler seeks to leverage these features to produce maximum scale, while a hyperconverger wants to enable reasonable scale on the smallest possible footprint. At the same time, both require increasingly sophisticated management, automation and optimization to provide customized service to an ever-expanding application environment.
But while most of the headlines surrounding hyper-infrastructure highlight commodity, OEM hardware, the fact is that traditional vendors stand to gain as well, although perhaps not as much as they have from conventional rack servers and storage arrays. Dell recently combined its two hyperscale businesses into a single entity, dubbed Extreme Scale Infrastructure, which will address what the company calls the hyperscale and sub-hyperscale markets. Both of these segments are looking for fast, scalable infrastructure, although they may have differing levels of in-house expertise in advanced architectures.
Dell is also buying into hyperscale and hyperconverged infrastructure through the EMC acquisition. EMC recently unveiled the RackHD platform that enables hyperscale infrastructure similar to the large web-facing giants but still suitable for the enterprise. RackHD is also being used in the VxRack and ScaleIO hyperconverged solutions, as well as with the Pivotal analytics platform and the Virtustream cloud. In this way, it appears that Dell/EMC will be able to deliver a fairly unified architecture spanning converged private cloud infrastructure to scale-out public environments.
Meanwhile, smaller developers like Atlantis Computing are looking to leverage flash storage and other components for hyperconverged solutions that combine broad scalability with rapid deployment and configuration. The company’s HyperScale appliance offers high density and low cost to support emerging workloads while still providing ties to legacy infrastructure. The device installs in about 35 minutes and is compatible with VMware vSphere and Citrix XenServer environments, although not yet Hyper-V. The company has released two versions to date, a 12 TB model and a 24 TB model, with pricing for the smaller version starting at $78,000.
An appliance might be the preferred solution for organizations that are too small for full hyperscale but too large to effectively support Big Data and other initiatives on conventional infrastructure, says IT analyst Eric Slack. As he told The Register, many organizations fall for the “hyperscale dream,” envisioning Google- or Facebook-class infrastructure for themselves without realizing the technical know-how and support systems needed to maintain such an environment. With an appliance solution in place, even midsized organizations can stand up a private cloud fairly easily and then gradually shift workloads off of legacy infrastructure at their own pace.
The enterprise industry has been talking about getting lean, mean and green for quite some time now. Virtualization helped consolidate workloads onto fewer resources, and now those resources can be had in abundance on smaller footprints. Whether the goal is to then scale data infrastructure into the stratosphere or shrink it into a broom closet, there is a hyper-approach to make it happen.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.