Even in this age of virtual, software-defined infrastructure, provisioning and managing physical hardware remains one of the most costly and burdensome aspects of the modern data center.
Tools like AI, advanced automation and zero-touch provisioning are expected to make the job easier, but for the time being, most enterprise executives will have to struggle with the challenge of squeezing greater performance out of existing systems even as pressure from above to cut costs grows more intense.
This is partly why so much of the enterprise workload is gravitating toward the cloud, since this gives knowledge workers access to more scalable, flexible infrastructure while offloading many of the management headaches to a third-party provider. But some functions must remain close to home, which is why many tech experts are saying that converged and hyperconverged infrastructure (HCI) will soon sweep away the last vestiges of today’s complex, silo-based data center.
IDC reports that HCI revenues are already growing at more than 78 percent per year, producing revenues on the order of $1.5 billion per quarter. In fact, the firm says that HCI now tops 40 percent of the overall converged infrastructure market.
The key difference between converged and hyperconverged is that the former utilizes discrete compute, storage and networking components within a rack architecture while the latter integrates all components into a single module. Hyperconverged systems generally require a smaller footprint than converged, but they are less flexible when it comes to scaling up or out; that is, HCI requires you to scale compute and networking even if you only need more storage. In all likelihood, therefore, many organizations will deploy both converged and hyperconverged infrastructure, and perhaps even combine them under hybrid architectures.
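The scaling trade-off described above can be made concrete with a toy model. The node sizes below are illustrative only, not vendor figures: the point is that a converged rack can add a storage shelf by itself, while an HCI cluster grows in whole-node increments that bundle compute with storage.

```python
# Toy model contrasting capacity growth in converged vs. hyperconverged
# systems. Numbers are illustrative, not drawn from any product spec.
from dataclasses import dataclass

@dataclass
class Capacity:
    compute_cores: int = 0
    storage_tb: int = 0

    def add(self, other: "Capacity") -> "Capacity":
        return Capacity(self.compute_cores + other.compute_cores,
                        self.storage_tb + other.storage_tb)

# Converged: storage scales independently of compute.
converged = Capacity().add(Capacity(storage_tb=50))  # add a storage shelf only

# Hyperconverged: every node bundles compute *and* storage, so meeting a
# 50 TB storage target also adds compute whether it is needed or not.
HCI_NODE = Capacity(compute_cores=32, storage_tb=20)
hci = Capacity()
while hci.storage_tb < 50:
    hci = hci.add(HCI_NODE)

print(converged)  # 0 cores, 50 TB
print(hci)        # 96 cores, 60 TB -- compute came along for the ride
```

Three bundled nodes overshoot the storage target and bring 96 cores with them; that coupling is exactly why mixed converged/hyperconverged deployments are plausible.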
The key to all of this is the management software that will allow the enterprise to create dynamic, flexible data environments on abstracted, virtual planes as opposed to the fixed hardware/software constructs of today. In this way, operators will be able to quickly commission and decommission needed resources from a virtual pool, which lately has come to be described as “composable infrastructure.”
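The commission/decommission cycle behind composable infrastructure can be sketched as a simple resource pool. The class and method names below are illustrative, not any vendor's API: logical systems are carved out of a shared pool of disaggregated resources and returned to it when no longer needed.

```python
# Minimal sketch of the "composable" idea: workloads are composed from a
# shared pool of disaggregated resources and released back to it.
# Names are illustrative, not modeled on any vendor's management software.

class ResourcePool:
    def __init__(self, cores: int, storage_tb: int):
        self.free = {"cores": cores, "storage_tb": storage_tb}

    def compose(self, cores: int, storage_tb: int) -> dict:
        """Carve a logical system out of the pool; fail if it cannot fit."""
        if cores > self.free["cores"] or storage_tb > self.free["storage_tb"]:
            raise RuntimeError("insufficient free resources")
        self.free["cores"] -= cores
        self.free["storage_tb"] -= storage_tb
        return {"cores": cores, "storage_tb": storage_tb}

    def decompose(self, system: dict) -> None:
        """Return a decommissioned system's resources to the pool."""
        self.free["cores"] += system["cores"]
        self.free["storage_tb"] += system["storage_tb"]

pool = ResourcePool(cores=256, storage_tb=100)
vm = pool.compose(cores=16, storage_tb=4)  # commission a logical system
pool.decompose(vm)                         # decommission; capacity returns
print(pool.free)                           # {'cores': 256, 'storage_tb': 100}
```

The appeal for operators is that decommissioning is non-destructive: capacity flows back to the pool rather than being stranded in a fixed hardware/software construct.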
Indeed, it may soon become impossible to distinguish between HCI and composable infrastructure given the synergy that is arising around the two. Patrick Moorhead, president of Moor Insights & Strategy, noted to SDX Central recently that this convergence is happening so quickly that it will likely represent the future standard of IT infrastructure.
Perhaps the earliest proponent of composable infrastructure is HPE, which began the journey way back in 2015 with the Synergy platform. Since then, the company has brought appliance-maker SimpliVity and SDN developer Plexxi into the fold and is now working on a hybrid solution called Composable Cloud. All told, the company says it has more than 1,600 customers employing composable infrastructure in one form or another, making it the fastest-growing new technology in its portfolio. (Disclosure: I provide content services to HPE.)
Dell EMC is also active on the composable front, although it prefers the term “kinetic infrastructure.” Its strategy is to remove the mid-plane from the PowerEdge server to enable direct connections between compute and I/O modules. At the same time, the PowerEdge MX offers a disaggregated component architecture with support for new GPU and FPGA processors, as well as storage-class memory systems.
Meanwhile, HCI pioneer Nutanix is out with an SDN platform called Flow that provides network automation, visualization, microsegmentation and other functions to allow for greater flexibility of pooled virtual resources, both within the data center and on the cloud.
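Microsegmentation, one of the functions mentioned above, amounts to evaluating default-deny rules between workload groups rather than at a network perimeter. The sketch below illustrates the general concept only; the rule format and names are hypothetical and not drawn from Nutanix Flow's actual interface.

```python
# Hedged sketch of microsegmentation as data: default-deny policy checked
# per workload pair. Group names, ports, and rule format are illustrative.

ALLOW_RULES = [
    ("web", "app", 8080),  # web tier may reach the app tier on 8080
    ("app", "db", 5432),   # app tier may reach the database on 5432
]

def allowed(src_group: str, dst_group: str, port: int) -> bool:
    """Default-deny: traffic passes only if an explicit rule matches."""
    return (src_group, dst_group, port) in ALLOW_RULES

print(allowed("web", "app", 8080))  # True
print(allowed("web", "db", 5432))   # False: web cannot reach the database
```

Because the policy follows the workload group rather than a physical network segment, the same rules can apply to pooled virtual resources in the data center and on the cloud.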
For the most part, however, HCI is intended to streamline production environments, primarily by removing complex and costly SAN architectures. But as Enterprise Storage Forum’s Christine Taylor points out, secondary storage holds 80 to 85 percent of all enterprise data, which means it would benefit greatly from HCI’s two main attributes: massive scale and a web-scale distributed file system. This would allow a secondary tier to present data directly to test/dev, data protection, analytics, and a host of other operations, all while reducing the overall amount of data under management and eliminating the isolation of key data sets.
But perhaps the biggest impact of HCI on the enterprise will come not in the data center or even the cloud, but on the IoT edge. The need to push processing as close to digital devices as possible is well known. This is the best way to provide the rapid, even real-time, responses that many devices require to function, and it is the only way to prevent the massive flood of data from the millions, even billions, of devices from overwhelming centralized data facilities and networks.
As ActualTech Media’s Scott D. Lowe pointed out recently, few organizations have either the space or the expertise to support traditional data infrastructure on the edge. A hyperconverged solution, on the other hand, can provide a wealth of data processing and storage capacity and would require only rudimentary skills to provision, maintain or replace if necessary. In this way, even the tiniest store or the most under-funded municipal office gains a top-class computing solution that is no more difficult to deploy or operate than a microwave oven.
HCI has one thing going for it that most other current data initiatives like AI and the IoT do not: a clear and compelling use case. Data volumes are going up and current IT infrastructure is already pushing the limits of cost, energy consumption and operational viability. HCI provides a way to streamline physical footprints, increase scale and simplify management burdens without pushing budgets to unsustainable levels.
Data center hardware won’t be converted overnight, of course, but refresh cycles are constantly coming to an end. Each time another piece of hardware hits its depreciation point, more IT executives are starting to wonder: perhaps there is a better way.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.