I’ve written about converged infrastructure so many times now that I’m starting to sound like a broken record. And yet, the evidence is undeniable that as the enterprise becomes more steeped in virtual and cloud architectures, the best way to scale up physical resources without breaking the bank is through convergence.
And at a time when sales of traditional enterprise-class hardware are starting to wane, vendors of these legacy systems see convergence as the best way to maintain critical revenue streams.
Take HP. The company’s struggle to remake itself in the wake of falling PC and server sales is well known, and it is still unclear whether it can regain its former status as a leading IT platform provider. But the recently launched HP ConvergedSystem holds out hope that the company can maintain a hardware play while the industry at large transitions to the cloud. HP’s edge, in this regard, is the development of converged infrastructure for specific workloads, such as virtualization, Big Data analytics and hosted desktops. The company has also added a Converged Storage platform that provides a ready-made solution for backup, archiving and high-speed server environments.
Of course, HP is not alone in devising converged infrastructure – IBM, Cisco, Dell and a host of others are pursuing similar strategies. But this raises the question: Just because it is good for vendors, is it necessarily good for users? As Gartner’s Adrian O’Connell told Datacenter Dynamics recently, enterprises dealing with vendor lock-in in their current data infrastructure will find themselves increasingly tied down when the entire compute environment is built on a series of integrated modules, even those on open platforms. It may be that many organizations are willing to make this trade if it produces more efficient, scalable infrastructure, but the fact remains that even in the cloud, choices will be restricted by legacy hardware.
But that doesn’t mean top vendors’ hold on the converged enterprise market is firmly established. Start-ups like Nutanix, SimpliVity and Scale Computing have made a lot of headway with lean all-in-one modular systems that can provide near-instant data environments for about half the cost of traditional infrastructure. For the most part, these companies have catered to smaller enterprises, which have long struggled with the capital and operational needs of the typical data center. But as convergence makes its way up the enterprise chain, it is unclear whether the smaller vendors will be able to function at hyperscale.
Channel considerations aside, convergence is likely the future of the enterprise if only because it provides the only cost-effective way to support advanced cloud architectures, says MTM Technologies’ Bill Kleyman. Whether you are talking about rapid scale-out server and storage infrastructure, software-defined networking, or virtual-compute environments, convergence is the best way to perform essential functions like pushing intelligence to the edge, increasing application and data load capabilities and supporting Big Data analytics. In terms of both scale and flexibility, a converged architecture is more in tune with the needs of next-generation data environments.
Is it inevitable then? Is the data center destined to become a simple warehouse packed with rows of identical boxes humming away in support of the worldwide data infrastructure? I have to admit, it would be a whole lot more elegant than today’s hodgepodge of servers, storage arrays, network devices, appliances and all the rest.
Even if your immediate goal is to simply scale up resources for the cloud, convergence is probably in your future.