Whether it’s called a modular system, converged infrastructure or all-in-one computing, the fact remains that the data center is quickly shedding the large, complex architectures of the past in favor of leaner, meaner hardware configurations.
Blame the economy more than anything else. With organizations looking to cut operating expenses to the bone, and with the cloud offering readily available data services for pennies on the IT dollar, hardware deployments are now subject to three main requirements: low cost, simple deployment and easy maintenance.
According to IDC, the market for converged systems is cruising at nearly 55 percent annual growth, which should put it on track to top $17.8 billion by 2016. Not only are traditional enterprises looking to enhance existing infrastructure, but new legions of cloud providers are scrambling to ensure they can meet service level requirements for clients with constantly expanding data loads. In both cases, the choice between brick-and-mortar data facilities built on complex, labor-intensive infrastructure and sleek, highly scalable converged systems is quickly becoming a no-brainer.
This isn’t to say there is no resistance to convergence. But as Zenoss’ Deepak Kanwar points out, much of the hesitancy is based more on myth than fact. Chief among these myths is the fear that converged systems will prove inadequate for some applications because there is only one pool of shared resources. But provided there is adequate monitoring and applications have been configured properly for user demands, this should not be a problem – or at least, it can be easily rectified should resource contention start to hamper performance. And unlike traditional infrastructure, converged systems come from a single vendor, so you know whom to call should any issue arise.
Not all converged systems are created equal, however. As Dell’s Antonio Gallardo notes, the choice of individual components within the platform can have a significant impact on performance. Blade storage arrays, for example, offer broad scalability and are easily managed, even in automated virtual environments. They also provide enterprise-class data protection features like snapshots and replication, along with automated tiering and workload management. And because blades require limited cabling and enable simplified switching environments, they help reduce costs as well.
For some, converged infrastructure is less a destination than a stepping stone to more advanced virtual environments. HP’s Jim Ganthier told attendees at the recent HP Discover event in Las Vegas that the company’s Project Moonshot is in fact part of a larger strategy surrounding the software-defined data center. The company expects this to be a crucial step for web-facing and hosted operations that are already struggling to handle millions of hits per day on traditional silo-based infrastructure. As infrastructure convergence becomes the norm, unification of management and other middleware functions should allow organizations to maintain their web presences more efficiently and effectively, he said.
Some futurists envision a world in which data infrastructure is contained within massive, regional facilities that hand out resources to multiple customers utility-style. Such a massive undertaking would be prohibitively expensive using the standard data center model, but a building full of identical modular components is not beyond the realm of possibility. Indeed, large container-based operations are already starting to take root at Fortune 500 organizations like Microsoft and Google.
Data infrastructure remains one of the top cost centers at most businesses, and it may be an expense that few will be able to afford much longer. Converged infrastructure offers a low-cost, scalable alternative to the current state of affairs, and may prove to be the best way to foster the kind of broad-based, multitenant resource utilization needed to drive next-generation data environments.