Many enterprises are opting for converged infrastructure (CI) either alongside or as a prelude to fully hyperconverged infrastructure (HCI). The difference lies largely in the flexibility of the hardware footprint: in CI, compute, storage and networking retain some semblance of independence, while HCI packages them in a tightly integrated, appliance-style model.
For this reason, decisions regarding the provisioning and deployment of CI are a little more nuanced. Some solutions place all storage directly on the server, with networking at the top of the rack, while others stress centralized storage linked directly to servers via a fabric-style network. And some feature combinations of the two.
In all cases, however, providers are striving to craft the right infrastructure for the business model, which itself is under the gun to deliver on speed, availability and a host of other metrics for increasingly diverse workloads.
According to Transparency Market Research (TMR), the market for CI is growing at more than 22 percent per year and is expected to top $76 billion by 2025. Much of this growth will come from the cloud as providers seek to accommodate the needs of multiple users, most of whom are looking to tap new infrastructure quickly and at low cost in order to feed the demands of an increasingly digital business model. At the same time, the enterprise is looking to foster a more holistic form of IT management that leverages the same technologies – virtualization, automation and intelligence – that allow CI resources to be federated across diverse architectures.
But contrary to what some vendors would have us believe, CI is not exactly plug-and-play. As Wikibon’s Peter Burris points out, many organizations attempting to craft their own CI architectures on top of commodity hardware are finding the integration challenges just as significant as in traditional infrastructure. In fact, new research is starting to show that pre-engineered solutions deliver twice the value of DIY systems, thanks to tight integration between hardware and software, and even across hardware models, many of which still have key differences despite their “white box” labels. At the same time, there is nothing like a single-vendor solution when it comes to engineering support and troubleshooting.
But with both infrastructure and workloads evolving at such a rapid pace, how can the enterprise trust that the decisions it makes on CI today will serve in the long- or even medium-term? MTM Technologies’ Bill Kleyman offers five points to consider when evaluating any CI or HCI solution:
- Disk I/O: You need the right disk architecture and configuration to accommodate varied workloads.
- Memory: In general, the more RAM, the better the performance, but this should be gauged against expected workloads, user densities and other factors (a rough sizing sketch follows this list).
- CPU: Cutting costs on CPUs can lead to bottlenecks later.
- Network I/O: SDN is your friend here, as it decouples the control plane from the data plane, simplifying both data flow and its management.
- Infrastructure Distribution: Is your business gravitating toward the data center, the cloud, the edge? Knowing where you are going will help determine what level of convergence you require.
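To see how the first four of these factors interact, consider a minimal capacity-planning sketch in Python. All of the node capacities, per-user demands and names below are hypothetical illustrations, not figures from Kleyman or any vendor; real sizing should come from the provider’s own tooling and spec sheets. The point it demonstrates is simply that the tightest resource axis, not the average, sets the node count.

```python
from dataclasses import dataclass
from math import ceil

# Hypothetical per-node capacities for a candidate CI/HCI building block.
# Real figures come from vendor spec sheets, not from this sketch.
@dataclass
class NodeSpec:
    cpu_cores: int     # usable physical cores per node
    ram_gb: int        # usable RAM per node
    disk_iops: int     # sustained storage IOPS per node
    net_gbps: float    # usable network bandwidth per node

# Hypothetical demand profile: what each concurrent user (or VM) consumes.
@dataclass
class WorkloadProfile:
    users: int
    cores_per_user: float
    ram_gb_per_user: float
    iops_per_user: float
    gbps_per_user: float

def nodes_required(node: NodeSpec, load: WorkloadProfile,
                   headroom: float = 0.25) -> dict:
    """Size the cluster on each resource axis, then take the worst case:
    the tightest axis, not the average, sets the node count."""
    demand = {
        "cpu": load.users * load.cores_per_user / node.cpu_cores,
        "memory": load.users * load.ram_gb_per_user / node.ram_gb,
        "disk_io": load.users * load.iops_per_user / node.disk_iops,
        "network_io": load.users * load.gbps_per_user / node.net_gbps,
    }
    # Pad every axis so that today's 100% axis isn't tomorrow's bottleneck.
    padded = {axis: need * (1 + headroom) for axis, need in demand.items()}
    bottleneck = max(padded, key=padded.get)
    return {"nodes": ceil(max(padded.values())), "bottleneck": bottleneck}

if __name__ == "__main__":
    node = NodeSpec(cpu_cores=48, ram_gb=768, disk_iops=200_000, net_gbps=50.0)
    load = WorkloadProfile(users=2_000, cores_per_user=0.5, ram_gb_per_user=12.0,
                           iops_per_user=150.0, gbps_per_user=0.02)
    print(nodes_required(node, load))
    # -> {'nodes': 40, 'bottleneck': 'memory'}: RAM, not CPU, drives the
    # purchase here, which is why skimping on one axis to fund another backfires.
```

The headroom parameter reflects Kleyman’s broader warning: a configuration sized exactly to today’s workload leaves no room for the user densities and workload shifts that shorter lifecycles all but guarantee.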
Overall, however, flexibility should be the chief criterion on which CI and HCI are built. Lifecycles are getting shorter, so infrastructure must be able to adapt quickly and at relatively low upfront cost.
Convergence can make this happen, but it will be up to the enterprise to determine exactly how to provide optimal support for its data objectives.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.