Just as the past year in data center convergence was marked by the debut of various proprietary products, the coming year looks likely to be marked by the rise of open standards. Just about everybody agrees that there is a tremendous amount of benefit to be gained from data center convergence; they just don’t agree on the degree to which that benefit will materialize.
In the short term, data center convergence is seen as a way to get the isolated teams that manage servers, networks and storage to work more closely together. Longer term, however, data center convergence should give rise to a cadre of data center managers who can run data centers almost as discrete units of enterprise computing.
Vendors such as Cisco that are trying to expand into adjacent markets see this convergence as an opportunity to move into the server and storage space. Cisco argues that IT organizations should essentially adopt new server architectures such as the Cisco Unified Computing System, which deliver data center convergence as a forklift upgrade.
While that might work for some customers, the vast majority of customers today have made massive investments in IT infrastructure that they are not looking to discard tomorrow. They need a more evolutionary approach to data center convergence, one that, preferably, will be defined via open standards.
For example, Alex Yost, vice president for IBM BladeCenter servers, says data center convergence will be defined through official standards organizations and alliances such as IBM’s Blade.Org effort. Nick Van der Zweep, director of virtualization and Insight Software for Hewlett-Packard Infrastructure Software and Blades, sees stand-alone systems such as the HP BladeSystem Matrix being integrated with existing IT infrastructure assets as companies such as HP continue to extend the reach of their management software.
Key standards that will enable that reach to be extended include the emerging IEEE 802.1Qbg standard for integrating physical and virtual switches, along with virtual station interfaces (VSIs) that will make it easier to discover virtual machines on the network. According to Paul Congdon, the CTO for HP’s ProCurve networking products, the day when virtual machines signal their location on the network using an industry-standard tagging system, rather than passively waiting to be discovered, is not far off.
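The discovery model Congdon describes amounts to a virtual machine actively announcing its identity to the network in a standard record format, rather than waiting for a switch to infer its presence. A minimal sketch of the idea, assuming a simplified type-length-value (TLV) layout loosely modeled on LLDP framing (the actual VSI discovery exchange defined by the standard is considerably richer; the field layout and function names here are hypothetical illustrations):

```python
import struct

def build_vsi_announce(vsi_type_id: int, vm_mac: bytes) -> bytes:
    """Pack a hypothetical VSI announcement: the VM's network-facing
    identity (a VSI type ID plus its MAC address) in one TLV record."""
    tlv_type = 127  # the organizationally specific TLV type used by LLDP
    payload = struct.pack("!I6s", vsi_type_id, vm_mac)
    # LLDP-style header: type in the top 7 bits, length in the low 9 bits
    header = struct.pack("!H", (tlv_type << 9) | len(payload))
    return header + payload

def parse_vsi_announce(frame: bytes):
    """Unpack the TLV a switch would receive, recovering the VM identity."""
    (type_len,) = struct.unpack("!H", frame[:2])
    tlv_type, length = type_len >> 9, type_len & 0x1FF
    vsi_type_id, vm_mac = struct.unpack("!I6s", frame[2:2 + length])
    return tlv_type, vsi_type_id, vm_mac
```

The point of the round trip is that the switch no longer has to discover the VM passively; the VM's tag carries everything needed to locate and classify it, even after a live migration to another physical host.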
Once that capability is in place, each element of the data center becomes an administrative building block within a discrete unit of computing, which ultimately will be tied together using a concept that HP calls Flex Fabric.
The term fabric gets bandied about by all the major IT infrastructure vendors these days. The challenge facing IT organizations is how to layer a hardware-neutral management system on top of that fabric. Right now, it seems that all the major vendors are trying to lock customers in by promising massive savings if only they would standardize on proprietary technologies. That may provide some short-term benefits, but over the long term customers want to keep their options as open as possible.
Ultimately, we’re going to need vendor-neutral convergence not just within the data center but also across discrete units of data center computing powered by different stacks of IT infrastructure. That means instead of managing just integrated sets of switches, servers and storage arrays, we will need ways to manage multiple types of data centers under a common framework. But none of that will happen if the underlying IT infrastructure is all wrapped up in proprietary hardware interfaces and management software.