The Dawn of Converged I/O

Michael Vizard

One thing that contributes so much to the complexity of our data centers is the sheer number of adapters required to support every type of networking and storage protocol moving across the network. But if we take a giant step back from all the hype surrounding data center convergence, at its simplest level it's really all about reducing the number of adapters required.

This has become especially important because IT organizations have discovered that as they consolidate servers and entire data centers, the amount of physical complexity that needs to be managed winds up increasing because of all the adapters required. And while the switches will be much faster, the number of connections that need to be effectively managed will only increase. When you think about it, faster switches are roughly equivalent to adding lanes to a highway: the traffic doesn't wind up moving any faster because more vehicles invariably wind up on the road.

As IT organizations ponder what the next generation of servers and data centers should look like, Dominic Wilde, the senior director of global product line marketing for 3Com, which is in the process of being acquired by Hewlett-Packard, suggests that chief technologists should pay attention to the following concepts:

Transparent Interconnection of Lots of Links (TRILL): A protocol developed under the auspices of the Internet Engineering Task Force (IETF) that combines the attributes of bridges and routers.

Virtual Ethernet Port Aggregator (VEPA): An emerging IEEE 802.1Qbg standard for integrating virtual and physical switches.

Clos Networks: A multistage switching architecture pioneered by telecommunications carriers in the 1950s.
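Of the three, the Clos topology is the easiest to reason about quantitatively. Here is a minimal Python sketch (my own illustration, not anything from 3Com or the IETF) of the classic result behind it: a three-stage Clos network is strictly non-blocking when the number of middle-stage switches m is at least 2n - 1, where n is the number of ports on each edge switch, and it delivers full any-to-any connectivity with far fewer crosspoints than one giant crossbar would need:

```python
def clos_nonblocking(n, m):
    """Clos's strict-sense non-blocking condition for a 3-stage
    network: n ports per ingress/egress switch, m middle switches."""
    return m >= 2 * n - 1

def clos_crosspoints(n, r, m):
    """Total crosspoints in a 3-stage Clos network:
    r ingress switches (n x m) + m middle switches (r x r)
    + r egress switches (m x n)."""
    return 2 * r * n * m + m * r * r

# Example: 36 edge switches with 36 ports each (1,296 ports total).
n, r = 36, 36
m = 2 * n - 1  # smallest middle stage that is strictly non-blocking

print(clos_nonblocking(n, m))                      # True
print(clos_crosspoints(n, r, m))                   # 276048
print(clos_crosspoints(n, r, m) < (n * r) ** 2)    # True: beats a
                                                   # 1296x1296 crossbar
```

The same crosspoint arithmetic is why the idea has resurfaced for data center fabrics: edge and middle stages map naturally onto leaf and spine switches.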

As Wilde notes, the world of network switching in the context of next generation data centers is going to be all about how to best manage the complexity brought on by the presence of thousands of virtual servers all trying to access the same physical resources. That means that if IT organizations want systems capable of dealing with the traffic these virtual servers will generate tomorrow, they need to start putting those types of systems in place today. And as we all know, tomorrow comes a lot sooner than most of us ever think.

Feb 4, 2010 2:02 PM Tyler Lane says:
Mike, I'm in full agreement regarding the trend toward complexity, but suggest there are several additional points and areas to consider. First, the role of Enterprise Architect should likely be leveraged in order to understand not only the "how" but also the "what" -- the physical connections being the how; the what being the actual reason for the connection: the type of data, the protocol to be used, the requirements for uptime and so on. When considering how to provide an application to a group of users, the architect (or possibly a CTO) looks at current technologies and available resources, and should then be marrying these with costs that are reasonable relative to the allocated budget for the project.

If we step back a bit and look at the necessity for any data center, they exist to provide access to data. Most of the current design models are biased toward equipment, not toward how exactly that equipment will be used. Connections are a very important aspect and can always be managed better. In the case of business transactions, which might occur using any number of network resources, servers, applications and protocols, many architects have moved toward service-oriented architecture (SOA). SOA is of course an enabling technology for "the cloud" and can, in many ways, reduce the number of connections required to complete a business transaction. Other enabling technologies, such as an Enterprise Service Bus (ESB) or internal and external clouds, can all play a part.

It is at this level that Data Center Service Management (DCSM) provides the potential to truly transform. With initiatives in parallel to ITSM and consideration for ITIL practices too, the overarching goals must be expanded beyond the physical connections. The abstract pieces that will move a lot of data to internal or external clouds over the next two decades will leverage many varied precursors to achieve more with less: DCSM, BPM, BSM, OSS, etc.
For adapters and interoperable connectivity, covering both the physical layer and the protocols, there is one company I have watched for over 10 years now: NetCracker. Their specialty has been reducing complexity in telcos - not an easy task. But all of their work over the years offers some great free learning for today's large enterprises.
