That a virtualized I/O infrastructure is the only sure way to maintain data center performance in the wake of server and storage virtualization is without question. Still up for debate, though, is what type of network fabric offers the best foundation for virtual I/O.
The three main choices are Ethernet, Fibre Channel and InfiniBand. While all three have their strong points, the overwhelming consideration for most data centers is the infrastructure already in place. Only those building a network from scratch, or enjoying extremely healthy profit margins, have the luxury of selecting an entirely new fabric.
That being said, it's likely that most virtual I/O traffic will be carried on Ethernet, with Fibre Channel running a close second, both by virtue of their respective installed bases. For those able to go with InfiniBand, however, there are a number of reasons why that fabric may prove the most robust and flexible of all, at least in the short term.
As Yaron Haviv, CTO of Voltaire, points out in this article, InfiniBand delivers a 20Gbps network right out of the box, offering a fully redundant virtual fabric with multiple data, storage and clustered network layers. InfiniBand also lets a single, redundant HCA and switch port act as a virtual NIC and a virtual storage HBA for each virtual or physical server on the network.
InfiniBand also offers advantages in latency and, surprisingly to some, price. Andy Dornan at InformationWeek points out here that companies like Xsigo and 3Leaf use InfiniBand for server interconnection because it provides latency of less than 100 ns. A 10Gbps InfiniBand HCA also costs about $300 less than a comparable Ethernet NIC and $1,300 less than a Fibre Channel HBA.
Some organizations are also looking at InfiniBand as a virtual clustering solution over the wide area network. Network Equipment Technologies (NET) has developed the NX5000 bridging solution, which can extend InfiniBand fabrics over thousands of miles, enabling virtual clusters at multiple sites. The company also claims its technology outperforms TCP/IP solutions in areas like data synchronization, backup and disaster recovery.
If InfiniBand holds all these advantages now, what are the chances that other fabric technologies will start to catch up soon? Pretty good, actually. Fulcrum Microsystems has already come out with the FM4000 10 GbE switch/router chip, said to offer total throughput of 360 million packets per second with latency of less than 300 ns. That's just shy of what InfiniBand provides, but I wonder how much those numbers will improve with a 40G solution.