InfiniBand in an Ethernet Universe


It looks like the era of lossless Ethernet is finally dawning, and none too soon: lossless operation is the only way for the protocol to truly emerge as the unified data center fabric that carries all data, storage and communications traffic on a single network.

Converged Enhanced Ethernet (CEE), the mechanism by which Ethernet will gain lossless performance along with a number of other enhancements, took another step forward this week with the introduction of the Voltaire Vantage 8500 core switch, which brings InfiniBand-style performance to more than 3,400 non-blocking, Layer 2 10 GbE ports in a scale-out design that cuts latency to 1 microsecond. Clusters of up to 12 switches offer service to several thousand servers without impacting efficiency or port costs, with central control provided through Voltaire's Unified Fabric Manager (UFM) software and Ethernet versions of the company's InfiniBand application acceleration technology.
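The port arithmetic behind this kind of scale-out, non-blocking fabric can be sketched with the standard two-tier leaf-spine (folded Clos) calculation. The switch radix values below are illustrative assumptions, not Vantage 8500 specifications:

```python
# Hedged sketch: host ports in a non-blocking two-tier leaf-spine
# fabric built from identical k-port switches. The radix values are
# made up for illustration; they are not Voltaire product specs.

def nonblocking_host_ports(k: int) -> int:
    """Each leaf splits its k ports evenly: k/2 down to hosts and
    k/2 up to spines, and up to k leaves can be wired non-blocking,
    giving k * k/2 total host-facing ports."""
    return k * (k // 2)

for radix in (48, 64, 128):
    print(f"{radix}-port switches -> {nonblocking_host_ports(radix)} "
          "non-blocking host ports")
```

The general point is that a fabric's usable port count grows with the square of the switch radix, which is why clustering a dozen large core switches can serve several thousand servers without oversubscription.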

Lossless connectivity is a key requirement for any unified fabric, of course, which is why InfiniBand backers have been arguing that their protocol is the best way to ensure that enterprises can maintain that kind of performance beyond the 10 Gbps level of today's Ethernet. One consideration is how faster Ethernet versions will handle Remote Direct Memory Access (RDMA), which bypasses the local operating system when moving data between end points. Most networking suppliers have tapped the iWARP protocol, but as HPCwire points out, that system's TCP offload technique could encounter problems at high data rates, say 40 Gbps. Mellanox has come out with its own system known as RDMA over Ethernet (RDMAoE), which uses a small transport layer that gains efficiency by cutting down on packet processing overhead.
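A quick back-of-envelope calculation shows why per-packet processing overhead becomes the bottleneck as line rates climb from 10 to 40 Gbps; the frame sizes below are common Ethernet values chosen for illustration, not figures from the article:

```python
# Hedged sketch: the time budget available to process each frame on a
# saturated Ethernet link. Frame sizes are illustrative assumptions
# (standard 1500-byte MTU vs. 9000-byte jumbo frames).

def per_packet_budget_ns(link_gbps: float, frame_bytes: int) -> float:
    """Nanoseconds available per frame at full line rate."""
    bits_per_frame = frame_bytes * 8
    frames_per_sec = (link_gbps * 1e9) / bits_per_frame
    return 1e9 / frames_per_sec

for gbps in (10, 40):
    for mtu in (1500, 9000):
        budget = per_packet_budget_ns(gbps, mtu)
        print(f"{gbps:>2} Gbps, {mtu}-byte frames: {budget:6.0f} ns/frame")
```

At 40 Gbps with standard frames the budget drops to roughly 300 ns per packet, a quarter of what a 10 Gbps link allows, which is why approaches that trim per-packet transport processing look attractive at those speeds.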

Incorporating these techniques will be a crucial next step for Ethernet, considering it is likely to have to deal with legacy InfiniBand networks for some time. While InfiniBand is certainly capable of standing up as a unified fabric on its own, it is currently seeing increased deployment in high I/O environments such as clustered or virtualized server installations, according to The Taneja Group's Jeff Boles. In many cases, these networks already are transparent to users and even administrators, but would most certainly take a performance hit when run through a 10 Gbps pipe already crowded with data.

InfiniBand's presence could grow even more due to the increased I/O demands of advanced technologies like cloud computing and new high-performance processors like the Shanghai and Nehalem. InfiniBand, in fact, already has an 80 Gbps version in the can, leading groups like the InfiniBand Trade Association to suggest that enterprises consider a two-fabric solution: Ethernet for general purposes and InfiniBand for high I/O environments.

While that might be a tough sell at all but the largest enterprises, the fact remains that lossless connectivity is only one of the many components that Ethernet will have to master if it hopes to become the de facto fabric for the enterprise. Bringing Fibre Channel on board was a good first step. Now it has to prove that it can handle InfiniBand without degrading performance too much.