
InfiniBand Aims for the Converged Network


High-speed Ethernet has all but claimed the heart and soul of the converged network, but there is another protocol that continues to gamely pitch itself as a viable alternative.


InfiniBand is no stranger to the enterprise, having earned its stripes as a high-speed interconnect for high-performance computing (HPC) clusters. And backers led by the InfiniBand Trade Association (IBTA) make no secret of their hope to see greater penetration in mainstream enterprise settings as well.


The strategy consists of two parts: continued evolution toward higher-throughput solutions, and a steady drumbeat of argument that InfiniBand is, in fact, the lower-cost option once the total network environment is taken into account.


On the first front, the IBTA recently announced two new high-speed versions: Fourteen Data Rate (FDR), which bumps the current Quad Data Rate's (QDR) 40 Gbps throughput to 56 Gbps, and Enhanced Data Rate (EDR), which kicks it all the way up to 100 Gbps. Leading networking vendors like Voltaire and Mellanox are already on board with FDR and are slated to begin shipping products within the next year, promising upwards of an 80 percent improvement in server-to-server throughput, application runtimes and other measures.
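
Those headline figures are raw signaling rates over a standard 4x link; the usable data rate also depends on the line encoding each generation employs. Here is a minimal sketch of the arithmetic, using the published IBTA lane rates and encodings (an illustration, not vendor-measured throughput):

```c
#include <stdio.h>

/* Effective 4x InfiniBand throughput = lanes * lane rate * encoding efficiency.
   QDR uses 8b/10b encoding (80% efficient); FDR and EDR use 64b/66b (~97%). */
int main(void) {
    const int lanes = 4;
    struct { const char *name; double lane_gbps; double efficiency; } gen[] = {
        { "QDR", 10.0,      8.0 / 10.0 },  /* 40 Gbps signaled, 32 Gbps of data   */
        { "FDR", 14.0625,  64.0 / 66.0 },  /* ~56 Gbps signaled, ~54.5 Gbps data  */
        { "EDR", 25.78125, 64.0 / 66.0 },  /* ~103 Gbps signaled, ~100 Gbps data  */
    };
    for (int i = 0; i < 3; i++)
        printf("%s: %6.1f Gbps signaled, %6.1f Gbps effective\n",
               gen[i].name,
               lanes * gen[i].lane_gbps,
               lanes * gen[i].lane_gbps * gen[i].efficiency);
    return 0;
}
```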


These advances should more than match the new 40 and 100 Gigabit Ethernet systems hitting the channel, even as they foster simplified network architectures, according to supporters. InfiniBand offloads transport responsibilities to network hardware (hence its higher up-front costs) but improves CPU efficiency by sparing the processor the overhead of running TCP. That is the main reason node-to-node latency sits in the 1 µs range, compared with roughly 50 µs for Ethernet.
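
That offload model shows up directly in how applications talk to the fabric: rather than pushing bytes through the kernel's TCP stack, a program registers memory with the host channel adapter and posts work requests that the hardware completes on its own. Below is a heavily abbreviated sketch using the standard libibverbs API; the ibv_* calls are the real verbs interface, but connection setup, addressing and error handling are elided, and the queue pair, remote address and rkey are assumed to have been established out of band:

```c
#include <stdint.h>
#include <stddef.h>
#include <infiniband/verbs.h>

/* Sketch: post a zero-copy RDMA write. The adapter moves the buffer into the
   remote node's memory with no per-byte CPU work and no kernel TCP stack.
   Assumes qp is an already-connected queue pair and remote_addr/rkey were
   exchanged out of band; both are placeholders here. */
static int rdma_write_example(struct ibv_qp *qp, struct ibv_pd *pd,
                              char *buf, size_t len,
                              uint64_t remote_addr, uint32_t rkey)
{
    /* Register the local buffer so the adapter can DMA from it directly. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge;
    sge.addr   = (uintptr_t)buf;
    sge.length = (uint32_t)len;
    sge.lkey   = mr->lkey;

    struct ibv_send_wr wr = { 0 };
    wr.opcode     = IBV_WR_RDMA_WRITE;   /* one-sided: remote CPU is not involved */
    wr.sg_list    = &sge;
    wr.num_sge    = 1;
    wr.send_flags = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    struct ibv_send_wr *bad_wr = NULL;
    /* The CPU's job ends here; the adapter executes the transfer. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```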


That may be true in the lab, where you can mount back-to-back server configurations, counters the Ethernet camp. But in the real world, InfiniBand's lack of network management and adaptive routing leaves it unable to steer around congested pathways, eroding its effective latency for the most common enterprise applications. There is also the matter of timing: the higher data rates can only be exploited in earnest after wide-scale adoption of PCIe 3.0, which is itself barely off the drawing board.
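
The PCIe point is easy to quantify with a back-of-the-envelope sketch, using the published per-lane transfer rates and encodings (real host throughput also loses a few percent to protocol overhead):

```c
#include <stdio.h>

/* Why FDR InfiniBand (56 Gbps) wants PCIe 3.0: effective host-bus bandwidth
   = lanes * transfer rate * encoding efficiency. PCIe 2.0 runs 5 GT/s with
   8b/10b encoding; PCIe 3.0 runs 8 GT/s with 128b/130b encoding. */
int main(void) {
    double pcie2_x8 = 8 * 5.0 * (8.0 / 10.0);     /* 32 Gbps: below FDR's ~54 Gbps of data */
    double pcie3_x8 = 8 * 8.0 * (128.0 / 130.0);  /* ~63 Gbps: enough headroom for FDR     */
    printf("PCIe 2.0 x8: %.0f Gbps\n", pcie2_x8);
    printf("PCIe 3.0 x8: %.1f Gbps\n", pcie3_x8);
    return 0;
}
```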


Ethernet also has the major advantage of being the de facto standard for the Internet, making it possible to foster a single network environment within the data center and out in the cloud. And that is likely to be a key factor going forward: not just how fast your network is, but how well it relates to the wider data universe.
