Will InfiniBand Play in the Ethernet Sandbox?

Arthur Cole

Now that InfiniBand has decided to join the Ethernet party, the technology faces the same question as Fibre Channel: will native IB infrastructure survive over the long term?


At Interop a few weeks ago, the InfiniBand Trade Association (IBTA) released the RDMA over Converged Ethernet (RoCE) specification, which links Ethernet and IB infrastructures through the Remote Direct Memory Access protocol primarily responsible for IB's extremely low latency, according to Enterprise Networking Planet.

No doubt, this is a key development for networking vendors, who will now be able to offer converged Ethernet/IB devices as a means to help enterprises consolidate networks without giving up crucial milli- and microseconds for critical applications. The financial industry in particular has a significant stake in this game since even the slightest delay in processing buy or sell orders can translate into millions of dollars lost.

RoCE will probably make a quick entrance into the network portfolios of the major vendors. It has already gained speedy acceptance from the OpenFabrics Alliance (OFA), which counts Cisco, HP, Mellanox and Voltaire as members. The group has added RoCE to its latest OpenFabrics Enterprise Distribution (OFED) release; in essence, RoCE runs RDMA directly over the Ethernet link layer rather than over TCP/IP, as the iWARP protocol does.

Word on the street is that the RoCE protocol brings RDMA-over-Ethernet latency down to about 1.3 microseconds, shaving close to a microsecond off current Ethernet performance benchmarks. If that's the case, is it still worth the expense of maintaining native InfiniBand networks? Probably so. As I mentioned, every microsecond can be worth millions in the financial markets, and 1.3 µs is still about a third slower than the fastest native IB links.
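To put that "about a third slower" claim in concrete terms, here is a back-of-the-envelope sketch. The ~1.0 µs figure for the fastest native IB links is an assumption inferred from the article's own comparison, not a published benchmark:

```python
# Back-of-the-envelope comparison of the latencies quoted above.
# The 1.0 µs native-IB figure is an assumption inferred from the
# "about a third slower" claim, not a measured benchmark.
roce_latency_us = 1.3   # reported RoCE end-to-end latency
native_ib_us = 1.0      # assumed fastest native IB link latency

# Relative slowdown of RoCE versus native InfiniBand
slowdown = (roce_latency_us - native_ib_us) / native_ib_us
print(f"RoCE is {slowdown:.0%} slower than native IB")  # → RoCE is 30% slower than native IB
```

At these scales, a 30 percent gap is exactly the kind of margin a high-frequency trading shop would pay to keep.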

That's part of the reason Mellanox and others are forging ahead with bridging systems and other hardware designed to tie native infrastructures together. The new BridgeX BX5020, for instance, links Ethernet, InfiniBand and Fibre Channel in a 1 RU frame, handing data from one protocol to another with less than 200 ns of latency.

It seems, then, that native InfiniBand networks are safe for the moment. In fact, their reach should grow substantially now that they have virtually all of the Ethernet installed base at their disposal. The most mission-critical data will still require the high I/O of native IB, but for everything else, it will soon be cheaper and easier to send it where it needs to go.
