InfiniBand for the Average User

Arthur Cole

InfiniBand has long been the eccentric uncle of the IT family -- brilliant, yes, but rather odd. He keeps to himself mostly, working on things the larger Ethernet world doesn't fully comprehend.

But lately, InfiniBand has shown a greater willingness to expand beyond its traditional base of HPC and high-volume commercial applications toward more run-of-the-mill environments. Part of this is savvy marketing on the part of InfiniBand developers, and part is that virtualization and cloud computing are driving the need for high-speed backbones even in organizations that aren't sequencing gene patterns or processing 20 million stock transactions per day.

One of the ways this is happening is through increased compatibility with the leading virtualization platforms. Mellanox, for example, just added driver support for VMware vSphere to its ConnectX and InfiniHost III adapters. The drivers are based on the OpenFabrics Enterprise Distribution (OFED) 1.4.1 release, which should provide broad compatibility with leading servers, switches, gateways and other devices. With InfiniBand, enterprises can more easily ramp up the number of VMs per server without overloading network capacity.
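To give a sense of what OFED-based drivers actually expose, here is a minimal sketch that uses the OpenFabrics verbs library (libibverbs) to list the RDMA-capable adapters on a host. It's an illustration only, assuming a Linux machine with the OFED packages installed; the device names and GUIDs will depend entirely on the hardware present.

    /* Minimal sketch: enumerate RDMA-capable adapters through the OFED
     * verbs API (libibverbs). Assumes a Linux host with OFED installed.
     * Compile with: gcc list_hcas.c -o list_hcas -libverbs
     */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);

        if (!devices || num_devices == 0) {
            fprintf(stderr, "No RDMA devices found (is the OFED stack loaded?)\n");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            /* Print the name and node GUID the driver exposes for each adapter. */
            printf("%-16s GUID: 0x%016llx\n",
                   ibv_get_device_name(devices[i]),
                   (unsigned long long)ibv_get_device_guid(devices[i]));
        }

        ibv_free_device_list(devices);
        return 0;
    }

The same verbs interface sits underneath the vSphere drivers, which is why a single OFED release can cover such a broad range of servers, switches and gateways.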

This comes at an opportune time for many organizations: demand for high-speed Ethernet is on the rise, but few expect the price of a 40G Ethernet link, let alone 100G, to come within range of most budgets any time soon. With single- and multimode fiber pricing running about $8,000 for 40G and upwards of $25,000 for 100G, and the final IEEE standards not due for another four months, many experts don't anticipate commercial deployments for another three to five years.

But there are ways to bring InfiniBand-class performance to existing 10G backbones. One is to add the technology to unified network fabrics. Voltaire recently added a new InfiniBand gateway, the 4036E, to its Grid Director portfolio. The device acts as a bridge between InfiniBand fabrics and Ethernet edge networks, and is capable of hitting 2.72 Tbps while handing off data to its GbE or 10 GbE ports with less than two microseconds of latency.

Voltaire, like the rest of the InfiniBand industry, has always been cagey about its pricing. But even if full-blown InfiniBand at the network core is too steep, there is always the option of bringing InfiniBand's best characteristics directly to the Ethernet layer. The most common method is the iWARP protocol, which essentially takes Remote Direct Memory Access (RDMA), the primary driver of InfiniBand's performance, and layers it on the TCP/IP stack. You'll still be operating at 10 GbE speeds, but your CPU performance will jump dramatically because much of the network processing overhead is now handled by the NIC. It's something to think about, particularly if your data center strategy involves high-end clustering.
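For the curious, the sketch below shows the RDMA building block that iWARP carries over TCP/IP: registering a buffer with the adapter so the NIC, rather than the CPU, moves data in and out of it. It stops short of an actual transfer (that would also require queue pairs and a connected peer) and assumes a Linux host with libibverbs installed; the same verbs API covers both InfiniBand and iWARP NICs.

    /* Minimal sketch: register a buffer for RDMA so the NIC can read and
     * write it directly, bypassing the host CPU for data movement.
     * Compile with: gcc reg_mr.c -o reg_mr -libverbs
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    #define BUF_SIZE (4 * 1024)

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices || num_devices == 0) {
            fprintf(stderr, "No RDMA-capable device found\n");
            return 1;
        }

        /* Open the first adapter the verbs layer reports. */
        struct ibv_context *ctx = ibv_open_device(devices[0]);
        if (!ctx) {
            fprintf(stderr, "Failed to open %s\n", ibv_get_device_name(devices[0]));
            return 1;
        }

        /* A protection domain scopes which resources may touch which memory. */
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        if (!pd) {
            fprintf(stderr, "Failed to allocate protection domain\n");
            return 1;
        }

        /* Register an ordinary buffer for remote read/write. Once registered,
         * the NIC handles data movement into and out of it without CPU copies. */
        void *buf = malloc(BUF_SIZE);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, BUF_SIZE,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            fprintf(stderr, "Memory registration failed\n");
            return 1;
        }

        /* The rkey is what a remote peer presents to read or write this
         * buffer without involving this host's CPU. */
        printf("Registered %d bytes: lkey=0x%x rkey=0x%x\n",
               BUF_SIZE, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        return 0;
    }

That registration step is the heart of the CPU-offload story: once the adapter knows about the buffer, the transfer itself runs on the NIC, whether the wire underneath is InfiniBand or iWARP over 10 GbE.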

No question, InfiniBand is still an expensive proposition, but it does offer a readily available answer to the hefty increase in network data loads that is probably already hampering your efforts to keep the server consolidation gravy train rolling. If your intent is to improve the utilization rates of existing hardware, you'll need to make sure your network fabric can handle the pressure.
