InfiniBand and the Cloud


Ethernet is the networking protocol of choice in the enterprise, a fact that is not likely to change any time soon.

And yet, some are wondering whether the rise of cloud computing might lead many large enterprises to consider deploying an even faster solution -- say, InfiniBand.

If the decision were based on I/O performance alone, InfiniBand would already be the hands-down winner. Quad data rate (QDR) solutions are already delivering 40 Gbps of throughput, compared to Ethernet's 10 Gbps. And expectations are that eight data rate (EDR) solutions, at 80 Gbps, will be out by the end of next year.

InfiniBand also has the benefit of point-to-point connectivity, which provides ultra-low latency because data moves directly into application memory, bypassing the operating system. According to HPC Projects' Paul Schreier, this is part of the reason so many HPC outfits are deploying InfiniBand in their clustered server architectures, despite the fact that most servers are hardwired for Ethernet. He also maintains that as data rates increase, InfiniBand becomes a more cost-effective solution than Ethernet.
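The throughput gap is easiest to see as transfer time for a fixed payload. A rough back-of-the-envelope comparison, assuming full advertised line rate with no protocol or encoding overhead (an idealization; real-world throughput is lower):

```python
# Best-case transfer times at advertised line rates.
# Ignores encoding and protocol overhead, so treat these
# as idealized floor values, not measured performance.

def transfer_seconds(payload_gb: float, line_rate_gbps: float) -> float:
    """Seconds to move payload_gb gigabytes at line_rate_gbps gigabits/s."""
    return payload_gb * 8 / line_rate_gbps

payload = 100  # GB -- a hypothetical dataset size, chosen for illustration

for name, rate in [("10 GbE", 10), ("QDR InfiniBand", 40), ("EDR InfiniBand", 80)]:
    print(f"{name:>15}: {transfer_seconds(payload, rate):6.1f} s")
```

At these rates, the same 100 GB that takes 80 seconds over 10 GbE moves in 20 seconds over QDR and 10 seconds over EDR -- a 4x to 8x difference before latency even enters the picture.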

But how does the cloud fit into all this? Well, the cloud is all about maintaining service levels. Providers, whether external or in-house, who can't deliver the performance to suit users' needs won't keep those users for long. So as more traffic shifts onto the cloud, the onus will be on providers to push I/O performance by any means possible.

This is the thinking behind Intalio's decision to load its Intalio Cloud Appliance with Mellanox adapters and switches. The device uses the HP BladeSystem platform with built-in SSDs to provide a private cloud infrastructure for enterprise users. Besides the speed and low-latency benefits, Intalio says InfiniBand offers instant network consolidation that cuts hardware requirements in half and management costs by a third. You also get the benefit of real-time VM replication for failover and disaster recovery, and a unified memory pool across multiple blades without the need for symmetric multiprocessing (SMP) systems.

In all likelihood, however, very few cloud providers will go all-InfiniBand, at least for a while. More likely, we'll see mixed InfiniBand/Ethernet environments at best. That's part of the reason why Mellanox supplemented the 40 Gbps InfiniBand QSFP port on the new ConnectX-2 adapter with a 10 GbE port. The combo allows providers to deploy both solutions on a single network infrastructure. And if anyone feels that a full InfiniBand solution is called for, that suits Mellanox just fine.

Even if physical InfiniBand connectors aren't in the cards, perhaps virtual ones will do. A company called PrimaCloud said it recently evaluated both 10 GbE and physical InfiniBand and found them lacking. Instead, it went with Xsigo's I/O Director, billed as a virtual I/O infrastructure that can configure full data center connections in minutes. PrimaCloud said it liked the way the I/O Director, which uses internal InfiniBand cards and wiring, provides twin 20 Gbps connections per server -- enough to handle 10 virtual machines while maintaining the throughput of its SSD-cached NAS system.
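PrimaCloud's numbers work out to a healthy slice of bandwidth per guest. A quick sanity check, assuming the twin links are aggregated and shared evenly across the VMs (an idealized even split; real schedulers allocate dynamically):

```python
# Per-VM bandwidth if twin 20 Gbps links are aggregated and
# divided evenly across 10 VMs -- an idealized even split,
# not a guarantee of how any given I/O scheduler behaves.
links_gbps = [20, 20]  # twin 20 Gbps connections per server
vm_count = 10

aggregate = sum(links_gbps)    # 40 Gbps per server
per_vm = aggregate / vm_count  # 4 Gbps per VM
print(f"{aggregate} Gbps aggregate -> {per_vm:.0f} Gbps per VM")
```

Even under this naive split, each VM sees more dedicated bandwidth than many physical servers of the day got from a single GbE link.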

All that being said, inertia is a tough thing to overcome. And with Ethernet already forming the lion's share of enterprise networking, it would be quite a feat to dislodge it, even in the cloud universe.

Still, data requirements aren't getting any smaller. And the fact is, no matter how quickly Ethernet rises to meet new challenges, InfiniBand has already been there and gone.