40 G InfiniBand off to a Quick Start

Arthur Cole

InfiniBand has taken its lumps in this latest recession just as the rest of the IT industry has. But as the most robust networking technology around, it at least has a solid base of high-performance users that it can rely on to weather storms like these.


That puts it in a more comfortable position than technologies like Ethernet and Fibre Channel, which will make it or break it according to how well they serve as the general-purpose data center fabric.


And while certainly not as large as the broader IT industry, the HPC crowd is eager to tap into the latest networking developments, particularly if it allows them to leverage commodity hardware.


Voltaire is certainly thinking along those lines with its introduction of a new 40 Gbps InfiniBand module for the IBM BladeCenter. Clustering is a hot commodity among top-level research houses and major financial organizations, and doubling the speed of current InfiniBand networks is high on the list of demands. The module features 14 internal 40 Gbps ports and 16 external QSFP ports for the fabric. The company claims it can provide port latency of less than 100 nanoseconds while drawing less than 4 watts per port.


Voltaire needs to keep a sharp eye on the rest of the InfiniBand market now that sales are finally starting to pick up, according to IT Brand Pulse CEO and senior analyst Frank Berry. In fact, now would be a good time to consider a major acquisition to help shore up its position. QLogic would make a nice fit, providing the company with a world-class ASIC design and putting some distance between it and up-and-coming Mellanox.


Elsewhere in the HPC market, 40 Gbps InfiniBand, along with the latest processor technology, is drawing more platform vendors into clustered environments. SGI, formerly Rackable Systems, is leveraging both high-speed networking and high-power processing in the new CloudRack X2. The 14U system, aimed at organizations that want high throughput but not necessarily in a full-rack configuration, holds up to nine TR2000 computing trays sporting either Xeon 5500s or the new six-core Istanbul Opterons. It also supports general-purpose GPU (GPGPU) processing and can hold six or eight SATA or SAS hard drives or SSDs.


Specialty shops are also coming out with unique designs intended to pair InfiniBand with other high-speed interconnect technologies, like PCIe. A company called Appro, for instance, now offers dual-InfiniBand networks on the motherboard of its XtremeX1 supercomputer. The design features two Mellanox QDR ConnectX chips, each with its own x8 PCIe v2.0 channel to the node. The company says this approach is more cost-effective, and provides greater density and reliability, than simply adding InfiniBand ports through an HCA.
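For readers curious what a dual-rail setup like that looks like from software, here is a minimal sketch (not from Appro or the article) using libibverbs, the standard user-space verbs API on Linux. On a node with two ConnectX devices it would list both adapters, which is the starting point for an application or MPI layer that wants to spread traffic across the rails.

#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    /* Enumerate all InfiniBand devices visible to the host. */
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    printf("Found %d InfiniBand device(s)\n", num_devices);
    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        /* Query basic attributes, e.g. how many physical ports each HCA has. */
        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("  %-12s ports: %u\n",
                   ibv_get_device_name(devs[i]), attr.phys_port_cnt);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}

Compiled with gcc and linked against -libverbs, a two-rail node would report two devices here; binding each process or rail to its own device is what lets the design exploit both x8 PCIe channels at once.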


To be sure, InfiniBand vendors say the technology also provides a solid base for standard data center fabrics, with price points that are competitive with 10 GbE. But it's pretty clear at this point that Ethernet will be the fabric of choice for most organizations, considering it is already widely adopted for IP and voice communications.


Still, that isn't likely to prevent InfiniBand from finding its way into environments where speed and throughput are crucial. And as more organizations start putting larger and larger data loads onto the cloud, network proficiency will edge out server and storage prowess as the primary concern of IT managers.


