Mellanox Unveils 40 Gbps InfiniBand

Arthur Cole

The bandwidth war continues to escalate in anticipation of a spike in demand for high-speed networking to accommodate the growing legions of multicore-based virtual server and storage environments.


Mellanox became the first to roll out 40 Gbps technology with a new ConnectX host channel adapter (HCA) that the company is targeting at both the high-performance computing (HPC) and enterprise markets. When connected to a 5 GT/s PCI Express 2.0 interface, the ConnectX IB 40 Gbps InfiniBand adapter delivers up to 6460 MB/s bidirectionally with latency of less than 1 microsecond. A single switch chip supports up to 36 ports for a total switching capacity of roughly 3 Tbps. The company is delivering silicon and adapter cards immediately, with OEM switches expected later this year.
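As a rough check on those figures, the arithmetic below is a sketch only: the 8b/10b encoding overhead and the PCIe efficiency estimate are assumptions typical of such links, not Mellanox data. It shows how 36 ports of 40 Gbps signaling land near 3 Tbps of switching capacity, and why the PCIe 2.0 x8 host interface caps usable bidirectional throughput near the quoted 6460 MB/s.

```python
# Back-of-envelope check of the figures quoted above (a sketch; the
# encoding overhead and efficiency factors are assumptions, not Mellanox data).

LINK_SIGNAL_GBPS = 40        # 4x QDR InfiniBand signaling rate, per direction
PORTS_PER_SWITCH_CHIP = 36   # port count cited in the article

# Aggregate switching capacity, counting both directions on every port.
switch_capacity_tbps = PORTS_PER_SWITCH_CHIP * LINK_SIGNAL_GBPS * 2 / 1000
print(f"switch capacity ~{switch_capacity_tbps:.2f} Tbps")   # ~2.88 -> "3 Tbps"

# Host-side ceiling: a PCIe 2.0 x8 slot runs at 5 GT/s per lane with
# 8b/10b encoding, i.e. roughly 4 GB/s of raw data per direction.
pcie_lanes, pcie_gtps, encoding = 8, 5, 0.8
pcie_gbytes_per_dir = pcie_lanes * pcie_gtps * encoding / 8   # 4.0 GB/s

# Assuming ~80% of that survives PCIe packet overhead (an estimate),
# the bidirectional number lands near the 6460 MB/s reported above.
usable_bidir_mbps = 2 * pcie_gbytes_per_dir * 0.8 * 1000
print(f"estimated usable bidirectional bandwidth ~{usable_bidir_mbps:.0f} MB/s")  # ~6400
```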


The device also features QSFP (quad small form-factor pluggable) connectors, putting it in position to take advantage of growing demand as that standard finds its way into more high-bandwidth solutions.


According to Thad Omura, Mellanox's VP of product marketing, upping the bandwidth was only one of the considerations that went into the new adapter. The other was to make it flexible enough to integrate easily into as many network architectures as possible.


"The real value here is that we can future-proof the datacenter," he said. "One device supports 40G Infiniband, Fibre Channel over Infiniband. 10 GbE, enhanced Ethernet and Fibre Channel over Ethernet."


Omura added that highly virtualized environments will benefit from the improved I/O between virtual machines and storage. He gave the example of a standard 1 Gbps setup that leaves memory and quad-core CPUs sitting idle while they wait for data queues to clear. The current fix is to boost throughput by installing multiple HBAs and other network devices, an expensive proposition that offers only marginal operating benefits.


"The right solution is to go to a high-speed single cable solution in which you've consolidated you're I/O," he said. "You've saved on the cost of setting up virtual environments, as well as reduced both the power draw and complexity of the network. And now your system is fully balanced and you have all the I/O you need to match your CPU and memory performance."


Mellanox is bumping up InfiniBand performance just as the technology is experiencing renewed interest in the data center. IDC reports that sales of InfiniBand adapters grew 44 percent to $90 million in 2007, with switch port revenue jumping 90 percent to $181 million. That still represents only a fraction of the overall market, but it signals a healthy growth pattern for the next several years at least.
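For context, the prior-year figures implied by those growth rates can be backed out directly; the 2006 numbers below are derived arithmetic, not values reported by IDC or in the article.

```python
# Back out the implied 2006 revenue from the 2007 figures and growth rates.

adapters_2007_m, adapter_growth = 90, 0.44
switch_2007_m, switch_growth = 181, 0.90

adapters_2006_m = adapters_2007_m / (1 + adapter_growth)   # ~$62.5M
switch_2006_m = switch_2007_m / (1 + switch_growth)        # ~$95.3M

print(f"implied 2006 adapter revenue:     ${adapters_2006_m:.1f}M")
print(f"implied 2006 switch port revenue: ${switch_2006_m:.1f}M")
```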


Mellanox is also expanding ConnectX's capabilities on the Ethernet side, recently adding Citrix XenServer 4.1 drivers for the ConnectX EN 10 GbE NIC adapter. The drivers are intended to improve server utilization in multicore environments through features like hardware-assisted direct guest access and receive-side scaling, as well as the ability to switch traffic directly between virtual machines.

