Doubting InfiniBand's Legs

Arthur Cole

I came across an interesting article by Henry Newman. It seems that Newman is having second thoughts about InfiniBand. With new breeds of Ethernet, such as FCoE and iWARP, soon to be available at a premium price, albeit with slightly higher latency, can InfiniBand survive as much more than a niche product for highly specialized environments?


If you look at such technologies as FICON and HIPPI, they don't tend to crash and burn in a fiery spectacle. Rather, they tend to fade away quietly as the larger vendors spin them off in favor of higher market share and better profit margins.


In short, Newman takes issue with a recent IDC report that had InfiniBand winning converts in the high-performance computing field. Why would anyone establish a separate storage networking environment with Fibre Channel or InfiniBand when they can simply upgrade their existing Ethernet to 10 Gb?


At this year's Interop, NetEffect showed its 10 Gb iWARP adapter processing more than 2 million packets per second while consuming less than 6 watts.


For vendors, it comes down to a choice between trying to sell an entirely new network, or simply improving the old one.

Jun 15, 2007 2:14 AM Michael Delzer says:
Key issues:

1. Operational differences in the methodologies of network and storage designers.

2. Lack of cross-training among network administrators, storage administrators, OS administrators, and database administrators makes convergence of data and storage networks acceptable today only for companies with a high training budget and forward-thinking managers, or for companies with low expectations of system uptime.

3. Until 10 Gbps equipment matches the port density of today's 4 Gbps SAN systems at the same price point, only small companies with low expectations of system uptime will move toward complete convergence.

4. If blade servers continue to grow in market share over traditional servers, products like InfiniBand may see more value where virtual Ethernet and FC adapters can share InfiniBand links, supporting the needs of both the HPC community and the virtual machine community while providing products like Oracle RAC and HP's PolyServe the low-latency links that allow for horizontal scaling and dynamic provisioning.

The operational paradigm of the Ethernet vendors, where Spanning Tree or HSRP-type outages are accepted as routine, is not tolerable for most clustered OS software packages and can result in data corruption if the OS is not expecting an untrusted connection to its storage subsystems. I don't know whether the stakeholders who have come to expect the stability that the current SAN fabric vendors provide are willing to accept the total loss of network traffic, and the staggered way traffic can restart, while the system re-converges on traditional Ethernet and Layer 3 IP networks.

As long as vendors like Cisco do not require OS and storage design skills for their top-level certifications, companies will be forced to have two networks: one designed to support application traffic over IP protocols, staffed and designed by people with traditional Ethernet and IP skills, and one designed to support SCSI commands and large payloads, staffed by people with intimate knowledge of how operating systems and storage arrays work and how clustered systems need to be supported. I believe that in 5 or 10 years this may be mitigated, but look how long it has taken for the voice and data networks to converge. I started looking at products in this space in 1992, and today we are still looking for products to monitor and manage converged networks that provide the same level of user experience people expected on the old dedicated voice networks.

As to costs: assume one network run by traditional network engineers and one run by risk-averse, storage-minded network engineers. Compare a fully configured Ethernet switch like the Cisco 6509 against a Brocade 48000 with the same level of redundancy, and you get about the same number of aggregate 4 Gbps ports at about the same cost. A Sun quad-port Gbps Ethernet card costs about the same as a dual-port Emulex 4 Gbps card, so it would cost more to use two quad-port Ethernet cards than one Emulex card. If you instead try 10 Gbps Ethernet, the massive reduction in port count on both the Cisco Ethernet and SAN sides of the chassis, plus what the PCIe cards cost for hosts today, makes the operational limitations so great that it is not viable. In two years these costs may come down, but we still have more issues left.

Today companies like EMC have software like PowerPath that can use all links equally and redundantly and optimize how the OS uses a path.
Jun 15, 2007 2:14 AM Michael Delzer says:
If this software is not ported to an Ethernet context, or if network engineers oversubscribe the Ethernet paths, then again we will have an operational issue that is solved today on FC networks but is not even addressed as a real issue, judging by the test questions for the high-level certifications. Most IP and Ethernet designers are so focused on optimizing a packet's travel path, one packet at a time, that they forget how operating systems grew up on SCSI protocols over parallel buses, and how computer designers, OS administrators, and DBAs are used to spreading I/O requests over multiple paths to storage.

Sincerely yours,
Michael Delzer
Infrastructure Architect
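
Editor's note: Delzer's closing point, spreading I/O requests over multiple paths, is the job multipathing software such as PowerPath automates. As a rough sketch only (hypothetical Python, not PowerPath's actual algorithm), round-robin path selection with failover looks something like this:

    from itertools import cycle

    class Path:
        """One physical route from host to storage (e.g., one HBA port)."""
        def __init__(self, name):
            self.name = name
            self.healthy = True

    class RoundRobinMultipath:
        """Toy round-robin path selector with failover.

        Illustrative only: real multipathing software also weighs
        queue depth and latency, and restores failed paths.
        """
        def __init__(self, paths):
            self.paths = paths
            self._rotation = cycle(paths)

        def pick_path(self):
            # Skip unhealthy paths; give up after one full rotation.
            for _ in range(len(self.paths)):
                path = next(self._rotation)
                if path.healthy:
                    return path
            raise IOError("all paths to storage are down")

        def send_io(self, request):
            print(f"{request} -> {self.pick_path().name}")

    # Two HBA ports to the same array; lose one and I/O continues.
    mp = RoundRobinMultipath([Path("hba0"), Path("hba1")])
    mp.send_io("read LUN 3")
    mp.paths[1].healthy = False   # simulate a link failure
    mp.send_io("write LUN 3")

The failover step is the part Delzer argues converged Ethernet designs haven't addressed: when a path dies, I/O shifts to a surviving link without the OS perceiving an outage.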
