How does InfiniBand maintain its presence in a world of lower-cost Fibre Channel and Ethernet solutions? Low latency is one of the prime answers.
With a slew of InfiniBand developments emerging this year -- capped off by last week's agreement between Cisco and HP to bring it to the high-performance computing segment -- it seems that the InfiniBand future looks solid, even if it doesn't emerge as the dominant interconnect as many once thought it would.
A quick look at the numbers shows why InfiniBand has found its niche in the datacenter, according to IT Jungle. While Ethernet backers crow about 10 Gbps connectivity, InfiniBand crossed that threshold several years ago and is already shipping 20 Gbps Double Data Rate (DDR) solutions. Quad Data Rate (QDR) systems operating at 40 Gbps are due next year, with 12x multilink configurations capable of 120 Gbps.
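The rate figures above follow from simple arithmetic: each InfiniBand generation sets a per-lane signaling rate, and links aggregate 1, 4, or 12 lanes. A minimal sketch (the rate table reflects the published SDR/DDR/QDR per-lane speeds; the function name is illustrative):

```python
# Per-lane signaling rates for InfiniBand generations, in Gbps.
PER_LANE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}

# Standard link widths: 1x, 4x, and 12x lane aggregates.
LINK_WIDTHS = (1, 4, 12)

def raw_rate_gbps(generation: str, width: int) -> float:
    """Raw signaling rate of an InfiniBand link: per-lane rate times lane count."""
    return PER_LANE_GBPS[generation] * width

for gen in PER_LANE_GBPS:
    for width in LINK_WIDTHS:
        print(f"{gen} {width}x: {raw_rate_gbps(gen, width):g} Gbps")
```

So a common 4x QDR link signals at 40 Gbps, and a 12x QDR link reaches the 120 Gbps cited above. (These are raw signaling rates; the 8b/10b line encoding used by these generations leaves about 80% of that as usable data rate.)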
The latest market research certainly doesn't point to InfiniBand taking a backseat to other network technologies. IDC has InfiniBand HCA revenues jumping from last year's $62.3 million to $224.7 million in 2011, making it one of the fastest-growing networking segments in the game. And support for the SCSI RDMA Protocol (SRP), which brings compatibility with block-level storage, could prove to be a prime growth driver in data-intensive environments like financial services and medical research.
To be clear, though, InfiniBand has some major negatives working against it. Many IT professionals cite cost, integration challenges, and a lack of user enthusiasm as reasons for not deploying InfiniBand. Plus there is the fact that the installed base of Ethernet-ready systems far outstrips the InfiniBand market.
But as the world continues to move toward higher and higher data rates and increasingly complicated networking environments, there is some big money riding on the assumption that the high data-rate capabilities of InfiniBand will find a ready market going forward.