    InfiniBand: End of the Road, or the Start of Something Big

    What does the future hold for InfiniBand?

    As a high-performance computing (HPC) solution, InfiniBand has long been the Mercedes of server interconnects, providing a high-speed, dynamic fabric that links massive numbers of cores to tackle the most data-intensive workloads ever devised. In the enterprise, however, results have been mixed at best, as most workaday organizations find Ethernet more than adequate for the vast majority of their x86 deployments.

    But recent developments are leading some to conclude that InfiniBand as we know it is not long for this Earth, and that spells serious trouble for its main champion: Mellanox. As Seeking Alpha pointed out in a recent analysis, weak revenue of late may be the harbinger of harder times for the company, given that the InfiniBand business of its chief rival, QLogic, is now part of Intel. And Intel has made no secret of its desire to build the next-generation interconnect directly onto the CPU, effectively eliminating the need for third-party interconnects.

    In Mellanox’s defense, the company argues that the recent sales slump stemmed primarily from a production snafu that yielded faulty batches of its new 56 Gbps FDR cables, delaying the delivery of some $20 million in orders. If anything, Mellanox says, InfiniBand’s prospects are looking up, because many of the new cloud environments around the world are taking on HPC proportions, which means expanding opportunities for the company’s VPI adapter and switch families.

    Indeed, the company just scored a big win in Australia, where cloud provider OrionVM is standardizing its worldwide infrastructure on InfiniBand. OrionVM says Mellanox gear will play a key role in uniting its distributed storage architecture, lowering the price point for customers and significantly improving overall performance. Compared with 10 or even 40 GbE networking, InfiniBand displayed superior throughput, flexibility and price/performance.

    While it’s true that InfiniBand has long led in overall networking performance, will that hold up much longer? Chelsio Communications recently unveiled its newest Ethernet adapter ASIC, the Terminator T5, which pushes RDMA over TCP/IP (also known as iWARP) to 40 Gbps. That puts it very close to FDR InfiniBand, and according to Chelsio CEO Kianoosh Naghshineh, it will actually outperform some InfiniBand configurations in real-world HPC settings. The chip offers a range of features, including a TCP Offload Engine (TOE), iSCSI support and FCoE, and it is supported by the OpenFabrics Enterprise Distribution (OFED), allowing Linux-based InfiniBand applications to run on it seamlessly.
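    The unifying layer here is the verbs API that ships with OFED, which presents InfiniBand and iWARP hardware through a single interface. As a minimal illustration (not from the article, and assuming a Linux system with libibverbs installed), the following C sketch enumerates the RDMA devices present and reports whether each one speaks InfiniBand or iWARP; the same binary runs unmodified over either fabric:

        /* Hypothetical sketch: enumerate RDMA devices via the OFED verbs API.
         * The same code path serves InfiniBand and iWARP adapters alike.
         * Build (assuming libibverbs is installed): gcc probe.c -o probe -libverbs */
        #include <stdio.h>
        #include <stdlib.h>
        #include <infiniband/verbs.h>

        int main(void)
        {
            int num_devices;
            struct ibv_device **devs = ibv_get_device_list(&num_devices);
            if (!devs) {
                perror("ibv_get_device_list");
                return EXIT_FAILURE;
            }

            for (int i = 0; i < num_devices; i++) {
                struct ibv_context *ctx = ibv_open_device(devs[i]);
                if (!ctx)
                    continue;  /* skip devices we cannot open */

                struct ibv_device_attr attr;
                if (ibv_query_device(ctx, &attr) == 0)
                    printf("%-16s transport=%s max_qp=%d\n",
                           ibv_get_device_name(devs[i]),
                           devs[i]->transport_type == IBV_TRANSPORT_IWARP
                               ? "iWARP" : "InfiniBand",
                           attr.max_qp);

                ibv_close_device(ctx);
            }

            ibv_free_device_list(devs);
            return EXIT_SUCCESS;
        }

    Because applications code to these verbs rather than to the wire protocol underneath, an HPC or storage stack written for InfiniBand can, in principle, move to a 40 Gbps iWARP adapter like the T5 without a rewrite.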

    So where does that leave the garden-variety enterprise? No better or worse off than before, actually. At the moment, InfiniBand remains the interconnect of choice only for the most critical and demanding workloads. But as Intel continues to investigate silicon-level InfiniBand, it may not be long before the technology winds up in common data platforms by default.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
