Well, it took a while, but iWarp networking technology is finally making its way into enterprise fabric solutions, even though few people are talking about it as "the" solution to data bottlenecks anymore.
iWarp, which essentially brings remote direct memory access (RDMA) to Ethernet, first emerged in the late 1990s as one of a slew of new technologies looking to boost bandwidth and heighten connectivity between network elements. But while InfiniBand, traditional Ethernet and Fibre Channel went on to carve out significant shares of the market, iWarp languished on the sidelines.
Now we're seeing renewed interest as large data files and growing storage and reporting requirements drive data center traffic loads to the point where even 8 and 10 Gbps solutions look to some like mere stop-gap measures.
Getting a lot of buzz this year is Woven Systems, which has paired its EFX1000 Ethernet switch with the R310E iWarp HBA from Chelsio Communications to devise a 10 GbE cluster fabric with remote direct memory access, according to HPCwire. The pairing yields an innovative traffic-rerouting scheme, built in part on Chelsio's OS-bypass support and its ability to reorder packets at high speed.
NetEffect is another source for iWarp connectivity, says this Byte and Switch piece. The company's dual-port NEO20BCH mezzanine adapter card features a "virtual pipeline" architecture that the company claims is the first full implementation of the iWarp extensions as specified by the IETF. The card, designed for the IBM BladeCenter H platform, delivers 10 GbE to standard socket-based applications, and features TCP Offload and User-level Direct Access.
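The selling point of socket compatibility is that applications don't need to change at all: ordinary TCP code runs as-is, with the adapter's offload engine accelerating the same path underneath. As a minimal sketch (not NetEffect-specific; the loopback host and echo payload are arbitrary), this is the kind of unmodified socket code such an adapter would transparently speed up:

```python
import socket
import threading

# A plain TCP echo exchange. Nothing iWarp-specific appears in the
# application code; a TOE/iWarp NIC accelerates this same socket path.
HOST = "127.0.0.1"

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo the payload back

t = threading.Thread(target=serve)
t.start()

cli = socket.create_connection((HOST, port))
cli.sendall(b"hello over ordinary sockets")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(reply.decode())  # -> hello over ordinary sockets
```

The RDMA and kernel-bypass benefits, by contrast, require applications written against an RDMA-aware interface rather than plain sockets.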
Teak Technologies is also on board with the I3000 switch, which offers built-in compatibility with iWarp adapters from Chelsio, Myricom, NetXen and others. The switch couples this with a 10 Gbps virtual link technology that consolidates physical network links four-fold while cutting down on packet loss, even in bursty environments.
iWarp is also making its way down to the chip level. At a recent processor development conference at Stanford University, a group from Virginia demonstrated the protocol's ability to assist in TCP offloading on multicore CPUs, a technique that could come in handy for network load balancing, according to EE Times.
It used to be that iWarp was seen as the technology that would bridge the gap between Fibre Channel and InfiniBand. Today's implementations are much less grandiose in scope, which is probably a good thing: we're less likely to suffer a letdown from overhyped expectations.