The RDMA Effect

Arthur Cole

Arthur Cole spoke with Rick Maule, CEO and president of NetEffect.

Cole: Much of the attention in enterprise networking goes to unified fabrics, multigigabit SANs and the like. How does a technology like RDMA improve network performance?
Maule: RDMA-enabled adapters use a combination of techniques that work together to lower network latency, reduce host-server CPU utilization and achieve higher network throughput than traditional NICs, enabling network applications to scale significantly better on fewer servers. RDMA allows applications executing in user space to post commands directly to the adapter, whereas traditional I/O adapters require the OS to act as a proxy for such commands. Removing the OS from that path eliminates expensive calls into the kernel and dramatically reduces application context switches.
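
To make the kernel-bypass point concrete, here is a minimal fast-path sketch in C against libibverbs, the open verbs API commonly used to program RDMA adapters. The interview names no specific API, so treat this as an illustration rather than NetEffect's own interface; setup steps such as opening the device and creating and connecting the queue pair are omitted, and qp, cq, mr and buf are assumed to already exist.

/* Fast-path sketch: a user-space process posts a send work request
 * directly to the RDMA adapter via libibverbs. The request is written
 * to a memory-mapped queue owned by the adapter, so no system call,
 * and no OS proxy, sits on the data path. */
#include <stdint.h>
#include <infiniband/verbs.h>

/* Post one message and busy-poll for its completion; returns 0 on success. */
static int post_and_wait(struct ibv_qp *qp, struct ibv_cq *cq,
                         struct ibv_mr *mr, void *buf, uint32_t len)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,  /* registered user-space buffer */
        .length = len,
        .lkey   = mr->lkey,        /* local key from ibv_reg_mr()  */
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_SEND,
        .send_flags = IBV_SEND_SIGNALED,
    };
    struct ibv_send_wr *bad_wr = NULL;

    if (ibv_post_send(qp, &wr, &bad_wr))   /* no kernel transition here */
        return -1;

    /* Completions are likewise reaped from user space. */
    struct ibv_wc wc;
    int n;
    while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
        ;                                  /* spin until the send completes */
    return (n < 0 || wc.status != IBV_WC_SUCCESS) ? -1 : 0;
}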

 

RDMA also supports Direct Data Placement (DDP) protocols, which place packet data payloads directly into destination buffers under application control, eliminating buffer copies in intermediate layers of the networking stack. Finally, RDMA-enabled adapters offload TCP/IP processing, which otherwise puts a tremendous load on the host server's CPU. The traditional rule of thumb is that it takes 1 GHz of host CPU horsepower to generate 1 Gbps of network bandwidth; moving TCP/IP processing to the adapter frees that horsepower for the applications themselves.
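
A companion sketch of direct data placement, again using libibverbs as a stand-in API: the application registers a buffer with the adapter, then issues an RDMA write that lands straight in the peer's registered buffer, with no intermediate copies on either side. The remote address and rkey are assumed to have been exchanged out of band during connection setup.

#include <stdint.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

/* RDMA-write 'len' bytes into a buffer the peer has registered and
 * advertised (remote_addr/rkey). Error cleanup is elided for brevity. */
static int rdma_write_sketch(struct ibv_pd *pd, struct ibv_qp *qp,
                             uint64_t remote_addr, uint32_t rkey, size_t len)
{
    void *buf = malloc(len);
    if (!buf)
        return -1;

    /* Pin and register the buffer so the adapter can DMA it directly. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,  /* payload is placed directly  */
        .send_flags = IBV_SEND_SIGNALED,  /* in the remote app's buffer  */
    };
    wr.wr.rdma.remote_addr = remote_addr; /* peer's registered buffer    */
    wr.wr.rdma.rkey        = rkey;        /* peer's remote-access key    */

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr); /* completion polled as above */
}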

 

Cole: Does it help with energy consumption as well?
Maule: Offloading transport processing to a more power-efficient adapter yields considerable power savings. State-of-the-art RDMA-enabled adapters can deliver dual-port 10 Gb network throughput in under 7 W, compared with the 50 to 100 W or more of host-server CPU power needed to achieve similar results in software. The CPU cycles freed from transport processing can also be put to more productive use, and servers can be more fully utilized through techniques such as virtualization.
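
Those figures square with the rule of thumb cited earlier: driving two 10 Gb ports in software would, at roughly 1 GHz per Gbps, consume on the order of 20 GHz of host CPU capacity, the equivalent of several fully loaded server cores. As a rough assumption on our part (not Maule's figure), several loaded server cores of that era draw on the order of 50 to 100 W, which lines up with the range he cites, versus under 7 W for the adapter.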

 

Cole: How will NetEffect's support of the NetworkDirect interface improve computer cluster performance?
Maule: Microsoft NetworkDirect is an RDMA interface that takes full advantage of RDMA adapter performance features, including the three described above. For example, clusters built with traditional Gigabit Ethernet NICs can experience application-to-application latencies of 20 to 40 microseconds; clusters using NetworkDirect with high-performance RDMA-enabled adapters can see application-to-application latency of under six microseconds. That can dramatically improve horizontal scaling for a wide variety of high-performance computing (HPC) clustered applications.
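
For readers who want to measure application-to-application latency on their own cluster, below is a minimal ping-pong microbenchmark of the kind used to produce such figures. It is written against the standard MPI interface, which most clustered HPC codes use and which can run over NetworkDirect via MS-MPI on Windows HPC Server; the benchmark itself is transport-agnostic. Launch it with two ranks, e.g. mpiexec -n 2 pingpong.

#include <stdio.h>
#include <mpi.h>

#define ITERS 10000              /* round trips to average over */

int main(int argc, char **argv)
{
    int rank;
    char byte = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD); /* start both ranks together */
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {         /* rank 0: send, then await the echo */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {  /* rank 1: echo every message back */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)               /* one-way latency is half the round trip */
        printf("one-way latency: %.2f microseconds\n",
               (t1 - t0) / ITERS / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}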


