NextIO Taps PCIe for Virtual I/O


The PCI-Express (PCIe) standard is emerging as a key component in the drive to establish virtual I/O communications for data centers burdened by the increased data demands of server and storage virtualization.


It turns out that the same communications protocol that allows servers and processors to communicate with each other in the rack can also be extended to other racks, arrays and network devices.


NextIO took a major step in bringing PCIe to the virtual I/O forefront this week with the launch of a pair of "I/O gateway" products designed to provide seamless connectivity to network endpoints using standard drivers. The N1400-PCM high-speed switch module and the N2800-ICA I/O consolidation appliance are the first in what the company says will be a line of ExpressConnect products under the ioGateway label.


The N1400-PCM system allows blade servers to extend the PCIe signal outside of the chassis to deliver up to 35 Gbps of I/O throughput, while the N2800-ICA unit offers a flexible framework that allows up to 14 PCIe devices to be connected and partitioned among either blade or rack servers.


Chris Pettey, CTO and co-founder of NextIO, told me that the company's goal is to do away with the idea of fixed connectivity between servers and the NICs, HBAs and other network devices they rely on.

"Fixed connectivity between the server and outlying devices has a number of problems," he said. "The server has a fixed function capability, so there's not a whole lot of I/O options. There's also a lot of wasted bandwidth because you have to tremendously over-provision the system for the types of applications you want to run, especially when you get into virtualization and the number and types of applications on the server is destined to change."

Pettey added that by using the native PCIe interconnect already spoken by nearly every server and chipset today, enterprises can separate the server from the I/O infrastructure to add greater flexibility to the network. And since PCIe gives most chipsets upwards of 60 Gbps of raw throughput to begin with, the system is already prepped for 10G, 40G or even 100G performance.
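The "upwards of 60 Gbps" figure follows from simple per-lane arithmetic. A rough sketch, assuming a PCIe Gen2 x16 link (5 GT/s per lane with 8b/10b encoding, so 4 Gbps of usable data per lane); the function name and defaults are illustrative:

```python
def pcie_bandwidth_gbps(lanes, gt_per_s=5.0, encoding_efficiency=0.8):
    """Effective one-direction bandwidth of a PCIe link, in Gbps.

    Gen2 signals at 5 GT/s per lane; 8b/10b encoding means only
    80% of raw transfers carry payload bits.
    """
    return lanes * gt_per_s * encoding_efficiency

# A Gen2 x16 slot, common on server chipsets of the era:
print(pcie_bandwidth_gbps(16))        # 64.0 -- in line with "upwards of 60 Gbps"

# For comparison, a Gen1 x8 link (2.5 GT/s per lane):
print(pcie_bandwidth_gbps(8, gt_per_s=2.5))   # 16.0
```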


"It's a fixed amount of bandwidth, but rather than dedicate all of it to one technology, we spread it across all available technologies," Pettey said. "If more bandwidth is required, we can add it very easily. You have one PCIe-based infrastructure that gives you direct access right out of the rack for Fibre Channel, Ethernet, Infiniband, SAS or whatever you want."


PCIe is turning up in a number of virtual I/O-related capacities. Last month, LeCroy Corp. unveiled a new protocol analyzer, the T2-16, designed to support the Single-Root I/O Virtualization (SR-IOV) and Multi-Root I/O Virtualization (MR-IOV) standards, which allow a single PCIe device to be shared across multiple virtual machines or hosts. And Cavium Networks recently introduced the Octeon Plus line of PCIe accelerator cards designed for TCP offload, deduplication and I/O virtualization systems.


Simplifying network architectures is a never-ending game in IT circles. A single interface for all servers, storage and network devices goes a long way toward accomplishing a truly unified approach to data center connectivity. PCIe is already in place, so it makes a lot of sense to use that as a starting point. There are some powerful entrenched interests that probably wouldn't like to see their technology become dependent on PCIe's good graces, but universality does have its advantages.