Many enterprise executives who have implemented virtual environments as a way to consolidate servers and centralize management are running into what's becoming known as the I/O bottleneck. Even if you have multiple operating systems on one server, all that data still has to pass through one physical I/O device.
Or does it?
Electronic Design just ran a series on virtual I/O, a technology based on the PCI Express bus that allows a physical end-node device to be represented as any number of logical devices capable of handling data from multiple hosts.
There are a number of ways to provision virtual I/O, from emulating multiple device drivers in the hypervisor to employing a DMA controller to take the load off the host processor.
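To see why the hypervisor-emulation approach still funnels everything through one device, here is a minimal Python sketch (all class and variable names are hypothetical, invented for illustration): each guest is handed its own emulated NIC, but every send lands on one shared queue that the hypervisor drains onto the single physical adapter, which is exactly where the bottleneck forms.

```python
from queue import Queue

class PhysicalNIC:
    """The one real adapter every guest's traffic must pass through."""
    def __init__(self):
        self.wire = []  # frames actually placed on the wire

    def transmit(self, frame):
        self.wire.append(frame)

class VirtualNIC:
    """Hypervisor-emulated device exposed to a single guest OS.
    The guest believes it owns a NIC, but send() only enqueues
    onto the hypervisor's shared queue."""
    def __init__(self, guest_id, shared_q):
        self.guest_id = guest_id
        self.shared_q = shared_q

    def send(self, payload):
        self.shared_q.put((self.guest_id, payload))

# Three guests, one physical device.
shared_q = Queue()
pnic = PhysicalNIC()
guests = [VirtualNIC(i, shared_q) for i in range(3)]

for g in guests:
    for n in range(2):
        g.send(f"pkt{n}")

# The hypervisor serializes all guest traffic onto the one device --
# the I/O bottleneck in miniature.
while not shared_q.empty():
    gid, payload = shared_q.get()
    pnic.transmit(f"guest{gid}:{payload}")

print(len(pnic.wire))  # all 6 frames went through a single adapter
```

A DMA-assisted or hardware-virtualized design (as in the adapters discussed below) effectively moves that per-guest queuing and multiplexing into the device itself, so the host processor is no longer in the data path for every frame.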
A number of commercial solutions have already hit the market. The ConnectX InfiniBand host channel adapter (HCA) from Mellanox Technologies offers 10 and 20 Gbps performance for virtual environments, as well as for overstressed clusters and grids. It supports virtual endpoints, address translation, and DMA mapping, along with isolation and protection for virtual machines.
Another solution comes by way of LeftHand Networks, which has partnered with Neterion to develop 10 Gigabit Ethernet solutions for the Xframe II V-NIC adapter, supporting I/O virtualization on IBM System x and VMware platforms.
Meanwhile, IDT has come out with what it calls the first complete PCI Express system interconnect. Among other benefits, it goes beyond simple peer-to-peer switching with a virtual I/O framework that supports connections among multiple roots, PCI-E processors, and endpoints.
The need for virtual I/O is further evidence that enterprise upgrades need to be part of a holistic approach to overall network health. More often than not, changes in one corner of the room ripple out to all the others.