Next-Gen IOV Getting Ready for the Cloud

Arthur Cole

Right from the start, it was obvious to everyone involved in virtualization that running multiple virtual machines on a single server was going to wreak havoc on the I/O infrastructure. It's the equivalent of stripping the HBAs out of individual servers in favor of a single unit on the back of a rack.


So it was no surprise that a new generation of I/O virtualization technology kicked off just as the first virtual environments were being deployed. And just as virtualization has evolved into a platform for cloud computing and other advanced architectures, virtual I/O has kept pace, offering the means not only to improve I/O performance but also to strip away much of the hardware populating the wider network.


Some of the newest systems are extending IOV directly into on-server storage components. VirtenSys, which specializes in PCIe-based technology, says it can now virtualize storage controllers and disk drives within the server, lowering costs and energy consumption for servers connecting to LAN or DAS/SAN infrastructures. The company shares an LSI MegaRAID HBA across multiple servers, allowing internal drives to be pooled, a technique the company says improves throughput by 80 percent and cuts power consumption by 60 percent. The system supports Ethernet, SAS/SATA and Fibre Channel.


NextIO is busy making the rounds with its next-gen Express Connect system, which is also based on the PCIe format and works with I/O controllers so they can be shared across multiple blade and rack servers. The company is also working with Marvell and Nvidia on a high-performance system said to deliver more than 200,000 IOPS and 400 GB over a single PCIe slot, with the ability to scale up to 1 million IOPS and 4 TB in a 3U package.


Linux users, meanwhile, should take note of Neterion's recent support for 10 GbE single-root I/O virtualization (SR-IOV) in its new Linux kernel driver. The move allows the company's X3110 adapter to appear to Linux guests in Xen or KVM environments as an independent 10 GbE interface with direct hardware access. The system uses the native vxge netdev driver to run network and iSCSI traffic, removing much of the IOV overhead while maintaining advanced hypervisor features like migration and privileged operations.
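The effect of SR-IOV is visible from the host side: the adapter's physical function carves out virtual functions that each show up as their own PCI device, ready to be passed through to a guest. Below is a minimal sketch of the standard Linux sysfs workflow for doing this — the interface name `eth0` and the VF count are illustrative assumptions, not specific to Neterion's driver:

```shell
# Query how many virtual functions the physical function supports.
# Any SR-IOV-capable NIC driver exposes these sysfs attributes.
cat /sys/class/net/eth0/device/sriov_totalvfs

# Carve out four virtual functions (requires root); each VF appears
# as an additional PCI function that a hypervisor such as Xen or KVM
# can assign directly to a guest, bypassing the software switch.
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# The new VFs are now visible as separate PCI devices.
lspci | grep -i "virtual function"
```

Writing `0` back to `sriov_numvfs` tears the virtual functions down again; the appeal of the approach is that guests get near-native I/O without the hypervisor touching each packet.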


Storage vendors are also looking at innovative ways to boost I/O in virtual environments. IBM, for example, has devised an entirely new SAN design featuring aggregated disk modules that can be better coordinated to achieve greater performance. The company's XIV system uses a mix of intelligent I/O orchestration techniques to deliver 100,000 IOPS from cache and up to 2.4 GBps of sustained sequential read/write bandwidth from 15 modules of 12 disks each, or 180 SATA drives in all.


With virtualization as the underlying layer in new cloud architectures, the pooling of resources will soon be the norm for data center operations rather than the exception. But shifting workloads across disparate resources will require an advanced network architecture that can handle rapid changes in data loads and traffic patterns.


IOV is the first link from the server into the network, making it a crucial first step in the drive for greater network flexibility.


