Simpler Networking Through I/O Virtualization

Arthur Cole
Virtualization has taken a firm hold at most enterprises these days, but the fact is we've only just begun to unleash the true potential of the technology.

I/O virtualization is the most effective way to balance networking resources with newly virtualized server farms. In essence, it puts the network on the same footing as the processing side, ensuring that each hypervisor has access to the wider data ecosystem as soon as it is provisioned.

That's not the only benefit, however. It also allows for a much more streamlined network infrastructure, eliminating much of the hardware that was needed just to maintain the earlier, purely physical architecture.

For a glimpse of this, take a look at what's happening with the latest virtual-ready network components. QLogic, for example, has added a new line of PCIe adapters to its FlexSuite portfolio that employs Single Root I/O Virtualization (SR-IOV) to provide multiple virtual data paths for 10 Gb Ethernet, 16 Gb Fibre Channel, FCoE and iSCSI. As well, the new Universal Access Point 5900 converged switch can be configured for either protocol, while a new line of intelligent storage routers acts as a connection point for migration between FC, iSCSI and FCoE environments. The goal here is to support complex legacy environments without having to deploy vast arrays of networking hardware.
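The idea behind SR-IOV is that a single physical adapter (the physical function) carves itself into multiple lightweight PCIe virtual functions that a hypervisor can hand directly to guests. As a rough illustration of the mechanics, the sketch below uses the standard Linux sysfs controls to create virtual functions on an SR-IOV-capable adapter. The interface name and VF count are placeholders, the host needs root privileges, and the actual limits depend on the adapter and its driver.

```python
# Minimal sketch: enabling SR-IOV virtual functions on a Linux host.
# Assumes an SR-IOV-capable adapter whose driver exposes the standard
# sysfs controls; the interface name "eth0" is a placeholder.
from pathlib import Path

def enable_virtual_functions(iface: str, num_vfs: int) -> int:
    """Ask the driver to create `num_vfs` virtual functions on `iface`."""
    dev = Path(f"/sys/class/net/{iface}/device")

    # Each SR-IOV physical function advertises how many VFs it supports.
    total = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{iface} supports at most {total} VFs")

    # Writing to sriov_numvfs creates the VFs; each one then appears as an
    # independent PCIe function that a hypervisor can assign to a guest.
    (dev / "sriov_numvfs").write_text(str(num_vfs))
    return num_vfs

if __name__ == "__main__":
    enable_virtual_functions("eth0", 4)  # placeholder interface and VF count
```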

At the same time, Ethernet systems developer Solarflare is loading SR-IOV capability into its latest adapter tailored for Citrix XenServer 6 environments. The move is intended to allow users to load up on virtual applications without losing the features and functionality of the Xen hypervisor. It does this in part through Solarflare's broad virtual NIC and virtual PCIe support, which provides multiple transmit and receive queues for each instance and improves data handling across physical and virtual infrastructure.
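Multi-queue behavior of this kind is visible from the host side. The sketch below, again purely illustrative, counts the transmit and receive queues the Linux kernel exposes for a network interface; a virtual function with multiple queues shows up here just like a physical NIC. The interface name is a placeholder and the counts depend on the adapter, its driver and how many channels have been configured.

```python
# Illustrative sketch: counting the transmit and receive queues the Linux
# kernel exposes for an interface under /sys/class/net/<iface>/queues.
from pathlib import Path

def queue_counts(iface: str) -> tuple[int, int]:
    queues = Path(f"/sys/class/net/{iface}/queues")
    rx = len(list(queues.glob("rx-*")))   # receive queue directories
    tx = len(list(queues.glob("tx-*")))   # transmit queue directories
    return rx, tx

if __name__ == "__main__":
    rx, tx = queue_counts("eth0")  # placeholder interface name
    print(f"eth0: {rx} receive queues, {tx} transmit queues")
```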

Probably the biggest news of the season, however, is the introduction of NextIO's vNET I/O Maestro, a virtual I/O appliance that uses PCIe switching to consolidate rack-based switches and adapters, improving resource pooling and load balancing even as it cuts down on network infrastructure. Although it looks like a normal switch, the device actually tricks the server operating system into thinking it is a local PCIe port, says The Register's Timothy Prickett Morgan. Once that bit of subterfuge is complete, the device can multiplex Ethernet and FC traffic for up to 30 servers, shuttling data through its I/O modules to the appropriate storage environments, again all in a single device rather than racks full of boxes and cabling.

Virtual I/O is quickly becoming the must-have technology as enterprises complete the transition from static architectures to more nimble virtual and cloud-based data environments. At the moment, there is a plethora of options for network virtualization and consolidation. This isn't surprising considering the wide variety of network topologies that have evolved over the years, usually spurred by immediate concerns rather than driven by any grand design.

That presents both a challenge and an opportunity, in that it will take quite a bit of legwork to arrive at a reasonable virtual network architecture. Ultimately, however, that architecture will be less costly to maintain and operate even as it provides a more robust data environment.
