Prepping I/O for Virtualization


By now, many data center managers are finding out that server virtualization is only the first step in a rather lengthy and complicated conversion from rigid, silo-based infrastructure to a more flexible, cloud-based architecture.


The next key component in this journey is virtual I/O technology, in which specialized hardware and software allocate network resources according to the needs of virtual machines rather than individual physical servers. A number of vendors have sprung up recently to cater to this need, Xsigo, 3Leaf and NextIO among them.


But simply buying a virtual I/O platform and plugging it in is only half the battle. There are a number of ways in which virtual environments can be configured to provide the kind of robust I/O architecture needed for today's applications.


To begin with, the hypervisors themselves approach I/O in different ways. This article by Server Watch's Wayne Rash does a good job of explaining techniques like IBM's partitioned approach, in which dedicated I/O partitions handle tasks like shared storage and networking, and VMware's practice of bringing I/O tasks directly into the hypervisor, which can create read and write queues that sometimes hamper performance.
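The difference between the two designs can be illustrated with a toy queuing model (this is a sketch for intuition only, not a representation of either vendor's actual implementation): when every VM funnels requests through one shared queue, backlogs compound, whereas per-partition queues keep each VM's backlog to its own traffic.

```python
from collections import deque

def max_queue_depth(vm_bursts, partitioned):
    """Toy model: each VM issues a burst of I/O requests per tick.

    partitioned=True  -> one queue per VM (IBM-style I/O partitions)
    partitioned=False -> one shared queue (hypervisor-based I/O)

    Assumes each queue drains one request per tick; returns the
    worst depth any single queue reaches.
    """
    queues = [deque() for _ in vm_bursts] if partitioned else [deque()]
    worst = 0
    for tick in range(max(len(b) for b in vm_bursts)):
        for vm, bursts in enumerate(vm_bursts):
            if tick < len(bursts):
                q = queues[vm] if partitioned else queues[0]
                q.extend(range(bursts[tick]))   # enqueue this tick's requests
        for q in queues:
            if q:
                q.popleft()                     # drain one request per queue
            worst = max(worst, len(q))
    return worst

# Three VMs, each issuing 4 requests per tick for 5 ticks
bursts = [[4] * 5] * 3
shared = max_queue_depth(bursts, partitioned=False)   # 55
split = max_queue_depth(bursts, partitioned=True)     # 15
```

Under identical load, the single shared queue backs up far deeper than any per-partition queue, which is the behavior the article alludes to.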


There are also a number of common approaches that help improve I/O in the virtual world just as they do in the physical one. One of them is defragmentation, or more precisely auto-defragmentation, as found in the Diskeeper system. Even though hard drive partitions appear to be dedicated to each virtual machine, they are in fact still storing files in the same fragmented fashion they always have. Virtual environments can generate many times more fragments than physical servers, so I/O can be seriously impacted as the system tries to keep track of every file and fragment that is requested.
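The cost is easy to see with some back-of-the-envelope arithmetic (the file and fragment sizes below are illustrative, not drawn from Diskeeper's documentation): every additional fragment means another separate disk request to read the same data.

```python
import math

def read_ops(file_size_kb, contiguous_run_kb):
    """Number of separate disk requests needed to read a file,
    assuming one request per contiguous run of blocks."""
    return math.ceil(file_size_kb / contiguous_run_kb)

# A 1 MB file stored contiguously vs split into 64 KB fragments
contiguous = read_ops(1024, 1024)   # 1 request
fragmented = read_ops(1024, 64)     # 16 requests
```

Sixteen seeks instead of one for the same megabyte is the kind of multiplier that makes fragmentation far more painful when dozens of VMs share the same spindles.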


Another approach is the use of "application aware" technology, in which system performance is adjusted automatically according to application needs. Companies like Pillar Data Systems say they can overcome I/O bottlenecks and improve storage utilization by as much as 80 percent by dynamically re-assigning priority levels to enterprise applications.
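At its simplest, this amounts to reordering pending I/O by application priority. The sketch below is a minimal illustration of that idea; the application names and priority table are hypothetical, and real application-aware systems such as Pillar's derive and adjust priorities dynamically.

```python
import heapq

# Hypothetical priority table: lower number = served first
PRIORITY = {"oltp": 0, "email": 1, "backup": 2}

def schedule(requests):
    """Order pending (app, block) I/O requests so higher-priority
    applications are served first, FIFO within the same priority."""
    heap = [(PRIORITY[app], seq, app, block)
            for seq, (app, block) in enumerate(requests)]
    heapq.heapify(heap)
    ordered = []
    while heap:
        _, _, app, block = heapq.heappop(heap)
        ordered.append((app, block))
    return ordered

pending = [("backup", 10), ("oltp", 42), ("email", 7), ("oltp", 43)]
# → [("oltp", 42), ("oltp", 43), ("email", 7), ("backup", 10)]
```

The transaction-processing requests jump the queue while the backup waits, which is exactly the trade-off application-aware prioritization makes.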


Resource management, in fact, is becoming a staple in many integrated virtualization platforms. Fujitsu recently unveiled a new version of its Resource Coordinator Virtual Edition (RCVE) for the Primergy blade server line. Among other management features, the stack provides automated SAN reconfiguration when switching from server to server, a boon for virtualization recovery operations.
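Conceptually, that kind of failover reconfiguration is a remapping of storage assignments from a failed server to a standby. The sketch below shows the idea in miniature; the server and LUN names are invented for illustration, and it does not reflect RCVE's actual interfaces.

```python
def fail_over(san_map, failed, spare):
    """Reassign the failed server's LUNs to a spare server,
    the kind of automated SAN remapping a recovery operation needs.
    (Illustrative only; names and structure are hypothetical.)"""
    new_map = dict(san_map)
    new_map[spare] = new_map.pop(failed)   # spare inherits the LUN list
    return new_map

zoning = {"blade1": ["lun0", "lun1"], "blade2": ["lun2"]}
after = fail_over(zoning, "blade1", "blade3")
# blade3 now sees lun0 and lun1; blade1 is removed from the map
```

Automating this step is what turns a manual, error-prone rezoning exercise into a recovery operation that completes in seconds.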


A solid virtual I/O platform is still the best way to get the most out of virtualized server and storage environments. But even then, there's no reason why you should overtax your network unnecessarily. A thorough view of I/O patterns from server to storage to end user is your best bet to keep data flowing smoothly.