Seven Best Practices for Virtualization
Virtualization is taking IT to new horizons, bringing whole new sets of opportunities into view.
VMware may have the lion's share of the virtualization market, but that doesn't mean it has all the answers when it comes to its own platform. As the plethora of third-party VMware optimization solutions hitting the channel shows, there is more than one way to design a virtual environment.
Much of this activity is taking place around I/O enhancement for virtual machines. It's an open secret in enterprise circles that increased virtualization can dramatically reduce the amount of hardware in the server farm, but that comes at the expense of additional networking and storage infrastructure to handle the increased data loads.
That's why we see companies like Astute Networks promising dramatic improvement in I/O operations within VMware environments. The company's ViSX G2 accelerator boasts a 1,500 percent gain in read I/O using a mix of offload and traffic acceleration technology, iSCSI networking and flash storage. The company touts it as a multi-vendor plug-and-play Ethernet solution that can deliver improved IOPS to multiple simultaneous virtual machines.
Virtualization also adds a tremendous amount of complexity to the I/O environment, which, if not handled properly, can produce unacceptable levels of latency. IO Turbine has developed a flash-based architecture that essentially separates I/O performance from disk capacity. In VMware environments, this enhances vMotion's ability to shift VMs to available resources because there is no longer a need to provision storage capacity for individual machines, according to TMCnet's Rajani Baburajan. This kind of dynamic caching allows admins to conduct load-balancing, maintenance and availability operations without interrupting application performance.
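The idea behind this kind of flash caching is simple: hot blocks are served from local flash so repeat reads never touch the backing datastore. The sketch below is a deliberately simplified, hypothetical model (an LRU read cache in Python) meant only to illustrate the concept; it is not IO Turbine's actual design, and the class and parameter names are invented for illustration.

```python
from collections import OrderedDict

class FlashReadCache:
    """Toy model of a host-side flash read cache: hot blocks are
    served locally, so only cold reads hit the backing datastore."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()   # block_id -> data, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block_id, backing_store):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # refresh LRU position
            self.hits += 1
            return self.cache[block_id]
        # Miss: fetch from the (slow) datastore and cache the block.
        self.misses += 1
        data = backing_store[block_id]
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

# A hot working set of three blocks quickly becomes all cache hits.
disk = {n: f"block-{n}" for n in range(100)}
cache = FlashReadCache(capacity_blocks=8)
for block in [1, 2, 3, 1, 2, 3, 1, 2]:
    cache.read(block, disk)
print(cache.hits, cache.misses)   # 5 3
```

Because the cache layer is independent of where the virtual disk actually lives, a VM can be moved between datastores without re-provisioning its storage, which is the property the article attributes to this architecture.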
Still other approaches are aimed at fostering communication between virtual machines themselves by allowing them to forge their own network connections. Xsigo has added a new drag-and-drop capability to its I/O Director, allowing network configurations to be implemented with the same ease as creating the VM, according to InformationWeek's Charles Babcock. In this way, I/O Director acts as the VM core switch where inter-VM messages can be relayed without having to traverse NICs, switches, aggregators or other network elements.
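Conceptually, an in-host core switch works like this: each VM registers a virtual port, and a frame addressed to another registered VM is relayed in memory rather than sent out through a physical NIC and back in again. The sketch below is a hypothetical toy model in the spirit of that idea, not Xsigo's actual I/O Director implementation; all names are invented for illustration.

```python
class VirtualCoreSwitch:
    """Toy model of an in-host VM core switch: frames between two
    locally attached VMs are relayed in memory; anything else is
    handed off to the physical network uplink."""

    def __init__(self):
        self.ports = {}   # vm_name -> inbox (list of delivered frames)

    def attach(self, vm_name):
        self.ports[vm_name] = []

    def send(self, src, dst, payload):
        if dst in self.ports:
            self.ports[dst].append((src, payload))   # local relay, no NIC hop
            return "local"
        return "uplink"   # unknown destination: traverse the real network

switch = VirtualCoreSwitch()
switch.attach("web01")
switch.attach("db01")
print(switch.send("web01", "db01", b"query"))     # local
print(switch.send("web01", "ext-host", b"ping"))  # uplink
```

The payoff is the one the article describes: inter-VM traffic on the same host avoids NICs, switches and aggregators entirely, while traffic to outside hosts still flows over the conventional path.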
It's interesting to note that VMware is taking the I/O situation to heart in the new 5.0 platform. This latest iteration includes the Storage DRS (Distributed Resource Scheduler) module, which acts as a traffic manager between the VM and storage. SDRS automatically pushes data to the most appropriate storage resource while at the same time implementing space and I/O load balancing and LUN maintenance to ensure optimal I/O performance. Licensing issues aside, SDRS is slated to relieve a lot of the pressure that has built up around VM administration.
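At its core, this kind of placement logic filters datastores by available space and then balances on I/O load. The sketch below is a greatly simplified, hypothetical stand-in for what SDRS does (using observed latency as the I/O-load signal); the function and field names are invented, and VMware's real algorithm weighs many more factors.

```python
def place_vmdk(size_gb, datastores):
    """Pick a datastore for a new virtual disk, SDRS-style (simplified):
    keep only datastores with enough free space, then choose the one
    with the lowest observed I/O latency."""
    candidates = [d for d in datastores if d["free_gb"] >= size_gb]
    if not candidates:
        raise RuntimeError("no datastore has enough free space")
    return min(candidates, key=lambda d: d["latency_ms"])

datastores = [
    {"name": "ds1", "free_gb": 500, "latency_ms": 12.0},
    {"name": "ds2", "free_gb": 80,  "latency_ms": 4.0},
    {"name": "ds3", "free_gb": 900, "latency_ms": 7.5},
]
# ds2 is fastest but lacks space for a 100 GB disk, so ds3 wins.
print(place_vmdk(100, datastores)["name"])   # ds3
```

The same two signals, space and I/O load, drive SDRS's ongoing balancing: when a datastore's latency or utilization drifts too high, disks are migrated toward less loaded resources.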
In that sense, advancements in I/O performance aren't simply adding value to a successful technology, but preparing it for the next step in enterprise data communications.