For all we've heard about the dreaded "virtual stall," it's important to note that a stall is not a permanent condition. Stalled vehicles can be quickly restarted. Stalled initiatives can suddenly gain new life.
And so it seems with virtualization. Despite fears that enterprises would stall once 30 to 40 percent of workloads were virtualized, indications are that VM densities will continue to climb, albeit not as rapidly as in virtualization's early days.
Gartner's Jennifer Wu, for one, believes virtualization will carry 84 percent of enterprise workloads by 2015 as server modularity and VM density mount. That, in turn, should drive sales of higher-end hardware, meaning four sockets or better, as organizations seek greater efficiency in both traditional and cloud infrastructure. So even as the average selling price (ASP) of servers rises, the enterprise comes out ahead: the cost of running a single VM drops to roughly a quarter of that of a comparably powered physical server.
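To see how a pricier server can still lower per-workload cost, consider a quick back-of-the-envelope calculation. The dollar figures and VM count below are invented for illustration, not sourced numbers; only the "roughly a quarter" ratio comes from the article's claim.

```python
# Hypothetical per-VM economics. All figures are assumptions for illustration.
big_host_asp = 20_000     # assumed ASP of a four-socket virtualization host
vms_per_host = 20         # assumed VM density on that host
small_server_asp = 4_000  # assumed ASP of a comparably powered standalone server

cost_per_vm = big_host_asp / vms_per_host   # capital cost per workload
ratio = cost_per_vm / small_server_asp      # fraction of a dedicated server

print(f"Cost per VM: ${cost_per_vm:,.0f}")
print(f"Fraction of a dedicated server: {ratio:.0%}")
```

With these assumed numbers, each VM carries $1,000 of server cost, about 25 percent of a dedicated box, even though the host itself costs five times as much.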
A number of enabling technologies are making it easier for server managers to pack more VMs onto existing hardware. I/O performance remains a significant problem as multiple instances compete for fixed I/O resources. Companies like Proximal Data, however, are delivering workarounds such as AutoCache, a virtual caching platform that the company says can triple VM density. The ESX-compatible system uses an analytics engine to distinguish hot from cold I/O traffic, placing hot data on PCIe flash memory for priority delivery to its designated VM. It requires no agents in guest operating systems and is said to have no impact on high-availability processes like vMotion.
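AutoCache's internals aren't public here, but the general technique it describes, classifying blocks by access frequency and promoting hot ones to a fast tier, can be sketched in a few lines. The class name, thresholds, and eviction policy below are all invented for illustration and are not Proximal Data's actual algorithm.

```python
from collections import Counter

class HotColdCache:
    """Toy sketch of frequency-based I/O tiering: frequently read ("hot")
    blocks are promoted to a small fast tier (standing in for PCIe flash);
    everything else is served from the slow tier (disk)."""

    def __init__(self, flash_capacity=2, hot_threshold=3):
        self.flash_capacity = flash_capacity  # blocks the fast tier can hold
        self.hot_threshold = hot_threshold    # reads before a block is "hot"
        self.access_counts = Counter()        # per-block read frequency
        self.flash = set()                    # blocks currently on flash

    def read(self, block):
        self.access_counts[block] += 1
        if block in self.flash:
            return "flash"                    # fast path
        if self.access_counts[block] >= self.hot_threshold:
            self._promote(block)              # hot: cache it for next time
        return "disk"                         # slow path

    def _promote(self, block):
        if len(self.flash) >= self.flash_capacity:
            # evict the least-frequently-read resident, but only if the
            # candidate is actually hotter than the incumbent
            coldest = min(self.flash, key=self.access_counts.__getitem__)
            if self.access_counts[coldest] >= self.access_counts[block]:
                return
            self.flash.discard(coldest)
        self.flash.add(block)

cache = HotColdCache()
workload = ["a", "b", "a", "c", "a", "a", "b", "a"]
tiers = [cache.read(b) for b in workload]
# "a" is promoted on its third read; subsequent reads of "a" hit flash.
```

A production cache would weight recency as well as frequency and manage real device queues, but the core idea, steering the hottest data to flash so each VM's working set stays fast, is the same.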
Still, increased density will have repercussions beyond data throughput. It will affect your power infrastructure as well. As Eaton Corp.'s Herve Tardy notes, overall consumption should fall even as individual servers draw more power. That means you'll have to increase the density of enclosure-level power protection and distribution, and quite possibly shift to enclosure-based UPS systems. You'll probably also need 24/7 power quality metering, monitoring and management to keep pace with increasingly dynamic workloads.
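The seeming paradox of lower total consumption but higher per-enclosure draw is easy to illustrate with a consolidation example. The server counts and wattages below are assumptions chosen to make the arithmetic clear, not measured figures.

```python
# Hypothetical consolidation math. All counts and wattages are assumptions.
legacy_servers, legacy_watts = 20, 350  # assumed pre-virtualization fleet
dense_hosts, dense_watts = 4, 900       # assumed four-socket hosts after

before_total = legacy_servers * legacy_watts  # total draw, spread over racks
after_total = dense_hosts * dense_watts       # total draw after consolidation

print(f"Before: {before_total} W across {legacy_servers} servers")
print(f"After:  {after_total} W, possibly inside a single enclosure")
```

With these numbers the facility's total draw drops from 7,000 W to 3,600 W, yet if the four hosts share one enclosure, that enclosure's power protection and distribution must now handle far more load than any single rack did before.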
That same level of visibility will also be needed on the data network, which has been given the twin tasks of handling increased virtualization and new volumes of mobile traffic. Fortunately, it's the rare network management stack that doesn't take into account the needs of virtualized workloads, so the only task for IT is to figure out which one best suits your level of virtualization (present and future) and how easily it scales into the cloud.
Virtualization may already be yesterday's technology, but that doesn't mean all the bugs have been worked out. As once-siloed data infrastructures grow more dependent on one another, changes in one area increasingly ripple into others. That means upgrades, even planned ones like increased VM densities, must be mapped out accordingly, so that investments in one set of technologies don't end up hurting productivity because systems elsewhere are suddenly overworked.