Virtualization long ago passed its introductory, novelty phase and is now the basic platform for nearly all enterprise IT infrastructure. But that doesn’t mean it has become a self-running technology, quietly humming away while higher-order software does the real work.
Indeed, there is a growing recognition that virtualization is not necessarily the only answer to IT performance and operational issues, and that sometimes it can be overdone – despite the ongoing clamor for a fully virtualized data center.
For one thing, says InfoWorld’s Paul Venezia, virtual environments are still susceptible to old-fashioned hardware failure, and if too many virtual servers wind up on a single physical device, a single failure becomes that much more serious. Small businesses in particular should be wary of consolidating hardware too far, since a single point of failure could take out both primary and backup resources. Your best bet is to maintain a healthy number of physical servers while keeping hypervisor densities low enough that loads can be shifted to the surviving boxes in case of failure.
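That density argument can be made concrete with a simple capacity check: before consolidating further, verify that the remaining hosts could absorb the load of any single failed host. A minimal sketch of such an "N+1" check, with all figures in abstract capacity units (the function name and numbers are illustrative, not drawn from any hypervisor API):

```python
# Illustrative N+1 failover check: can the remaining hosts absorb
# the VMs from any single failed host? All figures are abstract
# capacity units (e.g., GB of RAM or vCPU-equivalents).

def survives_single_failure(host_capacity, host_loads):
    """Return True if, for every host, its load fits into the spare
    capacity left on the other hosts after it fails."""
    for failed, load in enumerate(host_loads):
        spare = sum(host_capacity - l
                    for i, l in enumerate(host_loads) if i != failed)
        if load > spare:
            return False
    return True

# Three hosts of 100 units each, loaded at 60/60/60: any one host's
# 60 units fit into the 80 units spare on the other two.
print(survives_single_failure(100, [60, 60, 60]))   # True

# Loaded at 90/90/90: a failed host's 90 units exceed the 20 spare.
print(survives_single_failure(100, [90, 90, 90]))   # False
```

The same logic scales to per-resource checks (CPU, memory, I/O); the point is that consolidation is only safe while the check still passes with a host removed.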
And while most outages appear to be sudden and catastrophic, the fact is that there is usually ample warning that things are amiss in the virtual world. As tech writer Edward Morrison points out, problems with real-time monitoring and updates, constant OS reconfigurations, and the virtual platform's deteriorating ability to implement automated fixes can all indicate that the system is in need of an overhaul. And these issues tend to accumulate as the enterprise expands its virtual presence into the cloud without the proper protocols and development strategies.
It is also well known that too much virtualization in the server farm has a negative effect on storage and network infrastructure. According to Datacore, nearly half of all enterprises are eager to put mission-critical apps on virtual platforms but are holding off for now due to the increased storage costs. Even with lower flash and SSD costs, the impact on surrounding infrastructure is too great for many organizations, preventing virtualization from supporting key functions in the enterprise. The advent of storage virtualization and software-defined networking should help bring all data center elements into sync, but actual field deployments of these two technologies are only just beginning.
But even with advanced storage and networking architectures in place, establishing the proper hypervisor densities in virtual environments remains a challenge, says tech consultant Ken Hess. The magic formula depends on numerous variables: workload types, I/O requirements, memory and CPU usage, and the like. And as data requirements become more diverse, maintaining a consistent operating environment for any length of time will only become more difficult. Even advanced automation systems can only accommodate expected changes in the data load, meaning that hands-on control of the virtual stack could become more common, not less, as time goes by. One rule of thumb: spread workloads over as many resources as possible to lessen the impact on any single device or module.
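Those variables can be folded into a rough sizing calculation: the most constrained resource (CPU, memory, or I/O) caps the density, and a headroom factor leaves slack for unexpected load. A back-of-the-envelope sketch, with entirely illustrative per-host and per-VM figures (not vendor guidance):

```python
# Rough hypervisor-density estimate: the tightest resource
# (CPU, memory, or I/O) determines how many VMs fit on a host.
# All numbers below are illustrative assumptions.

def max_vms_per_host(host, per_vm, headroom=0.25):
    """Cap density by the most constrained resource, reserving a
    fraction of each resource as headroom for load spikes."""
    usable = {k: v * (1 - headroom) for k, v in host.items()}
    return min(int(usable[k] // per_vm[k]) for k in per_vm)

host   = {"vcpus": 64, "ram_gb": 512, "iops": 40_000}
per_vm = {"vcpus": 4,  "ram_gb": 16,  "iops": 4_000}

# With 25% headroom: CPU allows 12 VMs, RAM allows 24, but I/O
# allows only 7 -- so I/O is the binding constraint here.
print(max_vms_per_host(host, per_vm))  # 7
```

The value of writing it out this way is that it makes the binding constraint explicit; in practice that constraint shifts as workloads change, which is exactly why a fixed density target is hard to maintain.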
Of course, there is no way to put the virtualization genie back in the bottle, nor would anyone want to. Virtualization has done more than any other technology to date to allow the enterprise to do more with less, and it came along just as IT energy consumption and environmental issues entered the public discourse.
Too much of anything is never good. Virtual footprints will likely continue their expansion into enterprise infrastructure, but IT administrators should be constantly on the alert for potential problems under the surface.
Even the most advanced technologies ever devised are useful only up to the moment they stop working.