Despite what you may have heard, virtualization is not yesterday’s news in the data center.
At best, the argument can be made that server virtualization is rapidly approaching the saturation point at which all workloads that require virtual infrastructure will get it. But elsewhere in the data center – storage, the network, even the application layer – virtualization has only just begun to make its mark. As a wise man once said, “You ain’t seen nothin’ yet.”
Clearly, the enterprise is not quite ready to put virtualization out to pasture. According to Opsview, more than 64 percent of organizations name virtualization as the primary focus of their IT investment for the coming year, ahead of even the private cloud. There also appears to be a fair amount of infrastructure still to be virtualized: only about a quarter of respondents reported being more than 75 percent complete at this point.
But while virtualization may be standard fare these days, it is fair to say that there is still ample room for improvement in how the technology is used. Virtual machines have a tendency to take on lives of their own once they are let loose in the data center, and in many cases they must be retained for compliance and archival purposes long after the tasks they were spun up to support are complete. This not only ties up valuable resources, but can also expose the enterprise to increased security risk, especially as forgotten machines fall off IT management’s radar and get shunted to the cloud.
There is also the largely untapped market of high-performance computing (HPC) to consider. To date, the highly specialized needs of HPC workloads have precluded the use of virtualization in most cases. But as tech specialist and author John Rhoton points out, new levels of hardware scalability and resource management could bring HPC the same kinds of efficiency and automation that the mainstream enterprise has come to enjoy. Memory optimization techniques in particular, such as those employed in Non-Uniform Memory Access (NUMA) multiprocessor systems and Hyper-V Dynamic Memory, show significant promise in accommodating large-scale workloads.
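To put the NUMA point in concrete terms: on a garden-variety Linux/KVM host, the kind of node-local memory placement that large workloads depend on can be spelled out right in the guest definition. The snippet below is a minimal sketch using the open-source libvirt Python bindings rather than Hyper-V; the guest name, memory size, CPU range and NUMA node are purely illustrative assumptions.

```python
# Minimal sketch: pin a KVM guest's memory and vCPUs to a single host NUMA node
# so a large, HPC-style workload avoids costly cross-node memory access.
# Assumes a Linux host with libvirt/QEMU and the libvirt Python bindings;
# the guest name, sizes, CPU range and node ID below are illustrative only.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>hpc-guest-01</name>
  <memory unit='GiB'>64</memory>
  <vcpu placement='static' cpuset='0-15'>16</vcpu>
  <numatune>
    <!-- Allocate all guest memory strictly from host NUMA node 0 -->
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/hpc-guest-01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open('qemu:///system')   # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the NUMA-pinned guest
dom.create()                            # boot it
print(f"Started {dom.name()} with memory bound to NUMA node 0")
conn.close()
```

The detail worth noting is that the pinning lives entirely in the guest definition; the workload itself needs no changes to benefit from node-local memory.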
Then there is the fact that virtualization is moving beyond the hardware and into the operating system and even the application layer. CloudVolumes, for example, is out with a new Instant Workload Management (IWM) suite that can share data volumes across thousands of VMs, significantly enhancing data flexibility without requiring changes to either physical or virtual infrastructure. Key features include instant deployment of multi-tier workloads into an existing VM, instant relocation and recovery across multiple VMs, and hypervisor-independent workload migration. The company is pitching it as a solution for everything from server and cloud provisioning to virtual desktop infrastructure and configuration management.
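CloudVolumes has not published the details of its interfaces here, so the snippet below is not the IWM API. It is only a rough sketch of the general pattern being described, namely attaching a shared, read-only data volume to VMs that are already running, expressed with the open-source libvirt bindings; the VM names, volume path and the assumption of a SCSI controller in each guest are all hypothetical.

```python
# Rough sketch of the general idea only -- not CloudVolumes' actual API.
# Hot-attach a shared, read-only data volume to VMs that are already running,
# leaving their base images untouched. Assumes a libvirt/KVM host, the libvirt
# Python bindings, and a SCSI controller in each guest; all names are hypothetical.
import libvirt

SHARED_VOLUME_XML = """
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/volumes/app-stack-01.qcow2'/>
  <target dev='sda' bus='scsi'/>
  <readonly/>
  <shareable/>
</disk>
"""

conn = libvirt.open('qemu:///system')
for vm_name in ('web-01', 'web-02', 'web-03'):   # guests that should see the volume
    dom = conn.lookupByName(vm_name)
    # Attach to the running guest and persist the change in its stored config.
    dom.attachDeviceFlags(
        SHARED_VOLUME_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG,
    )
    print(f"Attached shared application volume to {vm_name}")
conn.close()
```

The point is simply that the data volume rides alongside the guests’ base images rather than being baked into them, which is what makes this style of workload management attractive for provisioning and desktop scenarios alike.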
For those of us who cannot remember the data center in the pre-virtual days, this may all seem like a logical extension of what is now run-of-the-mill IT technology. But even as server farms were becoming virtualized and consolidated more than a decade ago, there were many who doubted whether the technology could be extended to other areas of the data infrastructure.
Nowadays, it is clear that virtualization is applicable up and down the stack, and we are only just beginning to understand how all these separate pieces will fit into an end-to-end virtual ecosystem.