Most enterprises have embraced virtualization primarily as a means to consolidate hardware. Cloud computing, converged networking, flexible data environments — all of these goals are worthy, but the immediate concern is to increase both performance and capacity without busting the budget.
That’s why it’s disheartening to learn that many enterprises feel they are on the cutting edge with a 15:1 consolidation ratio even while the top virtual environments can support 30:1 or even 50:1. Clearly, there’s a disconnect here. So our Arthur Cole set out to identify some of the chief barriers to increased consolidation, in the hope that naming the problems will spur a stronger drive to overcome them and produce a leaner, meaner data center in the end.
Click through to learn more about seven issues holding back full implementation.
Virtual environments can easily accommodate higher consolidation ratios, but the rest of the data center might not. Whether you have 10, 20 or 100 VMs on one physical server, all that data still has to go through the same I/O channel.
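The arithmetic behind that bottleneck is simple to sketch. The snippet below divides a shared link's bandwidth evenly across the VM counts mentioned above; the 10 Gb/s channel capacity is an assumption for illustration, not a figure from the article, and real workloads won't share the channel this evenly.

```python
# Back-of-the-envelope sketch: how a shared I/O channel's bandwidth
# shrinks per VM as consolidation ratios climb.
LINK_GBPS = 10.0  # assumed channel capacity, for illustration only

for vms_per_host in (10, 20, 100):
    per_vm = LINK_GBPS / vms_per_host
    print(f"{vms_per_host} VMs per host -> {per_vm:.2f} Gb/s per VM")
```

At 100 VMs on one host, each VM's even share of an assumed 10 Gb/s channel is only 0.1 Gb/s, which is why consolidation ratios stall well before the hypervisor's own limits.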
Despite what you hear about “storage virtualization,” there is no way to repurpose storage capacity that is already holding data. At best, you can reduce the number of duplicate files scattered throughout the data center and improve nearline performance by updating backup and archiving processes.
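Reducing duplicate files is the one concrete reclamation step named above, and it can be sketched in a few lines: group files by a hash of their contents and report any hash shared by more than one file. This is a minimal illustration, not a production deduplication tool; the directory it scans is whatever you point it at.

```python
# Minimal sketch of file-level deduplication: group files by SHA-256
# of their contents, then report hashes with more than one file.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Group files under `root` by the SHA-256 digest of their contents."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    # Keep only digests that map to more than one file (true duplicates).
    return {h: paths for h, paths in groups.items() if len(paths) > 1}
```

Real backup and archiving products do this at the block level rather than the file level, but the principle is the same: identical content stored twice is capacity you can get back.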
If the goal is to increase VMs and reduce the number of physical servers, how can we complain about too many VMs? The real problem is the management, or lack thereof, of those machines.
If done right, all of these investments ultimately will produce a more streamlined IT infrastructure that is both cheaper and easier to operate and that produces greater performance than you have now. In the meantime, it requires a financial commitment, and that hasn’t been easy to justify in the business environment of the past two years.
The fact is that the rapid pace of virtualization development, from the first release of VMware to Xen, Hyper-V, vSphere, desktop virtualization and now the cloud — public, private and hybrid — has sown a lot of confusion in the IT community, with no one really quite sure where this is all heading.
It’s all well and good for a vendor to claim 100-VM capability on its new release, but it is not responsible for your data. In all likelihood, the newest server designs are being engineered around low-power, high-VM usage.
Concerns about security, reliability and overall performance have largely relegated virtualization to non-critical systems. Further consolidation is available among front-office and customer-facing systems, but it will take quite a bit of energy to overcome the resistance to change.