Seven Barriers to Increased Server Consolidation

Arthur Cole

Check out the key issues that are keeping enterprises from taking full advantage of server consolidation's potential.

Most enterprises have embraced virtualization primarily as a means to consolidate hardware. Cloud computing, converged networking, flexible data environments -- all of these goals are worthy, but the immediate concern is to increase both performance and capacity without busting the budget.


That's why it's disheartening to learn that many enterprises feel they are on the cutting edge with a 15:1 consolidation ratio even while the top virtual environments can support 30:1 or even 50:1. Clearly, there's a disconnect here. So we've set out to identify some of the chief barriers to increased consolidation, in the hope that identifying the problems will spur a stronger drive to overcome them -- and produce a leaner, meaner data center in the end.
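To see why that gap matters, here is a quick back-of-envelope sketch using purely hypothetical numbers (a fleet of 300 virtualized workloads); the point is simply how fast the physical footprint shrinks as the ratio climbs:

```python
import math

def hosts_needed(workloads: int, ratio: int) -> int:
    """Physical hosts required to run a given number of VMs at a consolidation ratio."""
    return math.ceil(workloads / ratio)

# Hypothetical fleet of 300 virtualized workloads
for ratio in (15, 30, 50):
    print(f"{ratio}:1 consolidation -> {hosts_needed(300, ratio)} physical servers")
# 15:1 -> 20 servers, 30:1 -> 10 servers, 50:1 -> 6 servers
```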

 

I/O Constraints -- Virtual environments can easily accommodate higher consolidation ratios, but the rest of the data center might not. Whether you have 10, 20 or 100 VMs on one physical server, all that data still has to go through the same I/O channel. Techniques like virtual I/O are designed to overcome this problem, but can only be properly implemented within the greater context of overall network consolidation and convergence. In the end, most enterprises tend to limit consolidation while these greater structural issues are hammered out.
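A minimal sketch of the underlying arithmetic, assuming a hypothetical per-VM bandwidth demand and a single shared 10 GbE uplink, shows how the shared channel becomes the ceiling long before the hypervisor does:

```python
def io_oversubscription(vms: int, avg_demand_gbps: float, link_gbps: float) -> float:
    """Ratio of aggregate VM I/O demand to the capacity of the shared channel."""
    return (vms * avg_demand_gbps) / link_gbps

# Hypothetical numbers: VMs averaging 1 Gbps of I/O on a single 10 GbE uplink
print(io_oversubscription(20, 1.0, 10.0))   # 2.0 -> demand is double the channel capacity
print(io_oversubscription(50, 1.0, 10.0))   # 5.0 -> the gap widens as density rises
```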

 

Storage Capacity -- The same dynamic is hitting the storage farm as well. Despite what you hear about "storage virtualization," there is no way to repurpose storage capacity that is already holding data. At best, you can reduce the number of duplicate files scattered throughout the data center and improve nearline performance by updating backup and archiving processes. But it stands to reason that more VMs will generate more data, which will require more storage and better management.
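A rough capacity-planning sketch makes the trade-off concrete; the per-VM disk size and deduplication savings below are assumptions, not benchmarks:

```python
def projected_storage_tb(vms: int, avg_vm_disk_gb: float, dedup_savings: float) -> float:
    """Estimated storage footprint in TB after deduplication reclaims a fraction of duplicates."""
    raw_gb = vms * avg_vm_disk_gb
    return raw_gb * (1.0 - dedup_savings) / 1024.0

# Hypothetical: 100 VMs at 60 GB each, with dedup reclaiming 30% of duplicate data
print(round(projected_storage_tb(100, 60.0, 0.30), 2))  # ~4.1 TB still needed
```

Deduplication softens the blow, but the footprint still scales with VM count, which is why higher densities drag the storage budget along with them.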

 


Management/Sprawl -- This is a bit of a paradox. If the goal is to increase VMs and reduce the number of physical servers, how can we complain about too many VMs? But it's the management, or lack thereof, of those machines that creates the problem. VMs that are deployed, used for a while and then abandoned will quietly hum away indefinitely, consuming vital resources to no one's benefit, unless there is a robust management system to identify and decommission them. The sooner you tailor your management stack to the free-flowing virtual world, the more effective your consolidation efforts will be.
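As an illustration of the kind of policy such a management stack would enforce, here is a hedged sketch; the record fields, thresholds and function names are hypothetical and not tied to any particular vendor's API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class VMRecord:
    name: str
    owner: str
    last_activity: datetime   # last login or application activity seen on the VM
    avg_cpu_percent: float    # trailing 30-day average utilization

def sprawl_candidates(inventory: list[VMRecord],
                      idle_days: int = 60,
                      cpu_floor: float = 2.0) -> list[VMRecord]:
    """Flag VMs that look abandoned: no activity for a while and near-zero CPU use."""
    cutoff = datetime.now() - timedelta(days=idle_days)
    return [vm for vm in inventory
            if vm.last_activity < cutoff and vm.avg_cpu_percent < cpu_floor]

# Flagged VMs would then be routed to their owners for confirmation before decommissioning.
```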

 

Economics -- These issues are not merely technological, but economic as well. Saving money by trimming down server hardware is all well and good, but you have to balance that with the increased spending on the network infrastructure, the storage farm and the management stack. If done right, all of these investments ultimately will produce a more streamlined IT infrastructure that is both cheaper and easier to operate and that produces greater performance than you have now. In the meantime, it requires a financial commitment, and that hasn't been easy to justify in the business environment of the past two years.
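A purely illustrative calculation, with made-up figures, shows the balancing act: server savings accumulate over the planning horizon, but the enabling investments land up front.

```python
def net_savings(servers_retired: int, cost_per_server_per_year: float,
                network_upgrade: float, storage_upgrade: float,
                mgmt_tools: float, years: int = 3) -> float:
    """Server savings over the planning horizon minus the one-time enabling investments."""
    savings = servers_retired * cost_per_server_per_year * years
    investment = network_upgrade + storage_upgrade + mgmt_tools
    return savings - investment

# Hypothetical: retiring 40 servers at $2,500/yr each, against $150k in enabling upgrades
print(net_savings(40, 2500.0, 60000.0, 70000.0, 20000.0))  # 150000.0 over three years
```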

 

Technology Development/The Cloud -- Another strange one, I admit, but it is a real problem nonetheless. The fact is that the rapid pace of virtualization development, from the first release of VMware to Xen, Hyper-V, vSphere, desktop virtualization and now the cloud -- public, private and hybrid -- has sown a lot of confusion in the IT community, with no one really quite sure where this is all heading. While each of these developments is interrelated, it's certainly understandable if some executives are hedging their bets on greater consolidation now, considering that cloud-based infrastructures might be available at lower cost soon.

 

Fear -- There is also the very real concern that increasing the VM density on existing servers might harm availability and jeopardize service-level agreements. It's all well and good for a vendor to claim 100-VM capability on its new release, but it is not responsible for your data. In all likelihood, the newest server designs are being engineered around low-power, high-VM usage. They might do a lot to increase densities down the road, but that will likely happen at the pace of normal refresh cycles.

 

Institutional Resistance -- Finally, one of the immutable laws of physics is that it takes more energy to start a rock rolling down a hill than to keep it going. The same is true with technology. Concerns about security, reliability and overall performance have largely relegated virtualization to non-critical systems. Further consolidation is available among front-office and customer-facing systems, but it will take quite a bit of energy to overcome the resistance to change.

 

The one element that trumps all of these barriers, of course, is cost. Virtualized environments are less costly to build and maintain than existing hardware/software infrastructures, and that appeal to the bottom line will ultimately carry the day. All it takes is for one organization to start showing increased profits while delivering lower costs to customers, and the trend will catch on like wildfire.


