Five Imperatives for Extreme Data Protection in Virtualized Environments

Arthur Cole

Data center architectures are becoming more complex these days, not just because the systems and technologies are upping the integration ante, but because the IT demands of the enterprise are growing by leaps and bounds.

 

In the next two years, how much storage will you need? How much processing power? What level of integration and interoperability? Sure, it's going to be more, but how much more? That depends on business conditions, regulatory requirements and the budget.

 

Guess too high and you've overspent; guess too low and you're left scrambling for an online vendor who can deliver in a hurry.

 

And at no time is this challenge greater than when you're rolling out an entirely new architecture, as UPS found out in its quest to establish a grid computing network. As NewsFactor tells it, building the grid was the easy part; there just wasn't much to go on for estimating workloads and planning system capacity. Since everything runs faster on a grid, simply adding servers didn't increase capacity in the usual way. Ultimately, the group hopes for 100 percent utilization, but since workload management tools of that caliber don't yet exist, maxing out the grid's potential will be a matter of trial and error.

 

Of course, organizations on the cutting edge of architectural design should expect a certain amount of ambiguity. What's unfortunate is that many run-of-the-mill enterprises continue to overprovision network resources, particularly storage. In fact, average storage utilization on non-mainframe systems hovers at only about 40 percent.
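To put that 40 percent figure in context, here's a minimal Python sketch of how an administrator might tally allocated versus consumed space across a few volumes to arrive at a fleet-wide utilization number. The volume names and sizes are made up for illustration, not drawn from any survey:

```python
# Hypothetical capacity report: allocated vs. actually used space per volume (GB).
volumes = {
    "vol-erp":   {"allocated": 2000, "used": 900},
    "vol-mail":  {"allocated": 1500, "used": 450},
    "vol-files": {"allocated": 3000, "used": 1250},
}

total_allocated = sum(v["allocated"] for v in volumes.values())
total_used = sum(v["used"] for v in volumes.values())

utilization = total_used / total_allocated * 100
print(f"Fleet-wide storage utilization: {utilization:.0f}%")  # ~40% in this example
```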


 

What's needed, according to an article in Computer Technology Review, is a Storage Capacity Planning Capacity Model (SCPCM). Yes, the name is a bit clunky, but the article at least offers a handy guide to the components of an effective model, including estimation planning, resource planning, and integration/application planning.
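The article itself doesn't publish formulas, but the estimation-planning piece generally comes down to compounding arithmetic: take current usage, apply an assumed growth rate, and add headroom. A hedged sketch in Python, with the growth rate and buffer chosen purely for illustration:

```python
def estimate_capacity(current_used_tb: float,
                      annual_growth: float,
                      years: int,
                      headroom: float = 0.2) -> float:
    """Project raw capacity needed after `years` of compound growth,
    plus a headroom buffer to avoid running the arrays hot."""
    projected = current_used_tb * (1 + annual_growth) ** years
    return projected * (1 + headroom)

# Example: 50 TB used today, assuming 30% annual growth, planning two years out.
print(f"{estimate_capacity(50, 0.30, 2):.1f} TB")  # ~101.4 TB
```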

 

And then there are the myriad planning software offerings to choose from, provided you have some means of separating the good from the bad. ComputerWeekly.com says a decent capacity planning package should provide growth forecasting and resource utilization tools, as well as predictive modeling. It should also account for legacy applications and offer interoperability across mixed storage systems. The site also provides a handy list of the top capacity planning vendors and details on their products.
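None of those packages are reproduced here, but growth forecasting and predictive modeling typically boil down to fitting a trend to historical usage and projecting it forward. A rough sketch of that idea, using synthetic monthly numbers and a simple least-squares fit rather than any vendor's actual algorithm:

```python
import numpy as np

# Synthetic monthly usage history in TB (illustrative, not real data).
months = np.arange(12)
used_tb = np.array([40, 42, 45, 47, 50, 53, 55, 58, 62, 65, 69, 73])

# Fit a linear trend and project 12 months ahead.
slope, intercept = np.polyfit(months, used_tb, 1)
future = np.arange(12, 24)
forecast = slope * future + intercept

capacity_tb = 100
exhausted = future[forecast >= capacity_tb]
if exhausted.size:
    print(f"Projected to hit {capacity_tb} TB around month {exhausted[0]}")
else:
    print("Capacity looks sufficient for the next 12 months")
```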

 

Resource planning is also likely to go a long way toward meeting future environmental requirements, now that the data center has been identified as a major energy consumer. A Toronto-based company called PlateSpin Ltd. has picked up on that angle with its PowerRecon 3.0 planning and analysis system. The company says it can green you up, both environmentally and financially, by giving you a clearer picture of your requirements and capabilities as the data center goes through virtualization, consolidation and other changes.
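PlateSpin hasn't published its analytics in this space, but the back-of-the-envelope math behind a consolidation case is easy to sketch. All of the wattage, PUE and electricity figures below are illustrative assumptions, not PowerRecon output:

```python
# Back-of-the-envelope consolidation savings (all figures illustrative).
servers_before = 40          # lightly loaded physical servers today
servers_after = 8            # virtualization hosts after consolidation
avg_watts = 350              # average draw per server
pue = 1.8                    # power usage effectiveness of the facility
rate_per_kwh = 0.10          # electricity cost in dollars per kWh

def annual_cost(server_count: int) -> float:
    """Yearly electricity cost for a given server count, including facility overhead."""
    kwh = server_count * avg_watts / 1000 * 24 * 365 * pue
    return kwh * rate_per_kwh

savings = annual_cost(servers_before) - annual_cost(servers_after)
print(f"Estimated annual energy savings: ${savings:,.0f}")
```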

 

It's all part of the adage, "You can't know where you're going unless you know where you've been."


