In just about every instance, issues of data center power and cooling are discussed in terms of costs and the environment. Whether the topic is free cooling or wind power, the selling point is always to lower operating expenses and reduce your carbon footprint.
These are not unworthy goals, mind you. But there is a larger force at work in the drive to remake data center power and cooling infrastructure: Inefficient use of energy is actually hampering data and application performance.
IDC recently released a survey claiming that nearly 85 percent of enterprises have had to delay or cancel application rollouts, scale back user support, or dial back business objectives due to power, cooling or space constraints in the data center. In most cases, the culprit was outdated infrastructure that had grown fragmented over the years, leaving managers with no reliable way to track or monitor vital information about system capabilities and availability. The study, commissioned by CA Technologies, points to the need for improved Data Center Infrastructure Management (DCIM) as a means of better aligning data and facilities resources.
There is certainly no shortage of DCIM platforms in the channel these days, but the challenge is implementing them in real-world environments. All data centers are unique, so a fair amount of customization is required to make even rudimentary progress toward power efficiency. In the UK, data center provider Colt has developed its own power/cooling management architecture, dubbed Ftec, which it says helps streamline operations and tap unused capacity, all without extensive remodeling of existing infrastructure. The system consists of three modules that map and manage power, cooling and space requirements, allowing resources to be scaled up and down to meet load.
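To make the idea concrete, here is a minimal sketch of the kind of logic such a modular capacity-management layer might apply. Ftec's internals are not public, so every name, figure and rule below is a hypothetical illustration: each module tracks one resource (power, cooling, space) and reports headroom, so a proposed deployment can be checked against all three constraints before it lands.

```python
# Hypothetical sketch of modular capacity management (illustrative only;
# not Ftec's actual design). Each module covers one resource and reports
# how much headroom remains.

from dataclasses import dataclass

@dataclass
class CapacityModule:
    name: str          # "power", "cooling" or "space"
    capacity: float    # provisioned capacity (kW, kW of heat, racks)
    load: float        # current demand, same units

    def utilization(self) -> float:
        return self.load / self.capacity

    def headroom(self) -> float:
        return self.capacity - self.load

def can_accept(modules, extra_power_kw, extra_heat_kw, extra_racks):
    """A new deployment fits only if EVERY module has enough headroom."""
    demand = {"power": extra_power_kw, "cooling": extra_heat_kw,
              "space": extra_racks}
    return all(m.headroom() >= demand[m.name] for m in modules)

modules = [
    CapacityModule("power",   capacity=500.0, load=410.0),
    CapacityModule("cooling", capacity=450.0, load=400.0),
    CapacityModule("space",   capacity=40,    load=31),
]

# Power and space have room, but cooling (50 kW headroom) is the constraint.
print(can_accept(modules, extra_power_kw=60, extra_heat_kw=60, extra_racks=4))  # False
```

The point of the sketch is the shape of the problem: the binding constraint is often not power but cooling or floor space, which is exactly why these systems track all three together.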
At the same time, companies like ActivePower are devising new modular power supply platforms designed to support increasingly dense data environments in relatively small form factors. The company’s PowerHouse system is available in versions up to 675 kW housed in a 40-foot ISO container, with support for the company’s new CleanSource High Density (CSHD) UPS platform. The company says it can hold capital costs to those of a conventional battery system while delivering a 30 percent reduction in operating expenses.
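A quick back-of-envelope calculation puts that density in perspective. Assuming the standard external footprint of a 40-foot ISO container (roughly 40 ft by 8 ft; usable interior floor space is somewhat smaller), 675 kW works out to over 2 kW per square foot of footprint:

```python
# Rough power density of a containerized 675 kW system.
# Assumes standard 40-ft ISO container external dimensions (40 ft x 8 ft);
# this is an estimate, not a vendor specification.

power_kw = 675
length_ft, width_ft = 40, 8
footprint_sqft = length_ft * width_ft     # 320 sq ft

density = power_kw / footprint_sqft       # ~2.1 kW per sq ft
print(f"{density:.2f} kW/sq ft")          # 2.11 kW/sq ft
```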
To truly understand power and cooling requirements, however, you’ll need to take a good, hard look at your infrastructure. That’s what Schneider Electric is aiming for with its Genome Project, an attempt to categorize virtually every component of the modern data center — from power and cooling systems right down to memory sticks and microprocessors — and to discern not just power levels and requirements, but exactly how energy is consumed and the patterns and relationships that exist between components. With this data in hand, the company says, it will be able to make solid recommendations for more efficient and effective data centers.
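The value of that kind of component-level catalog can be sketched in a few lines. The names and wattages below are made up for the example (Schneider's actual Genome data is proprietary), but they show the core idea: model each component's idle and peak draw separately, and total consumption at any utilization level falls out, rather than relying on nameplate ratings alone.

```python
# Illustrative component-level energy model. All component names and
# wattages are invented for this sketch; they are not Genome Project data.

def draw_watts(idle_w, peak_w, utilization):
    """Linear power model: idle draw plus a utilization-scaled dynamic part."""
    return idle_w + (peak_w - idle_w) * utilization

# A tiny component catalog: each entry records idle and peak draw.
catalog = {
    "cpu":  {"idle_w": 15.0, "peak_w": 95.0},
    "dimm": {"idle_w": 2.0,  "peak_w": 5.0},
    "fan":  {"idle_w": 3.0,  "peak_w": 18.0},
}

def server_draw(utilization, n_dimms=8, n_fans=4):
    """Aggregate draw of one hypothetical server at a given utilization."""
    total = draw_watts(**catalog["cpu"], utilization=utilization)
    total += n_dimms * draw_watts(**catalog["dimm"], utilization=utilization)
    total += n_fans * draw_watts(**catalog["fan"], utilization=utilization)
    return total

print(server_draw(0.0))   # idle:  43.0 W
print(server_draw(1.0))   # peak: 207.0 W
```

Even this toy model makes one pattern visible: a large share of the draw is consumed at idle, which is precisely the kind of relationship a component-level survey is meant to expose.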
Power and cooling is a perennial issue that will never go away entirely. No matter how much efficiency is built into the data environment, there is always more to be done.
But with the data center now in the crosshairs of the growing environmental movement, the industry can’t afford to be seen as taking the issue lightly. Many of the gains of the past decade came from virtualization and server consolidation, so enterprise executives will need to look elsewhere for new sources of efficiency — not as a means of finally solving the problem, but to show that continual progress is being made.