Minding the Data Center Energy Meter

Clemens Pfeiffer
Most CTOs don't pay the electric bill. This, of course, is the fundamental reason why so many data centers waste so much energy. Indeed, the typical data center today has a Power Usage Effectiveness (PUE) rating over 2.0, well above the 1.1-to-1.4 target established by the U.S. Environmental Protection Agency (EPA).

But the days of CTOs being able to ignore power consumption are coming to an end. Electricity now represents 25-40 percent of the operational expenditures in most data centers, and the U.S. Department of Energy predicts that within a few years the cost to power a server over its useful life will exceed the original capital expenditure. Worse yet is Gartner's estimate that, on average, up to 30 percent of available power is being stranded, causing organizations to outgrow their data centers prematurely.

Fortunately, it is now possible (and even easy) to reduce server power consumption by more than 50 percent in the typical data center. It starts with the ability to view and manage power use across facilities and IT equipment, capabilities delivered by emerging data center infrastructure management (DCIM) solutions. A select few also include Dynamic Power Optimization (DPO), which delivers advanced analytics and energy reduction, optionally through a Software-as-a-Service (SaaS) offering. The return on the relatively modest investment required begins accruing in less than a month.

Improving Your Data Center's Energy Efficiency

Here's how to get started with five demonstrated best practices for improving power management in any data center using a DPO solution:

Get Beyond PUE - PUE can provide useful information, but it can also produce counter-productive results. With a fixed cooling infrastructure, for example, refreshing IT equipment to lower power consumption can actually increase PUE, because the fixed cooling overhead is then divided by a smaller IT load. A more accurate measure is provided by McKinsey's Corporate Average Datacenter Efficiency (CADE) or Gartner's Power to Performance Effectiveness (PPE) rating systems, because both CADE and PPE also take into account the biggest source of wasted energy in data centers today: poor server utilization.
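To make the paradox concrete, here is a minimal sketch of the PUE arithmetic, using hypothetical power figures (PUE is total facility power divided by IT power):

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

# Hypothetical data center with a fixed 500 kW cooling/overhead load.
overhead_kw = 500.0

before = pue(overhead_kw + 1000.0, 1000.0)  # old servers draw 1,000 kW
after = pue(overhead_kw + 600.0, 600.0)     # refreshed servers draw 600 kW

# Total facility power fell from 1,500 kW to 1,100 kW, yet PUE got worse.
print(f"PUE before refresh: {before:.2f}")  # 1.50
print(f"PUE after refresh:  {after:.2f}")   # 1.83
```

The refresh saves 400 kW overall, yet the PUE rating deteriorates, which is why utilization-aware metrics such as CADE or PPE tell a truer story.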

It's Cool to be Warm - Increasing inlet temperatures for servers to 80°F (27°C), per the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommendations, is certain to reduce total power consumption in any data center. Most CTOs are reluctant to take this energy-saving step, however, for fear of hot spots. The ability to take constant and accurate measurements of server inlet temperatures minimizes this risk, and the use of DPO can adjust server capacity in real time to prevent hot spots from forming. Raising the temperature is also certain to improve PUE, CADE and PPE ratings.
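A minimal sketch of the inlet-temperature monitoring described above; the server IDs and readings are hypothetical, and the 27°C limit follows the ASHRAE recommendation cited here:

```python
ASHRAE_MAX_INLET_C = 27.0  # recommended upper inlet limit (80°F)

def hot_spots(inlet_temps_c, limit=ASHRAE_MAX_INLET_C):
    """Return the IDs of servers whose inlet temperature exceeds the limit."""
    return [server for server, temp in inlet_temps_c.items() if temp > limit]

# Hypothetical sensor readings, keyed by rack/slot position.
readings = {"rack1-u04": 25.5, "rack2-u10": 27.8, "rack3-u01": 26.9}
print(hot_spots(readings))  # ['rack2-u10']
```

In a real deployment the flagged servers would trigger the DPO capacity adjustment rather than just a report.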

Integrate Power, Capacity and Performance Management - Every data center experiences a peak demand, whether daily, weekly, monthly or annually. And every data center is configured with the server capacity needed to accommodate that peak demand with an acceptable level of performance. But the only thing all those servers are doing during all of the non-peak periods, when demand can be as much as 80 percent lower, is wasting power, and money. This is why power, capacity and performance must now be managed holistically. The new key performance indicator is transactions per kilowatt-hour, and this metric can only be maximized by improving server efficiency.
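The transactions-per-kilowatt-hour metric is simple to compute; the sketch below uses hypothetical figures to show how an always-on fleet scores during peak versus off-peak hours:

```python
def transactions_per_kwh(transactions, power_kw, hours):
    """The new KPI: useful work delivered per unit of energy consumed."""
    return transactions / (power_kw * hours)

# Hypothetical always-on fleet drawing 400 kW around the clock.
peak = transactions_per_kwh(900_000, power_kw=400.0, hours=1.0)
off_peak = transactions_per_kwh(180_000, power_kw=400.0, hours=1.0)  # demand 80% lower

print(f"peak:     {peak:,.0f} tx/kWh")      # 2,250
print(f"off-peak: {off_peak:,.0f} tx/kWh")  # 450
```

Because the power draw stays flat while demand drops 80 percent, the off-peak score collapses, which is exactly the waste holistic management targets.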

Manage by Applications - Not all applications are created equal, and this reality creates another opportunity to reduce power consumption. The total power can be minimized, while ensuring that the greatest business value is being delivered, by utilizing tiered data centers, application QoS grouping and multi-site configurations. The ultimate multi-site configuration, of course, is one that 'follows the moon' by dynamically shifting the load to data centers where power is always the least expensive-at night if you use outside air or during the day if you use solar power.
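A 'follow the moon' scheduler reduces, at its core, to routing load to whichever site currently has the cheapest power. A minimal sketch with hypothetical site names and prices:

```python
# Hypothetical current $/kWh prices at each site.
site_prices = {"us-east": 0.12, "eu-west": 0.07, "ap-south": 0.10}

def follow_the_moon(prices):
    """Route load to the data center where power is currently cheapest."""
    return min(prices, key=prices.get)

print(follow_the_moon(site_prices))  # eu-west
```

A production scheduler would also weigh latency, capacity headroom and migration cost, but the price signal drives the decision.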

Optimize IT Efficiency - Dedicated servers have average utilization rates as low as 10 percent. Consolidation and virtualization typically improve overall server utilization to between 20 and 30 percent. Better, but still not good enough in today's energy-conscious world. The only way to achieve even greater utilization, and reduce energy consumption even further, is to migrate from the wasteful 'always on' mode of operating servers to an efficient 'on demand' mode, using DPO to continuously match server capacity with actual demand.
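The 'on demand' mode can be sketched as matching the active server count to the current load; the capacity and power figures below are hypothetical:

```python
import math

FLEET_SIZE = 100           # servers, sized for peak demand
SERVER_CAPACITY_TPS = 500  # transactions/sec one server can handle
SERVER_POWER_KW = 0.4      # draw per powered-on server

def servers_needed(demand_tps):
    """'On demand' mode: power only as many servers as the load requires."""
    return min(FLEET_SIZE, max(1, math.ceil(demand_tps / SERVER_CAPACITY_TPS)))

# Off-peak demand 80% below a 50,000 tps peak:
active = servers_needed(10_000)
print(f"always on: {FLEET_SIZE * SERVER_POWER_KW:.1f} kW")  # 40.0 kW
print(f"on demand: {active * SERVER_POWER_KW:.1f} kW")      # 8.0 kW
```

In this off-peak hour, only 20 of the 100 servers stay powered, cutting the fleet's draw by 80 percent, well past the 50 percent savings cited above.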

The ability to reduce power consumption by 50 percent or more guarantees that a DCIM solution with dynamic power optimization will offer a high return on the relatively modest investment. But even greater savings can be achieved by using a solution that offers other advanced capabilities, including capacity planning with what-if analyses for optimizing hardware refresh cycles, equipment placement and environmental controls; real-time monitoring dashboards; and comprehensive power utilization reporting. All of these help extend the life of the data center. In addition, measuring improvements against a baseline may also make the organization eligible for energy rebates and/or offsets.

For both economic and environmental reasons, minding the meter is now the 'green' thing to do. And the best way to mind the meter is with a robust DCIM/DPO solution. So it should come as no surprise that Gartner predicts 60 percent of all CTOs will be using such a solution within the next three years.

Jan 17, 2011 11:01 AM Ronald Timmermans says:
Your analysis is correct: as long as the IT manager is not accountable for the energy costs of his or her equipment, it will be hard to make substantial progress in bringing down the energy bill of the data center. As you said, detailed reporting of energy consumption, and a proper way to make all of that data comprehensible in a DCIM, is the way to move forward. The next challenge will be to establish a relationship between 'service' and 'server', which will show what application is running on which device. Then you need to pinpoint the device to an exact location in your data center. This way you can identify 'zombie servers' - servers with no function at all (usually servers that were replaced at some point but never shut down) that consume a lot of power and occupy valuable rack space. A proper asset management system that eliminates human error (failing to maintain the asset database) will make the next step in bringing down the data center's energy costs possible. Who will take the lead?
