Lowering your data center's power consumption will certainly help you cut your operating expenses, but is it necessarily the most efficient use of enterprise resources?
That's a key question as the industry heads into greener pastures, driven both by high energy costs and no small amount of government prodding.
The problem is that lowering your energy consumption is only half the equation when it comes to evaluating overall efficiency. The other half is productivity. The most energy-efficient data center in the world would be one that sits idle, and yet it would be completely inefficient because it is not getting any work done.
This has been a big bone of contention between some energy-usage experts and the federal government. Under the EPA's Energy Star program (it's almost like third grade, isn't it? Do well in class and get a star!), the Power Usage Effectiveness (PUE) rating plays the dominant role in gauging star-worthiness. The problem is that PUE only compares a facility's total energy draw to the portion delivered to actual computing equipment, as opposed to ancillary functions like cooling and running the microwave in the break room. It says nothing about how much useful work that energy produces.
Fortunately, The Green Grid, which originally devised the PUE benchmark, is already working on a more inclusive measurement: the Data Center energy Productivity (DCeP) metric. Simply put, it's a measurement of the amount of actual work being produced per unit of energy consumed. Ideally, this measurement should include all energy consumption -- servers, storage, cooling and, yes, the microwave. But as ZDNet's David Chernicoff points out, there doesn't seem to be much of a drive to establish DCeP as an industry standard.
That leaves PUE as the only game in town. And to be fair, the measurement does foster a culture bent on supplying energy to computing resources rather than to cooling. Google, for one, has been highly successful at lowering its PUE across its substantial data center infrastructure through such simple approaches as hot aisle containment, resource consolidation and the deployment of power-saving systems. The company strives for a PUE rating of 1.5 or less, whereas a rating of 2 means you devote equal energy to computing and non-computing activities.
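The contrast between the two metrics can be sketched in a few lines of Python. The figures below are invented for illustration, not measurements from any real facility; "useful work" in DCeP is whatever unit an operator chooses to count, such as transactions served.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    IT equipment energy. 1.0 is the theoretical ideal; 2.0 means
    overhead (cooling, lighting, the microwave) equals the IT load."""
    return total_facility_kwh / it_equipment_kwh

def dcep(useful_work_units: float, total_facility_kwh: float) -> float:
    """Data Center energy Productivity: useful work produced per unit
    of total energy consumed. The work unit is operator-defined."""
    return useful_work_units / total_facility_kwh

# A busy facility: 1,000 kWh total, 650 kWh to IT, 5M transactions served.
busy_pue = pue(1000, 650)          # ~1.54, near the target cited above
busy_dcep = dcep(5_000_000, 1000)  # 5,000 transactions per kWh

# An idle facility can post a respectable PUE while producing nothing,
# which is exactly the gap DCeP is meant to expose:
idle_pue = pue(400, 300)   # ~1.33, looks "efficient"
idle_dcep = dcep(0, 400)   # 0.0, no work is getting done
```

The point of the sketch: by the PUE yardstick the idle facility outscores the busy one, while by the DCeP yardstick it scores zero.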
The issue remains, however, that if simply lowering energy consumption reduces the amount of work a given data center can handle, then that workload has to be diverted somewhere else. And that somewhere else may or may not have the same efficiency rating as the primary facility.
On paper it looks good. But in the end, is anything being saved? And are carbon emissions any lower?