Nothing shakes up the IT blogosphere like a good controversy. And we got one this week courtesy of The New York Times.
It seems that Times reporter James Glanz wanted to shed some light on the power utilization rates the industry has achieved so far, concluding that they sit at a paltry 7 to 12 percent — meaning that upwards of 90 percent of the 30 billion watts that data centers consume worldwide is wasted. To make matters worse, many of the top facilities maintain their own power-generating capabilities, usually fueled by distinctly non-eco-friendly diesel engines. As culprits in all this, Glanz points to a combination of insatiable demand from users, a risk-averse culture in IT that values redundancy over efficiency, and a basic failure on the part of many enterprises to truly understand how their data infrastructure is behaving.
Cue the blowback. Dan Woods of CITO Research blasted the piece for its "many glaring misconceptions, omissions, and distortions," the most egregious of which is the failure to distinguish between traditional IT and the new Internet industry. Old-line infrastructure in hospitals, the financial industry and manufacturing may be lagging in the efficiency race, but the truly large data centers used by Google, Facebook and the like are as state-of-the-art as you can get. It's also unfair, he says, to hold IT to a higher efficiency standard than, say, newspapers, which require reams of paper for readers who may only read two or three articles per issue.
Meanwhile, Forrester's Richard Fichera had a few bones to pick with the Times' energy utilization numbers, which he described as technically correct but misleading nonetheless. By only calculating server energy consumption, the Times discounts the role that storage and networking play in data processing. Add those two elements and the utilization figure could jump to 38 percent or more — not a great number, to be sure, but a more accurate depiction of data center efficiency.
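To see how the same facility can yield such different numbers, the blended figure is just a weighted average of each tier's utilization by its share of the energy bill. The shares and per-tier utilizations below are purely illustrative assumptions, not figures from either article:

```python
# Hypothetical illustration: counting only servers gives the NYT-style
# single-digit figure, while weighting in storage and networking (which
# run much hotter) lifts the blended number. All inputs are invented.
tiers = {
    "servers": {"energy_share": 0.55, "utilization": 0.10},  # NYT's 7-12% range
    "storage": {"energy_share": 0.25, "utilization": 0.70},
    "network": {"energy_share": 0.20, "utilization": 0.75},
}

# Energy-weighted average utilization across the whole infrastructure.
blended = sum(t["energy_share"] * t["utilization"] for t in tiers.values())
print(f"blended utilization: {blended:.0%}")  # prints "blended utilization: 38%"
```

With these particular inputs the blended figure lands at 38 percent, matching Fichera's ballpark; different (but still plausible) shares would move it up or down without changing the underlying point.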
At its heart, this debate revolves around the concept of "greenness." In short, what level of efficiency is appropriate for an industry that has emerged as the lifeblood of modern life as we know it? There are those who would argue that success will never arrive because there is always something that can be done to reduce consumption without sacrificing performance. True, but at what cost? And are the developments we've seen to date truly having an impact on the cost/benefit analysis?
A case in point is virtualization. Hailed as the savior of data environments because it can help one server do the job of 10, virtualization nonetheless cannot produce efficiency on its own. As we all know by now, too much virtualization in the server farm quickly overloads storage and networking capabilities, which must be virtualized themselves or built out on the hardware level (read: consume more energy) to free up the bottlenecks that ultimately hamper data performance. It's for these and other reasons that data centers continue to struggle with energy efficiency despite virtual server deployments of 30 percent or more.
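The tradeoff above can be put in rough numbers. In this sketch, every wattage figure is an invented assumption: a 10-to-1 server consolidation, plus a storage and network build-out to clear the resulting I/O bottlenecks:

```python
# Hypothetical back-of-envelope: consolidating 100 servers onto 10 hosts
# slashes server draw, but the denser I/O load forces shared storage and
# networking to be built out. Every figure here is invented for illustration.
WATTS_PER_SERVER = 300

before_servers = 100 * WATTS_PER_SERVER  # 30,000 W across the old farm
after_servers = 10 * WATTS_PER_SERVER    #  3,000 W after 10:1 consolidation

shared_before = 8_000    # storage + network draw before, watts (assumed)
shared_after = 14_000    # built out to relieve I/O bottlenecks (assumed)

before_total = before_servers + shared_before  # 38,000 W
after_total = after_servers + shared_after     # 17,000 W

savings = 1 - after_total / before_total
print(f"net savings: {savings:.0%}")  # prints "net savings: 55%"
```

The point of the sketch: even though server draw fell 10x, total facility savings come in around half, because the shared infrastructure that had to grow now dominates the bill.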
Like it or not, though, demand for instant data anytime, anywhere is here to stay. And that means the drive to design, build and maintain energy-efficient infrastructure will only increase.