One of the highlights of the Uptime Institute's Lean, Clean & Green Symposium I attended in New York City this week was the presentation by Chris Malone, Google's Thermal Technologies Architect for Data Center Research and Development. Malone discussed, in a detailed manner befitting the experts in attendance, how the company is cutting power across its massive data center holdings.
As I described earlier this week, Malone says the efficiencies can be gained without exotic or expensive initiatives. While much of what Google is doing is based on careful execution of common-sense ideas, the drive toward more efficient powering includes at least one innovative technique: placing small uninterruptible power supply (UPS) circuits on individual servers instead of building large centralized units to support multiple machines. Other steps include closely coupling cooling to heat management to find ways to reduce air-chilling requirements. After describing these steps, Malone presented attendees with a case study of how power efficiency was increased in one of the company's data centers. The bottom line was a happy one: Malone reiterated near the end of his talk that the results "can be implemented in most data centers."
The main statistic data center energy folks focus on is power usage effectiveness (PUE). An invention of The Green Grid, PUE describes the relationship between the total amount of power a facility draws and the amount that actually drives the IT equipment. Like golf, the lower a data center's PUE, the better. ZDNet's Heather Clancy reports that Malone says most data centers run a PUE of about 2. Google is beating that number easily. Malone said Google measured five of its data centers, which were not named, and as of March 15 had achieved an average PUE of 1.15. The individual facility minimum was 1.12. This, judging by experts' reaction at the presentation, is (pre-surgery) Tiger Woods territory.
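The arithmetic behind PUE is simple enough to sketch. Here is a minimal illustration (the kilowatt figures are hypothetical, chosen only to reproduce the ratios mentioned above):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by the
    power consumed by the IT equipment itself. 1.0 is the ideal floor;
    everything above it is overhead (cooling, power conversion, lighting)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A typical data center: 2,000 kW drawn to run a 1,000 kW IT load.
print(pue(2000, 1000))  # 2.0

# Google's reported fleet average: about 1,150 kW per 1,000 kW of IT load.
print(round(pue(1150, 1000), 2))  # 1.15
```

A PUE of 2 means that for every watt reaching a server, another watt goes to overhead; at 1.15, the overhead per server-watt drops to 0.15 watts.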
Malone's openness is part of a broader push toward transparency, according to this story in InformationWeek. The piece says that on April 1 the company hosted more than 100 experts at the Google Data Center Efficiency Summit at its headquarters and put video about its data center initiatives on YouTube (a link is provided). The story quotes from a posting by chief engineer Jimmy Clidaras in which he mentioned that energy use has been reduced by 85 percent. The piece discusses the same PUE numbers that Clancy reported.
It's generally assumed, with good reason, that Google has a lot of data centers in a lot of places. TechCrunch adds some specificity to the conversation with an interesting set of maps illustrating where Google data centers are. The maps -- of the United States, Europe and an overview of most of the world -- show the facilities the company owns, its leased spaces and facilities under construction. There are 36, the piece says. Nineteen are in the United States, 12 are in Europe and three are in Asia. Future sites may include Malaysia, Taiwan, Lithuania and South Carolina, the piece says.
All those energy savings are leading to good green results. Indeed, both types of green: they are helping the environment and Google's wallet. This post says that Google spent $263 million on infrastructure during the first quarter of 2009, the lowest amount since it took over operation of its data centers. That's a monumental reduction from the year-ago quarter, in which the company spent $842 million. The post offers the quarter-by-quarter numbers and a graph that shows the precipitous decline. The capital expenditure (capex) numbers are the money being spent on building the data centers. The post says the company built data centers in four markets during 2007 and 2008 and throttled back when the economy went sour. It will be interesting to see how the energy moves affect operational expenses (opex).