Power -- how data centers can most efficiently use it, save it and transport it -- has always been a driving concern of The Uptime Institute. What has changed during the past year or so is that the topic has become far more important to just about everyone else.
Efficient use of energy is inextricably linked with saving money and with environmentally sound "green" initiatives. Thus, the organization's Lean, Clean & Green symposium this week at the Hilton Hotel in New York City dealt with topics that are at once arcane and central to issues management grapples with every day.
The big picture and the complex details were a constant thread through the presentations. A morning presentation entitled "No Blank Checks: Even Most Uncompromising IT Environments Imaginable are Deploying Energy Management Programs, Using Power Analytics" focused on EDSA's Paladin family. Chairman and CEO Mark Ascolese said that the platform, which consists of Paladin DesignBase and Paladin Live, offers both planning and ongoing assessment.
The heart of the offering -- and what Ascolese and other panelists say differentiates it from most others on the market -- is the ability to display almost instantaneously how any change in the data center affects performance. While platforms traditionally map changes against the design of the system, Paladin plugs into the system as it is operating at that moment.
If platforms, like people, are judged by the company they keep, Paladin scores well. Two users -- Visa International and The National Geospatial Agency -- sent representatives to appear with Ascolese. Michael Siegele, the chief engineer for Visa International's Facilities Engineering Support, said Paladin discovered a malfunctioning transformer as soon as it was put online.
Another morning presentation, "When High Density and High Efficiency Collide," featured APC CTO Neil Rasmussen. He explored the relationship between a data center's density -- the amount of gear squeezed into a given space -- and how much energy it uses. The traditional assumption is that efficiency declines as density rises. The good news, he said, is that this inverse relationship does not hold in a properly designed data center. The bad news is that few data centers are designed well.
Rasmussen said that in most data centers there is indeed a relationship between density and efficiency. The key is to break, or decouple, that co-dependency. The best way to do this, he said, is stringent hot- or cold-aisle containment. In essence, the challenge is to reduce "bypass air" -- chilled air that never actually performs its task of cooling gear. Hot-air containment, he said, is the most effective approach. Done correctly, Rasmussen said, greater density can increase efficiency because the gear itself reduces the amount of empty space that must be cooled.
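Why bypass air matters so much can be illustrated with a back-of-envelope model (this sketch is ours, not Rasmussen's): if a fraction of the chilled air never reaches the gear, fans must push proportionally more air, and fan power rises roughly with the cube of airflow under the standard fan affinity laws.

```python
# Illustrative sketch (not from the talk): how "bypass air" inflates
# cooling-fan energy. Assumes bypassed air does no useful cooling and
# that fan power scales with the cube of airflow (fan affinity law).

def fan_power_multiplier(bypass_fraction: float) -> float:
    """Relative fan power needed to deliver the same useful cooling
    when a fraction of supplied air bypasses the IT equipment."""
    useful_share = 1.0 - bypass_fraction
    required_flow = 1.0 / useful_share   # scale up supply to compensate
    return required_flow ** 3            # affinity law: power ~ flow^3

for bypass in (0.0, 0.2, 0.4):
    print(f"bypass {bypass:.0%}: fan power x{fan_power_multiplier(bypass):.2f}")
```

Under these assumptions, even 20 percent bypass roughly doubles fan power, which is why containment that forces air through the gear pays off quickly.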
Any discussion of data centers is incomplete without comment from Google. Chris Malone, the company's Thermal Technologies Architect for Data Center Research and Development, provided insight into how the company approaches its massive data center infrastructure in a presentation entitled "Google's Data Center Energy Efficiency Innovations." The good news, he said, is that the company has been able to drive efficiency way up. The better news is that this was done without exotic technologies. Malone spent most of his time describing a signature Google move: the company builds an uninterruptible power supply (UPS) into each server, which saves energy and improves efficiency in several ways. Malone referred to the company's data center site often during his presentation.
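The logic behind the per-server UPS is simple arithmetic. A centralized double-conversion UPS loses a few percent of every watt it passes, while a battery sitting on the server's own power bus loses almost nothing. The figures below are illustrative assumptions, not Google's published numbers:

```python
# Back-of-envelope sketch with assumed (not Google's) figures:
# centralized double-conversion UPS vs. a battery built into each server.

SERVER_WATTS = 250        # assumed average draw per server
CENTRAL_UPS_EFF = 0.92    # assumed centralized UPS efficiency
ONBOARD_EFF = 0.998       # assumed per-server battery path efficiency
HOURS_PER_YEAR = 8760

# Energy drawn from the grid per server per year, in kWh, for each design.
central_kwh = SERVER_WATTS / CENTRAL_UPS_EFF * HOURS_PER_YEAR / 1000
onboard_kwh = SERVER_WATTS / ONBOARD_EFF * HOURS_PER_YEAR / 1000

print(f"saved per server: {central_kwh - onboard_kwh:.0f} kWh/year")
```

Under these assumptions the saving is modest per box, but multiplied across hundreds of thousands of servers it becomes a meaningful slice of a facility's power bill.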
An effort to drive efficiency in data centers is as much a management challenge as a technical one. An afternoon panel entitled "Collaboratively Managing the Inefficiencies of the Data Center" looked at the prickly relationship between IT and facilities staffs. The panel of five discussed various age-old questions related to the difficulties of achieving cooperation across departmental lines.
The most interesting thing about the panel was that neither the issues nor the potential solutions discussed are new. That raises the simple question of why the issues linger and what keeps IT and facilities management from creating an effective dialog. The panel discussed -- without reaching agreement -- whether one executive or a team should hold ultimate decision-making responsibility.
The challenges cited by the group range from separate and complex vocabularies to the reality that power grabs and political maneuvering can never be fully squelched. The bottom line is that driving efficiency in the data center is a complex task that requires a great deal of communication -- and in most instances that communication is not occurring.
A late-afternoon session entitled "Efficient Computing: Practical Solutions for Lean, Clean & Green" continued the theme. The conference was notable in its willingness to provide a platform for individual companies. In this case, EnergyWare CEO Bob Summers described the company's products, which save energy by throttling down processors when the task to which they are assigned can be accomplished at a lower speed. Summers said the benefits are greater in virtualized environments.
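The physics behind throttling is worth a quick sketch (this is the general dynamic voltage and frequency scaling argument, not a description of EnergyWare's specific method): dynamic CPU power scales roughly with voltage squared times frequency, and because voltage can usually drop along with frequency, power falls faster than performance.

```python
# Rough DVFS sketch (general principle, not EnergyWare's implementation):
# dynamic CPU power ~ C * V^2 * f, so lowering voltage along with
# frequency cuts power superlinearly relative to the speed given up.

def relative_power(freq_scale: float, volt_scale: float) -> float:
    """Dynamic power relative to full speed, using P ~ V^2 * f."""
    return (volt_scale ** 2) * freq_scale

# e.g. run at 70% frequency with voltage reduced to 85% of nominal:
print(f"power: {relative_power(0.7, 0.85):.2f}x at 0.70x clock speed")
```

In this example the chip gives up 30 percent of its clock speed but, under the assumed voltage reduction, roughly halves its dynamic power, which is why throttling idle-heavy or lightly loaded machines pays off.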
The takeaway from The Uptime Institute's symposium is that the greening of the data center occurs at both the macro and micro levels. Putting UPSes in servers, using proactive cooling techniques and throttling down processors -- multiplied across thousands of each type of device -- adds up to significant power savings. On the macro side, the ability of engineers and executives to communicate effectively with each other is equally important. The sense is that the battle is nowhere near won, but that proper data center energy management is gaining attention from higher-level executives.