IBM iDataPlex: Saving Money by Eliminating Hot Air

Rob Enderle

After the 2008 elections in the U.S., eliminating "hot air" would seem to be a great thing for any country to focus on. But I'm not talking about the kind of hot air that comes out of politicians or even executives; I'm talking about the hot air that comes out of servers, particularly server blades. That heat adds significantly to cost, reduces reliability just as significantly (it is a hardware killer), and has us using only a fraction of our data center space for fear we'll turn the place into a server cooking oven.


IBM is using water, something we have always known works better than air, and doing so with minimal risk and cost to eliminate one of the biggest data center killers: hot air.


Automotive Connections


Initially, most internal combustion cars and motorcycles were air cooled. Only Porsche stayed with this method (though the Smart Car appears to be using it again), and not for all of its cars. While Porsche showed you could cool an engine competitively with air, you generally couldn't do so at a competitive cost. That is because water, by volume, is vastly better at absorbing and carrying heat, and if heat is what you want to move, water is the better mechanism. How Porsche cools its 911 isn't that much different from how you typically cool a data center: you move lots of air through the cavity that contains the equipment. In the case of the 911, that is the engine bay; in your case, that is the data center. But unlike cars, which have one engine designed to burn gasoline (and thus survive relatively high heat), a data center is made up of lots of material that doesn't like heat much at all and will fail catastrophically if it runs for long above the temperatures we find comfortable.
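
If you want to put rough numbers on that claim, here is a back-of-the-envelope comparison using standard textbook values for density and specific heat. These are my illustrative figures, not anything from IBM:

# Back-of-the-envelope comparison: heat carried per liter of water vs. per
# liter of air for the same temperature rise. Property values are standard
# room-temperature approximations, assumed here for illustration.

water_density_g_per_l = 1000.0   # water: about 1 kg per liter
air_density_g_per_l = 1.2        # air: about 1.2 g per liter at sea level
water_cp_j_per_g_k = 4.18        # specific heat of water, J/(g*K)
air_cp_j_per_g_k = 1.005         # specific heat of air, J/(g*K)

# Heat absorbed per liter per degree of temperature rise, in joules
water_j_per_l_k = water_density_g_per_l * water_cp_j_per_g_k   # ~4,180 J
air_j_per_l_k = air_density_g_per_l * air_cp_j_per_g_k         # ~1.2 J

ratio = water_j_per_l_k / air_j_per_l_k
print(f"Water moves roughly {ratio:,.0f}x more heat per liter than air")
# Prints a ratio in the neighborhood of 3,500x

That lopsided ratio is the whole argument in one number: a modest water pipe can do the work of an enormous air duct.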


If you walk around a typical data center, not only is it very noisy, sounding kind of like you are standing in a wind tunnel, you'll also notice that the temperature tends to vary a lot, indicating either overcooling or potentially excessive temperatures. If you get right behind a rack, you'll likely find it suddenly very warm. There is a huge expense, and a bit of an art, to balancing the temperature in a data center, and it typically involves either a lot of sensors and automatically adjustable ducts or massive overcooling. All of this is very costly and, because it is based on air, relatively inefficient. Why not use water?


The Problem with Water


Ideally, if you used water, you'd want to cool the individual components directly. Much as a car runs water through passages inside the engine, you might wrap the hardware in a water jacket. There are several problems with this. One is that, unlike an engine, electrical components tend not to like water much, and they can corrode rather badly. And just as water conducts heat, it also conducts electricity, so you can't simply flood the chassis; you have to run small pipes to a sealed block on each component that generates heat. In the PC gaming market this isn't uncommon; there are water blocks made for virtually every component you'd want to cool. But unlike a PC, a data center would end up with large numbers of very small, fragile hoses that would have to be carefully disconnected during repair and maintenance. Any "oops" moment could lead to massive damage and possibly electrocution of the technician, clearly an unacceptable risk.


There has been some use of nonconductive liquids for this purpose, even immersing the rack or the entire data center in the liquid, but that has so far proven impractical (evidently technicians don't yet see the excitement of doing their work in scuba gear). I should point out that, as with every generation, some kids like to push the limits with stuff like this. If you want to see something really interesting, check out this $4,500 to $10,000 gaming computer that uses nonconductive coolant. Unfortunately, it isn't yet practical for a data center, but it could be effective at really upsetting a spouse.


iDataPlex Rear Door Heat Exchanger


This brings us to IBM's solution. In effect, it is a heat exchanger mounted on the back of the rack and piped to a chiller outside, which removes the heat the exchanger captures. You still need fans to push air through the heat exchanger, but you don't need to move much air in and out of the data center itself. This animation shows how it works. Room-temperature air is blown into the rack to cool the equipment, which heats the air. The air then passes through the heat exchanger (think of it as a radiator working in reverse), which pulls the heat back out of the air and into the water. The now-warmed water is pumped outside, where it is chilled and circulated back into the data center. Once in place, you shouldn't have to move or work on the liquid cooling equipment except for regular maintenance, and normal maintenance on the racks and equipment can go on with virtually no change. With this technology, you have effectively mitigated much of the heat problem. The pipes needed to handle the water are vastly smaller than the massive ducts you'd have to use to move the same heat as air, making this an effective technology for either increasing the capacity of an existing data center or creating a data center in the middle of an existing building.
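
To get a feel for why small pipes can replace big ducts, here is a rough sizing sketch. The rack power and water temperature rise are illustrative assumptions of my own, not IBM's specifications:

# Rough sizing sketch: water flow needed to carry away the heat of one loaded
# rack. The 30 kW load and 10-degree water temperature rise are assumptions
# for illustration, not IBM's numbers.

rack_heat_w = 30_000.0        # assumed heat output of a rack of blades, in watts
water_cp_j_per_kg_k = 4180.0  # specific heat of water, J/(kg*K)
delta_t_k = 10.0              # assumed temperature rise of the water through the door

# Q = m_dot * c_p * delta_T  ->  m_dot = Q / (c_p * delta_T)
flow_kg_per_s = rack_heat_w / (water_cp_j_per_kg_k * delta_t_k)
flow_l_per_min = flow_kg_per_s * 60.0   # a kilogram of water is about a liter

print(f"{flow_kg_per_s:.2f} kg/s of water, roughly {flow_l_per_min:.0f} liters per minute")
# Works out to about 0.72 kg/s (~43 L/min) -- a flow a garden-hose-sized pipe
# can handle, where the same heat carried by air would take thousands of cubic
# feet of airflow per minute.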


Wrapping Up: Innovation


Sometimes innovation isn't creating something new; it is taking something old and applying it to a new situation. In this case, IBM did what most automobile makers figured out around 100 years ago and realized that water was a vastly more effective way to get rid of heat. One problem that industry had that IBM likely won't is bugs and rocks getting stuck in the radiators. Then again, the kind of bugs IBM has to deal with, while different, can be just as annoying. I think we can all take comfort in the thought that sometimes the old ideas still work best and the iDataPlex Rear Door Heat Exchanger is an example of that. Now if we could find one of these that would work on politicians.

May 11, 2009 6:07 AM Mark Phinney says:

This is not new technology for IT or for IBM specifically.  If I remember correctly, larger models in the System/370 family and the 30-series of mainframe computers back in the 1970s and 1980s were water-cooled machines.  In fact, I remember reading Computerworld and Datamation articles back in those days where data center managers complained of the additional expenses related to the chilled water plants in data centers with water-cooled systems.  One of the big drawing cards for Amdahl plug-compatible mainframes (PCMs) was that they were air-cooled, where their IBM equivalents were water-cooled.

Mark C. Phinney

Senior Software Engineer

L-3 Communications

May 11, 2009 6:46 AM Rob Enderle says, in response to Mark Phinney:

Mainframes used a much more complex direct cooling system that made them a bit... er, a problem to service.  IBM largely moved away from this method around 1995.  Picture of the insides of one of these beasts here:

This intercooler method has no impact on blade service. 

