Denser and Hotter: The Case for Liquid Cooling in Scale-Out Architectures

Arthur Cole

Cooling the data center is already a big enough challenge under normal workloads, but as mobile communications and the Internet of Things push traffic into the stratosphere, many IT professionals are starting to wonder where all the heat from hyperscale, hyper-dense equipment racks will go.

For the handful of companies that specialize in advanced cooling techniques, these are heady days indeed. According to Research and Markets, the data center cooling market will jump nearly 65 percent by 2018, topping $8 billion in revenues. This includes everything from air conditioners and chillers to new rack/server solutions to sophisticated management and monitoring systems designed to provide better balance between data loads and cooling distribution.

But as heat-generating hardware platforms become denser through microarchitectures and scale-out configurations, many are wondering if the age of traditional air cooling is coming to an end. As densities increase, so does the difficulty of blowing in cool air and exhausting heat. Water, on the other hand, can be delivered much closer to tightly packed components and offers a far greater heat-exchange capacity than air.
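A rough back-of-the-envelope calculation shows why. The heat a coolant stream carries away is Q = ρ · V̇ · c_p · ΔT, and water's density and specific heat dwarf air's. The sketch below uses round textbook property values (not figures from this article) and an assumed 10 K temperature rise:

```python
# Back-of-the-envelope comparison of air vs. water as a coolant.
# Heat carried by a flow: Q = rho * V_dot * c_p * delta_T.
# Property values are rough, room-temperature textbook figures.

AIR_RHO, AIR_CP = 1.2, 1005        # kg/m^3, J/(kg*K)
WATER_RHO, WATER_CP = 998.0, 4186  # kg/m^3, J/(kg*K)

def heat_removed_w(rho, cp, v_dot_m3s, delta_t_k):
    """Watts carried away by a volumetric flow with a given temperature rise."""
    return rho * cp * v_dot_m3s * delta_t_k

v_dot = 0.001  # 1 liter per second of either fluid
dt = 10.0      # assumed 10 K rise across the equipment

air_w = heat_removed_w(AIR_RHO, AIR_CP, v_dot, dt)
water_w = heat_removed_w(WATER_RHO, WATER_CP, v_dot, dt)
print(f"air:   {air_w:.1f} W")
print(f"water: {water_w:.0f} W")
print(f"ratio: {water_w / air_w:.0f}x")
```

For the same volumetric flow and temperature rise, water carries on the order of 3,500 times more heat than air, which is why it can serve rack densities that air delivery simply cannot reach.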

Companies like CoolIT Systems have been playing up this angle for the past decade, but it is only recently that data infrastructure trends have pushed water cooling from novelty to necessity. In company parlance, the technique is called direct contact liquid cooling (DCLC) for its ability to deliver high thermal conductivity directly to highly concentrated heat sources. As a result, liquid-cooled racks can push densities past 45 kW per rack and reduce operating costs by some 30 percent by eliminating much of the infrastructure needed to produce and deliver cool air to the data center.
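To put that 45 kW figure in perspective, the required water flow is modest. A sizing sketch, assuming the same 10 K inlet-to-outlet rise as above (the ΔT is my assumption, not a CoolIT specification):

```python
# Rough sizing sketch: water flow needed to carry 45 kW from one rack,
# assuming a 10 K coolant temperature rise (an illustrative figure).
WATER_CP = 4186        # J/(kg*K), specific heat of water
RACK_LOAD_W = 45_000   # 45 kW per rack, figure cited in the article
DELTA_T_K = 10.0       # assumed inlet-to-outlet rise

mass_flow = RACK_LOAD_W / (WATER_CP * DELTA_T_K)  # kg/s
liters_per_min = mass_flow * 60                   # ~1 kg of water ~ 1 liter
print(f"{mass_flow:.2f} kg/s (~{liters_per_min:.0f} L/min)")
```

Roughly a liter per second, about the flow of a garden hose, absorbs an entire 45 kW rack; moving the same heat with air would take thousands of cubic feet per minute.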

Hyperscale companies like Google are already showing what liquid cooling infrastructure would look like in major production environments. The blue pipes in this photo of the company’s Oregon facility carry cold water to the racks while the red pipes carry the heated water away. I’m not quite sure what the green and yellow pipes do, but they might have something to do with free-cooling capabilities – perhaps helping to lower the temperature of hot water before it is flushed back into the surrounding environment.

If it looks complicated, it is, but this is Google’s application of liquid cooling to a massively scaled-out infrastructure. If your data needs are not quite so broad – say, in a standard HPC setting – liquid cooling can be a relatively simple affair, although one that depends on some fairly sophisticated engineering. HP, for example, recently launched its Apollo 8000 warm-water cooling system, which uses sealed metal tubes inside the server filled with a highly evaporative liquid, most likely an alcohol. The approach drives cooling via the phase change that takes place as the liquid evaporates on one side of the box and then condenses on the other. Heat is then removed through metal plates connected to a rack-level water circulation system. In this way, no fluid runs between the server and the rack, which makes it easier to swap out hardware without shutting down cooling to the entire rack.
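The reason phase change works so well is that a fluid's latent heat of vaporization dwarfs the sensible heat absorbed by merely warming it. Since the article only guesses "most likely an alcohol," the sketch below uses rough textbook properties for ethanol purely as an illustration:

```python
# Why phase change is attractive: latent heat absorbed during evaporation
# vs. sensible heat from warming the same liquid by 10 K.
# Ethanol properties are rough textbook values, used only for illustration;
# the actual Apollo 8000 working fluid is not specified in the article.
LATENT_J_PER_KG = 846_000  # ethanol heat of vaporization, ~846 kJ/kg
CP_J_PER_KG_K = 2440       # liquid ethanol specific heat, J/(kg*K)
DELTA_T_K = 10.0           # same 10 K rise as a single-phase loop

latent = LATENT_J_PER_KG              # J absorbed per kg evaporated
sensible = CP_J_PER_KG_K * DELTA_T_K  # J absorbed per kg warmed by 10 K
print(f"latent/sensible ratio: {latent / sensible:.0f}x")
```

Each kilogram that evaporates soaks up tens of times more heat than a kilogram that is simply warmed, which is how a sealed tube with no external plumbing can move server-class heat loads to the edge of the chassis.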

Some would argue that liquid cooling is fine as a concept, but it is unrealistic to retrofit legacy infrastructure. Agreed, which is why liquid cooling is more appropriate for greenfield deployments, particularly the converged, modular designs aimed at web-scale processing and Big Data.

As the data loads go up, the enterprise needs every tool at its disposal to lower costs, and that includes innovative ways to keep operating temperatures under control.



Jul 3, 2014 5:59 PM Phil says:
There are over 15 different suppliers of DCLC. Most of them offer systems that just pipe water to the hottest chips and need supplementary fan cooling. They all require quick connects to enable the removal of the server from the rack or blade enclosure. There are a few more exotic systems that involve dunking servers in some form of liquid. All of these would appear to raise doubts regarding serviceability, robustness and cost effectiveness. HP is the only company besides ourselves that has seriously addressed the serviceability problem, by conducting the heat to one edge of the blade where it contacts a water-cooled cold plate. However, they need custom motherboards. Clustered Systems, on the other hand, places a very thin flexible cold plate over the whole motherboard, which is off the shelf and equipped with heat risers to bring all the heat up to a single level. When the server needs to be removed, the cold plate is simply unclamped and remains in the chassis.
