Cooling the data center is already a big enough challenge under normal workloads, but as mobile communications and the Internet of Things push traffic into the stratosphere, many IT professionals are starting to wonder where all the heat from hyperscale, hyper-dense equipment racks will go.
For the handful of companies that specialize in advanced cooling techniques, these are heady days indeed. According to Research and Markets, the data center cooling market will jump nearly 65 percent by 2018, topping $8 billion in revenues. This includes everything from air conditioners and chillers to new rack/server solutions to sophisticated management and monitoring systems designed to provide better balance between data loads and cooling distribution.
But as heat-generating hardware platforms grow denser through new microarchitectures and scale-out configurations, many are wondering whether the age of traditional air cooling is coming to an end. As densities increase, so does the difficulty of pushing in cool air and exhausting hot air. Water, on the other hand, can be delivered much closer to tightly packed components and offers a far greater heat-exchange capacity than air.
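To put rough numbers on that claim, here is a quick back-of-the-envelope comparison. The constants are standard textbook values at roughly room temperature, not figures from any vendor:

```python
# Volumetric heat capacity: how much heat one cubic meter of each
# fluid absorbs per degree of warming. Approximate textbook values.

water_density = 1000.0   # kg/m^3
water_cp = 4.18          # kJ/(kg*K), specific heat of liquid water
air_density = 1.2        # kg/m^3, at sea level and ~20 C
air_cp = 1.005           # kJ/(kg*K), specific heat of air

water_capacity = water_density * water_cp  # ~4180 kJ per m^3 per K
air_capacity = air_density * air_cp        # ~1.2 kJ per m^3 per K

print(f"Water: {water_capacity:.0f} kJ/(m^3*K)")
print(f"Air:   {air_capacity:.2f} kJ/(m^3*K)")
print(f"Ratio: roughly {water_capacity / air_capacity:,.0f}x")
```

By volume, water soaks up on the order of 3,500 times more heat per degree than air, which is why a modest water loop can do the work of an enormous volume of moving air.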
Companies like CoolIT Systems have been playing up this angle for the past decade, but it is only recently that data infrastructure trends have pushed water cooling from novelty to necessity. In company parlance, the technique is called direct contact liquid cooling (DCLC) because coolant is piped through cold plates that sit in direct thermal contact with the hottest components. As a result, liquid-cooled racks can push densities past 45 kW per rack and cut operating costs by some 30 percent by eliminating much of the infrastructure needed to produce and deliver cool air to the data center.
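To get a feel for what a 45 kW rack asks of a water loop, here is a hedged sketch. The 10-degree coolant temperature rise is my own assumption for illustration, not a CoolIT specification:

```python
# Rough water flow needed to carry away 45 kW of heat, assuming the
# water warms by 10 K as it passes through the rack (an illustrative
# figure, not a vendor spec).

heat_load_kw = 45.0      # rack heat load in kW (i.e., kJ/s)
water_cp = 4.18          # kJ/(kg*K), specific heat of water
delta_t = 10.0           # K, assumed coolant temperature rise

mass_flow = heat_load_kw / (water_cp * delta_t)  # kg/s
liters_per_min = mass_flow * 60                  # 1 kg of water ~ 1 liter

print(f"Mass flow: {mass_flow:.2f} kg/s")
print(f"Flow rate: about {liters_per_min:.0f} liters per minute")
```

That works out to roughly 65 liters per minute, on the order of a garden hose, to handle a heat load that would otherwise require moving a torrent of chilled air.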
Hyperscale companies like Google are already showing what liquid cooling infrastructure looks like in major production environments. The blue pipes in this photo of the company’s Oregon facility carry cold water to the racks, while the red pipes carry the heated water away. I’m not quite sure what the green and yellow pipes do, but they may be tied to free-cooling capabilities, perhaps helping to lower the temperature of the hot water before it is returned to the surrounding environment.
If it looks complicated, it is, but this is Google’s application of liquid cooling to a massively scaled-out infrastructure. If your data needs are not quite so broad (say, in a standard HPC setting), liquid cooling can be a relatively simple affair, although one that depends on some fairly sophisticated engineering. HP, for example, recently launched its Apollo 8000 warm-water cooling system, which uses sealed metal tubes inside the server filled with a highly evaporative liquid, most likely an alcohol. Cooling is driven by a phase change: the liquid evaporates at the hot end of the tube and condenses at the other. The heat is then carried off through metal plates connected to a rack-level water circulation system. Because no fluid runs between the server and the rack, hardware can be swapped out without shutting down cooling to the entire rack.
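The phase change matters because evaporation absorbs far more heat per kilogram of fluid than simple warming does. The sketch below makes the comparison; note that ethanol is an assumption on my part (HP says only that the liquid is highly evaporative) and the constants are approximate textbook values:

```python
# Why evaporation beats simple warming: compare the heat absorbed by
# 1 kg of ethanol warming by 10 K versus 1 kg of ethanol vaporizing.
# Ethanol is an assumed working fluid; constants are approximate.

ethanol_cp = 2.44             # kJ/(kg*K), specific heat of liquid ethanol
ethanol_latent_heat = 840.0   # kJ/kg, approx. heat of vaporization

sensible = ethanol_cp * 10.0  # heat absorbed warming 1 kg by 10 K
latent = ethanol_latent_heat  # heat absorbed by the phase change alone

print(f"Warming 1 kg by 10 K: {sensible:.0f} kJ")
print(f"Vaporizing 1 kg:      {latent:.0f} kJ")
print(f"Phase change carries ~{latent / sensible:.0f}x more heat")
```

That factor of roughly 35 is what lets a slender sealed tube move processor heat out to the rack’s water loop with no pump inside the server itself.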
Some would argue that liquid cooling is fine as a concept but unrealistic to retrofit into legacy infrastructure. Agreed, which is why liquid cooling is better suited to greenfield deployments, particularly the converged, modular designs aimed at web-scale processing and Big Data.
As the data loads go up, the enterprise needs every tool at its disposal to lower costs, and that includes innovative ways to keep operating temperatures under control.