Liquid cooling is gaining, well, steam (sorry) in the data center as compute densities creep up and organizations look for ways to keep temperatures within tolerance without busting the budget on less-efficient air-handling infrastructure.
But there are a number of approaches to liquid cooling, ranging from simply circulating chilled water in and around the data center to fully immersing chips and motherboards in non-conductive dielectric fluids.
According to Research and Markets, the data center cooling market as a whole is on pace for compound annual growth of 6.67 percent between now and 2019. The report summary available on the web does not break out the performance of specific cooling categories, but it does note that strong adoption of liquid-immersion technologies is one of the key growth factors. As cloud computing and data analytics ramp up in the enterprise, data infrastructure across the board will have to deliver greater performance within small, most likely modular, footprints, which means more heat and a need for more direct ways to whisk it away from sensitive equipment.
This is why we are seeing greater collaboration between high-density system developers and advanced cooling providers. IBM’s OpenPOWER Foundation recently announced the addition of liquid-cooling specialist Asetek in an effort to drive greater performance and higher density in the Power platform, which is expected to bring hyperscale capability to the enterprise. Asetek has already completed more than two million installations of its node-based cooling systems, with applications ranging from full data center solutions to servers, workstations and gaming environments.
Still, it is understandable that many IT executives would feel a certain “hydrophobia” at the prospect of liquids and electrons swirling about their critical infrastructure, says The Register’s Dave Wilby. But as infrastructure becomes more modular, and therefore more isolated from the water-filled pipes and air movers that serve server racks today, the need to bring heat exchange directly to the component is growing. At the same time, dielectric fluids such as 3M’s Novec, which Iceotope uses in its immersion systems, have been shown to carry heat away by convection 20 times faster than water, and orders of magnitude faster than air, reducing power consumption, noise and carbon emissions.
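To put the air-versus-liquid gap in rough perspective, here is a back-of-the-envelope sketch comparing the volumetric flow needed to carry a single rack’s heat away with air versus water. The 30 kW rack load, 10 K coolant temperature rise and fluid properties are illustrative assumptions, and the calculation considers only single-phase sensible heat, ignoring the phase-change behavior that gives engineered dielectrics much of their edge.

```python
# Back-of-the-envelope comparison: volumetric flow needed to carry away a
# rack's heat with air vs. water, using sensible heat only:
#   Q = rho * V_dot * c_p * dT
# All figures below are illustrative assumptions, not vendor data.

RACK_HEAT_W = 30_000      # assumed 30 kW high-density rack
DELTA_T_K = 10.0          # assumed 10 K coolant temperature rise

coolants = {
    # name: (density kg/m^3, specific heat J/(kg*K)) near room temperature
    "air":   (1.2, 1005.0),
    "water": (998.0, 4186.0),
}

for name, (rho, cp) in coolants.items():
    vol_flow = RACK_HEAT_W / (rho * cp * DELTA_T_K)   # m^3/s
    print(f"{name:5s}: {vol_flow * 1000:8.2f} L/s  ({vol_flow * 3600:8.1f} m^3/h)")

# Output for these assumed figures:
#   air  :  2487.56 L/s  (  8955.2 m^3/h)
#   water:     0.72 L/s  (     2.6 m^3/h)
```

Even with these rough figures, air needs thousands of liters per second where water needs less than one, which is the basic arithmetic pushing heat exchange toward the component.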
Of course, liquid cooling could prove a tough sell in parts of the world where water is scarce, says Datacenter Journal’s Jeff Clark. At the moment, southern California is in the grip of a severe drought, with many regions imposing restrictions on all manner of usage, even farming. In that light, it would be short-sighted indeed for a data center operator to invest in liquid infrastructure only to have supplies cut off when the rain stops falling. One ray of hope is thermal desalination, which requires seawater to be heated well above ambient temperature, offering a perfect chance to run it through a data center’s heat exchangers before it reaches the desalination plant. But of course, this would only work at facilities close to the ocean.
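As a rough sense of scale for that heat-reuse idea, the same sensible-heat arithmetic can estimate how much seawater a facility’s waste heat could pre-warm on its way to a thermal desalination plant. The 1 MW heat load, 10 K temperature rise and seawater properties below are assumptions for illustration only.

```python
# Rough sizing of the desalination heat-reuse idea: how much seawater a
# facility's waste heat could pre-warm before it reaches a thermal
# desalination plant. All numbers are assumptions for illustration only.

FACILITY_HEAT_W = 1_000_000   # assumed 1 MW of IT heat rejected to seawater
DELTA_T_K = 10.0              # assumed 10 K temperature rise in the seawater
RHO_SEAWATER = 1025.0         # kg/m^3, approximate
CP_SEAWATER = 4000.0          # J/(kg*K), approximate

mass_flow = FACILITY_HEAT_W / (CP_SEAWATER * DELTA_T_K)   # kg/s
vol_flow = mass_flow / RHO_SEAWATER                       # m^3/s
print(f"~{mass_flow:.0f} kg/s of seawater (~{vol_flow * 3600:.0f} m^3/h)")
# -> ~25 kg/s (~88 m^3/h) for these assumed figures
```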
Big Data and the Internet of Things are driving all manner of innovation on the virtual, abstract layers of the data center, but in order to support all these magical constructs, plain old physical infrastructure will need to roll with the times as well. This means higher densities, hotter conditions and a cooling mechanism that can do the job without blowing the power budget.
In this light, it appears that IT is near the inflection point between fear of water and the need to support advancing architectures.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.