The traditional means of cooling data centers and the equipment they house is air-cooling – A/C units, ducts, heat exchangers and the like. But as hardware densities increase, particularly in the server room, enterprise executives are starting to realize that the old way simply doesn’t work anymore, or at least, not without breaking the operations budget.
So it’s no surprise that liquid-cooled solutions are starting to make inroads into the enterprise. Liquid cooling has long been a facet of high-performance computing (HPC) environments, which have been dealing with massive scale and density for years. And as the top end of the industry demonstrates the efficiency and cost-effectiveness of the technology, the wider enterprise community is starting to take notice.
A key showcase for liquid-cooling technology recently opened up in Golden, Colo., where the Department of Energy houses its National Renewable Energy Laboratory (NREL). The facility recently installed the Peregrine supercomputer, a 1.2-petaflop system developed in conjunction with Intel and HP that uses a warm-water cooling architecture to help the facility maintain a power usage effectiveness (PUE) – the ratio of total facility power to IT equipment power – of 1.06, which is about as close to perfect efficiency as you can get these days. The setup will be used for large-scale modeling of material properties and process simulations in the agency’s quest for renewable energy solutions.
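Since PUE is just a ratio, a value of 1.06 means only about six cents of every power dollar goes to cooling and other overhead. A minimal sketch of the calculation, using hypothetical load figures chosen to reproduce NREL's reported number:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 1,000 kW of IT load plus 60 kW of cooling and
# other overhead yields a PUE of 1.06.
print(pue(1060.0, 1000.0))  # 1.06
```

A legacy air-cooled room running at a PUE of 2.0, by contrast, burns a full kilowatt of overhead for every kilowatt of compute.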
Indeed, with commercial processors like Intel’s new Xeon E5-2600 V2 platform finding their way into off-the-shelf HPC-class platforms, it seems only natural that liquid-cooled versions of these machines would trickle down to high-end enterprise applications. For example, Cray is offering the new XC30 machine and CS300 cluster configuration, both of which sport the E5, in both air and liquid-cooled versions. The systems can also be outfitted with alternative processors, such as the Xeon Phi or the NVIDIA Tesla line of GPUs, allowing organizations to scale infrastructure to new heights while retaining the efficiency of liquid-based heat exchange.
Not all liquid-cooled systems are the same, however. Some channel plain water or specialty coolants close to motherboards and processors to draw off heat, while others bathe critical components in dielectric fluids. NEC Corp., meanwhile, has developed a hybrid solution featuring a multi-stage approach in which coolant flowing away from components is quickly evaporated to disperse heat faster. According to the company, this phase-change approach cuts cooling costs in half compared to standard A/C and can be applied to rack systems or individual components. The company has also devised new flow-path architectures that take advantage of natural circulation, improving both cost-efficiency and system reliability.
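The advantage of evaporating the coolant comes from latent heat: vaporizing a liquid absorbs far more energy than merely warming it. A back-of-the-envelope comparison using water's textbook constants (illustrative only – NEC has not disclosed its working fluid):

```python
SPECIFIC_HEAT_WATER = 4.18   # kJ/(kg*K), sensible heat of liquid water
LATENT_HEAT_VAP = 2257.0     # kJ/kg, heat of vaporization at 100 C

def sensible_heat_kj(mass_kg: float, delta_t_k: float) -> float:
    """Heat absorbed by warming the liquid without a phase change."""
    return mass_kg * SPECIFIC_HEAT_WATER * delta_t_k

def latent_heat_kj(mass_kg: float) -> float:
    """Heat absorbed by evaporating the liquid."""
    return mass_kg * LATENT_HEAT_VAP

# Warming 1 kg of water by 10 K absorbs ~41.8 kJ; evaporating the same
# kilogram absorbs ~2,257 kJ -- roughly 54 times as much heat per unit mass.
```

That per-kilogram multiplier is why a phase-change loop can move the same heat with far less coolant flow than a single-phase design.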
Part of the problem with retrofitting an existing plant with liquid-cooled solutions is that doing so can either void OEM warranties on critical components outright, or the new cooling system will have a cost depreciation life cycle that exceeds the warranties of the devices it serves. A company called Asetek, however, has found a work-around by teaming up with Signature Technology Group (STG), a warranty service and support firm that has agreed to maintain coverage for systems that have been upgraded with Asetek’s liquid-cooling technology. In this way, organizations can upgrade to liquid cooling and maintain repair or replacement service on critical data systems. Asetek’s RackCDU platform provides direct-to-chip hot-water cooling that is said to draw off 80 percent of server heat without chilling. The company claims a 50 percent cost reduction, a 2.5-fold increase in server densities and the capture and reuse of excess server energy for facility heating and cooling.
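That capture figure translates directly into a smaller residual load on the room's air handlers. A rough sketch of the arithmetic, with a hypothetical rack-row load (the 80 percent fraction is Asetek's claim, not an independent measurement):

```python
def residual_air_load_kw(server_heat_kw: float,
                         liquid_capture_fraction: float = 0.80) -> float:
    """Heat left for conventional air handling after the direct-to-chip
    warm-water loop carries its share away without chiller assistance."""
    return server_heat_kw * (1.0 - liquid_capture_fraction)

# Hypothetical 200 kW row of racks: ~160 kW leaves via the water loop
# (and is potentially reusable for facility heating), ~40 kW stays on air.
print(residual_air_load_kw(200.0))
```

With only a fifth of the heat reaching the air side, the same A/C plant can support far denser racks, which is where the claimed 2.5-fold density increase comes from.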
Implementation of a liquid-cooled system is not a decision to be made lightly. While the technology is sound, the engineering challenges can be significant, since it involves renovations to both data and facilities infrastructure. Liquid cooling is clearly a viable option for new construction. But if your data center has already seen its fair share of upgrades to its air-cooled platform – ranging from energy-efficient units to raised flooring to hot/cold-aisle reconfiguration – and the need for increasingly dense architectures remains paramount, the switch to liquid cooling may well be the next logical step.