    Giving Hot Chips a Cool Bath

    In the never-ending quest to keep IT equipment cool, enterprises have become increasingly open to the idea of liquid-based heat exchange, if only because air-cooled systems cannot keep up with ever-denser systems and architectures.

    To date, most of these approaches involve piping cool water through equipment racks or bathing key components in non-conductive dielectric fluids. But now that high-performance computing (HPC) is making its way into the data center in the form of converged and hyperconverged infrastructure, some developers are angling to bring liquid cooling directly to the microprocessor.

    At some of the world's leading research facilities, direct-to-chip liquid cooling is emerging as a necessity given the high wattage of the newest chip designs. Japan's Oakforest-PACS system, for example, uses an Asetek D2C system across more than 8,200 nodes built on Intel's Knights Landing architecture. Rather than relying on heat sinks mounted on the processors, the design takes a distributed pumping approach that places integrated pumps and cold plates inside the server and blade nodes. Asetek says this is more efficient than centralized pumping and reduces the risk of failure. In addition, the low-pressure design of distributed pumping can be adapted easily to OEM server designs and can also cover memory, voltage regulators (VRs) and other high-wattage components.
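
    The arithmetic behind a cold plate is a simple energy balance. The sketch below is a rough, illustrative calculation rather than Asetek design data: it assumes a 215 W processor and a 10 °C rise in coolant temperature from inlet to outlet, both placeholder figures, and estimates the water flow each in-node pump would have to move.

    # Back-of-envelope flow-rate estimate for a direct-to-chip cold plate.
    # Illustrative assumptions (not vendor data): 215 W chip, 10 K coolant rise.

    CHIP_POWER_W = 215.0          # assumed processor power dissipation
    COOLANT_DELTA_T_K = 10.0      # assumed inlet-to-outlet temperature rise
    WATER_CP_J_PER_KG_K = 4186.0  # specific heat of water
    WATER_DENSITY_KG_PER_L = 1.0  # approximate density of water

    # Energy balance: P = m_dot * c_p * delta_T  ->  m_dot = P / (c_p * delta_T)
    mass_flow_kg_s = CHIP_POWER_W / (WATER_CP_J_PER_KG_K * COOLANT_DELTA_T_K)
    volume_flow_l_min = mass_flow_kg_s / WATER_DENSITY_KG_PER_L * 60.0

    print(f"Required flow: {mass_flow_kg_s:.4f} kg/s "
          f"(~{volume_flow_l_min:.2f} L/min per cold plate)")

    At these assumed numbers the answer works out to roughly a third of a liter per minute per cold plate, which is why small pumps integrated into each node can do the job.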

    Meanwhile, the Defense Advanced Research Projects Agency (DARPA) is targeting chip-level liquid cooling with its ICECool Applications (Intrachip/Interchip Enhanced Cooling) program. Among the results is a microfluidic approach developed by Lockheed Martin that uses a combination of axial micro-channels, radial passages and other techniques to direct liquid in and around the chip. The company is also experimenting with micro-pores and minuscule manifold structures to create liquid jets that can target specific heat flux and density issues. With these approaches, the company says it can produce a four-fold reduction in thermal resistance while dissipating a die-level heat flux of 1 kW/cm2.
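
    To see why a four-fold cut in thermal resistance matters at that heat flux, consider the basic relation delta_T = R_th * P. The resistance values in the sketch below are hypothetical placeholders, not Lockheed Martin's published figures; the point is only to show how the temperature rise above the coolant scales.

    # Why a four-fold cut in thermal resistance matters at 1 kW/cm^2.
    # The resistance values below are illustrative placeholders, not
    # Lockheed Martin's published figures.

    DIE_AREA_CM2 = 1.0            # assumed die area
    HEAT_FLUX_W_PER_CM2 = 1000.0  # 1 kW/cm^2 target cited by the ICECool work
    R_TH_BASELINE_K_PER_W = 0.20  # hypothetical conventional junction-to-coolant resistance
    R_TH_MICROFLUIDIC = R_TH_BASELINE_K_PER_W / 4.0  # four-fold reduction

    power_w = HEAT_FLUX_W_PER_CM2 * DIE_AREA_CM2

    # Temperature rise above coolant: delta_T = R_th * P
    for label, r_th in [("baseline", R_TH_BASELINE_K_PER_W),
                        ("microfluidic", R_TH_MICROFLUIDIC)]:
        print(f"{label}: delta_T = {r_th * power_w:.0f} K above coolant")

    At the assumed baseline resistance the die would sit some 200 °C above the coolant, which no silicon would tolerate; a quarter of that resistance brings the rise down to a workable 50 °C.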

    IBM is also contributing to the ICECool program with a non-conductive fluid approach to cooling 3-D chip stacks. The company says this is superior to a water-based approach because the dielectric fluid requires no protective shielding around electrical components, allowing a taller chip stack without adding size, weight, power consumption or cost. With this approach, IBM says it can place memory directly onto the stack to improve the speed of high-throughput applications like graphics processing and deep learning. Researchers also say the same fluid can be used in the data center to convert heat to vapor within hardware modules, eliminating the need for expensive computer room air conditioners (CRACs) and other air-handling devices. (Disclosure: I provide content services to IBM.)
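
    The appeal of boiling the coolant rather than merely warming it comes down to latent heat. The sketch below compares the two modes using ballpark properties for a generic engineered dielectric fluid, not the specific fluid IBM is working with, and an assumed 100 W module.

    # Rough comparison of single-phase vs. two-phase heat removal with a
    # dielectric fluid. Fluid properties are representative ballpark values
    # for an engineered dielectric coolant, not IBM's specific fluid.

    MODULE_POWER_W = 100.0            # assumed power of one hardware module
    LATENT_HEAT_J_PER_KG = 100_000.0  # ~100 kJ/kg, typical order for dielectric fluids
    CP_J_PER_KG_K = 1_100.0           # liquid specific heat, ballpark
    DELTA_T_K = 10.0                  # allowed temperature rise, single-phase case

    # Two-phase: heat goes into vaporizing the fluid (P = m_dot * h_fg)
    m_dot_two_phase = MODULE_POWER_W / LATENT_HEAT_J_PER_KG
    # Single-phase: heat only warms the liquid (P = m_dot * c_p * delta_T)
    m_dot_single_phase = MODULE_POWER_W / (CP_J_PER_KG_K * DELTA_T_K)

    print(f"Two-phase flow needed:    {m_dot_two_phase * 1000:.1f} g/s")
    print(f"Single-phase flow needed: {m_dot_single_phase * 1000:.1f} g/s")

    Under these assumptions, vaporizing the fluid moves the same heat with roughly a tenth of the flow, which is what makes it practical inside a dense chip stack.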

    And for those of you who just happen to have some liquid nitrogen on hand, AMD has been showing how its new Threadripper 1950X CPU can push clock speeds to new heights at extremely low temperatures. At SIGGRAPH last month, the company set up a controlled environment in which liquid nitrogen (LN2) was used to push the chip from its base speed of 3.4 GHz to 5.2 GHz, earning it a score of 4,122 on the Cinebench test. According to Digital Trends, the previous high score for a 16-core processor was 2,867. In addition to the liquid nitrogen, the company slowed down the PCIe lanes and increased the voltage.

    Liquid is far more efficient than air when it comes to removing heat from electronic components. To date, however, air systems have been sufficient for most data centers and are easier and less costly to install.
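
    A quick comparison makes that efficiency gap concrete. The figures below (a 30 kW rack and a 10 °C coolant rise) are illustrative assumptions rather than measurements, but the ratio they produce reflects the basic physics: the same heat load calls for thousands of times more air by volume than water.

    # Why liquid beats air: coolant volume needed to remove 30 kW from a rack
    # at a 10 K temperature rise. The rack power and delta-T are illustrative.

    RACK_POWER_W = 30_000.0
    DELTA_T_K = 10.0

    # Approximate fluid properties at room conditions
    AIR_CP, AIR_DENSITY = 1_005.0, 1.2        # J/(kg*K), kg/m^3
    WATER_CP, WATER_DENSITY = 4_186.0, 998.0  # J/(kg*K), kg/m^3

    def volume_flow_m3_s(cp: float, density: float) -> float:
        """Volumetric flow from the energy balance P = m_dot * cp * delta_T."""
        return RACK_POWER_W / (cp * DELTA_T_K) / density

    air_flow = volume_flow_m3_s(AIR_CP, AIR_DENSITY)
    water_flow = volume_flow_m3_s(WATER_CP, WATER_DENSITY)

    print(f"Air:   {air_flow:.2f} m^3/s (~{air_flow * 2118.88:.0f} CFM)")
    print(f"Water: {water_flow * 60_000:.1f} L/min")
    print(f"Volume ratio (air/water): {air_flow / water_flow:,.0f}x")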

    But now that Big Data and the IoT are on their way, the enterprise needs to rethink its cooling strategy for a new era. And since liquid infrastructure is far easier to install before the racks become populated, now is the time to determine future density levels in the data center and think about how to get rid of all that heat.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

     
