Keep Cool, and Carry on to Hyperscale

Arthur Cole

It has long been clear that hyperscale data environments would require new approaches to cooling. But now that more facilities are coming online, we are starting to see the first glimpses of what works and what doesn’t.

With hyperscale facilities being so large and their compute environments so dense, standard air-handling systems cannot remove heat effectively – at least, not in a financially and environmentally responsible manner. This has led to a number of innovative approaches, most of them involving some form of liquid cooling.

Dell, eBay and Intel recently announced the results of a six-year collaboration to use plain water to cool sensitive electronic equipment. This is a departure from most liquid cooling systems, which usually involve specialized dielectrics to prevent, well, the obvious. The eBay project, which has since become the basis for Dell’s Triton platform, is engineered with anti-leakage technology that draws water close enough to specially developed Xeon E5 processors to provide a highly effective heat sink without coming into contact with the electronics. In this way, the system uses an inexpensive – and in some cases, completely free – coolant that can boost data performance by 60 percent over standard configurations and still maintain a power usage effectiveness (PUE) rating of 1.03 or better.
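For readers unfamiliar with the metric, PUE is simply total facility power divided by the power consumed by the IT equipment itself, so a score of 1.0 means zero overhead for cooling and power delivery. A minimal sketch of the arithmetic, using hypothetical load figures (the article gives only the 1.03 rating, not the underlying numbers):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT equipment power.

    1.0 is the theoretical ideal; legacy data centers often run well above 1.5.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical deployment: 100 kW of IT load with only 3 kW of
# cooling and distribution overhead yields the PUE the article cites.
print(round(pue(103.0, 100.0), 2))  # 1.03
```

The takeaway is how thin the margin is: at a PUE of 1.03, only about 3 percent of the facility’s power goes to anything other than compute.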

Meanwhile, multiple start-ups are focusing on liquid cooling as a key growth area in the hyperscale computing market, with perhaps some trickle-down to smaller-scale facilities as well. A company called Asetek recently updated its RackCDU (cooling distribution unit) water-based system with an in-rack mounted solution that frees up space in the server room. The company also has a direct-to-chip version that was recently paired with Nvidia’s Tesla P100 for cloud and hyperscale configurations. At the same time, CoolIT is out with a new heat exchange module and active/passive coldplate assemblies designed specifically for Intel’s Xeon Phi.

As the hyperscale market “heats up” (sorry), expect to see a complementary boost to what Technavio describes as the “global data center precision air-conditioning market.” The company predicts a 10 percent compound annual growth rate between now and 2020, with much of the activity taking place in North America and the Asia-Pacific region. The sector includes air, liquid and hybrid systems, as well as maintenance and other services, provided they focus on exchanging heat through highly targeted means rather than the room-level cooling that populates most of the data center footprint today.
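To put Technavio’s 10 percent compound annual growth rate in perspective, compounding works multiplicatively rather than additively. A quick illustration, using an arbitrary base market size of 100 units since the article does not state the sector’s current dollar value:

```python
def project(base: float, cagr: float, years: int) -> float:
    """Project a value forward under compound annual growth."""
    return base * (1 + cagr) ** years

# An arbitrary market of 100 units growing at 10% CAGR for four years
# (roughly the article's "between now and 2020" window):
print(round(project(100.0, 0.10, 4), 1))  # 146.4
```

In other words, a 10 percent CAGR implies the market grows by nearly half over four years, not merely by 40 percent.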

It is important to note, however, that while liquid solutions provide much more effective cooling than air (25 times more effective in the case of just plain water), they still rely on the same basic sink-and-removal process. In all likelihood, this will not be enough to support the megawatt environments of future hyperscale facilities, says tech author Bill Schweber on EE Times. In addition to both liquid and adiabatic cooling, hyperscalers will also have to consider reducing dissipation using variable-speed drives and other methods, deploying modular DC power supplies and increasing server inlet temperatures. In large physical environments, it’s not enough to get the heat off a board; you have to do it in a way that does not impact other systems down the line.
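The “25 times” figure is in line with the difference in thermal conductivity between the two media. A rough back-of-the-envelope check, using standard textbook conductivity values at room temperature (these numbers are my own assumption, not drawn from the article):

```python
# Thermal conductivity at roughly 25 degrees C, in W/(m*K) -- textbook values.
K_WATER = 0.606
K_AIR = 0.026

# Water conducts heat this many times better than air:
ratio = K_WATER / K_AIR
print(round(ratio))  # 23
```

A ratio of about 23 lands in the same ballpark as the article’s claim; and since water also holds far more heat per unit volume than air, the practical advantage in a cooling loop is larger still.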

The more you scale, of course, the greater the cooling challenge, so it will be interesting to see how far designers can push data infrastructure before the laws of thermodynamics impose their limits. To hear some tell it, the data industry will convert to warehouse-size facilities in short order as competitive pressure causes industries across the board to seek the greatest economies of scale.

New approaches to cooling will therefore be a hot commodity (again, sorry) for a while longer, but at some point the scalability train has to reach the end of the line, and the chief obstacle will be a familiar one: too much heat.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
