Data center infrastructure is becoming denser, and this is forcing the enterprise to consider increasingly novel ways to manage power and heat loads.
What was considered experimental – and sometimes a little nuts – just a few years ago is now on the cusp of going mainstream, which is contributing to a general shake-up over what constitutes enterprise data infrastructure and how it is to be apportioned and utilized.
For standard rack configurations, air cooling seems to be the way to go, although designs are becoming more modular and are aiming to deliver high-efficiency cooling through more advanced heat exchangers. A case in point is Schneider Electric’s Ecoflair system, which uses a proprietary polymer heat exchanger in place of the traditional aluminum plate design to reduce corrosion and allow for a more maintenance-friendly architecture. The company says this improves cooling efficiency by 60 percent and can be implemented in facilities rated up to 40 MW.
Things get a bit tricky, however, when organizations embark on hyperconverged infrastructure deployments. Jake Ring, CEO of dcBlox, an Atlanta-based enclosure designer, notes that leading platforms from Nutanix and HPE can easily exceed the 7-10 kW per-cabinet limit of most legacy data centers, which means most installations must go into entirely new facilities geared toward 30 kW or higher. In these cases, the enterprise needs to take particular care to verify the power efficiency claims of its chosen hyperconverged architecture to ensure that the total cost of ownership does not overwhelm available budgets as data loads scale.
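The practical consequence of those per-cabinet limits is easy to see with some back-of-the-envelope arithmetic. The sketch below is illustrative only: the per-node wattage and node count are assumptions, not figures from Nutanix, HPE, or dcBlox, and it ignores switch and PDU overhead.

```python
# Hedged sketch: rack power budgeting for a hyperconverged deployment.
# All figures are illustrative assumptions, not vendor specifications.

def racks_needed(node_count: int, watts_per_node: float,
                 rack_limit_kw: float) -> int:
    """Number of racks required so no rack exceeds its power limit."""
    nodes_per_rack = int(rack_limit_kw * 1000 // watts_per_node)
    if nodes_per_rack == 0:
        raise ValueError("A single node exceeds the rack power limit")
    return -(-node_count // nodes_per_rack)  # ceiling division

# Forty dense nodes drawing ~1.5 kW each (assumed), in a legacy
# 8 kW-per-cabinet facility versus a purpose-built 30 kW one:
legacy = racks_needed(node_count=40, watts_per_node=1500, rack_limit_kw=8)
modern = racks_needed(node_count=40, watts_per_node=1500, rack_limit_kw=30)
print(legacy, modern)  # 8 2
```

Under these assumptions the legacy facility needs four times the cabinet count (and floor space, cabling, and cooling zones) for the same cluster, which is why dense platforms so often end up in new builds.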
One example of this new generation of data center can be found near San Diego, where high-performance hardware designer Cirrascale put some of its own technology to use in a facility rated at 30 kW per rack. According to Data Center Journal, the center will host deep learning and other AI-driven applications on bare-metal infrastructure. The racks are outfitted with combined air- and liquid-cooling systems designed by ScaleMatrix that feature vertical airflow and top-end exhaust, rather than the traditional front-to-back approach, plus a dedicated water supply and circulation system for each cabinet to prevent one cabinet’s temperature from affecting others nearby.
This requires a lot of specialized engineering, of course, which defeats the purpose of hyperconverged infrastructure as a modular, easily deployed technology. However, a Dutch company called Asperitas offers a fully modular, liquid-cooled compute box that it bills as a full plug-and-play solution for high-performance environments. The AIC24 uses immersion technology to bathe components in a non-conductive dielectric capable of drawing away up to 24 kW of heat, cutting cooling costs in half due to the lack of fans and other electrical hardware. Each module can hold up to 48 servers and two switches and can function comfortably in environments with outside temperatures topping 15 degrees Celsius.
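To get a feel for what carrying away 24 kW of heat entails, the standard relation Q = ṁ · c_p · ΔT gives the coolant circulation rate required. The fluid heat capacity and temperature rise below are generic assumptions for a dielectric oil, not Asperitas specifications.

```python
# Hedged sketch: dielectric circulation needed to remove 24 kW of heat,
# from Q = m_dot * c_p * delta_T. Fluid properties are assumed values
# for a typical dielectric coolant, not AIC24 specifications.

def coolant_mass_flow(heat_kw: float, cp_kj_per_kg_k: float,
                      delta_t_k: float) -> float:
    """Mass flow rate (kg/s) needed to absorb heat_kw of heat
    while the coolant temperature rises by delta_t_k kelvin."""
    return heat_kw / (cp_kj_per_kg_k * delta_t_k)

# Assume c_p ≈ 2.1 kJ/(kg·K) and a 10 K temperature rise across the tank:
flow = coolant_mass_flow(heat_kw=24.0, cp_kj_per_kg_k=2.1, delta_t_k=10.0)
print(f"{flow:.2f} kg/s")  # 1.14 kg/s
```

A flow on the order of one kilogram per second is modest plumbing by data center standards, which helps explain how an immersion tank can dispense with fans entirely.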
The density-power/heat paradox will likely be one of the primary inhibiting factors in scale-out data environments. Even as compute and storage resources are pushed to multiple micro data centers on the edge, centralized facilities will continue to grow, both in terms of infrastructure and data loads.
This means more power will have to flow to increasingly smaller spaces, and more heat will have to be pulled away from critical components. And all of this has to happen without pushing overall energy consumption to intolerable extremes.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.