Since my AC is on the fritz today and it’s going to be 100-plus degrees in the Washington, DC, metro area, I thought now would be a good time to take a look at what’s been happening in data center cooling lately.
It turns out, quite a bit.
Probably the most significant development for future data facilities is Google’s deployment of artificial intelligence (AI) to manage cooling equipment at some of its hyperscale centers. Light Reading’s Brian Santo reports that the DeepMind platform has already produced a 15 percent improvement in power consumption, which, for Google, translates into millions of dollars saved per year. DeepMind, developed in Britain and acquired by Google in 2014, uses pattern recognition and machine learning algorithms not only to monitor and adjust cooling conditions but also to recognize what information it lacks to make informed decisions, guiding sensor deployment and other structural upgrades. Google says it is now looking to deploy DeepMind across its global data footprint.
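Google hasn’t published DeepMind’s control logic, but the basic idea – learn a model that predicts efficiency from sensor data, then pick the cooling settings the model says will perform best – can be sketched in a few lines. The Python below is purely illustrative: the features, coefficients and setpoint range are invented for the example, not taken from Google’s system.

```python
# Hypothetical predict-and-adjust cooling loop in the spirit of the DeepMind
# approach. The model here is a toy linear predictor with made-up
# coefficients; a real system would learn from historical sensor data.

import random


def predict_pue(outside_temp_c, it_load_kw, chiller_setpoint_c):
    """Toy stand-in for a trained model that predicts facility PUE
    from current conditions and a proposed chiller setpoint."""
    return (1.05
            + 0.004 * max(outside_temp_c - 20, 0)
            + 0.002 * (it_load_kw / 100)
            + 0.010 * abs(chiller_setpoint_c - 18))


def choose_setpoint(outside_temp_c, it_load_kw, candidates=range(14, 25)):
    """Pick the candidate setpoint with the lowest predicted PUE."""
    return min(candidates,
               key=lambda sp: predict_pue(outside_temp_c, it_load_kw, sp))


if __name__ == "__main__":
    # Simulated readings standing in for a real telemetry feed.
    outside_temp = 35.0 + random.uniform(-2, 2)   # degrees C
    it_load = 1200.0                              # kW of IT load
    setpoint = choose_setpoint(outside_temp, it_load)
    print(f"Recommended chiller setpoint: {setpoint} C, "
          f"predicted PUE {predict_pue(outside_temp, it_load, setpoint):.3f}")
```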
Meanwhile, the method used to measure power and cooling efficiency has gotten a much-needed upgrade, given the widespread criticism that has followed the Power Usage Effectiveness (PUE) metric for the past decade. The Green Grid just released a new multi-metric “Performance Indicator” (PI) model that aims to deliver a broader view of data center operations and provide more accurate guidance as to what needs to be done to improve performance. The new standard is derived largely from Future Facilities’ ACE (Availability, Capacity, Efficiency) assessment method, but it also incorporates PUE plus other metrics such as IT Thermal Conformance and IT Thermal Resilience. The Green Grid also says PI will be a flexible standard that will incorporate new efficiency tools and techniques, or adjust existing ones, as necessary.
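For readers who haven’t worked with these metrics, the sketch below shows roughly what they measure. The PUE formula (total facility energy divided by IT equipment energy) is the standard one; the thermal conformance and resilience calculations, and the inlet temperature limits, are simplified approximations of The Green Grid’s definitions, used here only to illustrate the idea.

```python
# Rough illustration of the kinds of metrics PI combines. The temperature
# limits and sample readings below are invented for the example.

def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: 1.0 is ideal; typical sites run well above."""
    return total_facility_kwh / it_equipment_kwh


def thermal_conformance(inlet_temps_c, limit_c=27.0):
    """Share of IT inlet temperature readings at or below the given limit."""
    within = sum(1 for t in inlet_temps_c if t <= limit_c)
    return within / len(inlet_temps_c)


if __name__ == "__main__":
    print(f"PUE: {pue(1500.0, 1000.0):.2f}")            # 1.50

    normal_op = [22.1, 23.5, 24.0, 26.8, 25.2]          # routine operation
    failure_sim = [25.0, 27.5, 29.1, 30.2, 26.4]        # e.g., one cooling unit offline
    print(f"IT Thermal Conformance: {thermal_conformance(normal_op):.0%}")
    print(f"IT Thermal Resilience:  {thermal_conformance(failure_sim, limit_c=32.0):.0%}")
```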
Vendor solutions are becoming more sophisticated as well, particularly for the high-capacity loads that are becoming commonplace in the era of Big Data and the Internet of Things. Schneider Electric’s new InRow DX system offers a further 50 percent reduction in energy consumption compared to earlier-generation technology while maintaining a narrow 600 mm footprint. The system provides high-density cooling up to 42 kW through advances like brushless variable-speed scroll compressors and electronically commutated (EC) fans that run on DC rather than AC power. It also incorporates a hot-air recirculation prevention system and active flow control, and comes in both fluid- and air-cooled configurations suitable for closet, server room and full data center deployments.
And since power and cooling are closely intertwined, more organizations are starting to tailor their UPS configurations in conjunction with emerging Data Center Infrastructure Management (DCIM) platforms. According to Alan Luscombe, director of U.K.-based Uninterruptible Power Supplies Ltd., a tight relationship between the two improves both data center efficiency and availability and can be accomplished with little more than a standard SNMP (Simple Network Management Protocol) connection. Essentially, the UPS becomes an intelligent device in an integrated, centrally managed power/cooling ecosystem.
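As a rough illustration of what that SNMP connection buys you, the Python sketch below polls a UPS for a few standard UPS-MIB (RFC 1628) battery objects – the same kind of data a DCIM platform would fold into its power/cooling view. The hostname and community string are placeholders, and many vendors expose richer detail through their own enterprise MIBs.

```python
# Minimal sketch of DCIM-style UPS polling over SNMPv2c using pysnmp and the
# standard UPS-MIB (RFC 1628) scalars. Host and community are placeholders.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

UPS_MIB_OIDS = {
    "battery status (2=normal, 3=low, 4=depleted)": "1.3.6.1.2.1.33.1.2.1.0",
    "estimated minutes remaining": "1.3.6.1.2.1.33.1.2.3.0",
    "estimated charge remaining (%)": "1.3.6.1.2.1.33.1.2.4.0",
}


def poll_ups(host="ups.example.com", community="public"):
    """Fetch a few UPS-MIB scalars over SNMPv2c and return them as a dict."""
    results = {}
    for label, oid in UPS_MIB_OIDS.items():
        err_indication, err_status, _, var_binds = next(
            getCmd(SnmpEngine(),
                   CommunityData(community, mpModel=1),  # SNMPv2c
                   UdpTransportTarget((host, 161), timeout=2, retries=1),
                   ContextData(),
                   ObjectType(ObjectIdentity(oid))))
        if err_indication or err_status:
            results[label] = None  # device unreachable or OID unsupported
        else:
            results[label] = int(var_binds[0][1])
    return results


if __name__ == "__main__":
    for label, value in poll_ups().items():
        print(f"{label}: {value}")
```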
Sophisticated technology is not the only way to reduce the power burden, of course. There is still plenty of fat to cut with simple containment policies and turning the lights off at night. But as data infrastructure moves toward hyperscale, fine-grained, low-yield approaches to conservation start to make more sense, because at that size even small percentage gains translate into large absolute savings – for both the planet as a whole and the enterprise balance sheet.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.