Achieving data center efficiency is challenging not only on a technology level but as a matter of perspective as well. With no clear definition of “efficient” to begin with, matters are only made worse by the lack of consensus on how to measure efficiency and place it into some kind of quantifiable construct. At best, we can say that one technology or architecture is more efficient than another, and that placing efficiency as a high priority within emerging infrastructural and architectural solutions at least puts the data industry on the path toward more responsible energy consumption.
The much-vaunted PUE (Power Usage Effectiveness) metric is an unfortunate casualty of this process. The Green Grid most certainly overreached when it designated PUE as the defining characteristic of an efficient data center, but this was understandable given that it is a simple ratio between total energy consumed and the portion devoted to data resources rather than ancillary functions like cooling and lighting. And when implemented correctly, it does in fact provide a good measure of energy efficiency. The problem is that it is easy to game, and it accounts for neither the productivity of the data that low-PUE facilities provide nor the need for some facilities to shift loads between resources and implement other practices that could drive up their ratings.
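The ratio itself is simple enough to express. As a minimal sketch (the function name and the sample figures are illustrative, not drawn from any real facility):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by the
    energy consumed by the IT equipment alone. An ideal facility, with
    zero overhead for cooling, lighting and the like, would score 1.0."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,800 kWh in total while its IT gear uses 1,200 kWh:
print(pue(1800, 1200))  # 1.5
```

The gaming problem the paragraph above describes follows directly from the formula: anything that inflates the denominator (running IT gear harder, or reclassifying loads as "IT") lowers the ratio without saving a single watt.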
This is why ASHRAE, the American Society of Heating, Refrigerating and Air-Conditioning Engineers, recently voted to remove PUE as the basis for its emerging facility efficiency standard in favor of more targeted metrics like the Mechanical Load Component (MLC) and the Electrical Loss Component (ELC), although PUE will remain as an alternate choice for compliance.
Still, PUE remains the benchmark for much of the data industry. The federal government, also known as the largest data consumer in the world, recently set a PUE target of 1.5 (meaning total energy consumption can be no more than 1.5 times what is needed to run the data equipment alone) for existing data centers, and 1.2 for new facilities. The move came as part of the Data Center Optimization Initiative, which seeks to shave more than $1 billion off the federal IT balance sheet by 2018. Any facility that does not meet this standard will be recommended for consolidation or closure, which, according to some estimates, could affect more than half of federal data centers currently in operation – although to be fair, many of these facilities were already targeted for closure under an earlier program.
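The compliance test itself reduces to a simple threshold check against the two targets cited above (function name is illustrative):

```python
def meets_dcoi_target(pue_rating: float, new_facility: bool) -> bool:
    """Check a facility's PUE against the Data Center Optimization
    Initiative targets: 1.2 for new facilities, 1.5 for existing ones."""
    return pue_rating <= (1.2 if new_facility else 1.5)

# An existing facility at PUE 1.4 passes; the same rating fails a new build.
print(meets_dcoi_target(1.4, new_facility=False))  # True
print(meets_dcoi_target(1.4, new_facility=True))   # False
```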
But while most efficiency programs target cooling as the main culprit, the fact is that simply transferring energy from the grid to the various components in the data center produces an inordinate amount of waste as well, says tech consultant Stephen Ohr. A 12-volt feed from a rack-mounted power supply usually runs through a point-of-load (POL) supply to step down the voltage for the various CPUs, memory devices and I/O components, which may require anywhere from 1.2 to 3.3 volts.
At the same time, current requirements can range from 25 amps to more than 200. Each of these transitions produces a slight energy loss that can accumulate to a significant amount across the entire energy train. This is why many power supply firms like Schneider Electric and Analog Devices are touting “digital power management,” which replaces traditional analog voltage regulators with microcontrollers for more precise management of the phase and frequency relationships between systems.
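The way those per-stage losses accumulate along the power train can be sketched with a bit of arithmetic (the stage efficiencies below are illustrative assumptions, not measured figures):

```python
from functools import reduce

def input_power_required(load_watts: float, efficiencies: list[float]) -> float:
    """Upstream power needed to deliver `load_watts` through a chain of
    conversion stages. Each stage wastes (1 - efficiency) of whatever
    passes through it, so losses compound multiplicatively."""
    chain_efficiency = reduce(lambda a, b: a * b, efficiencies, 1.0)
    return load_watts / chain_efficiency

# Hypothetical two-stage train: a 95%-efficient rack supply feeding a
# 90%-efficient POL regulator stepping 12 V down for the silicon.
print(round(input_power_required(100.0, [0.95, 0.90]), 1))  # 117.0
```

Even these modest assumed figures mean roughly 17 watts of overhead for every 100 watts delivered, which is why shaving a point or two of loss per stage, as digital power management aims to do, matters at data center scale.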
Work is also proceeding on the most basic energy-consuming unit in the data center: the silicon chip. ARM Ltd. is working with Taiwan Semiconductor Manufacturing Co. (TSMC) to perfect a 7 nm FinFET (fin-shaped field-effect transistor) that promises to dramatically reduce current leakage in SoC architectures, enabling increasingly dense system configurations (perhaps 10-fold) that run cooler and consume less energy than today’s 10 nm devices. ARM is also working with firms like Cavium Networks to devise low-power solutions for scale-out IoT and Big Data applications.
So in all likelihood, both the standards and the technologies to drive energy efficiency will proceed apace, as will the debate as to what is and is not efficient. And once we start to push hyperscale and hyperconverged technologies into the data mainstream, the argument will no longer be academic but will have dramatic implications for energy and infrastructure development in the coming century.
And as energy usage starts to vary greatly depending on the actual makeup of the data load and the applications that run them, we may just have to settle for an uncomfortable truth: one metric will not fit all use cases.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.