    To Save Energy, Limit Your Conversions

    As the enterprise works to increase energy efficiency in the data center by virtualizing hardware, consolidating workloads and deploying power-sipping components, it might also help to focus on one of the main sources of waste on the facilities side: power conversion.

    Every time power is converted from one form to another on its journey from the utility to the load, and nearly every step is a down-conversion, some of the energy is given off as heat. The latest generation of switch-mode power supplies (SMPSs) uses a range of switching, storage and filtering techniques to cut that loss dramatically compared to earlier analog designs, says Electronic Design’s Lou Frenzel, but the most effective way to drive efficiency is to push higher voltages as close to the end points as possible. Fewer conversions mean less loss, and the power budget that is freed up can go to more capable components in the plant, which in the data center’s case means servers, storage and network devices.
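
    A rough sense of why stage count matters: end-to-end efficiency is simply the product of each stage’s efficiency, so every extra conversion compounds the loss. The short Python sketch below walks through that arithmetic with assumed, illustrative per-stage efficiencies; the stage names and figures are not drawn from any vendor’s published numbers.

        # Illustrative only: assumed per-stage efficiencies, not measured data.
        def chain_efficiency(stages):
            """End-to-end efficiency is the product of per-stage efficiencies."""
            total = 1.0
            for eff in stages:
                total *= eff
            return total

        # Assumed multi-stage chain: UPS double conversion, PDU transformer,
        # 12V server power supply, then a point-of-load regulator.
        legacy = chain_efficiency([0.94, 0.97, 0.92, 0.88])

        # Assumed consolidated chain: one rectifier to a high-voltage bus,
        # then a single stage down to the point of load.
        consolidated = chain_efficiency([0.96, 0.92])

        print(f"Multi-stage chain:  {legacy:.1%} of input power reaches the load")
        print(f"Consolidated chain: {consolidated:.1%} of input power reaches the load")

    Under these assumptions, roughly a quarter of the input power is lost in the longer chain versus a little over a tenth in the shorter one, which is the intuition behind pushing higher voltages closer to the end points.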

    This thinking is at the heart of Google’s recent contribution to the Open Compute Project. The company has designed a rack that distributes 48-volt DC power rather than the 12 volts specified in the current OCP design. Not only does this eliminate a conversion step on the way to the board, it fosters high-performance computing (HPC) in the data center. Google, in fact, already processes most of its load on 48V servers of its own design, powered by 48V lithium-ion UPS systems rather than AC power from the grid. The company began experimenting with 48V architectures as early as 2009, when engineers realized that this change alone could produce a 30 percent efficiency improvement over 12V.

    Still, it isn’t realistic to expect everything in the rack to run at 48V, so development is underway to squeeze as much efficiency as possible out of the conversions that do take place. STMicroelectronics has contributed a new set of power-conversion ICs to the Google architecture that provide direct digital conversion from inputs ranging from 36 to 72 volts to outputs ranging from 12 volts down to 0.5 volts. The devices are compatible with Intel’s Haswell, Broadwell and Skylake processors, as well as DDR3/4 memory and virtually all FPGAs and ASICs designed to meet data center requirements. The top-end STRG06 device manages up to six converters in parallel and supports output power from 50 watts to 300 watts.
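
    To see why paralleling converters matters at these output levels, consider the currents involved. The quick calculation below uses the 300-watt figure and the output-voltage range above purely as round illustrative numbers, not as a statement of how the ST parts are rated at each voltage.

        # Illustrative arithmetic: current needed to deliver 300 W at various
        # output voltages. Low-voltage rails imply very large currents, which
        # is why conversion is split across parallel phases close to the load.
        for volts in (12.0, 5.0, 1.0, 0.5):
            amps = 300.0 / volts
            print(f"300 W at {volts:>4} V requires {amps:>5.0f} A")

    At 0.5 volts, 300 watts works out to 600 amps, a load no single converter or long board trace handles gracefully, which is why conversion is both paralleled and placed as close to the silicon as possible.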

    Meanwhile, Vicor has a new 48V direct-to-point-of-load (PoL) system that allows low-voltage, high-current processors and memory devices to run off a 48V distribution bus, which the company says can reduce distribution loss 16-fold compared to 12V architectures. Distributing power at 48 volts also lets system designers shrink the footprint of the power infrastructure with smaller cables, bus bars and storage capacitors, which in turn leads to higher-density configurations that support not only data infrastructure but edge devices and even LED lighting. The system can also be configured with a digital control and telemetry module for applications that require advanced power balancing, as well as PMBus and SVID control interfaces that support VR12, VR12.5 and VR13 server processor power delivery.
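
    The 16-fold figure is consistent with simple resistive-loss arithmetic: for a fixed power draw, distribution current scales inversely with voltage, and I²R loss in cables and bus bars therefore scales inversely with the square of the voltage. The sketch below uses assumed load and resistance values chosen only for illustration; the 12V-to-48V comparison is the only detail taken from the article.

        # Illustrative values: an assumed load and distribution resistance,
        # chosen only to show the scaling. Loss = I^2 * R, and I = P / V.
        power_w = 600.0          # assumed load drawing power from the bus
        resistance_ohm = 0.002   # assumed cable/bus-bar resistance

        for bus_volts in (12.0, 48.0):
            current_a = power_w / bus_volts
            loss_w = current_a ** 2 * resistance_ohm
            print(f"{bus_volts:>4} V bus: {current_a:>5.1f} A, {loss_w:.2f} W lost in distribution")

    Quadrupling the voltage cuts the current to a quarter and the resistive loss to a sixteenth, which is where the 16x claim for moving from 12V to 48V comes from.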

    Reconfiguring a legacy data center’s power distribution infrastructure is not an easy task. The most logical time to do it is during the normal refresh cycle, which unfortunately is often much longer at the rack level than at the component level.

    Greenfield deployments, however, would be ripe for 48V architecture, particularly in the cloud where HPC infrastructure can be leveraged in all kinds of ways. And with Big Data and the IoT driving demand for both scale-out infrastructure and highly automated processes, standardization all the way down to the power supply may become more of a necessity than a luxury before too long.

    But no matter how it’s done, reducing the number of power conversions in the data center is a sure-fire way to improve performance and lower costs. As with any distribution channel, you can get products to consumers faster and cheaper by cutting out the middlemen.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
