You can reduce power in the data center in several ways, from more efficient hardware designs to advanced load balancing and infrastructure management software. But sometimes the direct approach is the most effective: If you want to lessen the power draw, employ low-power hardware.
On the IT side of the house, power consumption is largely a function of the processors you choose. And with new generations hitting the channel that promise greater performance in a lower power envelope than current devices, many organizations will see their power consumption drop as part of the normal hardware refresh cycle.
ARM processors, for example, are poised to make a big push into the enterprise in the coming year. ARM developers have been talking up server-class designs for years, but only recently has a critical mass of actual products entered the channel, says IDG’s Agam Shah. New models from Gigabyte, Inventec, Wistron, Penguin Computing and E4 resemble their x86 counterparts but are built on the 64-bit ARM architecture instead. Some also pair their ARM processors with Nvidia GPUs to support advanced graphics and HPC applications.
Two can play the low-power game, of course, and Intel is not ready to let a lucrative market like enterprise servers slip away so easily. The company is putting the finishing touches on the Denverton processor, which will be based on the low-power Atom architecture and targeted at lighter workloads. Denverton improves on the current Avoton low-power line in several ways: it will have 16 cores instead of eight, and these will be the newer Goldmont cores rather than the Silvermont cores found in current Avotons. Denverton will also feature 2 MB of Level 2 cache per dual-core module and will support up to 128 GB of DDR4-2400 memory vs. Avoton’s 64 GB of DDR3-1600. Still unclear, though, is how its power envelope will compare with the latest ARM solutions.
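Taken together, those figures imply a fairly consistent generational step. A quick sketch, using only the numbers quoted above (reported figures, not official Intel datasheet values), makes the multipliers explicit:

```python
# Spec comparison based solely on the figures quoted above;
# these are reported numbers, not official Intel datasheet values.
avoton = {"cores": 8, "max_mem_gb": 64, "mem_mt_s": 1600}       # Silvermont cores, DDR3-1600
denverton = {"cores": 16, "max_mem_gb": 128, "mem_mt_s": 2400}  # Goldmont cores, DDR4-2400

for key in avoton:
    ratio = denverton[key] / avoton[key]
    print(f"{key}: {avoton[key]} -> {denverton[key]} ({ratio:.1f}x)")

# Cores and maximum memory capacity both double, while the
# per-channel memory transfer rate rises 1.5x (1600 -> 2400 MT/s).
```

Whether that translates into double the throughput per watt is exactly the open question the power-envelope comparison with ARM will answer.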
Even workloads that require heavy graphics acceleration are seeing new low-power hardware options, but again, deployment will depend largely on the needs of the application. Cutting-edge environments like deep neural networks are probably better served by Nvidia’s high-end Tesla M40 GPU, for example, while increasingly common applications like machine learning and video streaming would do well with the M4, which cuts power consumption roughly 10-fold compared to a standard CPU.
On the network as well, big savings can come in small packages, says Power Electronics. In this case, that means the emerging field of silicon photonics, which offers the possibility of reducing the roughly 10 W draw of a typical 10 GbE copper interconnect to about 0.2 W, cutting the annual bill from $3,500 to $70. With many enterprises looking to push 100 Gbps onto the data center interconnect (DCI) and even the campus LAN, silicon photonics offers a way to ramp up throughput quickly without breaking the bank.
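Those numbers are worth a quick sanity check, since at scale the ratio matters more than any single port. A minimal sketch, assuming continuous operation and an illustrative electricity rate of $0.10/kWh (the rate and the per-port framing are assumptions, not figures from the article):

```python
HOURS_PER_YEAR = 8760  # 24 hours * 365 days

def annual_kwh(watts: float) -> float:
    """Energy drawn by a device running continuously for one year."""
    return watts * HOURS_PER_YEAR / 1000

COPPER_W, PHOTONIC_W = 10.0, 0.2   # per-interconnect draw cited above
RATE = 0.10                        # assumed $/kWh, illustrative only

copper_cost = annual_kwh(COPPER_W) * RATE      # raw electricity: ~$8.76/yr
photonic_cost = annual_kwh(PHOTONIC_W) * RATE  # raw electricity: ~$0.18/yr

# Whatever the cost basis, the ratio matches the cited $3,500 -> $70 drop:
print(f"power ratio: {COPPER_W / PHOTONIC_W:.0f}x")  # 50x
print(f"savings: {1 - PHOTONIC_W / COPPER_W:.0%}")   # 98%
```

Note that raw electricity at this rate comes to only a few dollars per interconnect per year, so the much larger dollar figures cited evidently fold in facility-level overhead such as cooling and power distribution, or fleet scale; the 50x (98 percent) reduction holds on any cost basis.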
Low-power systems are an effective means of overcoming the data center’s reputation as an energy hog, but they shouldn’t replace an active energy management program that addresses all facets of the data ecosystem. Too often, power consumption is driven by a mismatch between workloads and available resources, something only a highly sophisticated automation system can address.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.