Ready or not, the enterprise is about to make the jump to hyperscale. According to technology prognosticators, within the next decade or so, enterprise data infrastructure will either be very large or nonexistent. You’ll either have to massively scale your architecture up or out, or you’ll be leasing capacity on someone else’s.
Much of this growth will take place on the logical/virtual layer, but at some point data has to encounter the real world. That means expanding physical-layer infrastructure, which inevitably leads to issues over power and cooling, or more precisely, their cost.
Of course, highly dense, broadly scaled infrastructure is nothing if not efficient. Like any volume-based commodity, data's unit economics (in this case, performance per watt) improve as the numbers scale up. But this glosses over the fact that energy consumption, and thus operating expenses, will still rise as infrastructure expands. In the end, you are able to do more, but you must pay more for the privilege.
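The tension between improving efficiency and rising absolute consumption is easy to see with some back-of-the-envelope arithmetic. The figures below are purely hypothetical illustrations, not measurements from any vendor:

```python
# Hypothetical figures: efficiency (work per watt) improves with scale,
# but total draw still climbs as the infrastructure expands.

def total_power_kw(workload_units: float, perf_per_watt: float) -> float:
    """Power needed to serve a workload at a given efficiency."""
    return workload_units / perf_per_watt / 1000.0  # watts -> kW

# Today: 1M workload units at 50 units per watt
today = total_power_kw(1_000_000, 50)      # 20.0 kW
# After scale-out: 5x the workload, efficiency doubled to 100 units per watt
future = total_power_kw(5_000_000, 100)    # 50.0 kW

# Cost per unit of work falls, yet the absolute power bill still rises 2.5x.
assert future / today == 2.5
```

Efficiency per unit of work doubles, but the utility bill still grows with the workload, which is exactly the "do more, pay more" trade-off described above.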
"If you look at how network technology is evolving, it doesn’t appear that SDN and other solutions will produce much more efficiency than what we have today," said Jeff Lucket, director of engineering at MBX Systems, a developer of dedicated data appliances. "But as you project things out, it is clear that the network will be a lot more power-hungry than it is today."
Ideally, enterprises will find ways to expand capacity and performance as much as possible while maintaining their current power envelope. This isn’t as easy as it sounds, though. Networking has always been parsimonious when it comes to energy consumption, particularly compared to its server and storage brethren, and as a result has largely been overlooked by the Green IT movement of the past decade. But now that networking, too, has joined the virtualization party, with all its attendant scalability issues, the search is on for new networking architectures that can ramp up performance without pushing energy consumption to unsustainable levels.
The most direct and obvious route to energy-efficient networking lies in hardware. It’s worth noting that even newly virtualized server and storage environments tend to increase hardware footprints, at least at first. And since power consumption is primarily a function of processor architecture, it seems reasonable that low-power devices, primarily ARM-based designs, will come into play. Companies like Freescale and Broadcom have recently introduced new ARM-based communications processors, but as Freescale Director of Marketing Nikolay Guemov points out, it isn’t a simple matter of trading high-power cores for low-power ones.
[Image: Freescale LS1021A ARM communications processor diagram]
"The best way to save power is by offloading well-defined tasks from the CPU core to optimized accelerators," he said. "Things like security, packet mapping and inspection or router services like anti-spam go into an ARM-based accelerator, and this reduces the load on the core, which means you need fewer cores in the system. The challenge is that as designers add more capabilities into the system, our chips have to handle more tasks, even though the ARM may be only a portion of the overall SoC."
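Guemov's point can be sketched with a rough capacity model. All the numbers below (packet rates, cycle counts, clock speed) are assumed for illustration, not taken from any Freescale datasheet: moving fixed-function work to accelerators cuts the per-packet cost on the general-purpose cores, so fewer cores are needed for the same line rate.

```python
# Rough, hypothetical model of offloading to accelerators: fewer CPU
# cycles per packet on the general-purpose cores means fewer cores.
import math

PKTS_PER_SEC = 10_000_000
CORE_HZ = 1.2e9  # assumed ARM core clock

def cores_needed(cycles_per_pkt: int) -> int:
    """Cores required to sustain the packet rate at a given per-packet cost."""
    return math.ceil(PKTS_PER_SEC * cycles_per_pkt / CORE_HZ)

# All work (security, inspection, routing services) done on the cores:
print(cores_needed(900))   # 8
# Well-defined tasks moved to dedicated accelerators:
print(cores_needed(300))   # 3
```

Fewer active cores is where the power saving comes from, even though, as Guemov notes, the accelerators themselves now occupy a growing share of the SoC.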
The best part about new processor designs is that they will likely become standard features of future OEM platforms, so the energy reduction they provide will take hold in the network during the normal hardware refresh cycle.
This isn’t the case with software-based energy management. Developers have made great strides in balancing loads across server and storage resources and in reducing or eliminating power to idle ones, but in the network, the energy savings don’t justify the investment needed to implement such end-to-end functionality. That doesn’t mean network management has no value at all when it comes to saving energy, however.
For one thing, says Sanjay Casteline, vice president at IT management firm SolarWinds, effective monitoring can provide a clear view of how systems and architectures function today, and thus guide energy-efficient decisions as capabilities are expanded.
"By tracking how energy is used and consumed on the network, you can then make decisions as to how to reduce the draw by shutting down particular routers, improving resource utilization and flexibility – things like that," he said. "But if you’re looking to dynamically switch ports on and off to match loads, the power supply supporting that switch will still run at the same level, so you’ll probably end up wasting energy."
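The decision logic implied by that observation is worth making concrete. The sketch below is a hypothetical data model (not the SolarWinds API): because a switch's power supply draws roughly the same amount regardless of how many ports are active, the useful shutdown candidates are whole devices whose ports are all nearly idle, not individual ports.

```python
# Minimal sketch of monitoring-driven power decisions (hypothetical data).
# Toggling individual ports saves little because the PSU keeps running;
# powering off a fully idle device saves its entire fixed draw.
from dataclasses import dataclass, field

@dataclass
class Switch:
    name: str
    psu_draw_watts: float                       # fixed draw while powered on
    port_utilization: list = field(default_factory=list)  # 0.0-1.0 per port

def shutdown_candidates(switches, idle_threshold=0.05):
    """Devices where every port is nearly idle."""
    return [s for s in switches
            if all(u < idle_threshold for u in s.port_utilization)]

fleet = [
    Switch("edge-1", 150.0, [0.40, 0.22, 0.01, 0.00]),
    Switch("edge-2", 150.0, [0.01, 0.00, 0.02, 0.00]),
]
idle = shutdown_candidates(fleet)
savings = sum(s.psu_draw_watts for s in idle)
print([s.name for s in idle], savings)   # ['edge-2'] 150.0
```

Only "edge-2" qualifies; shutting down two idle ports on "edge-1" would save essentially nothing, which is the waste Casteline warns about.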
Of course, for organizations that delve into full data center infrastructure management (DCIM), there is no reason why network elements can’t be managed alongside their server and storage counterparts, enabling a more holistic approach to green IT. That is the primary goal behind Cisco’s EnergyWise platform, built largely around assets acquired from Joulex earlier this year. As Tom Noonan, GM for EnergyWise Solutions, noted in a recent web presentation, the system’s agentless approach to monitoring gives it the ability to scale to many thousands of devices, including routers, switches and access points.
"We are focused on 100 percent visibility, giving us the ability to understand what the capacity is and where are we utilizing energy," he said. "And we’re tying this into the growth of the UCS platform, moving up the stack to provide the means to monitor, measure and manage the energy utilization of everything [in the data environment], not only on the physical layer, but the virtual as well."
Enterprises that embrace scale up/out architectures may also need to confront the potential need for an entirely new power infrastructure at some point. Dave Sonner, VP of the AC Power unit at Emerson Network Power, notes that today’s hyperscale environments, like Facebook and Google, do away with traditional UPS supplies in favor of a dual-fed approach featuring utility-supplied AC power with DC battery backup.
"We’re finding it is much more efficient to bypass the traditional energy conversion process by drawing the majority of power from the AC supply," he said. "DC is only there in the event of a loss of the AC source."
Unfortunately, since most data center power systems have refresh cycles in the 15-20 year range, putting this kind of technology in place would require some major rip and replace. But the industry has devised a number of stopgap measures, such as The Green Grid’s Eco Mode, which allows utility AC to be fed through existing UPS systems. The idea is to drive energy efficiency to near 100 percent while keeping the UPS’s internal conversion process on standby for instant restoration in case of an outage.
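A quick calculation shows why a few points of UPS efficiency matter at data center scale. The load, efficiency figures and electricity price below are assumptions chosen to be in typical published ranges, not measurements from any site:

```python
# Back-of-the-envelope comparison (all figures assumed): a traditional
# double-conversion UPS at ~94% efficiency versus eco mode at ~99%.

IT_LOAD_KW = 500.0
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10  # USD, assumed utility rate

def annual_input_kwh(load_kw: float, efficiency: float) -> float:
    """Utility energy drawn per year to deliver the IT load."""
    return load_kw / efficiency * HOURS_PER_YEAR

double_conversion = annual_input_kwh(IT_LOAD_KW, 0.94)
eco_mode = annual_input_kwh(IT_LOAD_KW, 0.99)
savings_usd = (double_conversion - eco_mode) * PRICE_PER_KWH
print(round(savings_usd))  # 23533
```

Roughly $23,500 a year for this hypothetical 500 kW load, from a change that leaves the existing UPS hardware in place on standby.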
For some, giving up the UPS in the data center is tantamount to heresy. Uninterruptible power is the first requirement of five-nines data availability. But with the power efficiency tools and techniques currently filtering throughout the entire data infrastructure, it seems that decision can be put off for a little while.
Right now, the primary challenge is to scale up performance without breaching the existing power envelope. Since the enterprise has a tradition of over-provisioning power and other resources to handle occasional volume spikes, plenty of wiggle room remains.
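How much wiggle room over-provisioning actually buys is a simple subtraction. The provisioned budget and peak draw below are hypothetical numbers for illustration:

```python
# Headroom check under assumed figures: over-provisioned facilities can
# absorb growth inside the existing power envelope for a while.

provisioned_kw = 1000.0   # facility power budget
peak_draw_kw = 650.0      # observed peak demand

headroom_kw = provisioned_kw - peak_draw_kw
growth_factor = provisioned_kw / peak_draw_kw
print(f"{headroom_kw:.0f} kW of slack, "
      f"~{growth_factor:.2f}x growth at today's efficiency")
```

In this example the envelope tolerates roughly 1.5x growth before anything has to change, after which the choice in the next paragraph arrives.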
But at some point, if data loads continue to march inexorably higher, the enterprise will be faced with a stark choice: continue to build out infrastructure or shift all future volumes to the public cloud.