Evidence is mounting that momentum toward an ever-greener, more efficient data center is starting to flag, which leads to the question: What, if anything, should be done about it?
To be sure, the goal of a fully eco-friendly enterprise is probably unobtainable. Even so-called renewable technologies like wind and hydro interfere with fish and fowl migrations, while solar panels are the product of a complex chemical and mechanical process that produces a fair amount of toxic waste. Still, the immediate concerns, primarily atmospheric carbon and other gases, can at least be mitigated by driving greater efficiency into large energy-consuming facilities like the data center.
The problem, as recently pointed out by the Uptime Institute, is that many of the efforts undertaken so far are producing diminishing returns, forcing the enterprise to spend more to achieve or maintain its efficiency targets. The good news is that nearly three quarters of data center executives, and nearly all colocation managers, now track metrics like Power Usage Effectiveness (PUE); the bad news is that the average rating across the industry bumped up slightly to 1.7 following several years of steady decline.
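The PUE metric itself is simple arithmetic: total facility energy divided by the energy delivered to IT equipment, so a 1.7 rating means the facility spends 0.7 watts on cooling, power distribution and other overhead for every watt of actual compute. A minimal sketch in Python (the figures are illustrative, not drawn from the Uptime survey):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy.

    1.0 is the theoretical ideal (zero overhead); the industry
    average cited above is about 1.7.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

print(pue(1700.0, 1000.0))  # prints 1.7
```

Note that PUE says nothing about how efficiently the IT equipment itself uses its power, which is one reason the metric can plateau even as facilities keep investing.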
One of the key shifts taking place at the moment is the decline of in-row and rack-based cooling solutions, says research firm IHS. In 2013, revenue for row/rack solutions fell about 6 percent, while perimeter cooling dropped 3 percent. This is likely due to several factors, says study author Elizabeth Cruz. One is that more data infrastructure is being outsourced, diminishing the need for on-site cooling; the other is that centralized cooling systems are becoming more effective, reducing the need for supplemental systems. These trends could reverse in short order, however, should rack densities push past the 8-10 kW range.
Of course, the simplest way to drive greater data center efficiency, although not the fastest, is to incorporate new tools and capabilities into the normal hardware refresh cycle. Schneider Electric, for example, recently integrated Vigilent Corp.’s Cooling Optimize management module into its StruxureWare platform to provide deeper insight into energy and cooling patterns in production data environments. The combination delivers advanced monitoring, analysis and closed-loop control for a high level of automation and self-correction in data center temperature and airflow operations. The companies tout cost savings of up to 40 percent with the Cooling Optimize option.
When it comes to hyperscale infrastructure, however, even marginal increases in efficiency can produce substantial gains on the balance sheet. Google, for example, is turning toward increasingly sophisticated systems intelligence and machine learning tools to push PUE to new lows. The company is toying with neural networks and other esoteric approaches to pinpoint minute changes in data patterns that, combined, can add up to big savings. When you’re talking about power consumption in the multiple megawatts, even a 0.1 percent efficiency gain is a big deal.
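Google has not published a drop-in implementation, but the general idea can be sketched with a toy model: learn to predict PUE from operational inputs, then use the model to reason about which settings lower it. The sketch below substitutes a two-input linear model trained by per-sample gradient descent for Google's neural networks; the sensor inputs (outdoor temperature, IT load fraction) and all figures are invented for illustration.

```python
CENTER_T, CENTER_L = 20.0, 0.7  # rough operating means, used to center inputs


def predict(params, temp_c, load_frac):
    """Predict PUE from outdoor temperature and IT load fraction."""
    w1, w2, b = params
    return w1 * (temp_c - CENTER_T) + w2 * (load_frac - CENTER_L) + b


def fit(samples, lr=0.01, epochs=2000):
    """Fit the linear model with simple per-sample gradient descent.

    samples: list of ((temp_c, load_frac), observed_pue) pairs.
    """
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (t, l), y in samples:
            x1, x2 = t - CENTER_T, l - CENTER_L
            err = (w1 * x1 + w2 * x2 + b) - y
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b


# Synthetic training data: warmer weather and lighter load both worsen PUE.
data = [((15.0, 0.9), 1.55), ((25.0, 0.9), 1.65),
        ((15.0, 0.5), 1.70), ((25.0, 0.5), 1.80)]
params = fit(data)
estimate = predict(params, 20.0, 0.7)  # PUE at an unseen operating point
```

The real systems differ in kind, not just degree: neural networks over dozens of correlated inputs, continuously retrained, feeding recommendations back to facility controls. But the loop is the same, which is why small, systematic adjustments can compound into the savings described above.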
In a never-ending quest like energy efficiency, there are bound to be good days and bad days. Most organizations will gladly spend big dollars today for a 10 to 20 percent reduction in energy costs tomorrow, but that rationale breaks down as the back-end benefits diminish.
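The economics are easy to illustrate with hypothetical numbers: the same capital outlay that once paid for itself in a couple of years can take over a decade to recoup once the easy savings are gone.

```python
def payback_years(upfront_cost, annual_energy_bill, pct_reduction):
    """Years to recoup an efficiency investment from energy savings alone."""
    annual_savings = annual_energy_bill * pct_reduction
    return upfront_cost / annual_savings

# A 15% cut to a $1M annual bill repays a $300K project in two years...
print(payback_years(300_000, 1_000_000, 0.15))  # prints 2.0

# ...but once only a 2% cut is on the table, the same spend takes 15 years.
print(payback_years(300_000, 1_000_000, 0.02))  # prints 15.0
```

A fuller model would fold in maintenance, financing and energy price trends, but the shape of the curve is the point: as each incremental gain shrinks, the payback horizon stretches past what most budget cycles will tolerate.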
But looking beyond the numbers, the data industry is already on the hook for being one of the top polluters on the planet, and impressions are everything in the court of public opinion. So even if the latest efficiency measures are not producing the kinds of results we’ve seen in the past, they still provide a certain amount of “green cred” to assure the public that the industry takes the issue of energy consumption seriously and is committed to pursuing as high a degree of efficiency as can be reasonably expected.