One of the reasons energy conservation is such a hot-button issue in the data center these days is that no one has a clear idea how to assess the situation. To be sure, metrics like PUE (Power Usage Effectiveness) are a step in the right direction, but even its backers admit that it is not a perfect solution and should not be used to compare one facility against another. And as I pointed out last month, newer metrics like Data Center Energy Productivity (DCeP) provide a deeper dive into data operations but ultimately rely largely on subjective analysis to gauge the extent to which energy is being put to good use.
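The arithmetic behind PUE itself is simple, which is part of why it caught on; a minimal sketch (the ratio is the standard Green Grid definition, the example numbers are made up for illustration):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    the energy consumed by IT equipment alone. 1.0 is the theoretical
    ideal; real facilities always land somewhere above it."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,500 kWh overall while its IT gear consumes
# 1,000 kWh has a PUE of 1.5 -- the other 500 kWh went to cooling,
# power distribution losses and so on.
print(pue(1500, 1000))  # 1.5
```

Note what the ratio leaves out: it says nothing about whether the IT energy produced useful work, which is exactly the gap DCeP tries to fill.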
Quite naturally, it seems there are as many “ideal metrics” for energy usage as there are IT pros in the field. For Rajat Ghosh, a leading tech researcher at the Georgia Institute of Technology, nothing beats the Resource Allocation Index (RAI), which attempts to tie utilization to actual usage rather than flat fees for service or time designations. In this way, energy consumption and data usage can be based on supply and demand, which should benefit traditional enterprise infrastructure as much as commercial cloud providers juggling multiple clients. Ghosh says RAI allows data center managers to drive energy conservation through “Opex optimization,” in which resources are constantly balanced between under- and over-utilization.
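To make the balancing idea concrete, here is a toy sketch of the kind of rule such optimization implies: nudge capacity back toward a target utilization band. The thresholds and the one-server-at-a-time scaling rule are assumptions for illustration, not Ghosh's actual RAI method:

```python
def rebalance(active_servers: int, utilization: float,
              low: float = 0.3, high: float = 0.8) -> int:
    """Return a new server count that steers utilization back into
    the [low, high] band: add capacity when over-utilized, consolidate
    when under-utilized, otherwise leave things alone."""
    if utilization > high:
        return active_servers + 1          # over-utilized: add capacity
    if utilization < low and active_servers > 1:
        return active_servers - 1          # under-utilized: consolidate
    return active_servers                  # within band: no change
```

The point of the exercise is that both failure modes cost money: over-utilization degrades service, while under-utilization burns energy on idle hardware.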
Odds are, however, that there will never be a single metric to drive data center efficiency, argues eWeek Europe’s Peter Judge. Efficiency for an on-premises facility at a medium-sized enterprise will be a much different affair than at the mega-scale cloud operations favored by Facebook and Amazon. And with much of the industry expected to gravitate toward modular and even micro-infrastructure, energy patterns will likely vary widely over the next decade or so. Indeed, energy usage will be largely dependent on the nature of the data load, with rich media and heavy graphics requiring more support than simple text or database queries.
Rather than engage in a futile hunt for the one metric that will lower your energy bill, then, why not build a measurement and management architecture that incorporates multiple approaches? Companies like TSO Logic are delivering tools like the Data Center Efficiency Console (DEC), which presents multiple metrics on a “widget-style” dashboard. The system reports data on multiple fronts, including cost per transaction, server utilization, data per kWh and even cost per user, all of which can be parsed according to enterprise needs. As CEO Aaron Rallo notes, a single metric will not be enough when the IT industry can’t even agree on what a transaction is.
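The multi-metric approach is easy to picture in code. The sketch below computes the four metrics named above from one set of facility readings; the field names and formulas are my assumptions for illustration, not TSO Logic's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class FacilitySample:
    """One reporting period's worth of facility data (illustrative)."""
    energy_kwh: float
    cost_usd: float
    transactions: int
    active_users: int
    data_served_gb: float
    cpu_busy_hours: float
    cpu_total_hours: float

def dashboard_metrics(s: FacilitySample) -> dict:
    """Derive the dashboard-style metrics from a single sample."""
    return {
        "cost_per_transaction": s.cost_usd / s.transactions,
        "server_utilization": s.cpu_busy_hours / s.cpu_total_hours,
        "data_per_kwh_gb": s.data_served_gb / s.energy_kwh,
        "cost_per_user": s.cost_usd / s.active_users,
    }

# Example: $500 in energy spend, 100,000 transactions, 250 users.
sample = FacilitySample(energy_kwh=1000, cost_usd=500,
                        transactions=100_000, active_users=250,
                        data_served_gb=2000, cpu_busy_hours=600,
                        cpu_total_hours=1000)
print(dashboard_metrics(sample))
```

Each ratio tells a different story, which is the whole argument for the dashboard: cost per transaction can improve while data per kWh worsens, and only seeing both reveals the trade-off.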
With virtualization boosting utilization across much of the data hardware infrastructure, many enterprises are already enjoying dramatically lower energy bills. However, just because things are better than they were does not mean they are as good as they could be.
The proper mix of metrics, analysis tools and monitoring capabilities needed to produce steadily improving energy/data productivity ratios will probably remain elusive for the time being, and will most certainly not result in a one-size-fits-all approach in the end. But at least the enterprise has a broad set of tools at its disposal to keep energy consumption front and center as the transition to flexible, dynamic architectures unfolds.