Will the data center ever be green enough? Probably not, considering it is among the most energy-intensive operations mankind has ever devised.
Yet that shouldn’t stop the progress that has been made over the past few years to make it more efficient.
Part of that effort, of course, will require new ways of thinking about performance and reliability, even if it means calling some long-held practices into question. The Green Grid took another step in this direction with its latest report, Data Center Efficiency and IT Equipment Reliability, in which it points out that most modern equipment is able to operate at much higher temperatures than in the past. That means that, in many cases, mechanical cooling systems can be shut down entirely and the use of “free cooling” with outside ambient air can be greatly expanded.
In part, the report is a recognition that the obstacles to more efficient data operations are not so much technical as cultural. As Wired’s Robert McMillan points out in a recent close-up of Mozilla, IT managers are loath to risk their jobs in the name of efficiency if it increases the risk of failure. The result is that most organizations run resources like memory and network I/O at reasonably high utilization, but CPUs at woefully low levels. In Mozilla’s case, those numbers are estimated at 80 percent, 42 percent and 6 percent, respectively. But the simple fact remains that the typical enterprise does not even know what its utilization rates are, because so little measurement is done in this area.
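Getting a number like that is not hard in principle. A minimal sketch of how a CPU utilization figure might be derived, assuming Linux-style cumulative tick counters such as those in /proc/stat (the helper name and sample values here are hypothetical, for illustration only):

```python
def utilization(sample_start, sample_end):
    """Fraction of time spent busy between two cumulative-tick samples.

    Each sample is a (busy_ticks, total_ticks) pair, as might be derived
    from /proc/stat on Linux: total is the sum of all CPU time fields,
    busy is total minus the idle fields.
    """
    busy = sample_end[0] - sample_start[0]
    total = sample_end[1] - sample_start[1]
    # Guard against a zero-length interval between samples.
    return busy / total if total else 0.0

# Hypothetical tick counts taken one sampling interval apart:
# 60 busy ticks out of 1,000 elapsed ticks -> 6% utilization.
cpu = utilization((1_000, 20_000), (1_060, 21_000))
print(f"CPU utilization: {cpu:.0%}")  # prints "CPU utilization: 6%"
```

Sampled continuously and averaged over a fleet, even a crude counter like this tells an organization where it stands before any re-architecting begins.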
And that’s too bad, because a number of techniques to increase efficiency are relatively easy to accomplish, provided you have a proper awareness of what needs to be done. According to a new white paper from Schneider Electric, performing an effective environmental health check is step one on the road to greater efficiency, followed by the use of blanking panels and efficient cabling methods, row-based cooling and hot/cold aisle containment. As data architectures continue to shift toward higher densities, enterprises will find that these and other changes in operational design will no longer be optional.
And for some truly radical thinking, some researchers are reaching back to the past, the pre-computer past, to glean ideas on how modern networks should be designed. A team at Cornell University has tapped a mathematical treatise from 1889 to imagine a largely wireless data center network that not only does away with cable and switch costs but calls for a highly economical cylindrical server that utilizes Y switches rather than network cards. On paper, at least, such a configuration would not only cost less to build and operate, but dramatically reduce latency as well.
As I said, though, there will probably never be an “efficient” data center, merely varying degrees of efficiency in support of growing worldwide demand for data services.
In that regard, then, it’s self-defeating to view efficiency in terms of fixed “goals” and “objectives.” A wiser approach would be to pursue never-ending progress in lowering the consumption/performance ratio. As long as IT can be seen as working toward a more efficient future, it will be viewed as part of the solution, not the problem.