After spending Friday morning at a Data Center Infrastructure Management (DCIM) conference near Washington, D.C., sponsored by Eaton Corp., Future Facilities and RF Code, I came away with two clear takeaways:
First, the state of the technology is not yet at the point of providing a single, overarching control environment that merges facilities, infrastructure and data management into a dynamic, highly responsive entity capable of shifting resources, power loads and other parameters at a moment's notice.
Second, the simple goal of building efficiency into power and cooling infrastructure will be doubly hard because few enterprises currently have the means to measure and quantify these factors, so there is no real way to tell whether the measures being enacted to improve them are actually working.
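To make the measurement problem concrete: one widely used metric for quantifying power and cooling overhead is Power Usage Effectiveness (PUE), the ratio of total facility power to the power that actually reaches IT equipment. The article does not name PUE, and the figures below are invented for illustration only, but the sketch shows why a baseline measurement is the prerequisite for knowing whether an efficiency program works.

```python
# Illustrative sketch: Power Usage Effectiveness (PUE) as a baseline
# efficiency metric. All numbers below are made-up examples.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (1.0 is ideal)."""
    return total_facility_kw / it_equipment_kw

# Example: 1,500 kW total draw, of which 1,000 kW reaches the IT gear,
# so 500 kW goes to cooling, power conversion and other overhead.
baseline = pue(1500.0, 1000.0)
print(f"Baseline PUE: {baseline:.2f}")  # -> Baseline PUE: 1.50
```

Without a number like this taken before an upgrade, there is nothing to compare the post-upgrade reading against, which is exactly the gap the second takeaway describes.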
To the first point, it seems clear that the DCIM stack will have to embrace a fair amount of openness if it hopes to fulfill the promise of data center energy and data automation. Initiatives like the openDCIM project are moving in this direction, but like any open systems endeavor it needs support from a broad range of IT developers to gain the leverage needed to compete with entrenched proprietary interests. And at the moment, at least, it seems the project is primarily focused on inventory tracking and management to put the facilities side of the house on the same flexibility footing as the data side.
Still, this is a worthy goal because it speaks directly to the second takeaway from Friday's event: it is impossible to get where you are going if you don't know where you are. That's why more enterprises are investing in RF sensors, both as a means to track physical infrastructure and to keep tabs on operating conditions. RF Code, for instance, has developed a new high-performance temperature sensor that maintains long battery life even while providing regular alerts to data center managers. The devices can also monitor temperature variations down to 0.1 degree Celsius, and the system's monitoring software has already been integrated into leading DCIM platforms.
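The alerting workflow these sensors feed is straightforward in principle: readings stream into the DCIM platform, which flags any rack exceeding a temperature threshold. The sketch below is a hypothetical illustration of that pattern, not RF Code's actual API; the sensor IDs and the 27 °C threshold are assumptions for the example.

```python
# Hypothetical sketch of threshold alerting a DCIM platform might layer
# on top of RF temperature sensors. Names and values are illustrative.

ALERT_THRESHOLD_C = 27.0  # assumed upper limit for intake temperature

def check_readings(readings: dict[str, float]) -> list[str]:
    """Return the sensor IDs whose latest reading exceeds the threshold."""
    return [sid for sid, temp in readings.items() if temp > ALERT_THRESHOLD_C]

latest = {"rack-A1": 24.3, "rack-A2": 27.4, "rack-B1": 26.9}
print(check_readings(latest))  # -> ['rack-A2']
```

With 0.1-degree resolution, a platform can act on small drifts like the 26.9 °C reading above before they become alerts, which is the practical payoff of finer-grained sensing.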
At the same time, Future Facilities is working with Intel to devise a new generation of server-embedded thermal sensors that provide a more granular view of hardware operating environments and draw less energy than external devices. In most cases, cabinet sensors have to be kept cool themselves in order to deliver accurate results, and that cooling can consume perhaps 40 percent of their overall power draw. The company has set up a Virtual Facility Model at Intel's San Francisco campus, highlighting the system's load capacity, CFD analysis, airflow and CPU utilization capabilities. Future Facilities also specializes in predictive software modules that allow for a clearer understanding of design and facilities upgrades and their impact on energy consumption and cooling.
DCIM may still be a work in progress, but it represents a field that cannot be ignored for much longer. As IT shifts from small, localized data environments to shared, distributed architectures, the era of the hyperscale data center is about to hit full swing. When the number of servers jumps from the hundreds to the thousands or even tens of thousands, a 1 percent gain in energy efficiency translates into significant operational savings. And for those who might not realize it yet, increased utilization that delays the hardware refresh cycle produces a substantial capex reduction as well.
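The scaling argument is easy to verify with back-of-the-envelope arithmetic. The inputs below (server count, per-server draw, electricity price) are invented assumptions, not figures from the conference, but they show how a 1 percent gain becomes real money at hyperscale.

```python
# Back-of-the-envelope illustration of the 1%-at-scale argument.
# All inputs are assumed values for the sake of the example.

servers = 10_000
avg_watts_per_server = 400     # assumed average draw per server, W
hours_per_year = 8760
price_per_kwh = 0.10           # assumed electricity price, $/kWh

annual_kwh = servers * avg_watts_per_server / 1000 * hours_per_year
annual_cost = annual_kwh * price_per_kwh
savings_1pct = annual_cost * 0.01

print(f"Annual energy bill: ${annual_cost:,.0f}")       # -> $3,504,000
print(f"1% efficiency gain saves: ${savings_1pct:,.0f}")  # -> $35,040
```

At a few hundred servers the same percentage gain is a rounding error; at ten thousand it funds headcount, which is why the efficiency question gets serious precisely as facilities go hyperscale.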
There is also the fact that the data center is still the bull's-eye for most green energy organizations and advocacy groups. And with virtualization rapidly approaching maturity at many facilities, it would be a shame if the progress in energy efficiency exhibited over the last decade were to falter.