    Infrastructure Visibility: The First Step Toward DCIM

    Having spent Friday morning at a Data Center Infrastructure Management (DCIM) conference near Washington, D.C., sponsored by Eaton Corp., Future Facilities and RF Code, I came away with two clear takeaways:

    First, the state of the technology is not yet at the point of providing a single, over-arching control environment that merges facilities, infrastructure and data management into a dynamic, highly responsive entity capable of shifting resources, power loads and other parameters at a moment’s notice.

    Second, the simple goal of building efficiency into power and cooling infrastructure will be doubly hard because few enterprises currently have the means to measure and quantify these factors, so there is no reliable way to tell whether the measures being enacted to improve them are actually working.

    To the first point, it seems clear that the DCIM stack will have to embrace a fair amount of openness if it hopes to fulfill the promise of data center energy and data automation. Initiatives like the openDCIM project are moving in this direction, but like any open-systems endeavor, it needs support from a broad range of IT developers to gain the leverage needed to compete with entrenched proprietary interests. At the moment, at least, the project is primarily focused on inventory tracking and management, putting the facilities side of the house on the same flexibility footing as the data side.

    Still, this is a worthy goal because it speaks directly to the second takeaway from Friday’s event: it is impossible to get to where you are going if you don’t know where you are. That’s why more enterprises are investing in RF sensors, both as a means to track physical infrastructure and to keep tabs on operating conditions. RF Code, for instance, has developed a new high-performance temperature sensor that maintains long battery life even while providing regular alerts to data center managers. The devices can also monitor temperature variations down to 0.1 degree Celsius, and the system’s monitoring software has already been integrated into leading DCIM platforms.
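    To illustrate what that kind of sensor resolution buys you, here is a minimal sketch of rack-temperature alerting in Python, assuming a hypothetical feed of (sensor ID, Celsius) readings and an assumed 27-degree threshold. This is illustrative only, not RF Code's actual software interface.

```python
# Minimal rack-temperature alerting sketch. The sensor IDs, readings
# and threshold below are all illustrative assumptions.

ALERT_THRESHOLD_C = 27.0  # assumed upper bound for the cold aisle


def check_readings(readings, threshold=ALERT_THRESHOLD_C):
    """Return sensor IDs whose reading meets or exceeds the threshold.

    Readings are assumed accurate to 0.1 degree Celsius, so round to
    one decimal before comparing to avoid spurious float-noise alerts.
    """
    alerts = []
    for sensor_id, celsius in readings:
        if round(celsius, 1) >= threshold:
            alerts.append(sensor_id)
    return alerts


sample = [("rack-01", 24.3), ("rack-02", 27.1), ("rack-03", 26.9)]
print(check_readings(sample))  # only rack-02 crosses the threshold
```

    In a real deployment the readings would arrive over the sensor network rather than as a static list, but the comparison logic is the same.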

    At the same time, Future Facilities is working with Intel to devise a new generation of server-embedded thermal sensors that provide a more granular view of hardware operating environments and draw less energy than external devices. In most cases, cabinet sensors have to be kept cool themselves in order to deliver accurate results, and that cooling can consume roughly 40 percent of their overall power draw. The company has set up a Virtual Facility Model at Intel’s San Francisco campus, highlighting the system’s load capacity, CFD analysis, airflow and CPU utilization capabilities. Future Facilities also specializes in predictive software modules that allow for a clearer understanding of design and facilities upgrades and their impact on energy consumption and cooling.

    Similar software is available from firms like Server Technology, which recently unveiled version 5.2 of its Sentry Power Management system, providing rack-level power monitoring and management. The system offers predictive analysis and alert functions for everything related to cabinet power, including circuits, lines and temperatures. There are also new capacity planning and redundancy tools designed to improve utilization and system availability. The system is built around the company’s plug-and-play SNAP platform, which enables auto-discovery and configuration of power distribution units, quickly bringing hundreds or even thousands of units under the control of a single management console.
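    At its core, the capacity planning side of such tools comes down to comparing aggregate draw against a derated rack limit. A minimal sketch under assumed figures, not Server Technology's actual API:

```python
# Illustrative rack-level capacity check. The wattages, rack limit and
# safety margin are assumptions for the sake of the example.

def headroom(readings_watts, rack_capacity_watts, safety_margin=0.8):
    """Return remaining watts before the derated rack limit is hit.

    The safety margin derates nameplate capacity to leave room for
    redundancy and load spikes; 0.8 is a common rule of thumb, not
    a vendor default.
    """
    usable = rack_capacity_watts * safety_margin
    return usable - sum(readings_watts)


# Three PDU readings against an assumed 5 kW rack:
print(headroom([1200, 950, 800], rack_capacity_watts=5000))  # 1050.0 W left
```

    A capacity planning tool repeats this calculation per rack across the whole floor and flags racks whose headroom trends toward zero.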

    DCIM may still be a work in progress, but it represents a field that cannot be ignored for much longer. As IT shifts from small, localized data environments to shared, distributed architectures, the era of the hyperscale data center is about to hit full swing. When the number of servers jumps from the hundreds to the thousands or even tens of thousands, a 1 percent gain in energy efficiency translates into significant operational savings. And for those who might not realize it yet, increased utilization that delays the hardware refresh cycle produces a substantial capex reduction as well.
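    The arithmetic behind that claim is easy to sketch. Using assumed figures for fleet size, average draw and utility rate (all illustrative, not from the article):

```python
# Back-of-the-envelope estimate of what a 1 percent efficiency gain
# is worth at hyperscale. Every figure below is an assumption.

SERVERS = 10_000         # assumed fleet size
AVG_DRAW_W = 400         # assumed average draw per server, watts
HOURS_PER_YEAR = 8_760
PRICE_PER_KWH = 0.10     # assumed utility rate, USD

annual_kwh = SERVERS * AVG_DRAW_W * HOURS_PER_YEAR / 1_000
annual_cost = annual_kwh * PRICE_PER_KWH
savings_1pct = annual_cost * 0.01

print(f"Annual energy cost: ${annual_cost:,.0f}")
print(f"Savings from a 1% efficiency gain: ${savings_1pct:,.0f}")
```

    Under these assumptions a single percentage point of efficiency is worth on the order of $35,000 a year in energy alone, before counting the cooling load that each avoided watt no longer generates.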

    There is also the fact that the data center is still the bull’s-eye for most green energy organizations and advocacy groups. And with virtualization rapidly approaching maturity at many facilities, it would be a shame if the progress in energy efficiency exhibited over the last decade were to falter.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
