The data center is caught in what can only be described as the perfect storm of expectations: Process more data, but use less power in the bargain.
Many facilities have responded by expanding their virtual footprints and, in turn, upping the density of their server racks. This is only a partial solution, however: while it consolidates data onto fewer devices, those devices run hotter under the increased load. And traditional cooling systems are designed to keep the temperature down across the entire room, not in a confined hot spot.
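To see why density changes the cooling equation, consider the widely used approximation that a rack needs about 3.16 × watts ÷ ΔT(°F) cubic feet per minute of cooling airflow. The short sketch below applies that rule of thumb; the rack loads and the 20 °F temperature rise are illustrative assumptions, not measurements from any particular facility.

```python
# Rough airflow estimate for a server rack, using the common
# rule of thumb CFM ~= 3.16 * watts / delta-T (degrees F).
# All numbers here are illustrative, not vendor specifications.

def required_airflow_cfm(rack_load_watts: float, delta_t_f: float) -> float:
    """Approximate airflow (CFM) needed to carry away rack_load_watts
    of heat at an inlet-to-exhaust temperature rise of delta_t_f."""
    return 3.16 * rack_load_watts / delta_t_f

# A modest 5 kW rack versus a denser 15 kW rack, both at a
# typical 20 degree F (about 11 degree C) temperature rise.
for load in (5_000, 15_000):
    print(f"{load / 1000:.0f} kW rack: ~{required_airflow_cfm(load, 20):,.0f} CFM")
```

Tripling the rack load triples the airflow demand at that one rack, which is exactly the kind of localized requirement a room-wide cooling design struggles to meet.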
For this reason, designers are starting to rethink the way data environments are cooled. As data architectures and infrastructures change, so too must the cooling system; otherwise the enterprise winds up wasting money and energy cooling areas that don’t need it while failing to adequately support the critical systems that do.
According to Eaton Corp.’s John Collins, cooling trends are quickly shifting away from the “chaos” approach that called for massive amounts of air-handling to vent heat and resupply server rooms with chilled air. Even the hot-aisle/cold-aisle modifications of this basic method are failing to overcome its inefficiencies. Instead, organizations are turning toward a range of containment strategies that feature enclosed racks, direct venting and high-efficiency chillers. The overriding goal is to prevent hot and cold air from intermingling and to cut down on the recirculation that drives up energy costs.
To be sure, the use of “free cooling,” in which outside air or water is employed whenever possible, is still a valid approach. But even facilities in warmer climes have a slate of options for improving cooling efficiency, says Data Center Journal’s Jeff Clark. For example, consolidation can be made more effective if it is accompanied by a reduction in floor space that is then walled off from the empty areas of the data center. In addition, some designers are mounting exhaust chimneys directly over server racks to pull hot air away from critical components quickly.
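Airside economizers typically gate free cooling on outdoor conditions. The sketch below is a simplified illustration of that decision; the supply setpoint, approach margin and humidity ceiling are hypothetical values chosen for the example, and a real control system would also weigh dew point, filtration and air quality.

```python
# Illustrative airside-economizer decision: use outside air when it
# is cool and dry enough to stand in for mechanical cooling.
# Thresholds are assumptions for this sketch, not engineering guidance.

def can_free_cool(outdoor_c: float, outdoor_rh: float,
                  supply_setpoint_c: float = 24.0,
                  approach_margin_c: float = 3.0,
                  max_rh: float = 80.0) -> bool:
    """True when outdoor air is cold enough (with a safety margin)
    and not too humid to serve as supply air."""
    return (outdoor_c <= supply_setpoint_c - approach_margin_c
            and outdoor_rh <= max_rh)

print(can_free_cool(15.0, 55.0))  # mild, dry day  -> True
print(can_free_cool(28.0, 70.0))  # warm day       -> False
```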
Of course, in every trend there is someone willing to go to extremes. In this case, that would be Microsoft, which is building a roofless data center in Boydton, Virginia. The idea is to employ ruggedized IT-PAC container modules that feature side vents and a specialized moisture membrane to cool ambient air before it reaches data systems. Already, the company is talking about future data centers that are little more than containers sitting on slabs of concrete.
But no matter what your approach, the importance of properly analyzing airflow and cooling patterns cannot be overstated. After all, you can’t fix something until you know exactly how it’s broken. That’s why many Data Center Infrastructure Management (DCIM) platforms are loading up on advanced monitoring and management tools, such as thermal imaging, real-time data collection and predictive analysis, to ensure that any changes made to facilities or infrastructure strike a balance between data optimization and energy efficiency.
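DCIM products differ widely, but the monitoring core is conceptually simple: poll temperature sensors and flag readings that drift outside the recommended envelope (ASHRAE’s recommended inlet range is 18–27 °C). The sketch below assumes a hypothetical read_inlet_temps() data source standing in for whatever SNMP, IPMI or vendor sensor API a real platform would use.

```python
# Minimal sketch of the real-time monitoring loop inside a DCIM
# platform. read_inlet_temps() is a hypothetical placeholder for a
# real sensor poll over SNMP, IPMI or a vendor API.

ASHRAE_RECOMMENDED_C = (18.0, 27.0)  # recommended inlet range, deg C

def read_inlet_temps() -> dict[str, float]:
    # Placeholder sensor poll; readings here are illustrative.
    return {"rack-a1": 22.5, "rack-a2": 29.1, "rack-b1": 17.2}

def flag_out_of_range(readings: dict[str, float]) -> list[str]:
    """Return an alert line for every rack whose inlet temperature
    falls outside the recommended envelope."""
    low, high = ASHRAE_RECOMMENDED_C
    alerts = []
    for rack, temp_c in sorted(readings.items()):
        if temp_c < low:
            alerts.append(f"{rack}: {temp_c:.1f} C - overcooled, wasting energy")
        elif temp_c > high:
            alerts.append(f"{rack}: {temp_c:.1f} C - hot spot, check containment")
    return alerts

for alert in flag_out_of_range(read_inlet_temps()):
    print(alert)
```

Note that the loop flags overcooling as well as hot spots; both ends of the range represent money left on the table, which is the balance the article describes.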
Despite all the new tools and design approaches, however, achieving that balance will be a never-ending battle. There is no such thing as too cheap or too green, so expect to revisit infrastructure and facility decisions on a regular basis to ensure that yesterday’s thinking stands up to the relentless changes taking place in the wider data environment.
Not every move you make will be right, but doing nothing will most certainly be wrong.