New Approaches to Keeping Things Cool in the Data Center

Arthur Cole

The more tightly you pack heat-generating equipment, the more energy you consume trying to cool the air in and around it. An efficient cooling system is a top priority.

For most of us, January isn't the time to worry about keeping cool. But in the data center, monitoring the thermostat is a year-round job.


The need for effective power management has grown more crucial over the last decade as virtualization and the advent of blade servers vastly increased the power draw at most data centers. As processor.com points out, facilities designed to handle maybe 3 or 4 kW per rack now find themselves pushing 25 kW. The corresponding heat load creates not only an extremely uncomfortable work environment but also a direct threat to application and data availability.


Not surprisingly, the available solutions to this problem are as varied as the environments they seek to address. Probably the most direct approach is to build systems and components that can maintain reliable performance even in hot environments. That is the thinking behind a new patent awarded to Dot Hill Systems, which covers automating such tasks as reducing clock speed, data throughput and voltage flow to storage controllers when dangerous temperatures are detected. In all, the patent covers nearly 80 new techniques for both single- and dual-controller systems, most of which will undoubtedly find their way into Dot Hill's 2002 and 3000 Series arrays.
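Dot Hill's patented logic isn't public, but temperature-triggered throttling of this kind generally follows a simple pattern: sample a sensor, compare it against thresholds, and step performance down until readings return to a safe range. A minimal sketch of the idea, where every threshold and step value is illustrative rather than a Dot Hill figure:

```python
# Illustrative thermal-throttling curve for a storage controller.
# All thresholds and limits here are hypothetical, not vendor values.

WARN_C = 70.0      # begin throttling above this temperature
CRITICAL_C = 85.0  # shut down at or above this temperature
MIN_FACTOR = 0.25  # never throttle below 25% of rated speed

def throttle_factor(temp_c):
    """Map a controller temperature to a clock/throughput multiplier."""
    if temp_c >= CRITICAL_C:
        return 0.0                        # emergency shutdown
    if temp_c <= WARN_C:
        return 1.0                        # cool enough: full speed
    # Scale linearly between the warning and critical thresholds.
    span = CRITICAL_C - WARN_C
    factor = 1.0 - (temp_c - WARN_C) / span
    return max(MIN_FACTOR, factor)

print(throttle_factor(65))    # 1.0 (full speed)
print(throttle_factor(77.5))  # 0.5 (halfway between thresholds)
print(throttle_factor(90))    # 0.0 (critical: shut down)
```

In a real controller this multiplier would feed firmware that adjusts clock dividers or queue depths; the point is simply that graceful degradation beats an abrupt thermal failure.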


The ability to accurately monitor temperature conditions throughout the data center is probably the most crucial aspect of power and cooling management. But deploying sensors in a meaningful fashion is a complex process that involves a fair amount of cabling and system configuration. That's why Power Assure recently incorporated Packet Power's wireless monitoring system into its Dynamic Power Management platform, a move that allows enterprises to quickly add monitoring points across even large, complex data center infrastructures. The devices provide real-time readings of conditions such as power consumption, temperature and relative humidity, which Power Assure uses to optimize data and power distribution.
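The Power Assure and Packet Power APIs aren't shown here, but the core job of any such platform is straightforward: aggregate sensor readings and flag racks drifting out of safe range. A hedged sketch of that aggregation step, using hypothetical rack names and the ASHRAE-recommended upper inlet limit of 27°C:

```python
# Sketch of the aggregation a monitoring platform performs over
# wireless sensor readings. Rack names and readings are made up.

ASHRAE_MAX_INLET_C = 27.0  # upper end of ASHRAE's recommended inlet range

readings = [
    {"rack": "A1", "inlet_c": 22.5, "power_kw": 4.2,  "humidity_pct": 45},
    {"rack": "A2", "inlet_c": 29.1, "power_kw": 11.8, "humidity_pct": 38},
    {"rack": "B1", "inlet_c": 24.0, "power_kw": 6.5,  "humidity_pct": 50},
]

def hot_spots(samples, limit_c=ASHRAE_MAX_INLET_C):
    """Return the racks whose inlet air exceeds the recommended limit."""
    return [s["rack"] for s in samples if s["inlet_c"] > limit_c]

print(hot_spots(readings))  # ['A2']
```

With wireless sensors, adding a new monitoring point just means adding another entry to the stream, which is exactly the deployment friction the Packet Power integration is meant to remove.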


At some point, though, excess heat needs to be evacuated from the data environment, unless you prefer to see your computer room air conditioning (CRAC) units constantly working overtime. Energy management specialist Eaton Corp. recently added new exhaust technology to its data center platform with the acquisition of Wright Line. The company's Heat Containment System can be fitted onto virtually any enclosure, drawing heat out of the rack and into a centralized CRAC unit, at once cutting the CRAC's energy consumption by 30 percent and eliminating the need for heat-handling systems such as in-row A/C units and air handlers.


Heat is a valuable commodity these days, and a truly far-sighted solution would involve shuttling data center heat to other areas of the facility, cutting down on future heating bills. But that is a secondary concern. The top priority is to ensure the reliability of the data environment, and fortunately that is becoming an easier proposition with each new technology generation.



Feb 10, 2011 3:46 AM Deb says:

FREE WEBINAR: Don't Get Blown Away by the Cloud: 5 Keys to Optimize Your Physical Infrastructure for Virtualization. February 23, 2011 @ 11:00 a.m. CT

Join Todd LaCognata, Panduit Data Center Solutions Marketing Manager, and Robert Chernesky, Product Development for Panduit Professional Services, as they discuss the key physical infrastructure systems that can negatively impact your virtualization efforts if not optimized.

REGISTER TODAY @ www.panduit.com/AdvisoryServices

Feb 11, 2011 11:22 AM Dennis says:

Reclaiming data center heat is, for the most part, not yet economical, because it is poor-quality heat in terms of temperature. Until we can boost the heat well into the triple digits, we would need to move huge volumes of air to take advantage of the discharged heat, and the energy needed to move those volumes is too great.

However, as the data center evolves and we get higher return temperatures, the economics change. This is particularly true as we move back to direct water-cooled processing, where the heat content per volume is far greater.

For the most part, we are not there yet on heat reclamation, but we will see vast advances over the next five years. We will get there, eventually!
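Dennis's air-versus-water point can be quantified with the sensible-heat equation Q = ρ · V̇ · cp · ΔT. A quick back-of-the-envelope comparison using standard textbook fluid properties (the 10 kW load and 10 K temperature rise are illustrative, not figures from the article):

```python
# Back-of-the-envelope: flow rate needed to carry 10 kW of heat
# at a 10 K temperature rise, comparing air with water.

def flow_m3_per_s(heat_kw, density_kg_m3, cp_kj_kg_k, delta_t_k):
    """Volumetric flow from the sensible-heat equation Q = rho * V * cp * dT."""
    return heat_kw / (density_kg_m3 * cp_kj_kg_k * delta_t_k)

air = flow_m3_per_s(10, density_kg_m3=1.2, cp_kj_kg_k=1.005, delta_t_k=10)
water = flow_m3_per_s(10, density_kg_m3=1000, cp_kj_kg_k=4.18, delta_t_k=10)

print(f"air:   {air:.3f} m^3/s")     # ~0.83 m^3/s (roughly 1,750 CFM)
print(f"water: {water:.5f} m^3/s")   # ~0.00024 m^3/s (roughly 0.24 L/s)
print(f"ratio: {air / water:.0f}x")  # water carries ~3,466x more heat per volume
```

The roughly 3,500-fold gap in heat capacity per unit volume is exactly why low-grade exhaust air is so expensive to reclaim and why direct water cooling changes the economics.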

Feb 21, 2011 5:33 AM Kristen Knight says, in response to Dennis:

Take a look at our EcoCool unit.

www.accuaironline.com -- Telecom

Mar 21, 2011 12:08 PM Ankit Gupta says:

Articles like this are good for educating everyone about power and cooling, a difficult topic that many facility managers are still getting their arms around. HP has a closed-loop cooling system that can extend the life and capacity of data centers with limited cooling resources. HP's MCS G2 (Modular Cooling System) can integrate with existing and future server cabinets and does not affect how servers are currently deployed, operated, and maintained. The water-chilled, closed-loop cooling system:

- Provides a path for customers to increase power density up to 35 kW per rack (or up to 17.5 kW with a dual-rack configuration)
- Supports fully populated high-density racks while reducing the overall heat load on the facility
- Saves valuable floor space and cooling resources that would otherwise be required for under-utilized racks

Ankit Gupta

Product Manager, HP
