The data center industry is moving toward two divergent cooling technologies simultaneously.
On one hand, true "fluid"-based cooling is being promoted as the most efficient and effective method to transfer heat from high-density IT equipment (which it is). Unfortunately, it is not yet a standardized or "easy" solution. Lest we forget, mainframes started with fluid-based cooling, and in fact some new large systems, such as the IBM zEnterprise mainframe, offer water-based cooling. Several years ago IBM also built the 'HydroCluster,' which is based solely on water piped to each individual server in the rack.
On the other hand, existing air-cooled IT equipment, most of which could already run at 90-95F (but has been residing comfortably in data centers at 68-72F), is still in the majority. Moreover, ASHRAE TC 9.9 is in the process of updating its thermal guidelines this year and is now promoting "allowable" air intake temperature ranges of up to 90F for Class A1 and 95F for Class A2, to save energy. ASHRAE has even created two new classes of data center equipment, A3 and A4, which allow air intake temperatures of 104F and 113F, respectively.
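For readers who think in metric, the four equipment classes above convert to round Celsius limits. A minimal sketch (the Fahrenheit class limits are those quoted in this article; the conversion formula is the standard one):

```python
def f_to_c(temp_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32) * 5 / 9

# Upper "allowable" air-intake temperature per ASHRAE class (degrees F),
# as cited in the text above.
ashrae_classes = {"A1": 90, "A2": 95, "A3": 104, "A4": 113}

for cls, limit_f in ashrae_classes.items():
    print(f"{cls}: up to {limit_f}F ({f_to_c(limit_f):.1f}C)")
# A1: up to 90F (32.2C)
# A2: up to 95F (35.0C)
# A3: up to 104F (40.0C)
# A4: up to 113F (45.0C)
```

The round 35/40/45C figures are no accident; the guideline limits are defined in Celsius, and the Fahrenheit numbers are their conversions.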
In fact, ASHRAE TC 9.9 has even gone so far as to publicly state that it envisions a substantial number of data centers without mechanical cooling, aiming "to increase the opportunity for data centers to become 'chillerless,' eliminating mechanical cooling systems entirely."
The Bottom Line
We are witnessing a paradigm shift, but it remains to be seen which design philosophy and cooling technology will win out. Perhaps both will, each serving a different compute camp: enterprise vs. hosted, cloud vs. colo, and so on.
So as energy costs rise and "sustainability" issues drive more financial, design, and operational decisions, we will see many things in the data center in the coming years that only a few years ago would have been considered absurd or unthinkable, or might have gotten you fired; some of those same things may now earn you a promotion.