Blade configurations are becoming increasingly dense, which, on the surface, is a good thing. But at some point, every data center manager has to wonder when the consequences of such high density will start to outweigh the benefits.
Industrywide, there is no question that the blade is becoming an increasingly popular processing solution. IDC reports that blade sales increased 20.8 percent in the first quarter, putting the devices on track to make up 20 percent of the overall server market within the next year or so. That growth is coming at the expense of the midrange and high-end server segments, which dropped 19 percent and 28.9 percent, respectively.
Without doubt, this trend is being driven in part by the ability to pack greater numbers of networking ports on smaller and smaller form factors. The movement got a boost this week from Brocade, which unveiled three new designs for its MLX and DCX Backbone platforms. The 8x10G-M, designed for carrier Ethernet markets, and the data center-oriented 8x10G-D double the number of ports per chassis to 256, essentially halving the power needed to support that many ports compared to existing solutions. The FC8-64 Fibre Channel blade, meanwhile, bumps the port count from 384 to 512 per chassis.
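The per-chassis figures above imply how many blade slots each platform must have; a quick back-of-the-envelope sketch (the slot counts below are inferred from the article's totals, not stated by Brocade here):

```python
# Back-compute the implied slot counts from the per-chassis port totals above.
mlx_ports_per_blade = 8        # 8x10G-M / 8x10G-D: eight 10 GbE ports per blade
mlx_ports_per_chassis = 256    # doubled per-chassis total cited above

fc_ports_per_blade = 64        # FC8-64 Fibre Channel blade
fc_ports_per_chassis = 512     # new per-chassis total cited above

mlx_slots = mlx_ports_per_chassis // mlx_ports_per_blade
fc_slots = fc_ports_per_chassis // fc_ports_per_blade
print(mlx_slots)  # 32 slots implied for the MLX chassis
print(fc_slots)   # 8 slots implied for the DCX Backbone chassis
```

In other words, the density gains here come from packing more ports onto each blade, not from adding slots to the chassis.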
Meanwhile, companies like SuperMicro are pushing processor counts ever higher. The company's SuperBlade family now features the TwinBlade system, which doubles the number of dual-processor nodes in a 7U enclosure to 20. The line supports the new Xeon 5600s or the 12-core Opteron 6100, the latter effectively providing up to 2,880 cores in a 42U rack, all tied together with dual 40 Gbps QDR InfiniBand switching.
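The rack-level core count checks out arithmetically; a quick sketch using only the figures stated above (20 dual-processor nodes per 7U enclosure, 12 cores per processor, a 42U rack):

```python
# Core count for a 42U rack of TwinBlade enclosures, per the figures above.
nodes_per_enclosure = 20      # TwinBlade doubles dual-processor nodes to 20 per 7U
sockets_per_node = 2          # dual-processor nodes
cores_per_socket = 12         # 12-core Opteron 6100
enclosure_height_u = 7
rack_height_u = 42

enclosures_per_rack = rack_height_u // enclosure_height_u  # 6 enclosures
cores_per_rack = (enclosures_per_rack * nodes_per_enclosure
                  * sockets_per_node * cores_per_socket)
print(cores_per_rack)  # 2880, matching the "up to 2,880 cores" figure
```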
With that level of density, one of the chief concerns is cooling. Additional processing power is always nice to have, but not if it risks a major meltdown unless you shell out more bucks for A/C.
However, some of the latest cooling designs are zeroing in on the particular needs of highly dense blade configurations. A company called Hardcore Computing has introduced a liquid immersion system in which all the heat-producing components in the enclosure are submerged in a special coolant. The company says the approach cuts cooling costs by some 60 percent, while the coolant itself acts as a natural fire suppressant.
As crazy as that may sound to some, the never-ending quest for increased density is fraught with just as much danger. Not only do you run the risk of overheating, but many organizations find that their cooling costs actually rise as they pack more processing power into each square foot of the data center.
In the end, high density is a lot like candy. Eat too much too fast and you get a stomachache.