The Hot-Swappable Data Center?

Arthur Cole

Resource consolidation, whether of servers, storage or networking, is usually viewed in terms of hardware acquisition costs and operational expenses like power and cooling.


But the ability to do more with less is also leading to significant changes in the physical architecture of the data center itself. And if carried to its logical conclusion, this trend could bring an end to the bricks-and-mortar universe that has characterized the IT experience from the very beginning.


The idea of packing a full data center into a shipping container is not exactly new, but it is gathering steam as a wave of new cloud service providers looks to put physical infrastructure in place quickly and at relatively low cost. One of the leaders is, rather surprisingly for a software company, Microsoft. The company just brought out its newest Generation 4 Modular Data Center at the latest Professional Developers Conference. At 20 feet long, it's half the size of the units in the company's new Chicago facility, although those boxes are half-IT/half-cooling. The new models forgo the raised-floor design in favor of ambient air cooling for the up to 2,500 rack servers each unit can hold. Microsoft says it can have the units in place and operational within 24 hours, and you don't even need a roof on the building. Julius Neudorfer at our CTO Edge site has been writing about the logistics of all this.


Since container systems are by nature modular, the key question posed by Data Center Knowledge's Rich Miller is whether they should be standardized. If so, we could already be too late. In addition to Microsoft, Sun Microsystems, Rackable Systems, IBM and Digital Realty Trust are all working on container standards built around modular components that take much of the guesswork, and expensive integration, out of the data center provisioning process.


A key driver in all this activity is the fact that network infrastructures are not nearly as complicated as they were just a few years ago, according to EDN's Ron Wilson. Now that you no longer need three separate network planes for interconnect, storage and raw data, most of the extraneous hardware is gone, reducing the basic infrastructure to little more than direct port-to-port 10 GbE connectivity. And even that may be headed for the dustbin if new chip-based interconnects like HyperTransport start to carry more of the load.


Of course, with every up there is a down. And as processor.com points out, you should know the disadvantages of container-style expansion before you sign the check. Among them is the fact that actual bricks-and-mortar facilities will still need a fair amount of architectural work to accommodate the containers. And since each container holds a massive amount of storage and processing capability, it's easy to overshoot or undershoot your actual needs unless you have a very clear idea of what they will be.
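To see just how coarse that granularity is, here's a rough back-of-envelope sketch in Python. The 2,500-servers-per-container figure comes from the Microsoft unit described above; the demand numbers are purely hypothetical, for illustration only.

    # Back-of-envelope capacity sketch (assumptions: 2,500 servers
    # per container, per the Microsoft figure above; the demand
    # numbers below are hypothetical). Because capacity arrives in
    # whole containers, any demand that isn't a clean multiple of
    # the container size forces you to over-provision.
    import math

    SERVERS_PER_CONTAINER = 2500  # from the article; varies by vendor

    def containers_needed(required_servers: int) -> tuple[int, float]:
        """Return (container count, percent over-provisioned)."""
        count = math.ceil(required_servers / SERVERS_PER_CONTAINER)
        spare = count * SERVERS_PER_CONTAINER - required_servers
        return count, spare / required_servers * 100

    for demand in (1200, 2600, 9800):  # hypothetical server demands
        count, overshoot = containers_needed(demand)
        print(f"{demand} servers -> {count} container(s), "
              f"{overshoot:.0f}% over-provisioned")

A demand of 2,600 servers, for instance, lands just past a container boundary and buys you nearly a full container of idle capacity (92 percent over-provisioned), which is exactly why a clear forecast matters before you sign the check.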


Clearly, the container approach is best suited to those who need a quick and substantial expansion of current resources or are building a data center from scratch. The cost savings are indeed significant, and the scalability potential is certainly impressive.


And who knows? Perhaps someday soon we'll be talking about hot-swapping entire data centers just as we do servers and disk drives today.


