Valet Parking the Data Center Container: Part II

Julius Neudorfer

In a recent post I postulated that a midsize company's CTO had taken the plunge and committed to boldly move forward with a containerized data center strategy. Everything seemed to be going smoothly, and the new 'data-center-in-a-box' arrived only six weeks after our intrepid hero ordered it. The problems began when it came time to 'just plug it in,' since it had been ordered complete with 1,000 new servers, all pre-installed and presumably ready to go.

Our hypothetical CTO had indeed skipped all the tedious details (such as power and cooling requirements) and the usual endless meetings with architects, engineers and construction committees normally associated with a new data center. Thus he saved the 18 to 24 months of waiting for his new data center to be built, or so he thought. While the new containerized units did arrive from the vendor in six weeks as promised, there was no secured location to 'park' his new data center, nor the supporting infrastructure to power and cool it.

Let's look at what a containerized data center offers, and what is really involved with planning, outfitting and operating these new data center innovations.

All the major server manufacturers (HP, IBM, Dell and Sun) have announced containerized data centers this year. While each one differs in the details, they all offer systems either empty or pre-loaded with the equipment of your choice. They all claim that they can provide a much more compact footprint and greater energy efficiency at a much higher power density than a traditional data center. One vendor claims that its 40-foot container provides the same usable rack space as a 'typical' 4,000-square-foot data center. Moreover, some vendors state that the units can be located outside a building, exposed to the elements. However, most recommend that they reside inside a warehouse-style building, sort of a 'container hotel.'

Some vendors' designs focus primarily on supporting and installing their own equipment, while others provide full-sized, industry-standard 19-inch racks and offer to pre-install and configure their own IT gear as well as other vendors' equipment. Dell has a 40-foot 'Double Decker,' consisting of two stacked containers, that was used for Microsoft's new data center near Chicago. The bottom container holds the IT equipment; the top contains the support infrastructure.

Part of the efficiency-improvement claim is based on the fact that the racks in containers are positioned so there is total separation of the hot-aisle and cold-aisle airflow. This allows them to offer a much higher power density with lower fan power, since the airflow only needs to travel a few feet, rather than the typical 20- to 30-foot distance from the CRAC to the racks. Some containers require 208V (3-phase) power and distribute it as 208V single-phase to the rack, while others require 415V/240V (3-phase) power and distribute it to the rack as 240V single-phase, to improve density and energy efficiency.
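As a quick sanity check on those voltages (a back-of-the-envelope sketch of standard three-phase math, not anything vendor-specific): the single-phase line-to-neutral voltage available from a three-phase feed is the line-to-line voltage divided by the square root of three, which is where the 240V and 120V figures come from.

import math

def line_to_neutral(v_line_to_line):
    # Single-phase (line-to-neutral) voltage available from a balanced three-phase feed
    return v_line_to_line / math.sqrt(3)

print(round(line_to_neutral(415)))  # ~240 V single-phase delivered to the rack
print(round(line_to_neutral(208)))  # ~120 V; the 208V systems instead deliver single-phase line-to-line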

While containers do offer the potential to use less space and be more energy efficient, they still require approximately the same amount of conditioned power and cooling (chilled water only, please) that a traditional data center with a similar-sized IT load requires. So while a 40-foot (8-foot-wide) container 'eliminates' the need for 3,000 to 4,000 square feet of dedicated data center space (with or without a raised floor, depending on your religious beliefs), it still needs to be fed by a UPS and backup generator. If your IT equipment load is 600 kW, then you will still need 600 kW of cooling via chilled water. And by the way, for those of you who are still thinking in terms of watts per square foot, that translates to 1,875 watts per square foot (600 kW / 320 sq ft; 40 ft x 8 ft = 320 sq ft). This is in contrast to a 3,200-square-foot 'traditional' data center at 600 kW, which calculates to 187 watts per square foot. The high-density containers can support up to 27 kW per rack, which is very difficult to achieve in normal (non-contained) racks in open aisles.
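For anyone who wants to reproduce the watts-per-square-foot arithmetic above, here is a minimal Python sketch using the same figures quoted in this post (the 600 kW load and the footprints are the ones above, not new measurements):

# Density comparison using the figures quoted above
it_load_watts = 600_000          # 600 kW IT load
container_sqft = 40 * 8          # 40 ft x 8 ft container = 320 sq ft
room_sqft = 3_200                # 'traditional' data center floor space

print(it_load_watts / container_sqft)  # 1875.0 W per sq ft in the container
print(it_load_watts / room_sqft)       # 187.5 W per sq ft in the traditional room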

There can be many arguments made, pro and con, about when and why to consider a containerized solution. However, the main advantages of these containers are that they offer the potential of rapid deployment and expansion, very high densities, higher space and energy efficiency, as well as ostensibly lower costs (when compared to the build-out costs of data center white space of equal load capacity). There are other issues to consider, such as the UL and NEC approval standards that manufacturers need to meet, as well as state and local building, fire and electrical codes, which were never written to deal with a container. Over time, these will become non-issues if and when there is mainstream adoption, standards are published and the codes incorporate sections that deal with containers. However, for the moment, buyer beware.

Of course, there is always the question of cost. While each vendor makes its own claims, be prepared to think in terms of at least $1 million for a 40-foot unit (before IT equipment costs). This excludes the required supporting infrastructure: UPS(es), backup generator(s) and chiller plant. Ostensibly, the vendors all claim that the overall cost is still lower than that of a comparably rated (in power, not size) traditional data center.


Now as to the last line in the first part of this article, 'the cold aisle operates at 90°F.' This was quoted directly from Steve Cumings in the HP 'POD' video.

While a 90°F 'cold' aisle is what most data center customers would consider 'server-cide,' and is well beyond the ASHRAE TC 9.9 expanded temperature envelope (which tops out at 80°F), HP seems willing to back it up with its own IT equipment. Clearly, it has thrown down the gauntlet to other manufacturers to 'top this' as a way to improve the energy efficiency of its 'POD' containerized data center. It would appear that this overstresses the IT equipment, but according to HP's, and most other vendors', published specifications, 95°F is the limit (fine for servers perhaps, but not so advisable for tape drives). Of course, I think that most users will opt to run their cold aisles at 80°F or less, if only to be able to sleep at night. Clearly, it will take some getting used to these new computing environments, from both a facilities and an IT perspective.

It is still uncertain whether these containerized data centers will be a niche market, or will only be used by the largest players such as Google and Microsoft. Perhaps they will become the new data center paradigm as we strive for more efficiency and flexibility via cloud computing in a 'module.' In essence, they represent the next level of computing 'building block,' a mindset much like blade servers, which have become almost a de facto standard for many server consolidation/virtualization projects. Just call your favorite IT vendor and ask for its supersized '40-foot blade server' with 1,000 blades.

And so, in keeping with our blog's name, if the HP POD container's cold aisle runs at 90°F, then we anticipate that the rear of the racks will run 20 to 25°F hotter, at the proverbial 110 degrees (or more) 'in the shade.' Accordingly, HP wins our first award for possibly having the 'hottest' hot aisle, but perhaps the highest efficiency as a result.



Comments
Feb 3, 2010 6:02 AM Anonymous says:
Containerized data centers: beware of the fad. Trends come and go in the data center and processing world. We have all seen systems, designs and technology come and go over the past 30 years. Our latest trend is literally an ISO shipping container made into a data center pod. Let's plug in communications, chilled water and power and use it at high density. It can withstand all kinds of environmental conditions. But any envelope designed to protect can just as well contain, and that is where this interesting design development falls flat. Picture this: your great package, intended to be mobile and survive the elements, now has some internal elements of its own: smoke, fire, wind, movement and water. As with any container designed to withstand outside elements, that design is now even more effective at containing elements created inside. What would smoke, fire, wind, movement and water do inside of a data center container? If it is smoke: smoke is conductive and corrosive, and all internal components will be damaged. If there is heat from a fire, how does this challenge our high density? If there is wind, the container may move or slide, so a building or anchoring structure adds to the cost. If there is movement (seismic or otherwise), what will remain in place internally? If there is a chilled water leak, how will the container survive the flood or spray? In any of these events, we can count on one thing for sure: your new data center in a container will be down for repairs. The umbilical connections are exposed to anything and anyone moving around it. Just because your container is "pre-commissioned" at the factory does not mean we can skip commissioning the umbilicals for power, cooling and flow. Since these are being produced on a line with similar parts, and the designs are relatively new, a parts quality issue or a mechanical or electrical design issue may be a prelude to failure across all of the identical models. So before your enterprise Fortune 500 servers or "clouds in the container" become artificial reefs in a steel aquarium, minus the fish, consider the reason we have raised floors in a well-designed data center.
Feb 5, 2010 3:02 AM Anonymous says:
Factory-manufactured items are produced at lower cost and with higher reliability than things that are custom built in the field. Defects are driven out of processes by repetition. When defects are reduced, all customers benefit. The legacy data center model only provided that benefit to users who built multiple data centers each year. Containers represent the commoditization of the data center. The days of overspending for infrastructure are rapidly coming to an end.
Feb 19, 2010 11:02 PM Simon Rohrich says:
Clayton Taylor (or Anonymous) makes some good points. Many of the ISO containers have those exact problems. We took a more streamlined approach and produced a containerized solution, but in a smaller, more manageable package. We have air-cooled and water-cooled versions. Our water-cooled system uses a closed-loop method, so water is kept away from rack-mounted equipment. Our MMDCs are also watertight, built to NEMA 3R and NEMA 4X standards. Smoke, fire, etc.: our containers are available with Aerogel insulation http://www.youtube.com/watch?v=knTHr8BQ8rc and active fire suppression using "dry water" (Novec 1230), a dielectric fire-extinguishing agent http://www.youtube.com/watch?v=KDohVakqkic Wind: I would suggest that if there is wind capable of overcoming the friction of an 18,000 lb ISO container on concrete, you would have much bigger issues. Our MMDCs are 300-1,400 lbs with a 12-16 sq ft footprint. Data centers are expensive because they have to build the capacity for 10 years of projected demand right up front. Pre-engineered data center modules are scalable with need. They have the "facilities" portion of the data center (cooling, security, fire suppression and physical protection) built in. They range from micro (2' x 5", $17K-$140K) to giant (8' x 40', $1M-$3.5M). All reduce the upfront CapEx of a data center. The smaller ones, like Elliptical Mobile Solutions' RASER http://www.ellipticalmedia.com/products/raser.htm, allow for extremely granular control of IT/facilities deployment.
Mar 13, 2010 3:03 PM Clayton Taylor says:
Regarding OSHA confined space rules: if a 20- or 40-foot data center container has 110°F in the hot aisle and only one entrance, OSHA confined space rules might apply. Temperature rules limit the amount of time a person can stay within the space. Life safety rules set egress requirements. There could be some good reasons to keep the cold aisle down to 80°F versus 90°F. There would need to be work procedures and time limitations involved in having personnel inside. We all know that technicians will be in and out of these units during operation. Does anybody have a white paper or a study of the OSHA and life safety rules involved with larger data center containers?
Mar 16, 2010 5:03 PM Simon Rohrich says:
I imagine that they do, or require some kind of UL certification for a "human accessible machine"... I do know that the larger container deployments have a different operating model: they overbuild the unit and run it until only about 60% of the equipment still functions, then pull it out and replace it with a fresh one with new equipment. The idea is that technicians do not access the container. This model has dramatically reduced OpEx. Elliptical Mobile Solutions' MMDC does not have people in it, but allows for the same "fail in place, then swap" OpEx savings method.
Apr 14, 2010 7:04 PM Stephen Dixon says:
Enterprise Control Systems announced today the schedule for their 2010 Containerized Data Center Road Show Tour, which will showcase a truly vendor-neutral containerized data center. The event, which will stop at major cities throughout the Western United States, including a stop at PG and E's Conference Center in San Ramon, will strive to show data center operators how they can improve energy efficiency over traditional data center designs by deploying containerized data centers. http://www.rapiddatacenter.com/
Aug 4, 2010 2:08 PM Dennis Cronin says:
As with any new idea, application or process, there are always naysayers. The move toward "containerized data centers," however, is evolving. It started many years ago when APC came out with its "Data Center on Wheels," which did not take off because people could not grasp the concept of a mobile data center; however, the marketing notion that this is all about being mobile has remained in people's minds. After APC, the proprietary vendors moved in, saying "I'll sell you my box, but only with my technology in it." What is different today is that we have over a dozen manufacturers actively producing product, and the majority of them are set up to be vendor-agnostic with respect to technology. This gives end users choices, and we know they like choices, and no two end users will make the same choice. This fledgling industry continues to evolve and produce improved products. We are seeing many solutions to the challenges of placing servers in an eight-foot-wide box, but now even the box is changing as designs become more modular in construction versus containerized. Just like the steel building craze of the 1970s, the modular housing craze of the 1980s and the modular building craze of the 1990s, the containerized and modular data center concept is here to stay. It will get a lot of attention for the next several years because it is new, but the economics of CapEx, OpEx, just-in-time delivery and quality will permanently establish this as a long-term industry. There is much to be learned about how to apply these systems, and I would suggest the next big application of the containerization/modular concept will be in creating a data center of multiple Uptime TIERS. Our clients have already posed the question, and it will happen. Julius: With respect to the absent-minded CIO you described above, I would suggest that had he done a traditional data center, he is the same one who would have had 1,000 servers delivered without ever telling anyone what types of outlets each one needed, which are 120V single-phase and which are 208V three-phase, Twistlok or straight-blade plugs, etc. Every data center build needs to be thoroughly planned out, and that's why I always push my clients to participate in the build process and integrate their activities with ours.
