Custom vs. Standardized Containers

Arthur Cole

The containerized data center has been floating around the margins of the IT industry for several years now, but it looks like the concept is about to get a renewed push now that the cloud is putting an increased premium on quickly deployable, scalable resources.

At the moment, however, there seems to be a schism between the two camps that dominate the movement: those who want more modularity and standardization to reduce costs and deployment hassles, and those who are striving for a more customized approach.

Of late, much of the development news has come from the latter camp. Data center service providers like i/o Data Centers are looking to break with the traditional 20- and 40-foot shipping container in favor of more varied designs said to better accommodate standard infrastructure and component form factors, and to fit more comfortably within real-world enterprise space limitations. i/o Data Centers is expected to launch a new line of containerized centers in the next few weeks, reportedly with such touches as built-in power and cooling and remote monitoring capability.

Contrast this with some of the newest thinking in data center design, which has some engineering firms turning to standardized shipping containers as the foundation for easy-to-build facility blueprints. One such firm is Gilbane Construction, which is touting a cube-shaped building based on a series of containers. The chief advantage here is that the entire center is pre-fabbed, so many of the variables that can vastly influence future operating costs, such as air flow and power consumption, are already known.

In some parts of the world, there are even government efforts to lock in official container standards, which may, if successful, influence markets elsewhere. Taiwan's Industrial Technology Research Institute (ITRI) is looking to establish the 20-foot container as a standard, complete with server configuration parameters and a designated OS, Cloud Operating System 1.0, consisting largely of modules from VMware, IBM, EMC and other U.S. developers. The group says consolidating around a single format will go a long way toward lowering container costs, much as PC standards have lowered the price of the desktop.

Whether or not the IT industry is ready for a commodity data center is an open question at this point, but it's not beyond the realm of possibility. The farther you push users and applications from the underlying infrastructure, the less relevant individual hardware configurations become.

It's a trend we've already grown accustomed to in servers and storage. The container merely kicks it up to the next level.

Jul 23, 2010 8:46 AM Gary Anderson says:

The purpose of a container is to meet the needs of a customer for either a short-timeframe fix or a higher-density solution. Each customer does and will continue to have different needs and different expectations. The efforts to standardize may be helpful, but they may not be required, especially in situations where inter-continental shipping is not required. Over-the-road transportation in the US allows for significant variation, including increased widths and heights, so a standardized footprint isn't as necessary.

