In this era of the cloud, deploying additional data center resources is a snap. All it takes is a cloud provider with a self-service portal, and any software developer or business unit manager can do it.
But that doesn’t mean the enterprise is ready to ditch all of its internal infrastructure just yet. In fact, many organizations are seeking to expand their own physical plant, if only as a means of providing cloud-like service to their own employees so they won’t keep pushing data onto third-party infrastructure.
Still, in this age of shrinking IT budgets and growing data expectations, the question is how to do it. What is the best way to ramp up internal resources without the cost and complexity of siting and building entirely new data facilities?
For some, the answer is modularity. Quick to assemble, easy to manage and eminently scalable, modular architecture is emerging as the go-to solution when the enterprise needs to extend on-premises infrastructure in a hurry. In fact, the more unusual your computing requirements, the more modularity seems to help.
For those facing extreme environmental conditions, for example, modularity often proves the most efficient way to get data infrastructure up and running. It’s gotten to the point that modular infrastructure is going where no data center has gone before, literally. The FAA has turned to Elliptical Mobile Solutions (EMS) and No-Tech to devise a new kind of micro-modular data center to be used for space travel. Initially, the system will be deployed on Virgin Galactic spacecraft to measure radiation levels in near space before the company embarks on its space tourism plans.
And while zero gravity may be challenging enough for a data environment, how about salty seawater? A company called Liquid Robotics has devised the Wave Glider, an autonomous seagoing system that can be used to analyze ocean currents, weather patterns and even submerged land masses. The system supports up to 24 computing centers and provides wireless connectivity, allowing multiple units to act as a single distributed data environment, complete with multi-tenancy capabilities, automated remote software management and other functions.
For the typical enterprise, however, it is important to note that modularity comes in various flavors – from small blade-populated enclosures to large containerized warehouses – so it helps to do your homework before committing to a single solution. As tech consultant Bill Kleyman notes, the decision to go modular should only be made after a fair amount of due diligence. Issues like power, data management and integration, security, backup and recovery, and just about everything else related to infrastructure deployment should not be tossed aside simply because modular systems are easier to install. And since even modular architecture needs to maintain functionality over the long term, it is vital that both present and expected data requirements be clearly mapped out.
Modular systems are also not appropriate for all data loads, says FCW’s Alan Joch. As Lawrence Livermore National Laboratory discovered recently, the specialized power and cooling requirements of HPC don’t lend themselves to modularity very easily. The lab does have a modular extension up and running that it uses for collaborative research projects, but it isn’t planning to expand it anytime soon because analyses to date show no appreciable cost benefit.
As the cloud continues to assume an ever-greater share of the overall data load, modular deployments will clearly emerge as a simple, effective means of expanding physical infrastructure. After all, promises of limitless scalability have to be backed up by real servers, storage and networking at some point.
But even as modular systems gain ground, traditional infrastructure will continue to thrive. With data volumes ramping up the way they have been, infrastructure of any kind shouldn’t be abandoned lightly.