Modular data centers are quickly gaining traction within the IT industry as virtual and cloud architectures drive the need for rapid scalability and low-cost operations. But is this really the future of data infrastructure, one in which new resources can be provisioned like so many building blocks in a child’s playroom?
The answer is quite probably yes. In fact, with the functionality derived from converged compute infrastructure and the new premade, containerized modules hitting the channel, it’s getting harder to imagine why anyone would want to build infrastructure the old way anymore.
The movement seems to be most prevalent in high-speed transactional environments, such as banking and finance, where even small gains in efficiency and operational flexibility can deliver significant benefits to the bottom line. Fidelity Investments, for example, has turned to HP and GE to coordinate the development of a new modular center in Omaha, Neb., named Centercore. The design features stackable 1 MW units of integrated data and power systems that can be rapidly scaled up as data loads increase. The company estimates that the design cuts real estate costs nearly in half and trims technology expenses by a quarter.
Indeed, power supply companies like GE and Schneider Electric are often the biggest boosters of modular architecture because it allows the enterprise to put additional capacity into the field in a fraction of the time it takes to plan, build and provision a traditional data facility. Schneider, in fact, has developed no fewer than 14 prefabricated data center options featuring a mix of up to 15 power/compute modules that can be deployed in a matter of weeks in virtually any available space. Modular systems are also far more predictable when it comes to performance, giving the enterprise a better handle on the level of hardware needed to meet current and future data loads.
That predictability is part of the reason modular is proving so popular in less developed regions. According to Frost & Sullivan, the Asia-Pacific region is emerging as a hotbed for modular technology as it ramps up IT infrastructure in anticipation of the cloud-based, globally distributed enterprise. The firm estimates that annual revenues for the region will close in on $2 billion toward the end of the decade, fueled primarily by demand in the finance and manufacturing markets.
Another likely top consumer of modular technology across the globe is government, according to FedTech’s Dan Tynan. Particularly at top-flight research facilities like the Lawrence Livermore National Laboratory, modular is seen as the primary means to spin up massive data resources quickly and at low cost. The reduction in prep work is a major factor: Instead of outfitting an entire 48,000-square-foot warehouse with power and cooling, modular systems can be implemented one at a time, eliminating the need to rip and replace substantial infrastructure every time new data architecture is called for. This is particularly relevant for organizations eyeing high-density data systems, which will at some point call for more robust power supply infrastructure.
Probably the biggest strike against modular systems at this point is the lack of standardization among platforms, meaning the enterprise will likely have to stick with a single vendor as its modular infrastructure expands. For this reason, thorough due diligence is called for when evaluating not only the various modular designs but also each vendor’s track record as a partner and facilitator of IT infrastructure.
After all, it would be a shame for the enterprise to finally free itself from vendor lock-in as advanced virtual and cloud architectures take hold, only to get stuck with a poorly executed modular program when implementing next-generation internal infrastructure.