The data industry is in the midst of great change, so it is understandable that organizations face considerable uncertainty when it comes to planning and provisioning infrastructure.
Many of the assumptions guiding buying decisions are just that: assumptions, with no real way of knowing how it will all turn out. Sure, Big Data is coming, but how big will it be? Hybrid clouds will probably be the norm, but what portion of the enterprise data environment will reside in-house and what portion on third-party infrastructure?
The danger, of course, is either under- or over-provisioning hardware, leaving the enterprise incapable of handling the load or saddled with excess resources that act as a drag on the bottom line.
In fact, many enterprises are already caught up in this dilemma, according to Schneider Electric’s Kevin Brown. As vice president of data center strategy, he says he has witnessed first-hand how organizations overestimate density requirements when designing new data center infrastructure, only to spin off the excess to colocation or cloud providers when initial estimates prove wrong. This is the primary reason why Schneider now advocates setting density at the module or POD level as the most effective way to match resources with steadily increasing data loads.
Under this approach, a single high-density POD can be introduced at relatively low cost, followed by smaller or larger modules as projections change over time. In this way, data infrastructure remains highly integrated, but the entire data center is not committed to a particular density level from the outset. To foster this approach, Schneider has devised a new set of reference designs aimed at establishing scalable, modular infrastructure that can be tailored to specific tiers, power envelopes and other factors. Not only does this afford the enterprise more flexibility when designing new infrastructure, but it also allows resources to be brought online faster than the standard rack-based approach.
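To see why incremental provisioning matters, consider a rough sketch of how POD-by-POD deployment tracks a load that grows more slowly than forecast. The POD capacity and load figures below are purely hypothetical assumptions for illustration, not Schneider reference-design numbers.

```python
import math

# Illustrative sketch (hypothetical figures, not Schneider reference-design numbers):
# compare a monolithic build sized to a forecast with PODs added as load grows.
POD_CAPACITY_KW = 150      # assumed capacity of one modular POD
UPFRONT_BUILD_KW = 1200    # assumed monolithic design sized to an early forecast

def pods_needed(load_kw: float) -> int:
    """PODs required to cover the currently measured load."""
    return math.ceil(load_kw / POD_CAPACITY_KW)

# Hypothetical measured load that grows more slowly than the original forecast.
measured_load_kw = [200, 320, 450, 600]

for year, load in enumerate(measured_load_kw, start=1):
    deployed = pods_needed(load) * POD_CAPACITY_KW
    print(f"Year {year}: {load} kW load -> {pods_needed(load)} PODs "
          f"({deployed} kW deployed vs. {UPFRONT_BUILD_KW} kW built up front)")
```

The point of the sketch is simply that capital outlay follows the measured load rather than the forecast, so a wrong forecast costs far less.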
Schneider isn’t the only firm that has hit upon this idea, however. Data Center Resources has taken a similar tack, albeit at the container level rather than within the modular unit itself. The company recently unveiled a line of integrated data centers ranging from 12 to 50 feet in length with power loads of up to 200 kW. The units feature scalable UPS, dual-feed distribution and rack-level power strips, as well as in-row, close-coupled cooling modules, all of which support cabinet loads of 8 to 30 kW with N+1 redundancy. In this way, the enterprise can deploy low-cost permanent infrastructure that is matched to specific data requirements.
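As a back-of-the-envelope illustration (not vendor sizing guidance), the 200 kW envelope translates into very different cabinet counts depending on where in the 8 to 30 kW density range a deployment lands:

```python
# Back-of-the-envelope check (illustrative only, not vendor sizing guidance):
# cabinets that fit a 200 kW container envelope at different assumed densities.
CONTAINER_KW = 200

for cabinet_kw in (8, 15, 30):   # points across the supported density range
    cabinets = CONTAINER_KW // cabinet_kw
    print(f"{cabinet_kw} kW cabinets: up to {cabinets} per container "
          f"({cabinets * cabinet_kw} kW of the {CONTAINER_KW} kW envelope)")
```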
Another way to keep costs low in modular infrastructure is to streamline the power chain itself. ABB and IO recently took the wraps off a new module that utilizes DC power, which the companies say can cut the power draw by 20 percent. A DC power supply also takes up less room in the module, leaving more space for data equipment, and provides for a more portable configuration and quicker integration into legacy infrastructure. A key aspect of DC power is that it eliminates the conversion stages otherwise needed to feed the predominantly DC-driven equipment in the module, which accounts for the bulk of the efficiency and space advantages.
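The mechanism is straightforward: every AC-to-DC conversion stage that can be removed stops dissipating its losses. The sketch below works backward from a fixed IT load through two hypothetical conversion chains; the efficiencies are illustrative assumptions, not ABB or IO figures, so the exact savings will differ.

```python
# Rough illustration of why removing conversion stages trims total draw.
# All efficiencies are hypothetical assumptions, not ABB/IO figures.
IT_LOAD_KW = 100  # power the IT equipment actually consumes

# Assumed per-stage efficiencies in a conventional AC distribution chain.
AC_CHAIN = {"UPS (AC)": 0.92, "PDU transformer": 0.97, "server PSU (AC->DC)": 0.90}
# Assumed chain when DC is distributed directly to the equipment.
DC_CHAIN = {"rectifier (DC plant)": 0.96, "server DC input": 0.98}

def utility_draw(load_kw: float, chain: dict) -> float:
    """Work back from the IT load through each stage's conversion losses."""
    draw = load_kw
    for efficiency in chain.values():
        draw /= efficiency
    return draw

ac = utility_draw(IT_LOAD_KW, AC_CHAIN)
dc = utility_draw(IT_LOAD_KW, DC_CHAIN)
print(f"AC chain: {ac:.1f} kW, DC chain: {dc:.1f} kW, "
      f"saving {(1 - dc / ac) * 100:.1f} percent")
```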
Enterprise executives will continue to struggle with issues like resource allocation and data load handling even under a fully modular infrastructure, but increasingly efficient hardware configurations and the arrival of end-to-end virtual data center environments will help keep deployed resources closely matched to actual data loads.
This is a radical shift in the way data infrastructure is designed and managed, but it is a necessary change to keep up with the demands of 21st-century data users.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.