Is the data center industry really poised to go modular?
That's the impression you get from supporters of the technology, who argue that integrated compute, storage and network configurations are both cheaper and more efficient than traditional infrastructure and will provide a much more flexible and dynamic foundation for the mobile and cloud-based data environments that are quickly becoming the norm for knowledge workers.
There certainly isn't any hesitation on the part of the vendor community about putting modular systems into the channel. Cisco and NetApp recently introduced a new line of ExpressPods targeted directly at small and medium businesses, in the hopes that it will appeal to organizations that require increased data capabilities but are hesitant to take on full IT management staffing and infrastructure. The systems utilize Cisco's UCS C-Series servers and Nexus 3048 switches, along with NetApp's FAS2200 storage array. Smaller units hold two C220 servers and the FAS2220 array, while the larger version holds four servers and the 2240 array. Both systems support VMware, Microsoft and Citrix virtualization, as well as a range of management platforms.
Modular systems are also gaining greater reliability ratings, overcoming fears that power and networking issues can cause problems as systems are scaled out. IO's IO.Anywhere line, for example, recently received UL 2755 certification by demonstrating the safety of its data architecture, power distribution and cooling technologies, and smoke/fire protection systems. The UL 2755 standard is aimed specifically at modular and containerized data center systems, addressing concerns about concentrating sensitive technology in a confined space. The certification is in the process of becoming a full ANSI standard and is also poised for international adoption through the International Electrotechnical Commission.
Modular systems are also being devised for high-performance computing (HPC) environments. SGI recently installed one of its ICE Cube Air systems in the Department of Energy's National Energy Technology Laboratory in Morgantown, W.Va., where it will crunch numbers for the government's Carbon Capture and Storage Initiative. The system holds 378 of the company's Rackable servers, sporting a total of 24,192 Xeon E5-2670 processor cores, tied together via Mellanox's ConnectX adapters and IS6500 switches. It also provides 72 TB of memory and can deliver more than 500 teraflops of performance, making it the 44th most powerful system in the world. And probably most significantly for the DoE, it provides an average PUE rating of 1.03.
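For readers unfamiliar with the metric, PUE (Power Usage Effectiveness) is simply the ratio of total facility power to the power consumed by the IT equipment itself, so a value of 1.03 means only about 3 percent of the facility's draw goes to cooling, power distribution and other overhead. A minimal sketch of the calculation (the kilowatt figures below are illustrative, not SGI's actual numbers):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.

    A perfect score is 1.0 (every watt goes to compute); typical legacy
    data centers run closer to 1.8-2.0.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,000 kW of IT load plus 30 kW of overhead
print(round(pue(1030.0, 1000.0), 2))  # -> 1.03
```

The closer the ratio gets to 1.0, the less energy is wasted on non-compute infrastructure, which is why a 1.03 figure for a containerized, air-cooled system is notable.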
Is this the future, then? Is the entire enterprise industry on the cusp of a complete modular makeover? Not quite, according to Switch VP Mark Thiele. As a relatively new technology, modularity is benefitting from a number of myths that, once dispelled, suggest it is suitable for some data environments, but not all. First off, he notes, traditional data infrastructure is much more customizable than modular systems, meaning it can be more easily tweaked to meet emerging challenges; about the only scaling option for a modular system is to add more modules. And modular systems generally do not offer the same level of data protection, or the ability to increase density to suit changing data volumes, making them a risky proposition for Big Data and mission-critical environments.
As for the rest of the enterprise industry, modular is likely to emerge as an adjunct to traditional infrastructure, but not a replacement. Modular is great at provisioning additional capacity where and when you need it in relatively short order, and it offers a higher level of control than outsourced cloud resources.
Consider modular systems another tool in the shed as the enterprise tries to maintain order in an increasingly complex data environment.