As we enter the era of Big Data and the Internet of Things, the enterprise needs two things from its data infrastructure: rapid scale and minimal complexity. Modular infrastructure satisfies both demands, which is why it is gaining ground in the enterprise data center as well as in cloud and colocation facilities.
According to Research and Markets, the modular data center industry is growing by nearly 30 percent per year, with an expected increase from $10.34 billion in 2016 to more than $38 billion by 2021. Key drivers include the need to expand performance and capacity while maintaining, or even decreasing, energy consumption, as well as reducing the complexity of overall infrastructure to allow for improved provisioning, integration and management. As expected, the Asia-Pacific region is the fastest-growing market for modular systems given its high data demands and relatively low installed base of traditional, silo-based infrastructure.
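The "nearly 30 percent per year" figure can be sanity-checked against the dollar amounts with a quick compound-growth calculation. The sketch below uses the forecast figures cited above; treating 2016 to 2021 as a five-year window is an assumption about how the forecast was framed:

```python
# Sanity check of the ~30% annual growth claim using the
# compound annual growth rate: CAGR = (end / start)^(1/years) - 1.
# Dollar figures come from the Research and Markets forecast cited
# above; the 5-year window (2016 -> 2021) is an assumption.

start_usd_bn = 10.34   # market size in 2016, in billions of dollars
end_usd_bn = 38.0      # projected market size by 2021
years = 5

cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 30% per year
```

The implied rate works out to just under 30 percent, consistent with the headline figure.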
Modular solutions are emerging as the primary means of supporting IoT operations. As EMC’s Matt Oostveen, CTO of VCE solutions for Asia-Pacific & Japan, told Computerworld recently, IoT infrastructure needs to be flexible, scalable and able to provide seamless connectivity to a wide range of pooled resources, while delivering massive scale at low cost. Already, traffic from a relatively small number of connected devices (only a few hundred million at the moment) is starting to push the limits of traditional server and storage arrays. That means modular, converged architectures will have to play a more prominent role in both the central and distributed data architectures that drive high-speed analytics going forward.
But given all the terms floating around these days – modular, prefabricated, converged, hyper-converged – how is the enterprise supposed to determine which solution is the right one? As tech consultant Anand Srinivasan explains to Smart Data Collective, the choice usually comes down to needs and objectives. A prefab data center, for example, is typically housed in a shipping container or small building to provide instant expansion to existing facilities, while modular or converged systems feature individual server, storage and related components that can be swapped and upgraded with relative ease. Hyper-converged systems use fully integrated compute/storage/networking modules, which makes them easy to scale but can lead to over-provisioning if a particular load requires, say, more storage but not more processing or networking. To make the right decision, organizations will have to take a hard look at their current and expected future applications to determine their connectivity, latency, availability and other requirements.
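The trade-offs above can be boiled down to a toy decision sketch. This is purely illustrative: the two yes/no questions and the mapping to categories are a simplification of the reasoning described, not a vendor methodology or a complete selection framework.

```python
# Illustrative sketch only: a toy helper encoding the trade-offs
# discussed above. The inputs and rules are simplified assumptions
# for illustration, not an authoritative selection procedure.

def suggest_architecture(need_rapid_site_expansion: bool,
                         resources_scale_together: bool) -> str:
    """Map two coarse requirements to a deployment style."""
    if need_rapid_site_expansion:
        # Prefab units (e.g., containerized) add whole-site
        # capacity quickly alongside existing facilities.
        return "prefabricated"
    if resources_scale_together:
        # Integrated compute/storage/networking nodes scale in
        # lockstep, but risk over-provisioning when only one
        # resource (say, storage) actually needs to grow.
        return "hyper-converged"
    # Independently swappable server and storage components let
    # each resource grow at its own rate.
    return "modular/converged"

print(suggest_architecture(False, True))  # hyper-converged
```

Even a crude model like this makes the article's point concrete: the right choice depends on whether capacity, uniform scaling, or independent component growth is the dominant requirement.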
It’s probably an overstatement to say that all IT infrastructure will go modular before too long. While the operational and economic advantages are significant, most organizations have invested heavily in their legacy systems and will likely continue to see good performance from existing applications for some time to come.
But as business models skew toward digital services and processes in the coming decades, it is reasonable to assume that few data centers will be built from the ground up around complex data architectures that require expensive, time-consuming integration and upkeep. With data productivity increasingly driven by software-defined environments, a simplified hardware layer starts to look better and better.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.