    For Scale, It’s Best to Go Modular

    The enterprise seems content to build and maintain local data infrastructure despite the cost and flexibility benefits of the public cloud, but that does not mean it will continue with the complex, silo-laden data centers of today.

    Instead, new modular solutions are entering the channel that promise to streamline both the deployment and ongoing management of on-premises resources while replicating the scale and operational efficiencies of the cloud.

    The modular data center market is expected to grow more than eightfold by 2025, according to Insight Partners, climbing from today’s $2.65 billion to more than $22 billion. Part of this can be attributed to the lower capital costs of deploying a uniform form factor, along with the simplified installation and integration that comes with it. But data infrastructure is also starting to push beyond traditional data center walls, where ongoing hands-on maintenance and oversight are harder to provide. Indeed, one of the hottest growth areas in the modular sector is the all-in-one data center, usually housed in a 20-foot or 40-foot shipping container, which can be deployed virtually anywhere and brought online in record time.

    This is one of the primary motivations behind the Open19 Initiative, an effort by LinkedIn and others to provide an alternative to Facebook’s Open Compute Project (OCP) that would establish a foundational hardware layer for both modularized central data centers and the growing number of remote facilities on the IoT edge. The group has focused its attention on a standard 19-inch rack configuration built around a “cage and bricks” architecture, in which the rack houses the cage and individual servers act as the bricks that slot into it. In this way, organizations can mix and match servers in multiple form factors and easily snap power and data cabling on and off to suit the processing loads they require. Project leaders say their aim is to create a more supplier-friendly framework that allows vendors to contribute without necessarily giving up design secrets.
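
    To make the mix-and-match idea concrete, here is a minimal sketch (in Python, using purely illustrative brick names and slot counts that are not taken from the Open19 specification) of a cage modeled as a grid of slots, with bricks of different form factors each occupying one or more of them:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Brick:
            name: str
            width: int   # half-width slots per rack unit (1 = single, 2 = double-wide)
            height: int  # rack units occupied (1 = standard, 2 = double-high)

        @dataclass
        class BrickCage:
            columns: int = 2   # assumed: two half-width bricks side by side per rack unit
            rows: int = 8      # assumed cage height in rack units

            def capacity(self) -> int:
                return self.columns * self.rows

            def fits(self, bricks: list[Brick]) -> bool:
                # Coarse check only: total slots requested vs. slots available.
                needed = sum(b.width * b.height for b in bricks)
                return needed <= self.capacity()

        cage = BrickCage()
        mix = [Brick("web", 1, 1)] * 8 + [Brick("storage", 2, 2)] * 2
        print(cage.fits(mix))   # True: 8 + 8 of the 16 available slots

    The point of the sketch is simply that once every server conforms to a small set of form factors, capacity planning and hardware swaps reduce to slot arithmetic rather than bespoke integration work.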

    For traditional vendors, modular seems like the best way to keep revenue streams alive in the enterprise market, says Fortune’s Jonathan Vanian. Even under a hybrid cloud scenario, the enterprise needs to keep both its hardware costs and its footprint to a bare minimum in order to make local resources cost-effective. While this runs the danger of sacrificing high-margin integrated platforms to low-margin commodity infrastructure, companies like HPE still benefit from groups like Open19 and OCP because they can differentiate their offerings on an operational level even while conforming to a common physical ecosystem.

    But given IT’s penchant for turning off-the-shelf systems into customized point solutions, what’s to stop today’s technicians from doing the same with modular systems? In a word, scale, says NetMagic’s Pankaj Nath. With today’s data center facing the addition of millions of device-driven data streams and the massive analytics loads needed to make sense of it all, the enterprise will have neither the money nor the manpower to craft custom solutions as it did in the past. Web-scale architectures must not only be agile and collaborative, they also require a high degree of automation to effectively support service-based business processes – all of which is extremely difficult when dealing with sprawling, disconnected conglomerations of hardware. Only through modular, software-driven and open infrastructure will the enterprise be able to cope with the plethora of cloud service, mobile computing, social networking and Big Data workloads that are quickly displacing traditional business operations.
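
    As a rough illustration of the automation Nath describes (a sketch only, with hypothetical module names and abstract capacity units), the following places each incoming workload on whichever modular unit currently has the most headroom instead of hand-provisioning a dedicated silo for it:

        from dataclasses import dataclass

        @dataclass
        class Module:
            name: str
            capacity: float   # abstract capacity units (cores, slots, whatever the operator tracks)
            used: float = 0.0

            @property
            def free(self) -> float:
                return self.capacity - self.used

        def place(modules: list[Module], demand: float) -> Module:
            # Pick the unit with the most headroom; no per-silo hand provisioning.
            target = max(modules, key=lambda m: m.free)
            if target.free < demand:
                raise RuntimeError("no headroom left; scale out by adding another module")
            target.used += demand
            return target

        fleet = [Module("edge-container-1", 100), Module("core-rack-3", 400)]
        for job in (25, 40, 300):
            print(job, "->", place(fleet, job).name)

    The same placement loop works whether the fleet is a single container on the edge or dozens of racks in a central facility, which is what lets the approach scale where hand-crafted point solutions cannot.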

    Even within modular infrastructure, however, organizations will still have the ability to customize their environments where it matters – in software. This will allow for broad differentiation of services and deep insight into the unique data flows that each organization is expected to generate.

    And should anything go wrong on the physical layer, a properly architected data environment will be able to easily, even automatically, shift loads away from the failing device until it can be swapped out for a working unit, with not much more difficulty than changing the batteries on the TV remote.
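
    A minimal sketch of that failover behavior (hypothetical unit and workload names, assuming an external health check has already flagged the failing brick) might look like this: drain the unit by rescheduling its workloads round-robin onto healthy peers, then swap the hardware at leisure.

        from dataclasses import dataclass, field

        @dataclass
        class Unit:
            name: str
            healthy: bool = True
            workloads: list[str] = field(default_factory=list)

        def drain(failed: Unit, fleet: list[Unit]) -> None:
            # Move everything off the failing unit onto healthy peers, round-robin.
            peers = [u for u in fleet if u.healthy and u is not failed]
            for i, wl in enumerate(failed.workloads):
                peers[i % len(peers)].workloads.append(wl)
            failed.workloads.clear()

        a, b, c = Unit("brick-a"), Unit("brick-b"), Unit("brick-c")
        a.workloads = ["db", "cache", "api"]
        a.healthy = False
        drain(a, [a, b, c])
        print(b.workloads, c.workloads)   # loads shifted off the failing brick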

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
