
    Is Hyperscale Consolidation Really a Good Thing?

    The hyperscale data center model is poised to show its might over the next three years, buoyed by rising data loads, shrinking budgets, and the need to rapidly convert to dynamic, increasingly automated data infrastructure.

    But even as the enterprise industry migrates to the cloud, it is worth examining some of the downsides of this trend, particularly the consequences of consolidating resources onto a relatively small number of physical data centers around the world.

    According to Wintergreen Research, the worldwide hyperscale market is set to grow from today’s $87 billion to nearly $360 billion by 2023. This will come largely at the expense of enterprise-based web servers and related infrastructure, but it should attract much of the back-office workload as well. The advantage of going hyperscale lies in the ability to reduce physical-layer infrastructure to two basic components: a single-chip server and a matching ASIC at the network level. Under this model, entire data center instances can be created in software and populated with the latest automated, self-governing tools available.
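    To make the “created in software” point concrete, the sketch below shows a piece of infrastructure being provisioned through nothing more than an API call to one of the big hyperscale platforms. It is a minimal illustration, not a recommendation: it assumes AWS’s boto3 Python SDK with credentials already configured, and the region, machine image and instance type shown are placeholder values.

        # Minimal sketch: infrastructure declared entirely in software.
        # Assumes AWS credentials are already configured; the region, AMI ID and
        # instance type below are placeholder assumptions, not recommendations.
        import boto3

        ec2 = boto3.resource("ec2", region_name="us-east-1")

        instances = ec2.create_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder machine image
            InstanceType="t3.micro",           # placeholder instance size
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "role", "Value": "back-office-worker"}],
            }],
        )

        print(instances[0].id)   # the new server exists as soon as the call returns

    The same pattern scales up: templates and orchestration tools can stand up, tag and tear down entire environments this way, which is much of what makes the consolidated hyperscale model attractive to stretched IT budgets.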

    These facilities will also feature state-of-the-art security, resiliency and other elements to ensure continuous service to clients, but the fact remains that there are fewer than 300 hyperscale centers in operation at the moment, and nearly half of them are located in the United States. Synergy Research Group, which tracks hyperscale development, estimates that another 100 or so will come online by 2019, which means the vast bulk of economic activity around the world could be vulnerable to breaches and other disruptions if the designs of these facilities are not as robust as their creators think. Remember, it is easy to harden data infrastructure against known or anticipated risks, but unknown risks are an entirely different matter.

    Still, the transition from on-premises data infrastructure to the cloud is in full swing and isn’t likely to abate any time soon. IDC reports that traditional data centers still command about 57 percent of the total IT budget, but that share will fall below 50 percent by 2020. And with public cloud already accounting for more than 60 percent of total cloud spend, it seems likely that the vast majority of organizations will migrate to the top MSPs once the three-year lifecycle of today’s in-house infrastructure winds down. IDC pegs current cloud infrastructure spending at $44.2 billion, with growth in the coming year expected to top 18 percent.

    Interestingly, it isn’t so much the scale of hyperscale that draws enterprise interest, nor even the cost, but the line-up of available tools, says research house Clutch. In a recent survey of 247 top IT decision-makers, the firm found that the quality of enterprise-class services like migration and analytics tops the list of considerations when choosing among the big three hyperscalers: Amazon, Google and Microsoft. Secondary considerations include factors like security and brand recognition. As such, it appears that Amazon will continue to dominate IaaS deployments, with Microsoft owning cloud-facing services and Google leading in analytics.

    Throughout computing’s history, infrastructure has developed around the idea that single points of failure are best avoided. This is how mainframes came to house multiple cores, how distributed server architectures came to replace the mainframe, and how virtualized, abstracted environments came to dominate the data centers of today.

    The hyperscale movement doesn’t reverse this trend completely, but it does introduce consolidation at the facilities level, shrinking the pool of potential failure points from the millions found in on-premises infrastructure, or the tens of thousands in generic cloud computing, down to a few hundred sites worldwide.

    There is no doubt that hyperscale sports the latest and greatest when it comes to security, resiliency and all the rest, but the amount of data under management is staggering, and the damage, financial and otherwise, from data theft or even an hour of lost service could be considerable.

    It’s something to think about before the migration starts en masse.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

