Can Old Infrastructure Really Support the New Data Center?


    The words “rip and replace” are among the most feared in the IT lexicon—right up there with “denial of service” and “The CIO wants you in his office right now.”

    But now that the enterprise is contemplating a data environment that will propel business into the 21st Century, some organizations are giving serious consideration to wholesale replacement of aging infrastructure. In an increasingly interconnected world, it has not gone unnoticed that many emerging markets are already building forward-leaning data environments atop gleaming new hardware platforms.

    Indeed, says EuroCloud co-founder Phil Wainewright, those who don’t embrace some level of rip-and-replace will find themselves outclassed by rivals who do. When change moves at hyperspeed, delay is the enemy: it not only limits your ability to compete, it makes the inevitable change that much harder as new systems and software become integrated with the old.

    The key thing to keep in mind, though, is that the R&R experiences of the client-server era do not have to be repeated in the cloud. With the ability to spin up entirely new data environments purely in software, the enterprise can replace first, then rip—an approach far less disruptive to normal operations than tearing out a fully integrated IT stack.

    According to the Economist Intelligence Unit, nearly 60 percent of IT executives say they spend considerable time addressing legacy IT issues but consider replacing aging systems as either impractical or improbable. For Richard Clark, business development director for insurance software developer Xuber, this is a call for a gradual approach to infrastructure replacement based on a specialized management stack that supports resource compartmentalization. By carving out discrete resource sets—by region, function or some other criteria—the enterprise can implement an upgrade strategy without facing the disruption of an end-to-end retrofit.

    This approach has a certain appeal, of course, but it often produces less than stellar results. Data performance is limited by the weakest link in the resource chain, so a gradual approach will often require a significant capital investment, resulting in only limited gains in functionality. And by the time the last piece is upgraded, it’s usually time to start over again.


    But even a software-driven ecosystem does not remove the specter of rip-and-replace completely. By now, many organizations are steeped in virtualization, based predominantly on the VMware hypervisor. Making a change to the virtual layer is no easy task, which is why companies like Nutanix are devoting an ever-larger portion of their research budgets to streamlining the process and making it less disruptive. At the moment, Nutanix’s storage platform works with both VMware’s hypervisor and Nutanix’s own, but increasing tension between the two companies may put an end to that, even as both profess strong commitments to interoperability and open platforms.

    At some point, however, many tech analysts expect a fully software-driven infrastructure to do away with issues surrounding major upgrades. On the hardware level, loads can simply be shifted away from the relevant components and then brought back when the change-out is complete. Software upgrades will continue as normal, but hopefully they will be a bit more streamlined under increasingly sophisticated automation platforms. But to get there, says Brocade’s Alan Murphy, you’ll need to perform probably the biggest rip-and-replace of all: conversion of legacy static networking infrastructure to advanced fabric technology.

    When John Kennedy committed the United States to putting a man on the moon, he said we should pursue great deeds not because they are easy but because they are hard. Rebuilding data infrastructure is not as hard as going to the moon, but it does represent the same journey: Mistakes will be made and setbacks will occur, but in the end the enterprise gains not only an advanced architecture but the knowledge of what it took to build it and how it can best be leveraged for the future.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.

