Data Availability: Will It Ever Be Good Enough?


    For much of IT’s history, digital infrastructure was deployed, managed and upgraded to support the business model, whether that was selling cars, providing legal services or what have you. If the movement of data was disrupted, it was a hassle, but core business functions like sales continued normally.

    But no longer. These days, digital is the business model. Virtually every product on the market now carries a digital service component that differentiates it from competitors, meaning that those who skimp on things like availability and reliability run the risk of losing out to those who grasp their importance.

    This is leading to strong demand for more resilient data infrastructure, both in the enterprise and in the cloud. This is easier said than done, however, since it involves a host of disciplines ranging from improved data and power redundancy to rapid failover and in-depth visibility of data constructs that are usually owned by someone else.

    And beneath it all, says FairPoint Communications’ Chris Alberding, is the nagging feeling that what you’ve done is never enough. How can the enterprise ensure reliable power? What level of redundancy is necessary for high availability? In an age of continuous integration/continuous deployment, is anything less than 100 percent uptime acceptable anymore? And at what point will end-to-end redundancy lead to overly complex infrastructure that drives up costs and may actually hamper data operations?
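The redundancy and uptime questions above come down to arithmetic that is worth making explicit. As a rough, illustrative sketch (not drawn from the article), the difference between "three nines" and "five nines" of availability, and the payoff of running independent redundant components, can be computed directly:

```python
# Back-of-the-envelope availability arithmetic (illustrative only).

MINUTES_PER_YEAR = 365 * 24 * 60


def downtime_minutes(availability: float) -> float:
    """Expected downtime per year for a given availability fraction."""
    return (1 - availability) * MINUTES_PER_YEAR


def parallel_availability(a: float, n: int) -> float:
    """Availability of n independent redundant components, each with
    availability a: the system is down only if all n fail at once."""
    return 1 - (1 - a) ** n


print(f"99.9%   -> {downtime_minutes(0.999):6.1f} min/yr of downtime")
print(f"99.999% -> {downtime_minutes(0.99999):6.1f} min/yr of downtime")
print(f"two 99.9% nodes -> {parallel_availability(0.999, 2):.6f} availability")
```

The independence assumption is the catch: two replicas on the same power feed fail together, which is why the article treats power redundancy as inseparable from data redundancy.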

    A key component of data availability, of course, is power availability, and this is one of the main stumbling blocks in the drive to steer data operations toward greener forms of energy. Hyperscale providers like Facebook and Microsoft are at the cutting edge of renewable sources like wind and solar, and they are starting to put pressure on states and utilities to make those supplies more reliable. The Renewable Energy Buyers Alliance was recently formed out of a loose coalition of energy and environmental groups with the aim of upping renewable supplies by 60 GW over the next 10 years. A key initiative is a more standardized procurement framework that makes it easier to source power from renewable supplies.

    Increased automation is also a key factor in availability because, let’s face it, human error is a major contributor to downtime. Automated Data Center Infrastructure Management (DCIM) systems like Schneider Electric’s StruxureWare platform help ensure high availability in both power and data infrastructure even in the presence of highly dynamic data loads. The platform was recently upgraded with a number of remote access and predictive management tools that allow users to head off potential issues before they arise, even on remote or colocated infrastructures.
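To make the "head off potential issues before they arise" idea concrete, here is a hedged, generic sketch (not the StruxureWare API) of the kind of predictive check a DCIM platform might run: extrapolate a sensor trend and flag it before the threshold is actually crossed.

```python
# Illustrative predictive check: flag a creeping sensor trend early.
# This is a generic sketch, not any vendor's actual API.

def projected_breach(readings, threshold, horizon):
    """Fit a linear trend to evenly spaced readings and report whether
    the value is projected to cross `threshold` within `horizon`
    future samples."""
    n = len(readings)
    if n < 2:
        return False
    slope = (readings[-1] - readings[0]) / (n - 1)
    projected = readings[-1] + slope * horizon
    return projected >= threshold


# Rack inlet temperature creeping upward: still below a hypothetical
# 27 C limit, but projected to cross it within three more samples.
temps = [24.0, 24.5, 25.1, 25.6, 26.2]
print(projected_breach(temps, threshold=27.0, horizon=3))  # -> True
```

The point of automating such checks is exactly the one the paragraph makes: the alert fires on the trend, hours before a human watching a dashboard would react to the breach itself.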

    But with application portfolios becoming increasingly diverse, is it even possible to guarantee uptime for everything? As Kevin O’Connor, senior director of the Cloud Solutions Group at PC Connection, notes, the cost of failure can run to the millions of dollars for a large organization, so where should scarce resources go in the interests of fostering high availability? Storage infrastructure is a good place to start. It should be made flexible enough to handle a diverse workload while at the same time capable of supporting real-time performance and advanced analytics. A software-defined data center (SDDC) architecture is also crucial, as it is much easier to foster redundancy and maintain availability on a flexible software stack than one that is inextricably tied to hardware.
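Why is redundancy easier on a software-defined stack? Because replica placement becomes data rather than wiring. The minimal sketch below (names are illustrative, not a real product's API) shows failover reduced to a filter over a replica list:

```python
# Illustrative only: in a software-defined stack, "failover" can be
# a selection over replica state rather than a hardware change.

REPLICAS = [
    {"name": "store-a", "healthy": True},
    {"name": "store-b", "healthy": True},
    {"name": "store-c", "healthy": False},  # e.g., a failed disk shelf
]


def pick_backend(replicas):
    """Return the first healthy replica's name, or None if all are down."""
    for replica in replicas:
        if replica["healthy"]:
            return replica["name"]
    return None


print(pick_backend(REPLICAS))    # -> store-a
REPLICAS[0]["healthy"] = False   # primary fails...
print(pick_backend(REPLICAS))    # -> store-b  ...traffic shifts in software
```

In a hardware-bound design the same switchover means recabling or reconfiguring controllers; in software it is a state change, which is the availability advantage the SDDC argument rests on.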

    There are no guarantees in life, and that applies to the digital, virtual lives that we humans are building for ourselves as well. People make mistakes (and so do machines), systems fail, data becomes corrupted and, in general, things don’t always go as planned.

    The IT industry has already made a major leap forward by recognizing that availability isn’t about preventing failure at all costs but minimizing the impact when it does happen. The challenge now is to implement this mindset on the infrastructure that we rely upon so that even the biggest of problems won’t bring our digital lives to a halt.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
