Maintaining Control of Flexible, Distributed Storage

    Storage has never been cheaper or easier to provision than it is now. But that does not mean it no longer presents a challenge to the enterprise.

    Indeed, with storage infrastructure now distributed across the cloud and line-of-business managers capable of creating virtually any type of storage they need for a given task, the sheer diversity of media, formats and standards is making it extremely difficult to even come close to a unified, federated storage environment.

    But that does not mean it is impossible. To get there, the enterprise will have to seek out innovative new ways to oversee both data and infrastructure.

    For one thing, says Computerworld’s David Spark, you can target specific storage-related issues like latency and availability rather than shoot for the moon with an all-encompassing management approach. In this way, you can ameliorate the key issues affecting users while at the same time streamlining processes on the back end. As well, there is a lot to be said for consolidating compute and storage infrastructure under hyperconverged footprints and then implementing a robust metadata layer that will aid both manual and emerging automated management functions. In a broader sense, however, the sooner the enterprise replaces the current piecemeal deployment of storage with a more strategic approach, the sooner it will be able to re-exert control over its data.

    With data loads growing larger and more diverse by the day, however, the enterprise is not likely to provide a uniform storage environment that can optimize performance for every workload. This is why new approaches to infrastructure, such as convergence, need to be paired with intelligent data monitoring and management, says Windows IT Pro’s Bill Kleyman. The goal is to provide appropriate levels of service using diverse storage resources and other assets while still maintaining thorough visibility and control across resources that can no longer be defined as simply storage, compute or networking. If done right, such an environment will not only lower costs and be easier to manage, but will provide the flexibility and scalability to meet the demands of the rising tide of digital data processes.

    When contemplating a universal storage infrastructure, it is tempting to consider open source as a foundation. While it is true that open platforms enjoy the support of broad communities and vibrant development tracks, they still require some degree of integration, and if any functions are missing from the open portfolio, it’s usually up to the enterprise to create them from scratch.

    But this isn’t to say open source is not making its mark on the storage environment. Red Hat, for one, just came out with Ceph Storage 2, which leverages the latest Ceph Jewel release to streamline management of object-based workloads and take some of the sting out of implementing and running an open source storage solution. The platform offers features like global object storage clusters that support a single namespace and data sync across geographically distributed environments, as well as deeper integration with Amazon S3 and OpenStack Swift for enhanced object versioning, bulk deletion and other functions.
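    To make the S3 integration concrete: Ceph's RADOS Gateway (RGW) speaks the S3 wire protocol, so ordinary S3 tooling such as boto3 can drive features like the versioning and bulk deletion mentioned above. The sketch below is illustrative, not Red Hat's own tooling; the endpoint, bucket name and credentials are placeholders.

    ```python
    def bulk_delete_payload(keys):
        """Build the request body for the S3 DeleteObjects call,
        which removes up to 1,000 keys in a single round trip."""
        return {"Objects": [{"Key": k} for k in keys], "Quiet": True}

    def demo():
        # Not invoked here; requires a reachable RGW endpoint and
        # valid credentials (all values below are placeholders).
        import boto3  # assumes boto3 is installed
        s3 = boto3.client(
            "s3",
            endpoint_url="http://rgw.example.com:7480",  # hypothetical RGW host
            aws_access_key_id="ACCESS_KEY",
            aws_secret_access_key="SECRET_KEY",
        )
        # Enable object versioning on a bucket served by RGW
        s3.put_bucket_versioning(
            Bucket="demo-bucket",
            VersioningConfiguration={"Status": "Enabled"},
        )
        # Delete many objects in one request instead of one call per key
        s3.delete_objects(
            Bucket="demo-bucket",
            Delete=bulk_delete_payload(["old/a.log", "old/b.log"]),
        )
    ```

    Because RGW presents a standard S3 endpoint, the same client code can move between Amazon S3 and an on-premises Ceph cluster by changing only the endpoint URL.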

    Meanwhile, SUSE is out with its own Jewel-based system, SUSE Enterprise Storage 3, which the company has linked to HPE’s Scalable Object Storage Solution for easy deployment on both the Apollo storage-optimized server and the more common ProLiant family. In this way, users can implement software-defined, object-based storage with relative ease while maintaining high levels of resiliency and redundancy even as workloads transition to more commodity-based infrastructure. In addition, says Swapnil Bhartiya, the release provides for native filesystem access through the POSIX-compliant CephFS, which enables unified block, object and file access across storage clusters.
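    As an illustration of that POSIX-level access, a client host can mount CephFS through the Linux kernel driver like any other filesystem, for example via an /etc/fstab entry. The monitor hostnames, mount point and key file below are placeholders for a hypothetical cluster, not values from either vendor's documentation.

    ```
    # /etc/fstab entry mounting CephFS via the kernel client (placeholder hosts)
    mon1.example.com:6789,mon2.example.com:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  2
    ```

    Once mounted, applications read and write through ordinary POSIX file operations while the same underlying RADOS cluster continues to serve block and object workloads.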

    Storage is a good example of the catch-22 that most enterprises find themselves in these days: The easier you make things for users, the tougher it gets for managers. Hopefully, it won’t be this way for very long as new management approaches place the focus on data rather than infrastructure.

    But in the transitional period, at least, expect a number of stops and starts as organizations strive to make storage as flexible and expandable as possible without losing control over where data goes and who can access it.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
