Global Footprints Require Global Storage

    One of the primary benefits of the cloud is the ability to distribute data architectures across wide geographic areas. Not only does this protect against failure and loss of service, but it allows the enterprise to locate and provision the lowest-cost resources for any given data load.

    But problems arise in managing and monitoring these disparate resources, particularly as Big Data and other emerging trends require all enterprise data capabilities to be marshalled into a cohesive whole.

    When it comes to storage, many organizations are attempting to do this through global file management, which essentially puts SAN and NAS capabilities on steroids. The idea, as Nasuni and other proponents point out, is to extend resource connectivity across broadly distributed architectures while maintaining centralized control. This is not as easy as it sounds, however. Traditional snapshot and replication techniques must now work across multiple platforms and manage the many data versions that would otherwise overwhelm standard storage architectures. They must also be flexible enough to accommodate numerous performance levels, yet not so unwieldy as to drive up costs by endlessly copying data sets for each new cloud deployment.
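    One common way to keep many snapshot versions without endlessly copying whole data sets is content addressing: blocks are stored under a hash of their contents, so each new snapshot adds only the blocks that actually changed. The sketch below is a toy illustration of that general technique (all class and method names are hypothetical, not any vendor's API):

```python
import hashlib

class SnapshotStore:
    """Toy content-addressed store: identical blocks are kept once,
    so each snapshot adds only the blocks that actually changed."""

    def __init__(self):
        self.blocks = {}     # sha256 digest -> block bytes
        self.snapshots = {}  # snapshot name -> ordered list of digests

    def take_snapshot(self, name, data, block_size=4):
        digests = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # dedup: store once
            digests.append(digest)
        self.snapshots[name] = digests

    def restore(self, name):
        return b"".join(self.blocks[d] for d in self.snapshots[name])

store = SnapshotStore()
store.take_snapshot("v1", b"ABCDEFGH")
store.take_snapshot("v2", b"ABCDXYZW")   # only the second block differs
assert store.restore("v2") == b"ABCDXYZW"
# Two snapshots reference four blocks in total, but only three
# unique blocks are physically stored.
print(len(store.blocks))  # -> 3
```

    Production systems layer compression, garbage collection and cross-site replication on top of this idea, but the core cost saving comes from the same dedup-by-hash step shown here.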

    Naturally, this will require a fair amount of cooperation between GFS developers and cloud providers. Panzura, for example, recently partnered with Microsoft to add support for the Panzura Global File System to the Azure Cloud. The goal is to encourage use of Azure storage for primary applications by presenting globally distributed users with what amounts to a single, large working group. Panzura provides a unique file-locking system that gives all users real-time access without the file corruption and other issues that arise from simultaneous reads and writes. The platform is also optimized for latency- and bandwidth-sensitive applications, providing in-office performance and cross-site collaboration even on a globally distributed footprint.
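    Panzura's locking protocol is proprietary, but the general shared-read/exclusive-write pattern it addresses can be illustrated with a toy lock table: many sites may read a file concurrently, while a write requires exclusive ownership. The sketch below is a minimal, single-process illustration of that split (names and semantics are assumptions, not Panzura's implementation):

```python
class FileLockManager:
    """Toy global lock table: readers share a file, writers get it
    exclusively, so simultaneous reads and writes cannot corrupt it."""

    def __init__(self):
        self.locks = {}  # path -> ("write", owner) or ("read", {owners})

    def acquire_write(self, path, owner):
        if path in self.locks:
            return False                 # readers or another writer hold it
        self.locks[path] = ("write", owner)
        return True

    def acquire_read(self, path, owner):
        mode, holders = self.locks.get(path, ("read", set()))
        if mode == "write":
            return False                 # a writer is active
        holders.add(owner)
        self.locks[path] = ("read", holders)
        return True

    def release(self, path, owner):
        mode, holders = self.locks.get(path, (None, None))
        if mode == "write" and holders == owner:
            del self.locks[path]
        elif mode == "read":
            holders.discard(owner)
            if not holders:
                del self.locks[path]

mgr = FileLockManager()
assert mgr.acquire_read("/q3/report.xlsx", "tokyo")
assert mgr.acquire_read("/q3/report.xlsx", "london")    # readers share
assert not mgr.acquire_write("/q3/report.xlsx", "nyc")  # blocked by readers
mgr.release("/q3/report.xlsx", "tokyo")
mgr.release("/q3/report.xlsx", "london")
assert mgr.acquire_write("/q3/report.xlsx", "nyc")      # now exclusive
```

    A real global file system must make this lock state consistent across WAN links and survive site failures, which is where most of the engineering difficulty lies.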

    GFS is also starting to make its presence known in the rising Flash-based fabric technologies that are entering the data center. Western Digital’s HGST unit, for example, is using technology from another recent acquisition, Virident, to build a Flash fabric that integrates the full storage stack, from hardware to the application, under a single management architecture. With Virident’s ClusterCache and Share pooling technologies on board, the platform can now work with shared storage systems like Oracle RAC and the Red Hat GFS to provide SAN-like functionality across distributed infrastructure.


    As enterprise reliance on the cloud increases, GFS is likely to emerge as a necessity rather than a luxury, says Storage Switzerland’s Colm Keegan. Think about it: In order to establish the kind of interconnectedness that modern applications demand, off-site infrastructure will require NAS-like functionality. Without a global namespace mechanism in place, this would mean provisioning and managing a separate NAS system wherever new resources are deployed. Add to that the requirement of housing replication and mirroring capabilities at each site, and costs climb even further.
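    The global namespace Keegan describes behaves much like a routing table for file paths: every site sees one directory tree, and a resolver maps each path prefix to whichever backend actually holds the data. The following is a minimal sketch of that idea (all names and backends are hypothetical):

```python
class GlobalNamespace:
    """Toy global namespace: one logical directory tree, with each
    path prefix mapped to the backend storage that holds it."""

    def __init__(self):
        self.mounts = {}  # namespace prefix -> backend storage target

    def mount(self, prefix, backend):
        self.mounts[prefix] = backend

    def resolve(self, path):
        # Longest-prefix match, like a routing table for file paths.
        best = max((p for p in self.mounts if path.startswith(p)),
                   key=len, default=None)
        if best is None:
            raise FileNotFoundError(path)
        return self.mounts[best], path[len(best):]

ns = GlobalNamespace()
ns.mount("/corp/", "nas-hq.example.com")
ns.mount("/corp/eu/", "azure-westeurope")

backend, rel = ns.resolve("/corp/eu/sales/q3.csv")
assert backend == "azure-westeurope" and rel == "sales/q3.csv"
backend, _ = ns.resolve("/corp/us/hr.db")
assert backend == "nas-hq.example.com"
```

    Because applications only ever see the logical path, new cloud regions can be mounted into the tree without reconfiguring clients, which is precisely what removes the need for a separate NAS silo at every site.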

    With GFS in your arsenal, then, you have the ability to establish your cloud presence as a single, integrated storage environment that can be managed from a central interface. This not only improves your ability to support sharing, collaboration and other advanced applications, but helps to keep costs under control even as the overall data load continues to rise.

    Deploying cloud resources is crucial to expanding data capacity. Binding them together is the key to building a fully distributed data environment.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.
