    New Platforms Make It Easier to Jump into the Storage Pool

    In the quest to implement a cloud-based data environment, building an effective storage pool is often the first challenge. Server infrastructure, and to a large extent networking, are usually good to go once they are lifted onto a virtual plane, but storage pooling is a bit more complicated and requires careful calibration to ensure resources are neither squandered nor hoarded.

    In fact, the lack of an effective local storage pool is the main reason many knowledge workers turn to third-party providers like Amazon and Box for their storage needs: There are plenty of resources to go around, and the cost is more than reasonable. Of course, this puts the enterprise in a bind because the data heading to the outside is a valuable commodity, and yet no one wants to deprive employees of the ability to do their jobs.

    But a number of interesting developments are poised to hit the channel this year that just might make storage pooling easier and more palatable to organizations that want to maintain full control of their data.

    One is from a California company called Hedvig, which has developed a software-defined storage platform that aims to bring hyperscale storage capabilities to the enterprise. The company says it can compile storage pools from internal and external sources while reserving complete control to enterprise storage managers. This allows the creation of hyperscale functionality within the enterprise storage environment and more effective utilization of both on-site and remote resources. The platform is built around a proprietary scheme that collapses multiple storage layers onto a single software construct that can scale into the petabyte range using off-the-shelf x86 and ARM servers.

    Such a system fires a shot across the bow of established storage vendors like EMC, which is working up its own “data lakes” approach to scale-out storage. The company recently launched its Federation Business Data Lake, intended to store and analyze data from disparate sources Big Data-style. The platform relies on key EMC assets, including vCloud and Pivotal Cloud Foundry, as well as newly developed software aimed at data distribution and policy-based access. At the moment, the set-up can only be built on EMC storage, although the plan is to eventually incorporate third-party systems via the ViPR architecture. In the meantime, users can incorporate a range of non-storage components, such as Cloudera and Hortonworks for Hadoop analytics and MongoDB for visualization.

    Meanwhile, a company called ownCloud has released version 8 of its platform, which allows organizations to build file-server and collaborative environments similar to Dropbox using either on-premises or cloud-based resources. The system provides a file-sharing front end with browser-based controls, essentially creating pooled storage resources that can incorporate standard documents, object stores like OpenStack Swift and Amazon S3, and databases like SQL Server and MySQL. The company says the platform delivers mobile application functionality using a server that is controlled entirely by the enterprise.
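    The pooling idea behind a platform like this can be illustrated with a minimal sketch: a front end presents one namespace while quietly routing each file to whichever backend holds it. The class and method names below (StoragePool, LocalBackend, DictBackend) are purely illustrative assumptions, not part of ownCloud's or any vendor's actual API; the dictionary-backed store stands in for a remote object store such as an S3 bucket.

    ```python
    import os
    import tempfile

    class LocalBackend:
        """Keeps files in a local directory (the on-premises tier)."""
        def __init__(self, root):
            self.root = root
        def put(self, name, data):
            with open(os.path.join(self.root, name), "wb") as f:
                f.write(data)
        def get(self, name):
            with open(os.path.join(self.root, name), "rb") as f:
                return f.read()

    class DictBackend:
        """In-memory stand-in for a remote object store (e.g. S3 or Swift)."""
        def __init__(self):
            self.objects = {}
        def put(self, name, data):
            self.objects[name] = data
        def get(self, name):
            return self.objects[name]

    class StoragePool:
        """Unified namespace over several backends; remembers placement."""
        def __init__(self, backends):
            self.backends = backends
            self.placement = {}            # filename -> backend index
        def put(self, name, data, tier=0):
            self.backends[tier].put(name, data)
            self.placement[name] = tier
        def get(self, name):
            # Callers never see which tier the file landed on.
            return self.backends[self.placement[name]].get(name)
        def list(self):
            return sorted(self.placement)

    tmp = tempfile.mkdtemp()
    pool = StoragePool([LocalBackend(tmp), DictBackend()])
    pool.put("report.txt", b"on-premises copy", tier=0)
    pool.put("archive.bin", b"remote copy", tier=1)
    ```

    The key design point is the placement map: the enterprise retains a single control point over where data lives, which is exactly the leverage that third-party services take away.
    
    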

    And if the enterprise desires additional physical storage of its own to support scaled-out pools, Promise Technology is out with the VSky A-Series platform, which can deliver multiple petabytes within a few rack units. The A1100, for instance, is a 1U gateway server that draws from the company’s Vess and VTrak systems. It can scale up to three systems per node, typically consisting of a single RAID head plus two JBODs. There is also the A1970, a 4U top-loaded server that sports 70 drive bays and two independent server nodes, each capable of supporting 35 hard disk drives. Both systems support file, block and object-level storage, as well as NFS/CIFS, iSCSI and RESTful APIs.

    Whether through software or hardware, the enterprise has a vested interest in taking control of the storage pools that are steadily assuming the lion’s share of the overall data load. Converting decades’ worth of storage infrastructure to a dynamic repository of flexible capacity is no easy task, but the alternative is to persist in supporting aging storage architectures while the rest of the data stack sprints headlong into the 21st Century.

    And if employees cannot find the storage resources they need from their enterprise, they will most certainly find them somewhere else.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.