    Storage in a Diversifying Data Environment

    Larger data loads are coming to the enterprise, both as a function of Big Data and the steady uptick of normal business activity. This will naturally wreak havoc on much of today’s traditional storage infrastructure, which is being tasked not only with providing more capacity but also with speeding up and simplifying the storage and retrieval process.

    Most organizations already realize that, given the changing nature of data, simply expanding legacy infrastructure is not the answer. Rather, we should be thinking about rebuilding storage at a fundamental level in order to derive real value from the multi-sourced, real-time data emerging in the new digital economy.

    Infinidat recently took the wraps off its latest InfiniBox storage array, which offers a multi-petabyte NAS architecture within a single rack. The newest version features NFSv3 file support as a free, non-disruptive software upgrade, coupled with near-sync replication that reduces recovery time from minutes to mere seconds. The company stresses that the NAS upgrade is a true native implementation, not just a front end bolted onto a block system, and that it still supports traditional block-level features like single-console management, non-blocking snapshots, self-healing and high availability. The company has also released the mid-range F2000 array, which packs 250 TB into an 18U configuration with seven-nines availability.

    Meanwhile, Cloudian is out with the petabyte-scale HyperStore solution, which the company bills as “Forever Live” due to the hot-swappability of virtually every component in the system. The newest FL3000 array in the series scales up to 3.8 petabytes in a single rack and features self-service policy support at the bucket level while giving the enterprise oversight of data distribution, protection and other key functions. The system can be configured with up to eight 3U nodes, each with 128 GB of memory and dual flash drives for metadata storage. With bulk storage in a 60-drive 4U module, the entire system cuts space requirements, servers, cables and other necessities in half.
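
    For illustration, here is a minimal Python sketch of what self-service, bucket-level policy management can look like against an S3-compatible object store of this class (HyperStore exposes an S3-compatible API). The endpoint, credentials, bucket name and lifecycle rule are assumptions for the example, not Cloudian’s documented interface.

        import boto3

        # Hypothetical endpoint and tenant credentials for an
        # S3-compatible object store.
        s3 = boto3.client(
            "s3",
            endpoint_url="https://objectstore.example.com",
            aws_access_key_id="TENANT_ACCESS_KEY",
            aws_secret_access_key="TENANT_SECRET_KEY",
        )

        # A tenant creates its own bucket...
        s3.create_bucket(Bucket="sensor-archive")

        # ...and attaches a data-management policy to it: expire objects
        # after a year and clean up abandoned multipart uploads.
        s3.put_bucket_lifecycle_configuration(
            Bucket="sensor-archive",
            LifecycleConfiguration={
                "Rules": [{
                    "ID": "retain-one-year",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Expiration": {"Days": 365},
                    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
                }]
            },
        )

    The point is the division of labor: tenants manage policy per bucket through a standard API, while administrators retain system-wide oversight of how the data is distributed and protected.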

    The so-called Server SAN is also expected to play a large role in emerging infrastructure, with companies like EMC looking to build hybrid configurations to handle large, dynamic data loads. The company’s ScaleIO platform is almost ready to ship, says The Platform’s Timothy Prickett Morgan, even as EMC continues to develop traditional SANs like the VMAX and VMware touts virtualized clustered SANs atop the ESXi hypervisor. EMC is betting that, much like the original shift to SANs, the transition back to local storage will take time, so its support of on-server storage software will allow the enterprise to embrace the change without upending its entire storage environment with third-party start-up solutions.

    But as I mentioned, size and scalability are not the only factors in modern storage architectures. Data management is a key component as well, and it is drawing a range of start-ups into the field. Objectivity recently introduced a new Hadoop-based metadata management system tailored to the high-speed streaming data environments emerging in the Internet of Things. As it stands, much of the unstructured and semi-structured data coming from IoT sources is gibberish, says Datanami’s Alex Woodie, so Objectivity’s new ThingSpan platform offers a YARN-certified means of adding the framework and semantics needed to run that data through leading analytics engines without altering or compromising the meaning of the original data. The system works with the graph analytics and machine learning tools found in Apache Spark, ties into Apache Kafka for streaming ingest, and incorporates the streaming analytics capabilities of DataTorrent’s Project Apex, which the company says produces a more robust metadata management framework than standard NoSQL databases.
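
    To make the idea concrete, here is a minimal Python sketch of the general pattern such tools address: attaching a schema, the “framework and semantics,” to raw events arriving on a Kafka topic so a standard engine like Spark can analyze them while the original payload is carried along untouched. The broker, topic and schema are illustrative assumptions, and this is plain Spark Structured Streaming, not ThingSpan’s actual API.

        from pyspark.sql import SparkSession
        from pyspark.sql.functions import from_json, col
        from pyspark.sql.types import (StructType, StructField, StringType,
                                       DoubleType, TimestampType)

        # Requires the spark-sql-kafka connector package on the classpath.
        spark = SparkSession.builder.appName("iot-semantics-sketch").getOrCreate()

        # Hypothetical schema giving structure to semi-structured sensor readings.
        schema = StructType([
            StructField("device_id", StringType()),
            StructField("ts", TimestampType()),
            StructField("temperature", DoubleType()),
        ])

        # Read raw events from a Kafka topic (broker and topic are illustrative).
        raw = (spark.readStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")
               .option("subscribe", "iot-events")
               .load())

        # Parse each payload against the schema while preserving the original
        # bytes, so downstream analytics never alter the source data.
        events = raw.select(
            col("value").cast("string").alias("original_payload"),
            from_json(col("value").cast("string"), schema).alias("parsed"),
        ).select("original_payload", "parsed.*")

        # Stream the structured records out; in practice the sink would be an
        # analytics engine rather than the console.
        query = events.writeStream.format("console").start()
        query.awaitTermination()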

    It is tempting to think of traditional data loads, as well as the emerging ones in the Big Data and IoT spheres, as monolithic entities that can be managed with a single, overarching solution. In reality, there will likely be multiple data types, volumes, analytics requirements and other factors that will affect where and how storage is handled.

    Bigger storage is most certainly in the cards for the enterprise, but care has to be taken that it is paired with faster, smarter storage in order to derive full value from both the infrastructure being deployed and the data it is intended to support.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
