When Latency Kills, Memory Solutions Start Looking Better

    It seems almost ironic that the solid-state storage arrays that are quickly supplanting hard disks in the enterprise are already losing ground to server-side Flash and in-memory storage architectures.

    But in today’s data environment, latency kills, and even the split-second journey from the server farm to an all-Flash array across the room is proving to be too much for many emerging Big Data and IoT workloads.

    Enterprise and data center applications, which comprise about 75 percent of the overall market, are now the main drivers for technologies like the Non-Volatile Dual In-Line Memory Module (NVDIMM), according to Transparency Market Research. Sales are expected to jump from about $1.35 million in 2013 to more than $570 million by 2020, still a baby compared to the overall storage market, but an impressive compound annual growth rate of nearly 140 percent nonetheless. The devices are finding their way into a wide range of server, storage and networking platforms, as well as specialty products for key industry verticals like automotive, health care and aerospace.

    The key advantage that advanced memory solutions like NVDIMM and DRAM have over the traditional storage array is modularity. Particularly when it comes to hyperscale infrastructure, it is much easier to build and maintain integrated compute modules in building-block style than to deal with separate compute and storage constructs connected by a complex networking scheme. As Computer Weekly’s Bryan Betts points out, a simple server-side cache installed in a PCIe slot is not only faster but several orders of magnitude less complex than an HBA or drive controller architecture.
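    The latency argument for server-side caching can be sketched in a few lines. This is a toy illustration, not any vendor’s product; the latency figures are assumptions chosen only to show the relative gap between a local flash hit and a round trip to a networked array.

```python
# Toy read-through cache: reads served from a local "server-side" store
# skip the simulated round trip to a remote array. Latency numbers are
# illustrative assumptions, not measurements.

ARRAY_LATENCY_US = 500   # assumed round trip to a networked all-flash array
CACHE_LATENCY_US = 10    # assumed local PCIe flash access

backing_array = {"block-7": b"payload"}   # stands in for the remote array
local_cache = {}                          # stands in for server-side flash

def read(block_id):
    """Return (data, cost in microseconds), filling the cache on a miss."""
    if block_id in local_cache:
        return local_cache[block_id], CACHE_LATENCY_US
    data = backing_array[block_id]        # slow path: fetch from the array
    local_cache[block_id] = data          # populate the server-side cache
    return data, ARRAY_LATENCY_US + CACHE_LATENCY_US

first = read("block-7")   # miss: pays the array round trip
second = read("block-7")  # hit: local flash only
```

    Every repeat read of a hot block after the first is served at local-flash speed, which is the whole pitch for putting the cache on the server side of the network.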

    Indeed, with the world set to boost its data volumes nearly nine-fold over the next decade to 44ZB, organizations, and the cloud providers that serve them, will have no choice but to pursue storage solutions that are faster, cheaper, denser and more durable than current options, says Enterprise Storage Forum’s Drew Robb. It’s fair to say, however, that after the initial solid-state drives provided a significant improvement over spinning media, the storage industry got a little lazy and failed to pursue alternate deployment options and form factors, something that newcomers like Violin Memory and Micron are striving to take advantage of. At the same time, emerging filesystem technologies like WALDIO (Write Ahead Logging Direct IO) promise to improve memory performance and longevity in smartphones and are likely to show up in enterprise platforms before too long.


    But whether it is storage, networking or compute, an underlying technology’s performance is based largely on the degree to which an application can leverage what it has been given. In the case of conventional in-memory RAM solutions, the biggest problem is the potential loss of data when a server reboots or crashes, says Kroll Ontrack’s Stuart Barrows. With Microsoft SQL Server 2014, however, this is less of a concern thanks to the new In-Memory OLTP engine, code-named Hekaton, which couples high-priority table and object management with localized backup to ensure that even critical workloads are available for an automated SQL recovery. The key to the system is the use of sequential Data and Delta files that employ a free-form row structure and an in-memory index to provide a higher degree of data resilience than a standard page-based format.
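    The idea behind those paired Data and Delta files can be sketched in miniature. This is a conceptual simulation, not Microsoft’s implementation: inserts are only ever appended to a sequential data log, deletes are only ever appended to a matching delta log, and recovery rebuilds the in-memory table by replaying one against the other.

```python
# Minimal sketch of a data/delta recovery scheme: append-only logs,
# no in-place updates, table rebuilt from the logs after a "crash".

def append_insert(data_log, row_id, row):
    """Inserts are appended sequentially; rows are never rewritten."""
    data_log.append((row_id, row))

def append_delete(delta_log, row_id):
    """A delete never touches the data log; it just records the row id."""
    delta_log.append(row_id)

def recover(data_log, delta_log):
    """Rebuild the table: every inserted row not named in the delta log."""
    deleted = set(delta_log)
    return {row_id: row for row_id, row in data_log if row_id not in deleted}

data_log, delta_log = [], []
append_insert(data_log, 1, {"sku": "A100", "qty": 5})
append_insert(data_log, 2, {"sku": "B200", "qty": 3})
append_delete(delta_log, 1)           # logical delete of row 1

table = recover(data_log, delta_log)  # simulated crash recovery; row 2 survives
```

    Because both logs only grow at the tail, all of the I/O is sequential, which is what lets the scheme sidestep the random-write patterns of a page-based format.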

    Presumably, the rise of non-volatile memory solutions will address this problem, although there will likely be a lot of leeway when it comes to deploying DRAM and NVRAM for select workloads.

    And it is not as if traditional workloads will suddenly start migrating off legacy systems onto advanced modular infrastructure packed with memory modules. But as the decade unfolds, there is every reason to believe that emerging applications that leverage high-speed data environments will play an increasingly prominent role in the enterprise, and will ultimately be the differentiator between organizations that are highly agile and productive and those that are not.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
