When Latency Kills, Memory Solutions Start Looking Better


Written By Arthur Cole
Oct 20, 2015

It seems almost ironic that the solid-state storage arrays that are quickly supplanting hard disks in the enterprise are already losing ground to server-side Flash and in-memory storage architectures.

But in today's data environment, latency kills, and even a split-second journey from the server farm to an all-Flash array across the room is proving to be too much for many emerging Big Data and IoT workloads.
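To put that gap in rough perspective, the back-of-the-envelope comparison below (in Python) uses assumed order-of-magnitude access times, roughly 100 nanoseconds for DRAM or NVDIMM, 100 microseconds for a local PCIe Flash device, and about a millisecond for a round trip to a networked all-Flash array. The figures are illustrative, not measurements of any particular product.

# Back-of-the-envelope latency comparison (assumed order-of-magnitude figures,
# not vendor benchmarks): how many storage touches fit into a 10 ms request budget?

ACCESS_LATENCY_S = {
    "DRAM / NVDIMM (server memory bus)": 100e-9,   # ~100 nanoseconds
    "Local PCIe Flash (server-side)":    100e-6,   # ~100 microseconds
    "Networked all-Flash array":         1e-3,     # ~1 millisecond incl. network hop
}

REQUEST_BUDGET_S = 10e-3  # a 10 ms service-level budget for one request

for tier, latency in ACCESS_LATENCY_S.items():
    touches = int(REQUEST_BUDGET_S / latency)
    print(f"{tier:40s} ~{latency * 1e6:>10.1f} us per access -> "
          f"{touches:>8,d} accesses per 10 ms budget")

Under those assumptions, a request can afford roughly 100,000 memory touches but only about ten round trips to an external array before the budget is gone.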

Enterprise and data center applications are now the main drivers for technologies like the Non-Volatile Dual In-Line Memory Module (NVDIMM), accounting for about 75 percent of the overall market, according to Transparency Market Research. Sales are expected to jump from about $1.35 million in 2013 to more than $570 million by 2020, still a baby compared to the overall storage market, but an impressive compound annual growth rate of nearly 140 percent nonetheless. The devices are finding their way into a wide range of server, storage and networking platforms, as well as specialty products for key industry verticals like automotive, health care and aerospace.
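As a quick sanity check on that growth figure, the compound annual growth rate follows directly from the two endpoints; this short Python snippet applies the standard CAGR formula to the forecast above.

# Verify the compound annual growth rate implied by the NVDIMM revenue forecast.
start_revenue = 1.35e6    # ~$1.35 million in 2013
end_revenue   = 570e6     # >$570 million projected for 2020
years         = 2020 - 2013

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 137%, i.e. "nearly 140 percent"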

The key advantage that advanced memory solutions like NVDIMM and DRAM have over the traditional storage array is modularity. Particularly when it comes to hyperscale infrastructure, it is much easier to build and maintain integrated compute modules in a building-block style than to deal with separate compute and storage constructs connected by a complex networking scheme. As Computer Weekly's Bryan Betts points out, a simple server-side cache installed in a PCIe slot is not only faster but several orders of magnitude less complex than an HBA or drive controller architecture.

Indeed, with the world set to boost its data volumes nearly nine-fold in the next decade to 44ZB, organizations or the cloud providers that serve them will have no choice but to pursue storage solutions that are faster, cheaper, denser and more durable than current options, says Enterprise Storage Forum’s Drew Robb. It’s fair to say, however, that after the initial solid-state drives provided a significant improvement over spinning media, the storage industry got a little lazy and failed to pursue alternate deployment options and form factors—something that newcomers like Violin Memory and Micron are striving to take advantage of. At the same time, emerging filesystem technologies like WALDIO (Write Ahead Logging Direct IO) are promising to improve memory performance and longevity in smartphones and are likely to show up in enterprise platforms before too long.
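The core idea behind write-ahead logging, which WALDIO builds on, can be sketched in a few lines: record the intended change durably before touching the data itself, so that a crash mid-update can be replayed rather than corrupting the store. The Python sketch below is a generic, simplified illustration of that pattern, not the WALDIO design; the file names and helper functions are hypothetical.

import json
import os

# A generic write-ahead-log sketch (illustrative only, not the WALDIO design):
# log the intended change durably first, then apply it to the data file.

LOG_PATH = "updates.wal"      # hypothetical log file
DATA_PATH = "records.json"    # hypothetical data file


def log_update(record_id: int, value: str) -> None:
    """Append the intent to the log and flush it to stable storage."""
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps({"id": record_id, "value": value}) + "\n")
        log.flush()
        os.fsync(log.fileno())   # the log hits durable media before the data does


def apply_update(record_id: int, value: str) -> None:
    """Apply the logged change to the data file (simplified read-modify-write)."""
    records = {}
    if os.path.exists(DATA_PATH):
        with open(DATA_PATH) as f:
            records = json.load(f)
    records[str(record_id)] = value
    with open(DATA_PATH, "w") as f:
        json.dump(records, f)
        f.flush()
        os.fsync(f.fileno())


def write(record_id: int, value: str) -> None:
    log_update(record_id, value)    # 1. durable record of the intent
    apply_update(record_id, value)  # 2. actual update; replayable from the log after a crash


write(42, "hello")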


But whether it is storage, networking or compute, an underlying technology's performance depends largely on how well an application can leverage what it has been given. In the case of conventional in-memory RAM solutions, the biggest problem is the potential loss of data when a server reboots or crashes, says Kroll Ontrack's Stuart Barrows. With Microsoft SQL Server 2014, however, this is less of a concern thanks to the new Hekaton in-memory OLTP engine, which couples high-priority table and object management with localized backup so that even critical workloads remain available for automated SQL recovery. The key to the system is the use of sequential Data and Delta files that pair a free-form row structure with an in-memory index to provide a higher degree of data resilience than a standard page-based format.
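Conceptually, that Data/Delta pairing behaves like an append-only store: inserted rows are streamed into a data file, deletions are recorded as markers in a companion delta file rather than rewriting pages in place, and recovery replays one against the other. The Python sketch below illustrates only the general pattern; it is not Microsoft's implementation, and the class and field names are hypothetical.

# Conceptual sketch of the data-file/delta-file pattern (hypothetical names,
# not SQL Server's actual on-disk format): inserts are appended to a data
# stream, deletes become markers in a delta stream, and recovery replays both.

class DataDeltaStore:
    def __init__(self):
        self.data_file = []      # append-only list of inserted rows (row_id, payload)
        self.delta_file = set()  # row_ids marked as deleted

    def insert(self, row_id, payload):
        self.data_file.append((row_id, payload))   # sequential append, no in-place update

    def delete(self, row_id):
        self.delta_file.add(row_id)                # marker instead of rewriting the row

    def recover(self):
        """Rebuild the in-memory table by replaying data rows minus deleted ones."""
        return {row_id: payload
                for row_id, payload in self.data_file
                if row_id not in self.delta_file}


store = DataDeltaStore()
store.insert(1, "order A")
store.insert(2, "order B")
store.delete(1)
print(store.recover())   # {2: 'order B'}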

Presumably, the rise of non-volatile memory solutions will address this problem, although there will likely be a lot of leeway when it comes to deploying DRAM and NVRAM for select workloads.

And it is not as if traditional workloads will suddenly start migrating off legacy systems and onto advanced modular infrastructure packed with memory modules. But as the decade unfolds, there is every reason to believe that emerging applications that leverage high-speed data environments will play an increasingly prominent role in the enterprise, and will ultimately be the differentiator between organizations that are highly agile and productive and those that are not.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
