
    In-Memory Storage: When Speed and Scale Are of the Essence


    Storage has long been the slowpoke of data infrastructure, and in these days of instant service delivery and real-time data functionality, slow is poison.

    Solid state storage has remedied much of the tardiness in storage infrastructure, but simply swapping out hard disk drives for SSDs is no longer enough. To really kick things into high gear, system designers are loading up their servers with local memory, and lots of it. In fact, memory capacities have gotten so large lately that it isn’t completely out of bounds to start thinking in terms of memory-only storage architectures for key applications.

    Samsung pushed the memory capacity game to a new level last fall when it released a 128 GB DDR4 RDIMM, which means servers packing more than 12 TB of memory are no longer the stuff of science fiction, says The Register’s Simon Sharwood. Samsung says its design features chip dies that are only a few micrometers thick and a vertical interconnect that improves signal transmission across the module to upwards of 2.4 Gbps. The company has also devised a new buffering mechanism that is said to boost performance and lower power consumption. The devices have begun volume shipments, and Samsung is already contemplating high-performance TSV DRAMs with data speeds on the order of 3.2 Gbps.
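
    To put that capacity claim in perspective, a quick back-of-the-envelope calculation shows how a single 128 GB module scales to the multi-terabyte figures Sharwood describes. The sketch below assumes a large multi-socket server exposing 96 DIMM slots, a figure used here purely for illustration:

    ```c
    #include <stdio.h>

    int main(void) {
        /* Illustrative assumption: 96 DIMM slots, as found in some large
           multi-socket servers; not a figure quoted by Samsung or The Register. */
        const int dimm_slots  = 96;
        const int gb_per_dimm = 128;   /* 128 GB DDR4 TSV RDIMM */
        const int total_gb    = dimm_slots * gb_per_dimm;

        printf("Total memory: %d GB (~%d TB)\n", total_gb, total_gb / 1024);
        return 0;
    }
    ```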

    Meanwhile, work is progressing on component-level architectures aimed at improving DIMM performance and capacity even further. Integrated Device Technology recently gained support from Intel, Dell and Micron for its new register, data buffer and temperature sensor devices for memory modules targeting servers built on the Xeon E5-2600 processor family. The devices are designed to improve signal integrity, capacity scaling, and fault isolation and correction in support of emerging applications in data analytics, online transaction processing and workload virtualization. Supported memory capacity, for example, has been bumped up to 3 TB, giving load-reduced configurations full memory bandwidth even on fully populated systems. Signal quality also remains best-in-class even with high-speed, heavy-load jobs running on high-density LRDIMMs.

    Advancements in memory architectures are also allowing system designers to push persistent memory capabilities on their high-speed platforms. HPE, for instance, recently launched new ProLiant Gen9 servers that couple high-performance DRAM with reliable NAND flash on a single NVDIMM. In this way, users gain the speed of a memory solution while automated backup to the flash component ensures data availability in the event of a power loss or other disruption. At the same time, HPE is working with Microsoft, the Linux community and other developers to ensure that once storage hits the fast lane, data bottlenecks won’t simply transfer over to software.
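
    To make the persistent-memory idea concrete, the sketch below shows one way an application might address such a device. It assumes the NVDIMM is exposed to the operating system as a DAX-mounted filesystem at the hypothetical path /mnt/pmem and uses only plain POSIX calls; production code would more likely go through a dedicated persistent-memory library.

    ```c
    /* Minimal sketch: treating NVDIMM-backed storage as ordinary memory.
       Assumes a DAX-mounted filesystem at the hypothetical path /mnt/pmem. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REGION_SIZE 4096

    int main(void) {
        int fd = open("/mnt/pmem/journal.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, REGION_SIZE) != 0) { perror("ftruncate"); return 1; }

        /* Map the persistent region directly into the address space. */
        char *pmem = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

        /* Writes land at memory speed; msync() asks the OS to make them durable. */
        strcpy(pmem, "committed transaction record");
        if (msync(pmem, REGION_SIZE, MS_SYNC) != 0) perror("msync");

        munmap(pmem, REGION_SIZE);
        close(fd);
        return 0;
    }
    ```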

    Organizations that pursue a memory-based storage strategy should be aware, though, that it can introduce new security vulnerabilities, says Ars Technica’s Dan Goodin. Recent tests have shown that new DDR4 DIMMs are susceptible to “bitflipping” attacks that many diagnostic techniques fail to catch. In a bitflipping attack, a technique known as Rowhammer rapidly and repeatedly accesses rows of memory cells, causing bits in adjacent rows to flip from zero to one and vice versa and corrupting critical data and applications. The weakness was demonstrated by a company called Third I/O, which subjected DDR3 and DDR4 modules from Samsung, Micron and others to Rowhammer attacks that circumvented even advanced ECC algorithms, causing server lock-ups and spontaneous reboots.
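
    The mechanism behind the attack is straightforward to illustrate. The fragment below is a minimal sketch of the access pattern described in the original Rowhammer research: repeatedly reading two addresses while flushing them from the cache so that every access reaches the DRAM chips. It is shown only to clarify the technique; it is not Third I/O's test code, and on its own it will not flip bits without careful selection of addresses that map to different rows in the same bank.

    ```c
    /* Minimal sketch of the Rowhammer access pattern from the original
       research. The address-selection logic (two rows in the same DRAM bank)
       is omitted; without it the loop has no effect. x86-only (_mm_clflush). */
    #include <emmintrin.h>   /* _mm_clflush */
    #include <stdint.h>

    void hammer(volatile uint8_t *row_a, volatile uint8_t *row_b, long iterations)
    {
        for (long i = 0; i < iterations; i++) {
            (void)*row_a;                        /* activate row A */
            (void)*row_b;                        /* activate row B */
            _mm_clflush((const void *)row_a);    /* evict so the next read hits DRAM */
            _mm_clflush((const void *)row_b);
        }
    }
    ```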

    Memory-based storage solutions provide the highest level of performance, and for that reason they also come at the highest price. In all likelihood, most enterprises will continue to employ a variety of storage solutions, each tailored to a particular suite of applications, in order to balance cost, capabilities, scale and other factors.

    For jobs that simply cannot wait, however, placing scalable memory as close to processing as possible will undoubtedly be the go-to solution in the coming years.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
