It’s no secret that storage has long been the slowpoke in the enterprise. While processors and network devices are flying through data, storage has to stop, collect itself, locate the files, retrieve them and push them onto the network.
It’s no wonder so many organizations are turning to solid-state solutions. Not only does time-critical data get the performance it needs, but processor utilization, both in the server and on the network, improves dramatically because cores are no longer idling while they wait for data to arrive from storage.
But do the new on-server memory solutions finally provide the long-sought-after parity between servers and storage? Not quite. While DRAM and other forms of memory offer tremendous gains in throughput, they still are no match for the number-crunching prowess of most server architectures — at least at the moment.
Nevertheless, a number of designers are hard at work on high-performance memory solutions for enterprise-class server environments. Netlist, for example, recently released the 32GB HyperCloud HCDIMM, which played a key role in achieving one million transactions per minute in TPC Benchmark C testing on an x86 platform. The device features a distributed buffer system to reduce latency and adds techniques like rank multiplication and load reduction to boost capacity and take pressure off the memory interface so that it operates more quickly and efficiently.
Transcend Information is on a similar path with its 32GB DDR3 Load-Reduced DIMM (LRDIMM). The unit provides a full memory buffer chip that evens out the flow of data from host to memory, again reducing the load on the interface and allowing more modules per channel to improve overall capacity without hampering throughput. Using the company’s DDR3-1333 model, for example, designers can populate all 24 memory slots on an Intel S2600C motherboard to gain 768 GB in total memory — a 50 percent bump over RDIMM solutions.
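The capacity arithmetic behind that claim is easy to check (a quick sketch; the slot topology of two sockets with four channels each, and the RDIMM baseline of two modules per channel, are assumptions based on typical configurations, not figures from Transcend or Intel):

```python
# Sketch of the capacity math behind the LRDIMM claim.
# Assumed topology (not from the article): 2 sockets, 4 channels per
# socket, 3 DIMMs per channel with LRDIMMs vs. 2 with RDIMMs.
MODULE_GB = 32

def total_capacity_gb(sockets, channels_per_socket, dimms_per_channel,
                      module_gb=MODULE_GB):
    """Total memory = sockets x channels x DIMMs per channel x module size."""
    return sockets * channels_per_socket * dimms_per_channel * module_gb

lrdimm = total_capacity_gb(sockets=2, channels_per_socket=4, dimms_per_channel=3)
rdimm = total_capacity_gb(sockets=2, channels_per_socket=4, dimms_per_channel=2)

print(lrdimm)  # 768 GB across all 24 slots, matching the article's figure
print(rdimm)   # 512 GB across 16 slots
print(lrdimm / rdimm - 1)  # 0.5, i.e. the 50 percent bump
```

The point of load reduction is precisely that third DIMM per channel: buffering the electrical load lets the interface tolerate more modules without dropping its clock rate.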
Big deal, you say? Solid state already provides such an improvement over magnetic media that any further gains will be minuscule? This may be true for many current applications, but some of the new ones are highly sensitive to throughput. In fact, says Storage Switzerland’s George Crump, proper server-side memory performance could make or break many VDI deployments, particularly as the number of seats increases and both speed and capacity need to keep up.
As well, adoption of local memory is seen as such a boon for applications ranging from Big Data to social networking that Gartner and others have begun tracking it as a new market segment: In-Memory Computing (IMC). As the speed at which data is processed and passed along starts to take precedence over raw storage capacity, architectures built around server-based memory modules start to look pretty good. Plus there is the side benefit that they take up dramatically less floor space than traditional storage, and use less energy to boot.
The kicker in all of this, though, is volatility. Can DRAM and other forms of memory be trusted with actual storage, as opposed to temporary caching applications? Supporters say yes, now that advanced forms of non-volatile memory are starting to hit the channel, but it will probably take more than reassuring words to convince most CIOs that NV is ready for prime time. As well, we have yet to see anything close to the RAID protection, resource allocation and other management features that are now commonplace in SAN/NAS environments.
In all likelihood, then, most organizations will find that a mix of solutions provides the optimal storage performance for increasingly complex data environments. And in that regard, selecting and deploying various flash, RAM, platter and even tape solutions will be the easy part. The real challenge will be to identify individual data sets and route them to the appropriate media without adding to the very latency that the new memory solutions are trying to reduce.
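One way to picture that routing challenge is a simple tiering policy: classify each data set by the latency it can tolerate, then send it to the cheapest medium that still meets the target. This is a minimal sketch under stated assumptions; the tier names, latency figures, and `DataSet` fields are illustrative, not any vendor's implementation:

```python
from dataclasses import dataclass

# Illustrative latency budgets per medium, fastest to cheapest.
# The numbers are rough order-of-magnitude assumptions.
TIERS = [
    ("dram", 0.001),     # in-memory: sub-millisecond
    ("flash", 1.0),      # solid state: around a millisecond
    ("disk", 10.0),      # spinning platter: around ten milliseconds
    ("tape", 60_000.0),  # archive: retrieval measured in minutes
]

@dataclass
class DataSet:
    name: str
    max_latency_ms: float  # slowest response this workload can tolerate

def route(ds: DataSet) -> str:
    """Pick the cheapest tier whose latency still satisfies the data set."""
    for tier, latency_ms in reversed(TIERS):  # walk from cheapest to fastest
        if latency_ms <= ds.max_latency_ms:
            return tier
    return TIERS[0][0]  # nothing cheaper is fast enough: fall back to DRAM

print(route(DataSet("oltp-hot-set", 0.5)))        # dram
print(route(DataSet("web-assets", 5.0)))          # flash
print(route(DataSet("cold-archive", 1e8)))        # tape
```

The catch the article points to is that this classification step itself costs time: a policy engine sitting in the data path adds exactly the latency the fast tiers were bought to eliminate, which is why placement decisions tend to be made out of band rather than per request.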