
    DRAM or NAND: Finding a Balance Between Cost and Performance

    The enterprise is in love with DRAM, so much so that demand is starting to drive up the price of what was once a fairly stable component in the IT infrastructure stack, and this may ultimately push system vendors to embrace alternative forms of memory.

    At the moment, however, DRAM is the preferred solution for in-memory server architectures, which are increasingly populating advanced database and high-speed analytics shops. According to IC Insights, DRAM pricing has been particularly volatile over the past year as the supply glut of previous years gave way to increased demand from both data system and device manufacturers. This has led to a doubling of the average selling price since this time last year, and the firm predicts that the price-per-bit of DRAM is on pace to jump 40 percent in 2017, the largest annual increase on record.

    And this may be just the start of a long run for DRAM if some long-promised technological breakthroughs make it into production environments. One such effort comes from a French company called Upmem, which says it has worked out a method to offload actual processing to DRAM for emerging machine learning and other AI workloads. Electronic Design reports that the system involves thousands of DRAM processing units (DPUs) connected to a traditional processor that runs the operating system. Workloads offloaded onto the DPUs can run up to 20 times faster within the same power envelope, and the units can be housed in a standard DIMM module that can be added to existing server motherboards without hardware changes.
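
    To make the offload model concrete, here is a minimal, hypothetical sketch in C of the processing-in-memory pattern described above. It is not Upmem's SDK, and none of these names or constants come from it; it simply simulates a host CPU scattering a workload across DPU-local slices of memory, running the kernel next to each slice, and reducing the partial results.

        /* Conceptual sketch only: NOT the Upmem SDK, just an illustration of the
         * processing-in-memory offload pattern. Each "DPU" owns a slice of data
         * resident in its local DRAM bank and runs the kernel there; the host CPU
         * only scatters inputs, launches the kernels, and gathers partial results. */
        #include <stdio.h>
        #include <stdint.h>
        #include <stdlib.h>

        #define NUM_DPUS   64      /* hypothetical number of DRAM processing units */
        #define SLICE_LEN  1024    /* elements held in each DPU-local slice */

        /* Kernel that would run on a DPU, next to the DRAM bank holding its slice. */
        static uint64_t dpu_kernel_sum(const uint32_t *slice, size_t len)
        {
            uint64_t acc = 0;
            for (size_t i = 0; i < len; i++)
                acc += slice[i];
            return acc;
        }

        int main(void)
        {
            size_t total = (size_t)NUM_DPUS * SLICE_LEN;
            uint32_t *data = malloc(total * sizeof *data);
            if (!data)
                return 1;
            for (size_t i = 0; i < total; i++)
                data[i] = (uint32_t)(i % 7);

            /* Host side: "launch" each DPU on its own slice, then reduce the
             * per-DPU partial sums. On real PIM hardware the loop body would be
             * a parallel launch; here it is simulated sequentially on the CPU. */
            uint64_t grand_total = 0;
            for (int d = 0; d < NUM_DPUS; d++)
                grand_total += dpu_kernel_sum(data + (size_t)d * SLICE_LEN, SLICE_LEN);

            printf("sum across %d simulated DPUs: %llu\n",
                   NUM_DPUS, (unsigned long long)grand_total);
            free(data);
            return 0;
        }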

    The question enterprise executives need to ask themselves when deploying memory architectures, however, is whether they need raw performance or high efficiency. As ExtremeTech's Joel Hruska noted recently, NAND memory caches could lower data center power draw by significant margins, and can be deployed at a fraction of the cost. True, NAND is extremely slow compared to DRAM, but emerging technologies like MIT's BlueCache server can narrow the gap by getting rid of the Flash Translation Layer (FTL) and providing direct access to the memory. It may not be a complete replacement for DRAM, but it can certainly help lower the TCO of an in-memory design.

    In fact, say BlueCache's developers, the system works best when just a few MB of DRAM are paired with, say, a million MB of flash. With the DRAM holding the database query table that points to the corresponding addresses in flash, you get a far more efficient means of detecting data that has not yet been imported into the cache. As well, you no longer need to rely on software to conduct normal read, write and delete operations, and you can maximize bandwidth in and out of memory by batching queries until there is enough data to fully utilize the bus. Combined, these techniques can cut power usage by 90 percent.
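
    The following C sketch illustrates those two ideas in miniature: a small DRAM-resident index that can report a miss without ever touching flash, and a read queue that only uses the "bus" once a full batch has accumulated. It is an assumption-laden toy, not the BlueCache implementation, and every name and size in it is hypothetical.

        /* Minimal sketch of the idea described above (not actual BlueCache code):
         * a small DRAM-resident index maps each key to an address in a much larger
         * flash store, so a miss is detected in DRAM alone, and reads are batched
         * so the data bus is only exercised when a full batch is ready. */
        #include <stdio.h>
        #include <stdint.h>

        #define INDEX_SLOTS 8      /* tiny DRAM index, for illustration only */
        #define FLASH_SLOTS 64     /* stand-in for the far larger flash array */
        #define VALUE_LEN   16
        #define BATCH_SIZE  4      /* queries amassed before the "bus" is used */

        /* DRAM-resident index entry: key -> flash address (-1 means not cached). */
        struct index_entry { uint32_t key; int32_t flash_addr; };

        static struct index_entry dram_index[INDEX_SLOTS];
        static char flash_store[FLASH_SLOTS][VALUE_LEN];   /* simulated flash */
        static uint32_t read_batch[BATCH_SIZE];
        static int batch_fill = 0;

        /* Look up a key purely in DRAM; only queue a flash read on a hit. */
        static void lookup(uint32_t key)
        {
            for (int i = 0; i < INDEX_SLOTS; i++) {
                if (dram_index[i].key == key && dram_index[i].flash_addr >= 0) {
                    read_batch[batch_fill++] = (uint32_t)dram_index[i].flash_addr;
                    if (batch_fill == BATCH_SIZE) {
                        /* Batch full: issue all queued flash reads back to back. */
                        for (int j = 0; j < BATCH_SIZE; j++)
                            printf("flash[%u] -> %s\n", (unsigned)read_batch[j],
                                   flash_store[read_batch[j]]);
                        batch_fill = 0;
                    }
                    return;
                }
            }
            printf("key %u: miss detected in DRAM, flash never touched\n",
                   (unsigned)key);
        }

        int main(void)
        {
            for (int i = 0; i < INDEX_SLOTS; i++) {
                dram_index[i].key = (uint32_t)(i * 10);
                dram_index[i].flash_addr = i;
                snprintf(flash_store[i], VALUE_LEN, "value-%d", i);
            }
            uint32_t queries[] = { 0, 10, 999, 20, 30 };   /* 999 is uncached */
            for (size_t q = 0; q < sizeof queries / sizeof queries[0]; q++)
                lookup(queries[q]);
            return 0;
        }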

    No matter how you architect the surrounding systems, however, DRAM remains the faster medium, almost doubling the performance of NAND. In today’s world, this might not matter much because the human mind cannot comprehend the latency difference between two microseconds and four. The problem is that in a very short while, machines will have taken over the vast majority of queries and other processes, and they will certainly notice the difference.
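
    As a back-of-the-envelope illustration of that point, the short C snippet below totals up how much extra wait time the two-versus-four-microsecond gap cited above adds once requests arrive at machine scale; the query volume is an assumed figure chosen purely for illustration.

        /* Back-of-the-envelope arithmetic: the per-query latencies come from the
         * paragraph above; the sustained query rate is a hypothetical assumption. */
        #include <stdio.h>

        int main(void)
        {
            const double dram_latency_us = 2.0;   /* per-query latency, DRAM-backed */
            const double nand_latency_us = 4.0;   /* per-query latency, NAND-backed */
            const double queries         = 1e6;   /* assumed machine-driven volume  */

            /* Aggregate extra wait accumulated across the whole batch of queries. */
            double extra_us = (nand_latency_us - dram_latency_us) * queries;
            printf("Aggregate extra wait across %.0f queries: %.2f s\n",
                   queries, extra_us / 1e6);
            return 0;
        }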

    Power efficiency is a worthy goal, but what is true elsewhere in the data center will likely be true in memory: Efficiency will almost always take a back seat to performance. And it would take a truly dramatic increase in DRAM pricing to upset that calculation.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

