One of the first problems any IT organization encounters when working with Big Data is performance. The usual first recourse is to throw memory at the problem, but that approach becomes problematic when the amounts of memory involved are massive.
To help address that specific issue, Terracotta, which was acquired by Software AG last year, has released version 3.7 of its BigMemory distributed caching software. The upgrade allows an organization to store terabytes of data in memory to boost the performance of any Big Data application.
According to Terracotta General Manager Gary Nakamura, BigMemory allows an IT organization to scale out Big Data performance simply by adding processors. Best of all, Nakamura adds, those processors don't need to be the most expensive on the market, because BigMemory pools access to RAM rather than depending on the CPU capabilities of any individual processor.
In fact, that scale-out approach to memory is one of the driving factors behind the growing interest in so-called microservers that are based on ARM processors rather than processors from Intel or Advanced Micro Devices.
There are, of course, multiple approaches to solving Big Data performance issues using memory. But one of the simplest and least disruptive is to harness the memory of multiple processors using caching software to create a large pool of memory that can be shared by multiple applications.
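The pooled-cache idea can be sketched in a few lines of Java. To be clear, this is an illustration only, not the BigMemory or Ehcache API: the class and method names here (a hypothetical SharedCachePool with put and get) are invented for the example, and a real distributed cache would spread the pool across machines rather than a single process. What it shows is the core contract: multiple applications write into, and read from, one capacity-bounded pool of memory instead of each holding private copies.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal single-process sketch of a shared, capacity-bounded memory pool.
// NOT the BigMemory API; names are hypothetical.
public class SharedCachePool {
    private final Map<String, byte[]> pool = new ConcurrentHashMap<>();
    private final long maxBytes;   // cap on total pooled memory
    private long usedBytes = 0;

    public SharedCachePool(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    // Returns false instead of growing past the pool's memory cap.
    public synchronized boolean put(String key, byte[] value) {
        byte[] prev = pool.get(key);
        long delta = value.length - (prev == null ? 0 : prev.length);
        if (usedBytes + delta > maxBytes) {
            return false; // pool full; a real cache would evict here
        }
        pool.put(key, value);
        usedBytes += delta;
        return true;
    }

    public byte[] get(String key) {
        return pool.get(key);
    }

    public static void main(String[] args) {
        // Two "applications" sharing one 1 MB pool, keyed by namespace.
        SharedCachePool cache = new SharedCachePool(1 << 20);
        cache.put("app1:record", "hello".getBytes());
        cache.put("app2:record", "world".getBytes());
        System.out.println(new String(cache.get("app1:record"))); // prints hello
        System.out.println(cache.get("missing") == null);         // prints true
    }
}
```

The design point the sketch makes is the one from the paragraph above: because the pool is a shared service with its own capacity limit, applications can be added (or, in the distributed case, nodes added) without each one needing its own large, expensive allotment of RAM.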
More memory may not solve every Big Data performance problem. But for the vast majority of IT organizations, it should create enough breathing room to figure out what they might need to do later on when the problem starts to scale into the range of multiple terabytes.