GigaSpaces Launches Open Source In-Memory Data Grid Project

Mike Vizard

One of the more profound developments in enterprise IT of late has been the rise of in-memory data grids. As a technology, in-memory data grids have been around for a while, but as the cost of memory has fallen, the feasibility of deploying them has correspondingly increased. To help spur that adoption further, GigaSpaces announced today that it has made its core XAP 12 data grid offering available as an open source project.

In-memory data grids are becoming more relevant because they enable distributed applications to access data residing in memory in real time. As their usage increases, where the data ultimately ends up residing becomes less important. For example, organizations that deploy an in-memory data grid do not necessarily need a database that also resides in memory. Many of those organizations will simply rely on some form of Flash storage or even traditional magnetic drives to provide applications with access to persistent data directly via the data grid.
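The pattern described above can be sketched in a few lines: the application reads from an in-memory grid node, which falls back to a slower persistent store only on a cache miss. This is a minimal, hypothetical illustration; all class and key names here are invented, and real data grids such as GigaSpaces XAP or Apache Ignite expose far richer, distributed APIs.

```python
class PersistentStore:
    """Stand-in for Flash or magnetic-disk storage (hypothetical)."""

    def __init__(self):
        self._disk = {"user:42": {"name": "Ada"}}

    def load(self, key):
        return self._disk.get(key)


class GridNode:
    """One in-memory grid node backed by a persistent store."""

    def __init__(self, store):
        self._memory = {}    # the in-memory working set
        self._store = store  # slower persistent tier behind the grid

    def get(self, key):
        if key not in self._memory:
            # Cache miss: read through to the persistent store,
            # then keep the value in memory for subsequent reads.
            value = self._store.load(key)
            if value is not None:
                self._memory[key] = value
            return value
        # Cache hit: served directly from memory, no disk access.
        return self._memory[key]


node = GridNode(PersistentStore())
first = node.get("user:42")   # loaded from the persistent store
second = node.get("user:42")  # served from memory
```

Because applications always go through the grid, the persistence tier behind it can change (Flash, magnetic disk, or an in-memory database) without the application code needing to know.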

Ali Hodroj, vice president of product and strategy for GigaSpaces, says interest in data grids is already quite high in vertical industries with large numbers of distributed applications, now that a single server platform can provide access to almost 3TB of memory.

“We’re seeing a lot of interest in financial services and telecommunications sectors right now,” says Hodroj.

Of course, GigaSpaces is not the only in-memory data grid platform available as an open source project. The Apache Software Foundation has already thrown its weight behind the Ignite project, which is based on a platform developed by GridGain Systems.

Regardless of which approach IT organizations ultimately pursue, one thing is clear: advances in multiple forms of memory technology, coupled with event-driven architectures based on microservices, have the potential to radically transform just about every aspect of enterprise computing. The challenge facing IT leaders now is figuring out which forms of memory will be available, when, and at what price points by the time the next major distributed application development project is scheduled to be deployed in production.

In the meantime, not only is the next major distributed application an IT organization deploys likely to be several orders of magnitude faster than anything that has gone before it, but the number of physical servers and storage systems required to support it will be much lower as well.
