At this point, just about everybody agrees that in-memory computing will transform IT. The degree to which that will happen is what is being fiercely debated.
Last fall, Oracle promised to make available an in-memory computing option for the venerable Oracle database. Today, Oracle announced the imminent general availability of the in-memory computing option for Oracle Database 12c.
Tim Shetler, vice president of product management at Oracle, says that unlike rival approaches to in-memory computing, the Oracle Database In-Memory option is designed to make in-memory computing available for both new and existing applications running on top of the Oracle database.
The Oracle Database In-Memory option is based on columnar database technology. But Shetler says it is designed in a way to give customers the option to store data in columnar and row formats.
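The row-versus-columnar trade-off can be illustrated with a toy sketch (this is illustrative Python, not Oracle code or its actual storage format): row layout favors transactional access to whole records, while columnar layout favors analytic scans over a single attribute.

```python
# Toy illustration of the same table in two layouts.

# Row format: one tuple per record -- fetching a record touches one entry.
rows = [
    ("SKU-1", "East", 120.0),
    ("SKU-2", "West", 75.5),
    ("SKU-3", "East", 210.0),
]

# Columnar format: one list per attribute -- aggregating "amount" scans a
# single contiguous list and never touches the other columns.
columns = {
    "sku":    ["SKU-1", "SKU-2", "SKU-3"],
    "region": ["East", "West", "East"],
    "amount": [120.0, 75.5, 210.0],
}

# Transactional access (row-friendly): read one full record by position.
record = rows[1]

# Analytic access (column-friendly): aggregate a single attribute.
total = sum(columns["amount"])

print(record)  # ('SKU-2', 'West', 75.5)
print(total)   # 405.5
```

Keeping both representations in memory, as the option described here does, lets the database serve each access pattern from the layout that suits it.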
The debate concerning in-memory computing essentially comes down to the amount of data that needs to run in-memory at any given moment. Oracle argues that different classes of hot, warm and cold data need to be cost-effectively made available to database applications, regardless of whether the data being accessed resides on Flash or magnetic storage.
More challenging, however, may be retraining developers to build applications that take full advantage of in-memory computing. Developers have been trained to never launch a SQL query directly against a production database. But with the rise of in-memory computing, it’s clear that transaction processing and analytics now need to be run in parallel.
Ultimately, the transition to in-memory computing is going to be a multi-year journey. The important thing right now isn’t necessarily understanding where that journey leads, but rather just taking the first step.