With more raw horsepower available than ever before, it’s only natural that software vendors would want to tap into those resources. They just don’t all agree about how to go about it.
Teradata today revealed its approach to in-memory computing with the launch of Teradata Intelligent Memory, which allows workloads running on a Teradata database appliance to make use of extended memory.
According to Chris Twogood, vice president of product management at Teradata, the company's approach is a more practical one that lets organizations reserve memory for mission-critical applications where the additional expense can be justified. In contrast, SAP has been pushing HANA, an in-memory computing platform in which all of the application code and data must run in memory.
Twogood says Teradata Intelligent Memory identifies the "hot" data driving a particular application and moves it into extended memory. That approach, he says, lets IT organizations allocate memory to the data where it will do the most good, with the least possible disruption to existing applications. Not only is that more economical, says Twogood, it also better reflects the reality that not all data is worth the premium of running in memory.
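The source doesn't describe how Teradata decides which data is "hot," but the general idea of temperature-based tiering can be sketched in a few lines. This is a minimal illustration, not Teradata's actual mechanism: it assumes a simple access-count threshold, with a dictionary standing in for disk and another for extended memory; the class and parameter names (`TieredStore`, `hot_threshold`) are hypothetical.

```python
from collections import Counter

class TieredStore:
    """Illustrative hot/cold tiering: frequently accessed keys are
    promoted into a fast in-memory tier; the rest stay on "disk"."""

    def __init__(self, hot_threshold=3):
        self.disk = {}                  # stands in for disk-resident data
        self.memory = {}                # stands in for extended memory
        self.access_counts = Counter()  # per-key access frequency
        self.hot_threshold = hot_threshold

    def put(self, key, value):
        # New data lands on the slow tier by default.
        self.disk[key] = value

    def get(self, key):
        self.access_counts[key] += 1
        if key in self.memory:
            return self.memory[key]     # fast path: already promoted
        value = self.disk[key]
        # Promote "hot" data once it crosses the access threshold.
        if self.access_counts[key] >= self.hot_threshold:
            self.memory[key] = value
        return value
```

In a real system the temperature signal would be richer (recency, scan patterns, workload priority) and demotion of cooled-off data would matter just as much, but the economics Twogood describes follow from this basic shape: only the data that earns its keep occupies the expensive tier.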
No doubt there will be a fierce battle between vendors as they argue over the degree to which in-memory computing should be applied. IBM, for example, recently extended the capabilities of its DB2 database to take advantage of additional memory resources.
The good news is that fundamental advances in computing are now coming at a fairly rapid pace, which should not only dramatically increase application performance but, just as importantly, make possible new applications that would previously have been inconceivable given the latency limitations of traditional disk-based storage.