In a move that promises to make it simpler to both query and store archived data, Teradata announced today that it has acquired Rainstor, a provider of archiving software that runs on top of Hadoop.
Chris Twogood, vice president of product and services marketing at Teradata, says Rainstor software will be integrated into the Teradata Unified Data Architecture to allow organizations to query archived data even after it has been compressed by as much as a factor of 40. Twogood says Teradata will also continue to sell Rainstor software as a stand-alone archiving application that can encrypt data, in addition to folding it into the Teradata Unified Data Architecture, which enables organizations to launch queries against both traditional SQL databases and Hadoop.
The acquisition of Rainstor is the fourth Big Data acquisition that Teradata has made this year. Previous acquisitions included Revelytix, a provider of data management tools for Hadoop; Hadapt, a provider of tools for integrating SQL databases and Hadoop; and Think Big Analytics, a provider of IT services focused on Big Data applications.
The compression capabilities provided by Rainstor, adds Twogood, will not only make it considerably more affordable for IT organizations to store massive amounts of data, but will also make it feasible to bring large amounts of “dark data” currently stored on tape back online. In support of Big Data analytics applications, IT organizations are now being asked to make available huge amounts of data that have been stored offline, in many cases for years.
As data warehouses become virtual entities that span both SQL databases and Hadoop, it’s clear that a raft of new data management tools will be called for. It’s equally clear that legacy data management platforms are not going away any time soon. The real challenge will be finding a way to manage these platforms in a holistic fashion, because the alternative is maintaining separate silos of data, which would defeat the purpose of investing in Big Data in the first place.