Data virtualization was once seen as an alternative to integration via ETL. As it turned out, that was a poor selling point, Denodo’s Senior Vice President Suresh Chandrasekaran said during a recent interview.
“There was such an entrenched mindset that everything had to be copied and persisted through ETL to do any kind of analytics without impacting source or performance, etc.,” Chandrasekaran said. “That mindset persisted for a long time.”
Earlier this year, I interviewed Chandrasekaran about the emerging use cases that he believes will drive data virtualization adoption this year.
“The primary reason that people are adopting data virtualization is less about real-time integration and more about abstraction and discovery of enterprise data assets,” he said.
In the past, data virtualization was positioned as an alternative to ETL. But these new business uses are less about integration, and more about the ability to create an abstract data layer, Chandrasekaran explained. That allows IT to make data more accessible to business users without revealing or moving the data.
Chandrasekaran identified the following new use cases where data virtualization can play a critical or supporting role:
Hadoop, NoSQL and in-memory database projects: These technologies are designed to handle Big Data, but what happens when you want to use the data from those databases in new applications or analytics tools? It’s not usually practical to replicate these data sets. Data virtualization gives IT a way to make that data available without copying it or opening a direct pipeline into the stores, he said.
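To make the idea concrete, here is a toy sketch (my own illustration, with hypothetical source names, not any vendor's API) of what a "virtual view" amounts to: a query definition rather than a copy. Nothing is persisted in the virtualization layer; each request reaches back to the underlying stores and joins the results in flight.

```python
def hadoop_source():
    # Stand-in for a Hadoop/NoSQL store holding sensor-style records.
    return [{"device": "A1", "reading": 42}]

def warehouse_source():
    # Stand-in for a relational warehouse holding master data.
    return [{"device": "A1", "owner": "Acme"}]

def virtual_view():
    # No data is copied or persisted here; the "view" fetches from both
    # sources on demand and merges matching records in memory.
    readings = {r["device"]: r for r in hadoop_source()}
    return [dict(w, **readings.get(w["device"], {}))
            for w in warehouse_source()]

print(virtual_view())
```

Consumers query `virtual_view()` as if it were a single table, while the actual data stays put in its original stores.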
As an adjunct to ETL: In the past, data virtualization tended to market itself as an alternative to ETL, but now it’s seen as an adjunct, Chandrasekaran said. That comes in handy with, say, master data management projects where you primarily want to improve your master data, but you also want to be able to add information from other data sources. One Denodo customer used Informatica’s ETL engine for the heavy integration work with its MDM hub, but then used data virtualization to provide a link with other, non-primary data sources. Data virtualization may also be used as an adjunct during complex data integration and migration projects, so business users can access the data throughout the migration.
Real-time projects: “There’s extensive use in data virtualization of real-time optimization, which of course harnesses the power of the underlying technology,” Chandrasekaran said. “The general principle in data virtualization is to push the processing to where the data is. So the more sophisticated your query optimizations are, the more you can push down.”
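The "push the processing to where the data is" principle can be sketched in a few lines. This is my own minimal illustration (using SQLite as a stand-in for a remote source, not Denodo's actual optimizer): instead of pulling an entire table over the wire and filtering locally, a pushdown-aware layer rewrites the query so the filter runs inside the source database, and only matching rows travel.

```python
import sqlite3

# Hypothetical "remote" source, represented here by an in-memory SQLite DB.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
source.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "EMEA", 120.0), (2, "APAC", 75.5), (3, "EMEA", 300.0)],
)

def naive_query(conn, region):
    # Anti-pattern: fetch everything, then filter in the consuming layer.
    rows = conn.execute("SELECT id, region, total FROM orders").fetchall()
    return [r for r in rows if r[1] == region]

def pushed_down_query(conn, region):
    # Pushdown: the predicate is sent to the source, so only matching
    # rows ever cross the network.
    return conn.execute(
        "SELECT id, region, total FROM orders WHERE region = ?", (region,)
    ).fetchall()

# Same result, very different amount of data moved.
assert naive_query(source, "EMEA") == pushed_down_query(source, "EMEA")
```

Real optimizers push down far more than filters (joins, aggregations, projections), but the trade is the same: the smarter the rewrite, the less data has to leave the source.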
Data services: Data virtualization’s ability to create an abstraction layer means you can use it to support data services, either in the cloud or for mobile apps. This EnterpriseAppsToday article shares how AAA used data virtualization to hide the complexities of a major data migration and consolidation project.
The IoT: The Internet of Things is another driver for data virtualization adoption, primarily because it allows you to take sensor data from a Big Data store and combine it with other data — whether that is parts, warranty, customer or product data — for predictive analytics. My post “” includes examples of real business uses.
Loraine Lawson is a veteran technology reporter and blogger. She currently writes the Integration blog for IT Business Edge, which covers all aspects of integration technology, including data governance and best practices. She has also covered IT/Business Alignment and IT Security for IT Business Edge. Before becoming a freelance writer, Lawson worked at TechRepublic as a site editor and writer, covering mobile, IT management, IT security and other technology trends. Previously, she was a webmaster at the Kentucky Transportation Cabinet and a newspaper journalist. Follow Lawson on Google+ and Twitter.