The Who and What of Data Virtualization

Loraine Lawson

Yesterday, I wrote about Forrester's prediction that data virtualization would catch on as a core part of the integration toolset in the next year and a half to two years. I've written about its implications for integration previously, but I decided to do a bit more reading and see what else I could turn up - and, of course, share anything of interest today.


Since adoption is still very low - around 20 percent - I thought it might be useful to cover the basics, or what journalists call the "Four W's": Who, What, When and Where. Today, let's look at the first two.

Who will benefit from data virtualization? At first glance, this would seem like a purely IT-driven decision, a choice between technologies. But in fact, data virtualization can benefit the business as much as, and perhaps more than, IT. One of its primary value propositions is that it makes data more readily and directly accessible to business users. It can also speed up business-focused projects, such as business intelligence or smaller reporting efforts, because the data can be accessed without waiting for IT to create and schedule an ETL run, hand-code a point-to-point integration or set up a shiny new, expensive data mart. And you don't have to sacrifice control or data quality to make the data available, because the data never really leaves where it resides.


Earlier this year, I asked Pierre Fricke, Red Hat's director of SOA product line management, about the value of this approach for IT and the business. He said:

They will use the data they have where it is, rather than creating a new bunch of intermediary data marts or one-off, hardcoded data integrations into applications. They will be able to speed new projects to completion, make changes to existing deployments more swiftly, and govern the deployment more easily.

Speed, agility, less integration work, but without sacrificing data quality or governance. You can see why it would appeal to both IT and the business.


Peter Tran, vice president of product marketing at Composite Software, has also explored in more detail how data virtualization benefits IT and the business, if you'd like to read more.


What is data virtualization? Put simply, it's a middleware solution that sits between all your data sources and your end users - often by way of business intelligence software. The data is pulled out of its silos into a virtual data warehouse, where the rules can change as needed, and then delivered as services to the business folks. It does all this without requiring you to integrate the data in a data mart or move it with ETL.
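To make the idea concrete, here is a minimal sketch in Python - a toy, not any vendor's product - in which two hypothetical source databases stay where they are and a thin virtual layer joins them on demand, so the consumer sees one logical view and nothing is copied into a mart:

    import sqlite3

    # Two independent "silos" -- in real life these might be Oracle, a
    # web service and a mainframe; here they are throwaway SQLite stores.
    crm = sqlite3.connect(":memory:")
    crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    crm.execute("INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex')")

    erp = sqlite3.connect(":memory:")
    erp.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
    erp.execute("INSERT INTO orders VALUES (1, 99.50), (1, 12.00), (2, 40.00)")

    def virtual_customer_orders():
        """A 'virtual view': joins live data from both silos at query
        time. Nothing is staged, replicated or ETL'd into a mart."""
        customers = dict(crm.execute("SELECT id, name FROM customers"))
        for cid, total in erp.execute("SELECT customer_id, total FROM orders"):
            yield customers[cid], total

    for name, total in virtual_customer_orders():
        print(name, total)

The consumer calls one logical interface and never needs to know how many physical sources sit behind it.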


"Data virtualization is about placing all your diverse physical database assets behind technology that provides a single logical interface to your data," writes data/SOA guru and Blue Mountain Labs CTO David Linthicum in a whitepaper sponsored by Informatica.


A few oddball facts that help define the "what" of data virtualization:

  • Data virtualization is a real-time data integration approach, according to Linthicum.
  • It's not to be confused with a virtual data center - although, it does "create" a virtual database that can be changed on the fly as the business requires.
  • It's not "just" data federation. Data federation drops the ball on data governance and quality, but data virtualization addresses these concerns, advocates say. Linthicum notes that data federation is limited to SQL- or XQuery-only data transformations, while data virtualization "exposes all the rich ETL-like transformations - such as lookups, joiners, and aggregators - which are a prerequisite for enterprise-grade data integration." (A toy sketch of that distinction follows this list.)
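Here's a rough illustration of the distinction Linthicum draws, assuming made-up in-memory data rather than any vendor's engine: the federation step merely merges row streams, while the virtualization layer adds an ETL-style lookup against reference data plus aggregation, in flight.

    from collections import defaultdict

    # Rows as they arrive from two federated sources (toy data).
    source_a = [("US", 100.0), ("DE", 80.0)]
    source_b = [("US", 50.0), ("FR", 20.0)]

    # An ETL-style "lookup" table, e.g. country code -> region.
    region_lookup = {"US": "Americas", "DE": "EMEA", "FR": "EMEA"}

    def federated():
        """Plain federation: just merge the row streams."""
        yield from source_a
        yield from source_b

    def virtualized():
        """Layer a lookup and an aggregator on top of the federated
        stream -- the ETL-like transformations Linthicum describes."""
        totals = defaultdict(float)
        for country, amount in federated():
            totals[region_lookup.get(country, "UNKNOWN")] += amount
        return dict(totals)

    print(virtualized())   # {'Americas': 150.0, 'EMEA': 100.0}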


A recent article on TDWI's site explained it this way:

Instead of prescribing the federation of resources, DV proposes mixing and matching data integration technologies to construct a virtual view of an integrated enterprise. At a basic level, DV works with the integration technologies that you already have -- ETL, by and large -- and uses federation to knit everything together.

Tomorrow, I'll look at our next two W's: Where and When.

Jul 7, 2011 6:35 AM Robert Eve says:

Loraine -

Thank you for pointing out Peter Tran's blog post on the business value of data virtualization.

Bob Reary, Composite Software's Director of Customer Value, authors the Business Value track within the Data Virtualization Leadership Blog, which is fully devoted to data virtualization ROI and other success measures.

As a complement to your ITBusinessEdge Integration Blog, readers interested in more in-depth explorations of data virtualization topics - including strategy, architecture, products, value, market dynamics and best practices - might want to check out the Data Virtualization Leadership Blog.

Jul 8, 2011 3:09 AM Ash Parikh says:


Great article.

A key aspect that needs ongoing clarification is that simple, traditional data federation is not data virtualization. Data virtualization is all about hiding and handling complexity. Dealing with enterprise data is a complex proposition that calls for rich transformations, not limiting the user to SQL- or XQuery-only transformations. Additionally, simple, traditional data federation assumes that the data in the backend data stores is ready for consumption - that it is of good quality - which is often not the case. You cannot just propagate bad data in real time and then lose the advantage you gained by deferring cleanup to post-processing. You need to do these complex transformations, including data quality, on the fly, on the federated data.
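As an illustration of that point, here is a minimal Python sketch - with invented helper names, not any product's actual API - of cleansing rows as they stream out of federated sources rather than in a post-processing pass:

    import re

    def standardize_phone(raw):
        """Toy data-quality rule: keep digits only, reject short values."""
        digits = re.sub(r"\D", "", raw)
        return digits if len(digits) >= 10 else None

    def federate_with_quality(rows):
        """Apply cleansing while rows stream out of the federated
        sources, instead of propagating bad data and fixing it later."""
        for name, phone in rows:
            clean = standardize_phone(phone)
            if clean is not None:
                yield name, clean

    raw_rows = [("Acme", "(555) 123-4567"), ("Globex", "n/a")]
    print(list(federate_with_quality(raw_rows)))
    # [('Acme', '5551234567')]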

As you know, Informatica recently released the latest version of its data virtualization solution, Informatica Data Services version 9.1, as part of the Informatica 9.1 Platform.

Key highlights for this release are:

The capability to dynamically mask federated data while it is in flight, without staging or post-processing, just as was already done with the full palette of data quality and complex ETL-like data transformations. This helps end users leverage a rich set of data transformation, data quality, and data masking capabilities in real time. (A rough sketch of in-flight masking appears after this list.)

The ability for business users (analysts) to play a bigger role in the Agile Data Integration Process and work closely with IT users (architects and developers), using role-based tools. This helps accelerate the data integration process through self-service capabilities.

The ability to instantly reuse data services for any application - whether a BI tool, composite application or portal - without redeploying or rebuilding the data integration logic. This is done graphically in a metadata-driven environment, increasing agility and productivity.
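As a rough illustration of the in-flight masking idea mentioned above - invented names, not Informatica's actual API - a toy version might look like this:

    def mask_in_flight(rows, column_index):
        """Toy dynamic masking: obscure a sensitive column as rows
        stream through, with no staging or post-processing."""
        for row in rows:
            row = list(row)
            value = row[column_index]
            row[column_index] = "***-**-" + value[-4:]   # e.g. an SSN
            yield tuple(row)

    federated_rows = [("Acme", "123-45-6789"), ("Globex", "987-65-4321")]
    for row in mask_in_flight(federated_rows, 1):
        print(row)
    # ('Acme', '***-**-6789'), ('Globex', '***-**-4321')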



Ash Parikh

