Data Virtualization Ripe for Adoption, Says Forrester

Loraine Lawson

Remember "The Six Million Dollar Man"? "Gentlemen, we can rebuild him. We have the technology. We have the capability to build the world's first bionic man. Steve Austin will be that man. Better than he was before. Better, stronger, faster."

 

I've been thinking about that introduction a lot since I shattered my wrist in May. They gave me what the doctor called "a bionic arm," which, I was disappointed to see, was nothing more than a metal splint with a lot of pads, Velcro and a joint so I could move my elbow. No lifting cars for me. Bummer.

 

That's how things go so often with technology: It always sounds so promising, but fails to deliver on that vision.

 


Data virtualization is no exception, it seems. It claims to integrate data faster than moving it to data marts, to deliver better data quality than ETL, and to make data more easily accessible than other integration approaches. But early products didn't deliver, and some vendors underplayed the benefits, according to a recent Forrester study, "Data Virtualization Reaches Critical Mass."

 

That's part of the reason why less than 20 percent of IT departments in North America and Europe are using data virtualization and why even fewer realize its potential, according to a recent ComputerWeekly article quoting the study's author, Forrester analyst Brian Hopkins.

 

Hopkins believes that's on the verge of changing, thanks to improved data virtualization products that offer better performance and the integration capabilities IT knows and loves, including connectors for third-party data sources and security measures such as data masking.

 

Forrester predicts interest in data virtualization will grow over the next 18-36 months, largely because it addresses the weaknesses of other integration options. A TechTarget article quotes this explanation by Hopkins:

Integration by ETL creates data quality problems and delays information delivery. Integration by DBMS consolidation is high impact, expensive, and risky.

So how can you get started with data virtualization? Forrester recommends three baby steps, according to TechTarget:

  1. Find data integration projects you keep putting off.
  2. Ask whether these projects really require ETL. If not, one of them might make a good first project. (Keep in mind the importance of identifying business needs in all things data, though.)
  3. Develop SOA integration patterns with an eye toward exposing information to the virtualization layer. This will make it easier to reuse the virtualized data integration logic and spread adoption.
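That third step, exposing information to a virtualization layer, is easier to picture with a toy example. The Python sketch below is my own illustration, not from the Forrester study, and every source name and field in it is hypothetical. It shows the core idea: a "virtual view" that joins live sources at query time, rather than copying data into a mart with an ETL job.

```python
# A minimal sketch of data virtualization (all names and data here are
# hypothetical): the virtual view queries underlying sources on demand
# and joins the results at request time, so no staged copy ever exists.

def fetch_crm_customers():
    # Stand-in for a live CRM source; a real product would use a connector.
    return [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]

def fetch_billing_totals():
    # Stand-in for a live billing system.
    return {1: 1200.0, 2: 875.5}

def virtual_customer_view():
    """Join the two sources at query time -- no staging tables, no ETL jobs."""
    totals = fetch_billing_totals()
    return [{**customer, "billed": totals.get(customer["id"], 0.0)}
            for customer in fetch_crm_customers()]

print(virtual_customer_view())
```

Because the join logic lives in one reusable view rather than in a one-off ETL script, other consumers can call the same virtualized logic, which is exactly the reuse argument behind step three.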


Jul 8, 2011 3:55 AM Robert Eve Robert Eve  says:

Loraine -

Like your "bionic arm," data virtualization does provide some pretty powerful capabilities that are accelerating adoption. Paraphrasing Forrester, these include:

- Cost-based query optimization performance improvements that open the door for additional use cases and wider adoption;

- Distributed caching which sets the framework for global deployments;

- Improved discovery tools that simplify understanding of data and accelerate new development;

- Data masking and other security enhancements to better govern data delivery;

- Third party technology integration such as CDC and ESB to extend data virtualization tooling; and

- Big data integration to create new opportunities for analysis and insight.

"Better, stronger, faster"? Absolutely.
