Infrastructure virtualization is a proven means of streamlining hardware footprints and increasing resource agility, allowing organizations to better handle burgeoning data loads and widely divergent user requirements.
But it turns out that what is good for infrastructure is also good for data itself, which is why many organizations are looking to augment existing virtualization plans with data virtualization, particularly for the massive volumes found in archiving and data warehousing environments.
The Data Warehousing Institute’s David Wells offers a good overview of data virtualization and how it can drive greater enterprise flexibility. In essence, the goal is to enable access to single copies of data across disparate entities, preferably in ways that make details like location, structure and even access language irrelevant to the user. For warehousing and analytics, this eliminates the need to move all related data into a newly created database, which gives infrastructure, and particularly networking, a break because data no longer has to move from site to site to reach the user. Couple this with semantic optimization and in-memory caching, and suddenly Big Data starts to look a lot less menacing.
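The core idea, hiding location and structure behind one logical view, can be sketched in a few lines. This is a minimal illustration, not any vendor's actual product; all names (`VirtualLayer`, the adapter factories) are hypothetical, and the two "stores" stand in for whatever heterogeneous sources an enterprise actually runs.

```python
# A minimal sketch of the data virtualization idea: consumers query one
# logical layer while per-source adapters hide where and how the data
# lives. Names here are illustrative, not a real product's API.
import sqlite3


class VirtualLayer:
    """Routes logical queries to registered source adapters."""

    def __init__(self):
        self._adapters = {}

    def register(self, name, adapter):
        self._adapters[name] = adapter

    def query(self, name, **filters):
        # The caller never sees the location, engine, or query language
        # behind the logical name -- only the unified call shape.
        return self._adapters[name](**filters)


def make_sql_adapter(conn):
    """Adapter for rows living in a relational store."""
    def fetch(region=None):
        sql = "SELECT customer, region FROM sales"
        if region is not None:
            return conn.execute(sql + " WHERE region = ?", (region,)).fetchall()
        return conn.execute(sql).fetchall()
    return fetch


def make_record_adapter(records):
    """Adapter for the same logical entity held as plain records
    (standing in for, say, a file archive or NoSQL store)."""
    def fetch(region=None):
        return [(r["customer"], r["region"]) for r in records
                if region is None or r["region"] == region]
    return fetch


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, region TEXT)")
conn.execute("INSERT INTO sales VALUES ('Acme', 'east'), ('Globex', 'west')")

layer = VirtualLayer()
layer.register("sales_db", make_sql_adapter(conn))
layer.register("sales_archive", make_record_adapter(
    [{"customer": "Initech", "region": "east"}]))

# Identical call shape against two very different stores.
print(layer.query("sales_db", region="east"))       # [('Acme', 'east')]
print(layer.query("sales_archive", region="east"))  # [('Initech', 'east')]
```

The point of the sketch is the last two lines: the consumer issues the same query against a SQL database and a record archive, which is exactly the location- and structure-irrelevance Wells describes.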
Cloud and colocation providers are already turning to data virtualization as a way to cut costs in an increasingly competitive market. Utah’s C7 Data Centers, for example, has adopted Actifio’s newest copy data virtualization solution to drive efficiency in its backup and recovery capabilities, with some added security benefits thrown in for good measure. The idea is to decouple data management from infrastructure management, which in turn allows the consolidation of many silo-based management systems into a single overarching platform. Not only does this improve agility and resiliency, but it also allows for a more unified view to assess risk and improve threat management as clients are added to C7’s roster.
How is data virtualization different from standard ETL/DW functionality? According to Data Waterloo’s Ray Ullmer, the key is the visibility it affords into the data itself, rather than the data store. Regardless of what type of store is in place, the view presented to the user or the application comes through a common, abstracted interface, which translates into near-zero latency and the end of data-at-rest requirements. There is also no need for a full accounting of every source element, which reduces storage overhead and helps deliver optimized performance across a wide array of users – in other words, no more tailoring expensive warehouses to select workloads.
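The contrast with ETL can be made concrete with a small sketch. Where ETL copies every source row into a staged warehouse before anyone can query it, a federated view pulls only the elements a query needs at the moment it runs. The sources and names below are invented for illustration; in practice the lists would be live connections to remote systems.

```python
# Sketch of a federated, on-demand view versus an ETL copy. Nothing is
# staged at rest: rows are filtered at the source and joined lazily,
# only when a query actually asks for them. All data is illustrative.

# Two "remote" sources; in a real deployment these would be connections
# to separate systems, not in-process structures.
ORDERS = [{"id": 1, "cust": "Acme", "total": 120},
          {"id": 2, "cust": "Globex", "total": 75}]
CUSTOMERS = {"Acme": {"tier": "gold"}, "Globex": {"tier": "silver"}}


def federated_view(min_total):
    """Yield joined order/customer rows lazily, pulling only what the
    query needs -- no full accounting of every source element."""
    for order in ORDERS:
        if order["total"] < min_total:
            continue  # filtered at the source, never copied downstream
        cust = CUSTOMERS[order["cust"]]  # looked up only when needed
        yield {"cust": order["cust"], "total": order["total"],
               "tier": cust["tier"]}


print(list(federated_view(min_total=100)))
# [{'cust': 'Acme', 'total': 120, 'tier': 'gold'}]
```

Because the generator does its filtering and joining per request, the non-matching Globex row is never materialized anywhere, which is the storage-overhead point Ullmer is making.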
Data virtualization is a key facet of the emerging agile infrastructure. While virtualization of systems and resources offers the ability to define the parameters of the data environment completely in software, virtualization of data gives actual data loads the ability to leverage the newly flexible infrastructure to produce real, tangible benefits to the business process.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.