Server virtualization is a fairly easy concept to understand: Add a layer of software that allows processing capability to work across multiple operating environments. It drives both efficiency and performance because it puts to good use resources that would otherwise sit idle.
Storage virtualization is a different animal. It doesn't free up capacity that you didn't know you had. Rather, it allows existing storage resources to be combined and reconfigured to more closely match shifting data requirements. It's a subtle distinction, but one that accounts for much of the gap between what many enterprises expect to gain from the technology and what it actually delivers.
As ZDNet Asia's Liau Yun Qing gleaned from numerous interviews with storage experts in the Asia-Pacific region, storage virtualization doesn't allow you to provision more storage out of existing capacity. A byte of data saved on a disk, after all, occupies a set piece of real estate on the platter, and no software in the world can allow another byte to share that same space. What you can do is pool internal and external resources into a cohesive operating environment, and then deploy advanced tiering strategies to allocate different classes of data more efficiently across that pool. This kind of dynamism is crucial if you expect storage to keep up with the highly fluid nature of newly virtualized server and networking infrastructure.
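The pool-and-tier idea can be made concrete with a small sketch. This is illustrative only, not any vendor's implementation: the tier names (`ssd`, `sas`, `sata`), capacities, and the hot/cold placement policy are all assumptions, and real arrays make these decisions continuously based on observed access patterns rather than a one-time flag.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    """One class of storage in the pool (names and sizes are illustrative)."""
    name: str
    capacity_gb: int
    used_gb: int = 0

    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

class StoragePool:
    """Aggregates heterogeneous tiers and places data by access profile."""
    def __init__(self, tiers):
        # Tiers ordered hottest first: fast-but-small before slow-but-large.
        self.tiers = tiers

    def place(self, size_gb: int, hot: bool = False):
        """Return the tier chosen for a dataset, or None if the pool is full."""
        # Hot data prefers the fastest tier with room; cold data fills
        # from the cheapest (last) tier upward.
        candidates = self.tiers if hot else list(reversed(self.tiers))
        for tier in candidates:
            if tier.free_gb() >= size_gb:
                tier.used_gb += size_gb
                return tier.name
        return None

pool = StoragePool([Tier("ssd", 100), Tier("sas", 500), Tier("sata", 2000)])
print(pool.place(50, hot=True))    # hot data lands on the fast tier
print(pool.place(300, hot=False))  # cold data fills the cheap tier first
```

The point of the sketch is the policy layer: once capacity from different boxes sits behind one pool abstraction, placement becomes a software decision rather than a cabling decision.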
It also affords enterprises much more flexibility when it comes to provisioning and integrating disparate storage platforms, at least in theory, according to Storage Switzerland's George Crump. Once you've abstracted the services provided by the storage controller, which is essentially where storage virtualization resides, you have the ability to federate things like LUN management, snapshots and thin provisioning. And if server hypervisors can be tweaked just a bit to improve their ability to manage storage services, we could be on the verge of an entirely new generation of mix-and-match storage infrastructure.
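Thin provisioning, one of the controller services Crump mentions, is worth unpacking because it illustrates what "abstracting the controller" buys you: a volume can advertise its full size while physical extents are allocated only as data is actually written. The sketch below is a simplification under stated assumptions; the 4 MB extent size and the `ThinVolume` interface are hypothetical, and production arrays also handle reclamation, metadata, and over-commit alarms.

```python
class ThinVolume:
    """Thin-provisioned volume: advertises full size, allocates on write.

    Illustrative only; real arrays track allocation in fixed-size extents
    and reclaim space when blocks are freed.
    """
    EXTENT_MB = 4  # hypothetical extent size

    def __init__(self, advertised_gb: int):
        self.advertised_gb = advertised_gb
        self.allocated = set()  # extent indices actually backed by disk

    def write(self, offset_mb: int, length_mb: int) -> None:
        """Back only the extents this write touches."""
        first = offset_mb // self.EXTENT_MB
        last = (offset_mb + length_mb - 1) // self.EXTENT_MB
        self.allocated.update(range(first, last + 1))

    def allocated_mb(self) -> int:
        return len(self.allocated) * self.EXTENT_MB

vol = ThinVolume(advertised_gb=100)  # the host sees 100 GB
vol.write(0, 10)                     # a 10 MB write backs three 4 MB extents
print(vol.allocated_mb())            # → 12
```

Federating a service like this above the individual array, rather than inside it, is what would let the same policy span storage from different vendors.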
Whether or not the major platform providers see an opportunity here is another matter. At the moment, companies like EMC seem to prefer higher levels of integration between their storage management portfolios and top virtual environments, in this case VMware. The company is offering a new version of its Virtual Storage Integrator (VSI) that features end-to-end support for the vCenter virtual management stack and direct mapping between virtual machines and storage. The goal is to more closely match virtual server and storage environments with the expectation that it will drive greater efficiency and a more streamlined infrastructure.
At the same time, DataCore has raised eyebrows with its new SANsymphony-V, not so much for its capabilities as for the fact that the company describes it as a "storage hypervisor." Tech analyst Dan Kusnetzky says the term is misleading because the product does not provide a fully independent storage environment in the way a bare-metal or OS-based hypervisor provides an independent operating environment. Still, he says, the new SANsymphony does a good job of optimizing and managing storage to better support virtual infrastructure.
In that sense, then, the term "storage virtualization" itself is something of a misnomer. A more accurate description would be "virtualization-optimized storage management." Marketing departments being what they are, however, it seems the former is here to stay.
That's OK, as long as enterprises realize that no matter what you call it, it won't, and can't, provide the same benefits as server virtualization.