Defining Storage Virtualization

Arthur Cole

It's been said that the virtual era is coming to a close and that the cloud era is upon us. That is true up to a point, but only because we as an industry have failed to define what virtualization actually is.

For many, virtualization stops at the server farm. Once the physical server has been loaded up with virtual machines and the infrastructure itself has been consolidated, virtualization is largely complete. Anyone who talks about storage or network virtualization is playing with semantics, the argument goes, because you're not really creating something out of nothing.

This is a false argument, however, because if you look closely at server virtualization, all you're doing is building an abstract layer on top of physical hardware that can be used to run multiple logical servers. The same physical processors are still in use; it's just that now they can be used more efficiently to handle more work.

In the storage and networking realms, we don't call this "virtualization" anymore; it is more properly described as "software-defined" architecture. And if that's the case, we should probably drop the term "server virtualization" as well and start talking about "software-defined processing."

Network Computing's David Hill points out that once we take the word "virtualization," or the even-more-dreaded "hypervisor," out of the storage equation, it becomes evident that software-defined storage (SDS) can deliver nearly all the benefits of server virtualization across the entire storage farm. With an abstract layer on top of physical storage, unused resources can be tapped, pools of storage can be provisioned for particularly heavy loads, and a high degree of automation can be introduced to make storage both more efficient and more productive. To be truly effective, SDS will require some changes in the IT mindset, particularly around the fiefdoms that grow up over dedicated infrastructure, but the benefits of change are likely to overcome any initial resistance.
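To make that abstraction concrete, here's a minimal sketch, assuming a simple pool of mixed devices; every name in it is hypothetical, and no particular SDS product works exactly this way:

```python
# A minimal sketch (hypothetical names throughout) of the core SDS idea:
# logical volumes are carved out of a pool that spans heterogeneous
# physical devices, so consumers never deal with the hardware directly.
from dataclasses import dataclass, field

@dataclass
class PhysicalDevice:
    name: str
    capacity_gb: int
    used_gb: int = 0

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

@dataclass
class StoragePool:
    devices: list = field(default_factory=list)

    def provision_volume(self, size_gb: int) -> dict:
        """Place a logical volume on whichever device has the most room,
        regardless of vendor or model."""
        for dev in sorted(self.devices, key=lambda d: d.free_gb, reverse=True):
            if dev.free_gb >= size_gb:
                dev.used_gb += size_gb
                return {"backing_device": dev.name, "size_gb": size_gb}
        raise RuntimeError("pool exhausted: add commodity capacity")

# The caller asks for capacity, not for a particular array.
pool = StoragePool([PhysicalDevice("array-a", 500), PhysicalDevice("jbod-b", 2000)])
print(pool.provision_volume(250))  # {'backing_device': 'jbod-b', 'size_gb': 250}
```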

That shouldn't be too hard once the front office gets a look at the cost savings SDS offers, according to Information Age's Kane Fulton. Current storage systems are largely proprietary, which means that if you want to expand your footprint you have no choice but to build out existing infrastructure or add new, high-capacity systems. Under SDS, hardware expansion can go the commodity route like much of today's server infrastructure, and it should be easier to provision new cloud resources because integration takes place on the abstracted layer.

And let's not overlook the many operational benefits, says Steve Houk, COO of storage hypervisor pioneer DataCore Software. So far, most enterprises have been willing to place low-level applications in virtual environments, but mission-critical apps have been considered too vital to expose to the storage and networking contention that arises there. With storage also functioning on the virtual plane, those conflicts should disappear: intelligent software can now safely manage the increased traffic from virtual environments, delivering the appropriate resources to Tier 1 applications like ERP and OLAP.
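As a rough illustration of that kind of policy-driven placement (the tier names, media types and latency targets below are my own assumptions, not DataCore's interface), the decision logic can be as simple as:

```python
# Hypothetical tier-aware placement: the storage layer, not the admin,
# decides which resources a workload receives.
TIERS = {
    "tier1": {"media": "ssd", "max_latency_ms": 1},   # ERP, OLAP and the like
    "tier2": {"media": "hdd", "max_latency_ms": 10},  # everything else
}

def place_workload(app_name: str, mission_critical: bool) -> dict:
    """Return a placement decision; a real SDS stack would then trigger
    automated provisioning against the chosen tier."""
    tier = "tier1" if mission_critical else "tier2"
    return {"app": app_name, "tier": tier, **TIERS[tier]}

print(place_workload("ERP", mission_critical=True))
print(place_workload("test-db", mission_critical=False))
```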

But the story doesn't end here. Now that the three pillars of IT infrastructure (servers, storage and networking) can exist simultaneously on an abstract layer, the notion of a fully stateless, completely external IT environment is finally coming into focus. So far, the idea of outsourcing all enterprise resources, over the cloud or in some other fashion, has largely been more vision than reality.

Now that underlying infrastructure can be created in software and not just on silicon, we can stop talking about utility computing and start doing it.



Comments
Nov 20, 2012 12:54 PM Gregg says:
Great post! The ability to deliver an abstraction layer through software on top of heterogeneous, proprietary storage is an important step toward achieving software-defined storage. To truly realize the benefits of the larger software-defined data center (SDDC) vision, however, the unit of management and the design point need to be the virtual machine, not the traditional LUN. Traditional storage virtualization hardware solutions, including those that have recently been ported and shipped as software appliances, have the LUN as their design point and unit of management. Next-generation, purpose-built storage hypervisor software from Virsto was designed from the ground up to deliver per-VM and per-VMDK granularity, making operations much more efficient and scalable. Data services like snapshots, clones, thin provisioning, de-duplication, and policy-based automated provisioning, integrated directly with VMware vCenter and Microsoft System Center, must be managed at the virtual machine level to truly achieve the potential of software-defined storage.
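To illustrate the granularity argument in the comment above, here's a hedged sketch contrasting the two design points; the names are invented and don't reflect Virsto's or any vendor's actual interface:

```python
# Hypothetical contrast of the two design points. With the LUN as the unit
# of management, an operation such as a snapshot sweeps up every VM that
# happens to share the LUN; with per-VM granularity it touches one VM only.
luns = {"lun-7": ["vm-erp", "vm-web", "vm-test"]}     # many VMs per LUN
vmdks = {"vm-erp": ["erp-os.vmdk", "erp-data.vmdk"]}  # disks of one VM

def snapshot_lun(lun_id: str) -> dict:
    # LUN-level design point: all co-resident VMs are captured together.
    return {"snapshot_of": luns[lun_id]}

def snapshot_vm(vm_id: str) -> dict:
    # VM-level design point: only that VM's virtual disks are captured.
    return {"snapshot_of": vmdks[vm_id]}

print(snapshot_lun("lun-7"))   # {'snapshot_of': ['vm-erp', 'vm-web', 'vm-test']}
print(snapshot_vm("vm-erp"))   # {'snapshot_of': ['erp-os.vmdk', 'erp-data.vmdk']}
```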
