It’s been said that the virtual era is coming to a close and that the cloud era is upon us. That is true up to a point, but only because we as an industry have failed to define what virtualization actually is.
For many, virtualization stops at the server farm. Once the physical server has been loaded up with virtual machines and the infrastructure itself has been consolidated, virtualization is largely complete. Anyone who talks about storage or network virtualization is playing with semantics, the argument goes, because you’re not really creating something out of nothing.
This is a false argument, however, because if you look closely at server virtualization, all you’re doing is building an abstraction layer on top of physical hardware that can be used to run multiple logical servers. The same physical processors are still in use; it’s just that now they can be shared more efficiently to handle more work.
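The idea above can be sketched in a few lines: an abstraction layer admits several logical servers onto one physical machine as long as capacity remains. This is a toy illustration, not a real hypervisor, and all class and field names here are invented for the example.

```python
# Toy model of the virtualization idea: multiplex several logical
# servers onto one set of physical processors.

class PhysicalHost:
    def __init__(self, cores):
        self.cores = cores          # total physical cores
        self.allocated = 0          # cores claimed by logical servers

    def place(self, vm):
        """Admit a logical server if spare capacity remains."""
        if self.allocated + vm.cores_needed <= self.cores:
            self.allocated += vm.cores_needed
            return True
        return False

class LogicalServer:
    def __init__(self, name, cores_needed):
        self.name = name
        self.cores_needed = cores_needed

host = PhysicalHost(cores=16)
vms = [LogicalServer("web", 4), LogicalServer("db", 6), LogicalServer("cache", 2)]
placed = [vm.name for vm in vms if host.place(vm)]

print(placed)                              # logical servers that fit
print(f"{host.allocated}/{host.cores} cores in use")
```

The point is that the hardware never changes; the software layer simply drives its utilization higher.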
In the storage and networking realms, we don’t call this “virtualization” anymore because it is more properly described as “software-defined” architecture. And if that’s the case, we should probably stop saying “server virtualization” and start talking about “software-defined processing” instead.
Network Computing’s David Hill points out that once we take the word “virtualization,” or the even-more-dreaded “hypervisor,” out of the storage equation, it becomes evident that software-defined storage (SDS) can deliver nearly all the benefits of server virtualization across the entire storage farm. With an abstraction layer on top of physical storage, unused resources can be tapped, pools of storage can be provisioned for particularly heavy loads, and a high degree of automation can be introduced to make storage both more efficient and more productive. To be truly effective, SDS will require some changes in the IT mindset, particularly when it comes to the fiefdoms that arise around dedicated infrastructure, but the benefits of change are likely to overcome any initial resistance.
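The pooling idea can be made concrete with a minimal sketch: physical devices of mismatched sizes are folded into one logical pool, and volumes are provisioned from the pool rather than from any single box. Names and capacity figures are illustrative only, not drawn from any real SDS product.

```python
# Minimal sketch of SDS-style pooling: aggregate physical capacity,
# then carve logical volumes out of the shared pool.

class StoragePool:
    def __init__(self):
        self.capacity_gb = 0
        self.used_gb = 0
        self.volumes = {}

    def add_device(self, size_gb):
        """Fold a physical device's capacity into the shared pool."""
        self.capacity_gb += size_gb

    def provision(self, name, size_gb):
        """Carve a logical volume out of pooled capacity."""
        if self.used_gb + size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted")
        self.used_gb += size_gb
        self.volumes[name] = size_gb

pool = StoragePool()
for size in (500, 1000, 250):        # three mismatched physical arrays
    pool.add_device(size)

pool.provision("erp-data", 600)      # larger than any single device
print(pool.capacity_gb - pool.used_gb)   # remaining pooled capacity
```

Note that the 600 GB volume could not have come from the 500 GB or 250 GB device alone; only the abstraction layer makes it possible.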
That shouldn’t be too hard once the front office gets a look at the cost savings that SDS offers, according to Information Age’s Kane Fulton. Current storage systems are largely proprietary, which means that if you want to expand your footprint you have no choice but to build out existing infrastructure or add new, high-capacity systems. Under SDS, hardware expansion can go the commodity route like much of today’s server infrastructure, and it should be easier to provision new cloud resources because integration can take place on the abstracted layer.
And let’s not overlook the many operational benefits, says Steve Houk, COO of storage hypervisor pioneer DataCore Software. So far, most enterprises have been willing to place low-level applications in virtual environments, but mission-critical apps have been considered too vital to expose to the storage and networking conflicts that arise there. With storage also functioning on the virtual plane, those conflicts should disappear. Intelligent software can now safely manage the increased traffic from virtual environments, delivering the appropriate resources to Tier 1 applications like ERP and OLAP.
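One way such “intelligent software” might prioritize is tier-aware allocation: requests are served in priority order so Tier 1 workloads are satisfied in full before lower tiers compete for what remains. This is a hedged sketch of the general technique, not DataCore’s actual mechanism; the apps, tiers, and IOPS figures are invented.

```python
# Tier-aware allocation sketch: lower tier number = higher priority.
# Tier 1 requests are granted first; leftovers flow down the tiers.

def allocate_by_tier(requests, available_iops):
    """requests: list of (app, tier, iops_needed) tuples."""
    granted = {}
    for app, tier, need in sorted(requests, key=lambda r: r[1]):
        give = min(need, available_iops)   # grant up to what remains
        granted[app] = give
        available_iops -= give
    return granted

requests = [("backup", 3, 40_000), ("erp", 1, 50_000), ("olap", 1, 30_000)]
print(allocate_by_tier(requests, available_iops=100_000))
# The Tier 1 apps (erp, olap) are fully served; backup absorbs the shortfall.
```

A real storage hypervisor would also weigh latency targets and caching, but the ordering principle is the same.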
But the story doesn’t end here. Now that the three pillars of IT infrastructure, servers, storage and networking, can all exist on an abstraction layer, the notion of a fully stateless, completely external IT environment is finally coming into focus. So far, the idea of outsourcing all enterprise resources, over the cloud or otherwise, has largely been more vision than reality.
Now that underlying infrastructure can be created in software and not just on silicon, we can stop talking about utility computing and start doing it.