If given the choice, would you want to build a new storage environment from scratch, provisioning and integrating multiple hardware and software components from the ground up, or would you rather sit at a PC and click-and-drag everything into place?
This, of course, is the appeal of software-defined storage (SDS). Like its brethren, virtualization and software-defined networking, SDS promises to take the drudgery out of building and managing the storage farm. In place of rigid architecture, you get a highly dynamic storage system that is both easier to operate and more resilient to the changes taking place in the worldwide data ecosystem. And as I mentioned a few weeks ago, with everything else getting the software-defined treatment, it would be a shame to leave storage out.
And now that even a storage stalwart like EMC has some skin in the SDS game, it seems we are truly on the road to a fully software-defined data center. The company unveiled its ViPR platform this week, billed as the means to transition legacy storage infrastructure from the silo architectures of the past to the virtual, dynamic data environments of the future. In essence, the system provides a layer of abstraction in the control and data planes of storage networks, much the way SDN platforms do for the LAN. In this way, the enterprise will be able to manage disparate storage infrastructure to more closely match the data-handling characteristics of virtualized server and networking plants.
But it goes even deeper than that, according to ITBE’s Michael Vizard. Ultimately, ViPR is expected to fulfill the promise of unified storage, in which services will be delivered to any platform on the market regardless of whether it utilizes block, file or object-oriented architecture. Such is the power of abstraction that underlying infrastructure becomes irrelevant when all data services are run through a ViPR-based controller, particularly one housed in the cloud.
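The idea behind that kind of abstraction is simple enough to sketch in code. The snippet below is a minimal, hypothetical illustration (not EMC's actual ViPR API): a controller presents one read/write interface to callers while routing each request to a different underlying architecture, so the backend becomes irrelevant to the application.

```python
from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """One concrete storage architecture hidden behind the abstraction layer."""

    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def read(self, key: str) -> bytes: ...


class ObjectBackend(StorageBackend):
    """Stand-in for an object store; a real backend would speak an S3-style API."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def write(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def read(self, key: str) -> bytes:
        return self._objects[key]


class BlockBackend(StorageBackend):
    """Stand-in for block storage: payloads split into fixed-size blocks."""

    BLOCK_SIZE = 4096

    def __init__(self) -> None:
        self._blocks: dict[str, list[bytes]] = {}

    def write(self, key: str, data: bytes) -> None:
        # Chunk the payload into fixed-size blocks, as a block layer would.
        self._blocks[key] = [data[i:i + self.BLOCK_SIZE]
                             for i in range(0, len(data), self.BLOCK_SIZE)]

    def read(self, key: str) -> bytes:
        return b"".join(self._blocks[key])


class StorageController:
    """Routes each request to a backend by tier; callers never see which one."""

    def __init__(self, backends: dict[str, StorageBackend]) -> None:
        self._backends = backends

    def write(self, tier: str, key: str, data: bytes) -> None:
        self._backends[tier].write(key, data)

    def read(self, tier: str, key: str) -> bytes:
        return self._backends[tier].read(key)


controller = StorageController({"archive": ObjectBackend(),
                                "fast": BlockBackend()})
controller.write("archive", "report.pdf", b"annual results")
controller.write("fast", "db.vol1", b"x" * 10000)
```

The caller addresses everything by a policy tier and a key; whether the bytes land in an object bucket or a set of 4 KB blocks is decided entirely inside the controller, which is the essence of the unified-storage pitch.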
EMC is not the only one pursuing this storage nirvana, however. A Belgium-based firm called Cloudfounders recently released a pilot SDS program that utilizes OpenStack, VMware and Amazon S3 to enable scale-out storage architectures on commodity x86 platforms. The Open vStorage system provides SAN functionality, SSD caching, compression, encryption and other services across hybrid cloud architectures using either bare-metal or virtual storage appliance configurations. And with 10 GbE networking and top-end cache management technology, the company says it can provide dual-tier cache support at more than 100,000 IOPS per host.
At the same time, a company called Jeda Networks is out with its own software-defined storage network (SDSN) platform, dubbed the Fabric Network Controller, designed to lessen the cost and complexity of operating multi-vendor storage environments. The company fired it up at the recent Interop show in Las Vegas using a 40 GbE backbone and more than 15 storage products from various vendors on the show floor. The company hopes to pitch the system as a solution for small and medium-sized businesses that require the scale and flexibility of larger organizations but lack the resources and skillsets to manage fully virtualized environments.
In the early days of server virtualization, very few voices envisioned a fully abstracted, software-defined data center. Most people laughed at the idea.
But no one is laughing now. With server farms, networks and now storage quickly taking up the software mantle, it won't be long before end-to-end data environments can be provisioned, launched and decommissioned through a simple user interface. And then we'll see how truly creative the knowledge industry can become when it is no longer confined by the limits of the physical universe.