Development of software-defined storage (SDS) is moving at a steady clip, promising infrastructure that is more streamlined, more flexible and dramatically less costly than today’s massive arrays.
But while the trend lines toward smaller, leaner storage infrastructure seem clear, does this really portend massive disruption in the data center? Perhaps not, if history, and the continued presence of giant mainframes in the age of virtual servers, is any guide.
To be sure, there is no shortage of storage upstarts ready to proclaim a new era in data preservation. Nexenta, for one, is counting on its newest line of open source SDS platforms to lead the way to all-Flash virtual- and cloud-scale infrastructure for next-generation workloads. The NexentaFusion 1.0 management system provides intuitive provisioning and workflow configuration built on the RESTful API design of the NexentaStor 5.0 platform. At the same time, it offers proactive alerting and rapid troubleshooting to keep highly dynamic data environments running smoothly, plus the ability to oversee multiple NexentaStor appliances from a single management interface.
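A RESTful design matters here because it makes provisioning scriptable rather than console-driven. As a minimal sketch only (the host, endpoint path, payload fields and token below are hypothetical placeholders, not NexentaStor’s documented API), carving out a new volume over such an interface might look like this:

```python
# Hypothetical sketch: provisioning a volume over a RESTful storage API.
# The appliance URL, endpoint path and payload fields are illustrative
# placeholders, not NexentaStor's documented interface.
import requests

APPLIANCE = "https://nexentastor.example.com:8443"

def create_volume(pool, name, size_gb, token):
    """Ask the appliance to carve a new volume out of an existing pool."""
    resp = requests.post(
        f"{APPLIANCE}/storage/volumes",            # assumed endpoint
        headers={"Authorization": f"Bearer {token}"},
        json={"pool": pool, "name": name, "size": size_gb * 2**30},
        timeout=30,
    )
    resp.raise_for_status()  # surface provisioning failures immediately
    return resp.json()

if __name__ == "__main__":
    vol = create_volume("pool1", "vm-datastore-01", 500, token="<api-token>")
    print(vol)
```

The point of the sketch is less the specific call than the workflow it enables: once provisioning is an HTTP request, it can be folded into the same automation that deploys the applications consuming the storage.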
Meanwhile, advanced container management is starting to make its presence known in the storage farm. As Red Hat’s Irshad Raihan points out, the need for persistent application states and data preservation remains, even as microservices and other container-level tools come and go – something that cannot be addressed by independent storage clusters. What’s needed, he says, is a containerized SDS layer that exists on the same host as the container and can be provisioned dynamically using a management platform like Kubernetes. In this way, there can be a single control plane that allows applications and storage to be orchestrated in tandem, while at the same time eliminating the need for independent storage appliances.
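In Kubernetes terms, that dynamic provisioning typically takes the form of a PersistentVolumeClaim resolved against a StorageClass that fronts the SDS layer. As a rough sketch, assuming a cluster that exposes such a class (the class name here is a placeholder, not a Red Hat product name), storage can be requested through the same control plane that schedules the application itself, using the official Kubernetes Python client:

```python
# Sketch: dynamically provisioning container storage through Kubernetes.
# The StorageClass name ("container-sds") is an assumed placeholder for
# whatever containerized SDS backend (Gluster-, Ceph- or otherwise) the
# cluster actually exposes.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="container-sds",   # assumed class name
        resources=client.V1ResourceRequirements(
            requests={"storage": "10Gi"},
        ),
    ),
)

# One control plane for both concerns: Kubernetes asks the SDS layer to
# provision a volume on demand, and any pod that mounts "app-data" keeps
# its state even as the containers themselves come and go.
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```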
But if these market forces are causing any worry in the boardrooms of traditional storage vendors, they aren’t showing it. In fact, the incumbents seem more than willing to ride the storage array train as long as possible while filling out dynamic, scale-out product lines of their own. Dell-EMC, for instance, recently tied the ScaleIO SDS block storage solution to the PowerEdge server to create an all-Flash “Ready Node” that delivers server-side SAN performance in a form that can be easily integrated into legacy data infrastructure. In this way, the enterprise retains critical functions like advanced caching and dynamic I/O provisioning while preserving application performance as older storage systems are decommissioned.
This need to maintain consistent functionality across old and new storage infrastructure is likely to be a key requirement during what could be a lengthy transition phase, says FalconStor Software CEO Gary Quinn. Posting on vanillaplus.com, Quinn, paraphrasing Mark Twain, wrote that “rumors of legacy storage’s demise have been greatly exaggerated,” but added that legacy storage nonetheless needs to adapt to modern workflows in order to stay relevant in the emerging digital economy. To that end, the company strives, in part, to integrate SDS functionality into existing systems to mix high performance with broad hardware compatibility. As well, Quinn sees a need for vendor-agnostic solutions that allow the enterprise to break with single-vendor infrastructure as it seeks to create distributed computing environments in the data center and the cloud.
It seems, then, that the enterprise is in a sweet spot as far as storage is concerned: Emerging platforms are making it easier to launch new application and service environments that support increasingly mobile and collaborative workflows, while legacy infrastructure remains on hand to support the bulk of the normal business load.
In this way, the enterprise can make the transition to digital operational models at its own pace (or at least the pace dictated by competitive pressures) without having to endure the hassles of a major technology disruption.
The only tricky part is determining what kind of storage environment will be optimal for future applications, and how to implement it in the smoothest, most cost-effective manner.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.