Despite what you may have heard, we are not in the virtual era, nor the cloud era, nor the services era. We are in the software-defined era.
After all, software control of what were once strictly hardware infrastructure components has been the running theme across server, network and, now, storage architectures over the past decade. But while it’s hard to argue against the efficacy of software-based management and configuration, the fact remains that we are still at the starting line of what is sure to be a lengthy race for software dominance, with a finish line that is far from clear.
Storage is the latest realm to embrace software mania. Software-defined networking is already part of the IT lexicon, and there are many who argue that virtualization is simply a fancy word for software-defined server architecture. But as Enterprise Storage Forum’s Paul Rubens notes, there is as yet no solid definition of software-defined storage (SDS) – at least, not like networking’s control/data plane separation or virtualization’s VM partitioning. That means vendors have taken it upon themselves to define SDS as they see fit, leaving a very blurry line between full software definition and souped-up traditional management techniques.
That means we will probably see quite a wide variety of software-defined platforms in the coming months, putting enterprise executives under the gun when it comes to determining their merits. For instance, we have Silicon Mechanics’ new zStax StorCore 104 system, which uses the Zettabyte File System (ZFS) and standard x86 hardware to provide an open, unified SDS environment capable of high-performance, scalable and virtual/cloud-optimized operation. The system supports CIFS, NFS, iSCSI, InfiniBand and Fibre Channel networks in either NAS or SAN configurations and can be populated with hard disk, solid state and hybrid storage components. Impressive, but is it SDS? At the moment, there is nothing to say it isn’t.
This is a bigger problem than it seems: even when clear definitions are available, as is the case with SDN, there is still a lot of confusion regarding deployment, configuration and overall operations. As VMware’s Bruce Davie noted at the recent Open Networking Summit, SDN does not encompass advanced capabilities like application-level programming and bandwidth scalability – only full network virtualization can do that. Fortunately, VMware’s Nicira subsidiary is already hard at work on this next level of networking functionality.
Meanwhile, Piston Cloud Computing is leveraging the OpenStack framework as the basis for a full software-defined data center platform. The Enterprise OpenStack 2.0 system comes complete with Ceph virtual SAN technology, automated storage and network configuration, near-instant VM provisioning and full commodity x86 compatibility. The company’s goal is to wean the enterprise away from Amazon Web Services and other public cloud providers by providing an easy-to-deploy private cloud solution. To that end, software-defined everything is the means to a fully automated, highly scalable end-to-end data environment.
Ideally, purveyors of these and other technologies would like to see a world in which start-ups face no greater hurdle in establishing data environments than they do when turning on the electricity or contracting for garbage removal. And certainly, there is nothing so far to suggest that this can’t happen.
When it comes to applying labels like virtual or software-defined to any one technology, however, the truth will likely be in the eye of the beholder, or, in many cases, the vendor. All the more reason, then, for enterprise executives to focus on solutions that work, rather than technology trends of the moment.