All aboard for the software-defined data center (SDDC)! Hurry, the train is leaving the station and those who miss it will never find their way to the new data ecosystem.
We’ve heard this all before, of course. From the mainframe to the server, from the server to virtualization, from virtualization to the cloud, every development is heralded by hawkers who urge rapid adoption above all else, as if getting it wrong were not nearly as bad as getting it late.
Like all of these past developments, however, the SDDC is no myth, or at least not entirely one. It’s just that the changes it will bring about will not happen overnight, so there is nothing wrong with a little planning, and perhaps some clear-eyed assessment of what, exactly, is being proposed.
According to HP Chief Technologist Chris Coggrave, the SDDC will be built entirely on the virtual layer: individual compute elements will be abstracted to an overarching control level that allows entire architectures to be spun up with a few mouse clicks, all without disrupting operations elsewhere in the fabric. Naturally, this has a lot of top IT executives salivating over the prospect of a low-cost, highly fungible pool of resources that may finally satisfy the demands of data users and other stakeholders. To that end, many are already planning the last piece of the infrastructure puzzle: the software-defined network (SDN), the final phase of the all-virtual data environment.
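To make the abstraction concrete, here is a minimal sketch, in Python, of what spinning up an architecture entirely in software might look like. The VirtualArchitecture description, the ControlLayer class, and its spin_up and decommission methods are hypothetical illustrations of the concept, not any vendor’s actual interface.

```python
# Hypothetical sketch: declaring and provisioning a virtual architecture
# entirely in software. All names and APIs here are illustrative only.

from dataclasses import dataclass, field


@dataclass
class VirtualArchitecture:
    """A declarative description of an environment to be spun up."""
    name: str
    compute_nodes: int = 4
    storage_gb: int = 500
    networks: list = field(default_factory=lambda: ["frontend", "backend"])


class ControlLayer:
    """Stand-in for the overarching control level that abstracts
    individual compute, storage, and network elements."""

    def __init__(self):
        self.running = {}

    def spin_up(self, arch: VirtualArchitecture):
        # In a real SDDC, each element would be carved out of a shared
        # resource pool without disrupting workloads elsewhere.
        self.running[arch.name] = arch
        print(f"Provisioned '{arch.name}': {arch.compute_nodes} nodes, "
              f"{arch.storage_gb} GB storage, networks={arch.networks}")

    def decommission(self, name: str):
        self.running.pop(name, None)
        print(f"Decommissioned '{name}'; resources returned to the pool.")


control = ControlLayer()
control.spin_up(VirtualArchitecture(name="analytics-dev"))
control.decommission("analytics-dev")
```

The point of the sketch is the declarative shape: the architecture is data, and the control layer turns that data into running resources, which is what distinguishes a software-defined environment from one provisioned by hand.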
The only problem with this plan, however, is that while the virtual technology needed to raise IT off bare-metal infrastructure exists, the tools to manage all these virtual and cloud resources do not. As Enterprise Management Associates analysts Torsten Volk and Jim Frey have noted recently, the SDDC is still very much in the conceptual stage. So rather than talk about this being “the year of SDDC,” perhaps it’s better to talk about “the decade of SDDC.” This at least provides a little perspective on the transition that is to come and the fact that there is still plenty of time to figure out what we want and how to get it.
Of course, all the pieces will be in place at some point, and creative minds in the industry are already starting to puzzle it all out. Kent Smith at LSI Corp., for instance, envisions a dynamic series of feedback loops between layers of resources and operating software stacks, all governed by policy-driven analytics and automation that interact directly with intelligent hardware platforms. Does such a scheme already exist in the lab? Smith doesn’t say, but he does note that without this type of functionality, the world simply will not be able to cope with the 44 zettabytes of data expected to be created annually by 2020.
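As a rough illustration of the kind of feedback loop Smith describes, consider the following Python sketch. The telemetry fields, policy thresholds, and scale_out/scale_in actions are all invented for this example; a real implementation would pull live metrics from the hardware platform rather than generating random ones.

```python
# Hypothetical sketch of a policy-driven feedback loop: telemetry flows
# up from the resource layer, analytics decide, automation acts on the
# hardware platform. Names and thresholds are illustrative only.

import random
import time


def read_telemetry():
    """Stand-in for metrics reported by an intelligent hardware platform."""
    return {"cpu_util": random.uniform(0.2, 0.95),
            "queue_depth": random.randint(0, 120)}


def evaluate_policy(metrics):
    """Policy-driven analytics: map observed state to an action."""
    if metrics["cpu_util"] > 0.85 or metrics["queue_depth"] > 100:
        return "scale_out"
    if metrics["cpu_util"] < 0.30 and metrics["queue_depth"] < 10:
        return "scale_in"
    return "hold"


def apply_action(action):
    """Automation layer: push the decision back down to the resources."""
    print(f"action -> {action}")


for _ in range(5):  # one pass of the loop per telemetry interval
    metrics = read_telemetry()
    apply_action(evaluate_policy(metrics))
    time.sleep(0.1)
```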
Another danger is that by the time technology is ready to truly support the SDDC, the industry will have moved on to the next marketing buzzword. StorageIO founder Greg Schulz is already wondering whether the software-defined label has “jumped the shark,” given the number of standard enterprise platforms that have been hastily rebranded as “SD-ready.” Hardware, after all, has never been of much use until software defines what it does, so it’s important to distinguish between what is truly new and what is yesterday’s reheated meatloaf. And while we’re at it, perhaps we can add a new acronym to the IT lexicon: SDBS.
At the core of every technology hype cycle, however, is a kernel of truth. In this case, it is that fully virtualized architectures really do open up the possibility of highly dynamic data environments that can be created, managed and then decommissioned entirely in software. Fully virtualized data infrastructure is the first step toward that vision, but it is by no means the last.