Enterprises contemplating the transition from hardware-based infrastructure to a software-defined data center (SDDC) have already come to expect a fair share of complications. But exactly what are the pain points, and how have early adopters managed, or failed, to work around them?
Every implementation will be different, of course, but since the SDDC market is still so young, it is fair to say that most of the issues encountered to date will be broadly universal.
For one thing, says VMware’s Jose Alamo, it’s best to go in with a clear strategy in mind – not just for the transition, but for what comes after. A key challenge is capacity planning which, if left to chance, can push resource consumption, and costs, to unacceptable levels. Before you can gauge capacity, of course, you’ll need to get a handle on demand, and in particular the way in which loads can be expected to rise and fall over time. With that in hand, you can then get into the nitty-gritty of workload density and distribution, availability, process relationships and how all of this is to be handled through the management dashboard.
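The demand-first approach Alamo describes can be reduced to simple arithmetic. The sketch below is purely illustrative (the demand figures, percentile and headroom factor are assumptions, not anything VMware prescribes): take a history of load samples, size for a high percentile of demand rather than the absolute peak, and add a headroom buffer so consumption never runs right up against the limit.

```python
# Illustrative capacity-planning sketch. Given a series of demand samples
# (hypothetical hourly vCPU figures), provision for a high percentile of
# observed demand plus a safety headroom for unexpected spikes.

def required_capacity(samples, headroom=0.25, percentile=0.95):
    """Return capacity covering the given demand percentile plus headroom."""
    ordered = sorted(samples)
    # Index of the chosen percentile, clamped to the last sample.
    idx = min(int(len(ordered) * percentile), len(ordered) - 1)
    peak = ordered[idx]
    return peak * (1 + headroom)

# Hypothetical demand history (vCPUs consumed per hour)
demand = [40, 55, 62, 48, 70, 90, 85, 60, 50, 45, 95, 100]
print(required_capacity(demand))  # prints 125.0: 100 vCPUs peak + 25% headroom
```

Tuning the percentile and headroom is exactly the trade-off the article points to: too little buffer and loads hit the ceiling, too much and costs climb to unacceptable levels.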
In many ways, SDDC management relies on the same core principles as traditional infrastructure management, says SolarWinds Senior VP Joe Kim. Disciplines like configuration management and service-level optimization will seem familiar, although they now cut across multiple abstracted layers.
There is also something to be said for deploying the SDDC on new modular infrastructure rather than force-fitting it into legacy environments, says Fujitsu’s Craig Parker. For one thing, a modular base like the company’s PRIMEFLEX platform is pre-configured and pre-tested for SDDC architectures and can provide a fast on-ramp to Big Data, the IoT and other emerging data initiatives. Once the basic hardware layer is in place, the enterprise can implement virtualized infrastructure in stages, starting with a basic cluster-in-a-box and working up to VMware or OpenStack clouds.
Even with a greenfield deployment, organizations still face difficulties migrating legacy data environments. Fortunately, vendors like Dell EMC are already undertaking these efforts, so they will have hands-on experience to share with customers when the time comes. The company’s Wayne Haber and Stephen Dion highlighted a recent move of 500 servers to a newly crafted SDDC, noting that one of the most crucial elements in the project was the management of expectations. This included not only potential challenges and disruptions during the move, but possible loss of functionality afterward. On a technical level, one of the most important aspects to keep in mind is that server dependencies need to be tracked very carefully in order to minimize the impact on the wider data environment.
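The dependency-tracking concern above is essentially a graph problem. The following sketch (not Dell EMC's actual tooling; the server names are invented) models each server's dependencies as a directed graph and uses a topological sort to produce a migration order in which no server moves before the servers it depends on.

```python
# Plan a migration order from tracked server dependencies.
# Each key maps a server to the set of servers it depends on.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

deps = {
    "web-01": {"app-01"},             # web tier depends on the app tier
    "app-01": {"db-01", "cache-01"},  # app tier depends on data services
    "db-01": set(),
    "cache-01": set(),
}

# static_order() yields dependency-free servers first; a cycle raises
# graphlib.CycleError, flagging dependencies that need untangling first.
order = list(TopologicalSorter(deps).static_order())
print(order)  # db-01 and cache-01 come before app-01, which precedes web-01
```

In a real 500-server move the graph would be built from discovery data rather than typed by hand, but the principle is the same: migrate in dependency order, and surface cycles before they surface themselves as outages.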
The SDDC represents the culmination of the virtualization movement that has remade IT infrastructure over the past two decades or so. And as has already been seen in individual server, storage and networking environments, the separation of software and hardware produces not only a change in architecture but in functionality and processes as well.
In that light, long before the enterprise makes the decision to implement an SDDC and begins the arduous process of transformation, it should think long and hard about how it intends to use it and how it will reorganize itself around a highly fluid, data- and application-centric work environment.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.