    Managing the SDDC

    Much of the attention surrounding software-defined data centers (SDDC) is on building them. In very short order, however, the enterprise will have to figure out how to run them as well, and not surprisingly, this is an entirely different ballgame from IT management as we know it today.

    Software architectures are much more malleable than hardware ones, of course, but that does not necessarily make them easier to operate. In fact, the variety of constructs that can be developed and the speed at which they can be deployed mean the SDDC will be much harder to manage, although the burden will fall more on automation and orchestration stacks than on actual humans.

    Still, there are a number of pitfalls in the transition from hardware-based to software-based infrastructure that could limit the functionality of the SDDC when all is said and done. The biggest potential problem is the risk of a catastrophic misconfiguration, says Continuity Software’s Eran Livneh, which can occur faster and be much harder to pinpoint under an automated management regime. This is why it is important to verify existing configurations before exposing them to software-defined management, and then to build strong quality-control mechanisms into the application development process so that changes intended to improve workflows do not end up harming them. At the same time, make sure that all affected systems can support the configuration changes that will inevitably take place in a software-defined ecosystem, since most outages are caused by gradual drift in the data environment rather than by sudden modifications.
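    The kind of pre-flight verification Livneh describes can be reduced to a simple drift check: capture a known-good baseline, then flag anything that has shifted before automation takes over. The sketch below is a minimal illustration in Python, assuming configurations can be exported as flat JSON key-value snapshots; the file names and format are hypothetical, not tied to any particular platform.

    ```python
    import json

    def load_snapshot(path):
        """Load a configuration snapshot exported as flat JSON
        key-value pairs (hypothetical format; real exports vary)."""
        with open(path) as f:
            return json.load(f)

    def diff_config(baseline, current):
        """Return settings added, removed, or changed since the baseline."""
        added = {k: current[k] for k in current.keys() - baseline.keys()}
        removed = {k: baseline[k] for k in baseline.keys() - current.keys()}
        changed = {k: (baseline[k], current[k])
                   for k in baseline.keys() & current.keys()
                   if baseline[k] != current[k]}
        return added, removed, changed

    if __name__ == "__main__":
        baseline = load_snapshot("baseline.json")  # verified, known-good state
        current = load_snapshot("current.json")    # live state right now
        added, removed, changed = diff_config(baseline, current)
        for k, v in added.items():
            print(f"added   {k} = {v!r}")
        for k, v in removed.items():
            print(f"removed {k} (was {v!r})")
        for k, (old, new) in changed.items():
            print(f"changed {k}: {old!r} -> {new!r}")
    ```

    Since most outages stem from gradual drift rather than one bad push, a check like this belongs in a scheduled job, not just in a one-time migration review.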

    IT will also have to get used to the idea that it is no longer managing data or infrastructure, but operating environments and workspaces. Emerging SDDC management platforms like IndependenceIT’s Cloud Workspace Suite 5.0 provide advanced orchestration and workflow automation aimed at fostering cohesive Workspace as a Service (WaaS) environments across disparate cloud architectures. The platform features an intuitive UI for automated deployment across multi-cloud infrastructure, along with a scalable application services suite, a resource scheduling module and a live server scaling component to support efficient resource consumption and SLA management.

    Meanwhile, HPE has introduced a new modular solution aimed at bringing SDDC capability to mid-sized enterprises and remote offices that generally lack advanced IT skills. The Hyper Converged 380 platform, built around the ProLiant DL380 server, offers a template-based workflow management system that allows virtual machines and storage resources to be provisioned in a matter of minutes, along with multi-node scalability for data-intensive applications like virtual desktops and Big Data processing. It also includes a mobile-friendly management interface, the OneView UX, which presents a common look for both desktop and smartphone access.
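    Template-driven provisioning of this kind follows a common pattern: a declarative description of the desired resources is expanded into individual provisioning calls, so the template rather than the operator carries the expertise. The sketch below illustrates that general pattern only; the template schema and the provision_vm function are hypothetical stand-ins, not HPE’s actual interface.

    ```python
    # Generic illustration of template-based provisioning; the schema
    # and provision_vm call are hypothetical, not a vendor API.

    TEMPLATE = {
        "name": "vdi-pool",   # base name for the provisioned VMs
        "count": 4,           # number of identical VMs to stamp out
        "cpu": 4,             # vCPUs per VM
        "memory_gb": 16,
        "storage_gb": 200,
    }

    def provision_vm(name, cpu, memory_gb, storage_gb):
        """Stand-in for a real provisioning API call (hypothetical)."""
        print(f"provisioning {name}: {cpu} vCPU, {memory_gb} GB RAM, "
              f"{storage_gb} GB storage")

    def deploy(template):
        """Expand one declarative template into N provisioning calls."""
        for i in range(template["count"]):
            provision_vm(f"{template['name']}-{i:02d}",
                         template["cpu"],
                         template["memory_gb"],
                         template["storage_gb"])

    deploy(TEMPLATE)
    ```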

    Even with a new management platform in hand, the enterprise will have to expand the focus of its management footprint if it hopes to leverage the SDDC to its fullest extent. One way to do this is through improved DNS (Domain Name System) tracking. As NS1 CEO Kris Beevers noted on Datacenter Knowledge recently, today’s applications are no longer housed in a staid, predictable environment but can wind up distributed across multiple endpoints, each with its own DNS. At the same time, users are growing less patient when it comes to accessing their preferred applications, so emerging services, both professional and consumer, need to incorporate fast DNS lookup, increasingly sophisticated application- and network-awareness capabilities, and real-time telemetry functions that track not only where an application is but where it’s going.
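    To make the telemetry idea concrete, resolution latency is easy to sample with nothing but Python’s standard library. The sketch below is a minimal illustration with placeholder hostnames; note that it measures the operating system’s resolver path, caches included, whereas production-grade telemetry would typically query recursive or authoritative servers directly.

    ```python
    import socket
    import time

    # Placeholder endpoints; a real service would track the hosts
    # behind its own distributed application.
    HOSTS = ["example.com", "example.org", "example.net"]

    def resolve_timed(host):
        """Resolve a hostname, returning (addresses, elapsed milliseconds)."""
        start = time.perf_counter()
        infos = socket.getaddrinfo(host, None)
        elapsed_ms = (time.perf_counter() - start) * 1000
        addresses = sorted({info[4][0] for info in infos})
        return addresses, elapsed_ms

    for host in HOSTS:
        try:
            addresses, ms = resolve_timed(host)
            print(f"{host}: {ms:.1f} ms -> {addresses}")
        except socket.gaierror as err:
            # A failed lookup is itself useful telemetry.
            print(f"{host}: lookup failed ({err})")
    ```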

    It seems, then, that the SDDC offers the ability to create more magic in the data ecosystem, but more mischief as well. As it becomes easier to change the operating environment, it will become harder to calculate the full impact of those changes, particularly when they intersect with changes that other people, or machines, are making at the same time.

    This is going to put more pressure on IT to oversee the broader, strategic aspects of data infrastructure even as it sheds responsibility for day-to-day operations. And above all, it will need to retain the ability to step in and take control at a moment’s notice should anything go seriously wrong.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
