Containers took the development world by storm last year, giving the enterprise a quick and convenient way to assemble the building blocks for new and innovative applications and services.
But as development and operations unite under new DevOps models of IT management, the focus is shifting toward finding the best ways to leverage containers on the operational side.
According to CIO’s Peter Fretty, containers can support production workloads in a number of ways, particularly when applied to Big Data and IoT applications in hyperconverged infrastructure. By encapsulating an entire runtime environment, plus all dependencies, libraries and configuration files, containers provide a portable operating environment that can reside virtually anywhere. But containers alone cannot deliver the operational efficiencies a dynamic production environment demands. The best way to get there, Fretty argues, is to house containers within virtual machines tied to software-defined storage, all governed by advanced orchestration and security systems that bring order to highly scaled environments.
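To make that portability concrete, consider a minimal Dockerfile sketch (the base image, file names and paths below are illustrative assumptions, not anything Fretty cites): everything the application needs to run is declared in the image itself.

    # A minimal, illustrative Dockerfile; base image, file names and
    # paths are assumptions for this sketch. The image bundles the
    # runtime, the declared dependencies and the configuration files,
    # so the container behaves the same wherever it lands.
    FROM python:3.9-slim
    # Install the application's declared dependencies
    COPY requirements.txt /tmp/requirements.txt
    RUN pip install --no-cache-dir -r /tmp/requirements.txt
    # Add the application code and its configuration
    COPY app/ /opt/app/
    COPY config/app.yaml /etc/app/app.yaml
    # The container runs a single process with everything it needs on board
    CMD ["python", "/opt/app/main.py"]

Build that image once, and the same artifact can run on a developer laptop, in a virtual machine or on a hyperconverged cluster without modification.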
Already, container management platforms are looking to add production-ready services into the mix. IBM’s Bluemix container service, for instance, integrates the Docker container engine with the open source Kubernetes orchestration stack in a bid to help organizations manage the transition from container development to container operations. As IBM Fellow Jason McGee told Computerworld Australia recently, containers have become all but the de facto means of building software, but the mass transition to production environments has only just begun. (Disclosure: I provide content services for IBM.)
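The article doesn’t show what that orchestration looks like in practice, but a minimal Kubernetes Deployment manifest gives the flavor (the service name, image and health-check path here are hypothetical): the operator declares a desired state, and the orchestrator keeps the running containers in line with it.

    # Illustrative Kubernetes Deployment; the name, image and probe
    # path are hypothetical. Kubernetes keeps three replicas running
    # and restarts any container that fails its health check.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-service
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: orders
      template:
        metadata:
          labels:
            app: orders
        spec:
          containers:
          - name: orders
            image: registry.example.com/orders:1.0.2
            ports:
            - containerPort: 8080
            livenessProbe:
              httpGet:
                path: /healthz
                port: 8080
            resources:
              limits:
                memory: "256Mi"
                cpu: "500m"

That declarative loop, rather than hand-managed servers, is what makes container operations tractable at production scale.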
Oracle is also targeting production-ready containers with an enhanced runtime environment and a new security framework, says eWeek’s Sean Michael Kerner. The new Railcar runtime is written in the open source Rust programming language, which the company’s Vish Abrams contends provides tighter control over threads and memory in high-scale environments. In addition, a new security framework based on the open Smith project allows for more flexibility in production by stripping the container down to only its required processes and direct dependencies, and by limiting ownership and permissions to what its executable components actually need.
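Railcar’s own code isn’t reproduced here, but a few lines of ordinary Rust illustrate the property Abrams is pointing to: ownership rules let the compiler decide at build time which thread may touch which data, so a whole class of cross-thread races never reaches production.

    use std::thread;

    fn main() {
        let data = vec![1, 2, 3];

        // Ownership of `data` moves into the spawned thread; the compiler
        // rejects any later use of it on this thread, ruling out a data
        // race before the program ever runs.
        let handle = thread::spawn(move || {
            println!("worker sees {:?}", data);
        });

        // println!("{:?}", data); // would not compile: `data` was moved

        handle.join().expect("worker thread panicked");
    }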
And some IT vendors are starting to offer their traditional IT operations products in container form, providing a quick way to deploy new management functions across distributed cloud architectures. HPE recently took this tack with its IT Operations Management (ITOM) suite, which VP Tom Goguen says will improve both the flexibility and reliability of the data ecosystem. Newly containerized tools include Hybrid Cloud Management, Data Center Automation, Operations Bridge and the IT Service Management Automation stack – each of which is now available in a Docker container with native lifecycle management functions and production workload deployment features.
Without doubt, a containerized production environment will be more complicated than a containerized development environment. As resources become more fungible across increasingly disparate infrastructure, the enterprise will need to continually devise new ways to ensure stable and secure operations over hundreds, if not thousands, of nodes.
It will probably be a while before the enterprise perfects the transition from development to production, but once in place, this new container ecosystem will mark the beginning of an entirely new generation of enterprise data services.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.