It’s pretty certain that much of the enterprise workload, and virtually all of the Internet of Things, will run on containers before long, which means the basic configuration of these little virtualized environments will have to be as interoperable as possible.
And that’s a problem, because if there is one thing the IT industry should know by now, it is that true interoperability across multi-vendor platforms is exceedingly rare.
Nevertheless, the container development community took a bold step forward in this effort earlier this month with the release of the Open Container Initiative (OCI) 1.0 standard. It lays the foundation for a Linux-based runtime and image format that should provide basic interoperability for any container that conforms to the spec. Instead of dealing with potential silos of containerized workloads across the data center and the cloud, organizations should be able to mix and match a wide range of services and microservices, hopefully spawning an entirely new, largely autonomous data ecosystem that pushes performance and productivity to new levels.
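To give a sense of what the runtime side of the spec actually standardizes: an OCI runtime bundle pairs a root filesystem with a config.json that any compliant runtime can consume. A minimal sketch follows; the particular values (the rootfs path, hostname, and command) are illustrative placeholders, not drawn from any specific product:

```json
{
  "ociVersion": "1.0.0",
  "root": {
    "path": "rootfs",
    "readonly": true
  },
  "process": {
    "terminal": false,
    "user": { "uid": 0, "gid": 0 },
    "args": [ "sh" ],
    "cwd": "/"
  },
  "hostname": "demo"
}
```

Because this file, not a vendor-specific format, defines how the container starts, the same bundle should behave the same way under any runtime that implements the spec.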
The most significant aspect of OCI 1.0 isn’t the actual standard but the fact that so many container vendors are in alignment, says InformationWeek’s Charles Babcock. It has gained support not only from rival container developers like Docker and CoreOS, but also from orchestration platforms like Kubernetes, vendors like Red Hat, and cloud and data center platform providers like Intel, IBM and Google. Even VMware, which could lose some clout as containers supplant virtual machines for some applications, is on board.
The name of the game here is portability, according to Enterprise Tech’s George Leopold. With a common runtime and image format, the basic lifecycle of the container can now be maintained across multiple implementations: as long as the environment it encounters is compliant, the enterprise should be able to release the container into the wild and watch it do its thing. There are limits, however. The OCI is a Linux specification, so even though Microsoft has backed the effort, an OCI-compliant implementation it creates for Windows will not run on Linux servers, even though it uses the same runtime and image format.
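That Windows/Linux caveat is visible in the image format itself: an OCI image index can list one manifest per target platform, and a runtime selects the entry matching its own OS and architecture. A hedged sketch of such an index; the digests and sizes are placeholders, not real values:

```json
{
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:<linux-manifest-digest>",
      "size": 7143,
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:<windows-manifest-digest>",
      "size": 7682,
      "platform": { "architecture": "amd64", "os": "windows" }
    }
  ]
}
```

Both entries conform to the same spec, but a Linux host will only ever resolve and run the Linux manifest, which is exactly the limit Leopold describes.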
And while the release of the spec is a big step forward, there is still a lot of work to do in the areas of certification and compliance enforcement, says The Register’s Thomas Claburn. At the moment, the standard offers enough assurance to early adopters that the container environment will be stable enough to warrant additional investment, but it will be a while before we start to see (or should start to see, anyway) “OCI-certified” or “OCI-compliant” labels appearing on actual products. In the words of Docker’s David Messina, OCI 1.0 is akin to TCP/IP or HTML5 in that it provides a base-level specification on which commercial platforms will be built.
Interoperability has always been a spectrum rather than a hard line, so there was no reason to believe it would be any different for containers. But with a widely accepted open platform in hand, the technology is starting off in a better position than most.
In a way, this is a sign of the times: it is virtually unheard of for an organization to rely solely on its own infrastructure anymore, and containers were designed from the ground up to spread workloads across a highly diverse and dynamic ecosystem.
It seems likely, then, that going forward, the container development community will exhibit greater cooperation than vendors have in the past – but only because they have to.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.