Containers have captured the imagination of the enterprise in recent months, and no one is more enthusiastic than VMware. The virtualization company, in fact, led some of the initial research into container technology way back when its virtual machine was “the next big thing” to hit the enterprise, and it is now tickled pink that companies like Docker are able to leverage the idea into a potentially lucrative revenue stream.
All of this raises the question: what is VMware thinking here? Because if the company cannot successfully incorporate containers into its virtualization and cloud platforms, it runs the very real risk of losing its “first among equals” status in the emerging enterprise data environment.
VMware’s stated position is that containers are great, but containers couched within a broader virtualization stack, namely its own, are better. The chief advantage is that VM-supported containers afford the enterprise a high degree of flexibility when it comes to assigning workloads to the appropriate resource architecture. The problem is that others are vying for control of the container environment as well, and it isn’t at all clear whether virtualization hurts or helps the enterprise when it comes to leveraging containers for next-generation data and application loads.
CoreOS, for instance, recently unveiled its Tectonic platform, which leverages the Linux OS and Google’s Kubernetes management system to provide a scale-out solution for companies that do not necessarily need to take it to hyperscale levels just yet. The package comprises a server OS, container networking and runtime tools, and a browser-based cluster management system designed to guide workflows through the environment. The endgame is to deliver services at scale by simplifying the container deployment process while maintaining consistent security and governance.
For VMware, this all makes perfect sense, and indeed the company has already added support for CoreOS in the vSphere and vCloud Air platforms. When ServerWatch’s Paul Rubens asked key VMware executives why, the answer was that CoreOS management combined with the open-VM capabilities of VMware Tools offers a powerful combination for overseeing environments that still run atop VMware’s virtualization layer. By running containers within the virtual machine, enterprises gain a high degree of performance and isolation, plus broad third-party development support for advanced functions like virtual networking and software-defined storage. In short, as long as the enterprise stack sits on top of a VMware virtual machine, the company is happy to play ball with whoever comes up with an innovative management solution.
This is all well and good as long as the enterprise continues to concern itself with infrastructure. But what happens when organizations follow the plan currently laid out by the cloud industry and adopt a more application/services-centric approach to IT? In this case, says container developer Dinesh Subhraveti, traditional virtualization actually gets in the way because it requires the app to sit atop a guest OS that resides within the virtual machine in order to scale. With a container-based solution, the apps fit directly within the container, removing a layer of complexity that operators and/or automation systems would otherwise have to deal with in order to implement a highly dynamic application environment.
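Subhraveti’s layering argument can be sketched in the abstract. The toy model below (the layer names and counts are illustrative, not drawn from the article) simply contrasts how many layers sit between the hardware and the application under each approach:

```python
# Illustrative sketch only: the stack an operator manages when an app
# runs in a container inside a VM, versus directly in a container.
vm_based = ["hardware", "hypervisor", "guest OS", "container runtime", "app"]
container_native = ["hardware", "host OS / container runtime", "app"]

def layers_to_manage(stack):
    """Count the layers between the hardware and the application itself."""
    return len(stack) - 2  # exclude the hardware and the app

print(layers_to_manage(vm_based))         # 3
print(layers_to_manage(container_native)) # 1
```

The extra guest-OS and hypervisor layers are precisely what operators or automation systems must provision, patch and scale in the VM-based model.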
VMware is undoubtedly aware of this because it is already taking steps to accommodate cloud-native applications within vSphere. The company’s Project Lightwave and Project Photon are both designed to “cradle” cloud-native apps from Docker, Pivotal and others, according to eWeek’s Chris Preimesberger, which should go a long way toward supporting the app-centric enterprise. Still, the rationale is to provide greater functionality and infrastructure elasticity by layering OS, security and other functions between the container and the underlying virtual infrastructure.
Ironically, this harks back to the same reaction that the leading server manufacturers gave when confronted by virtualization: Sure, you can deploy virtualization on your own, but wouldn’t it be better if it were integrated into the same heavy hardware that you know and love? This strategy worked for a while, but as soon as the enterprise reached a certain comfort level with virtualization, server sales started to tank.
To be sure, the virtual machine is a much more flexible and malleable creature than a physical server and can undoubtedly adapt itself to all kinds of container-based constructs within the enterprise data environment. But ultimately, this raises the question: Will VMware face the same kind of rough transition that hardware vendors encountered once it becomes obvious to the enterprise that the virtual layer is no longer necessary for a scale-out, application-centric data ecosystem?
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.