Multiple Approaches to Container Scalability

Arthur Cole

Amazing Growth in Data Requires Innovative New Solutions

Few enterprises have made serious inroads into the emerging field of container virtualization, but already there is growing concern that the technology might not be as effective as advertised in supporting advanced applications and microservices – at least not yet.

At the moment, the big issue is scalability. Docker, the leading container developer, has made no secret of its desire to build greater scalability into its platform, primarily by enabling more efficient networking among large numbers of containers. To that end, the company has offered a number of orchestration and management tools through joint development projects with companies like Red Hat, Amazon and IBM.

The company has also worked closely with Google and its Kubernetes container management system, but as The Platform’s Timothy Prickett Morgan points out, even Kubernetes is coming up short on the scalability meter, at least by Google standards. A typical Google cluster, after all, houses about 100,000 machines that are overseen by the company’s Borg controller, which itself can scale upwards of 10,000 nodes. Kubernetes, meanwhile, tops out at about 100 nodes with perhaps 30 container pods per node, which is barely large enough for a medium-sized enterprise, let alone a large firm or cloud provider. Google, in fact, may prefer it this way so as not to give potential rivals a ready-made solution to achieve Google scale.
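To put those numbers in context, scaling in v1-era Kubernetes is declarative: you state a desired replica count and the system schedules that many pods across the available nodes. A minimal sketch of a replication controller follows; the image name, labels and replica count are hypothetical placeholders, not from the article:

```yaml
# Illustrative ReplicationController spec (Kubernetes v1 API).
# Image name and labels are placeholders for the sake of example.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 30        # roughly one node's worth at the ~30-pods-per-node ceiling cited above
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0
        ports:
        - containerPort: 80
```

At roughly 100 nodes and 30 pods per node, that puts the practical ceiling near 3,000 pods per cluster, which is the gap the article is pointing at relative to Borg-scale deployments.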


Still, enterprises looking to deploy containers will want scale above all else; otherwise, why bother with containers at all? To that end, a number of third-party developers are crafting their own solutions. Nexenta, for one, recently added container support to its NexentaEdge software-defined storage solution, providing a means to leverage containers for cloud-native applications. As emerging stateful cloud applications and microservices start to address enterprise-class workloads, the need for integrated persistent storage is growing. Nexenta says it can satisfy this demand while maintaining efficient resource consumption, providing seamless storage integration even as the number of containers under management increases.

Meanwhile, a company called Univa has added Docker support to its Grid Engine workload and resource manager. This should allow the enterprise not only to manage containers at scale but also to blend them into existing workloads across heterogeneous application and infrastructure environments. Grid Engine handles the scheduling, resource allocation, prioritization and other tasks required to bring containers out of the test bed and into production. As a multi-infrastructure, multi-OS platform to begin with, Grid Engine has the advantage of scaling thousands of applications and application frameworks across disparate resources, enabling the enterprise to scale its container environment to the limits of available infrastructure.

At the same time, Mesosphere is looking to address container scale by consolidating data center functions within its overarching Datacenter Operating System (DCOS). The company recently added the Marathon initialization and control system that supports Docker across clustered deployments. The system incorporates Kubernetes for host management, but also adds a number of home-grown features like resource and configuration management to balance container size and other parameters against available resources. This, in turn, allows the container environment to scale across tens of thousands of nodes. As part of the Apache Mesos framework, the system is designed to support Big Data, IoT and other large workloads.
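As a sketch of how Marathon handles this (the image name and resource figures below are hypothetical, not from the article): an application is described in a declarative JSON definition posted to Marathon's REST API, and scaling across the Mesos cluster is simply a matter of raising the instances count:

```json
{
  "id": "/web",
  "instances": 50,
  "cpus": 0.5,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "example/web:1.0",
      "network": "BRIDGE",
      "portMappings": [ { "containerPort": 80, "hostPort": 0 } ]
    }
  }
}
```

Posting this to the /v2/apps endpoint asks Mesos to place the 50 instances wherever resources are available, which is how the framework spreads container workloads across very large node counts.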

They say necessity is the mother of invention, and in this case the need for scale in containerized environments is paramount. Companies like Docker were undoubtedly anxious to get their alternative virtualization solution to market quickly, but in doing so they failed to address a key requirement of modern data architectures: Everything needs to scale these days or it is DOA.

Docker is working on the problem now, but it remains to be seen whether the solution should reside on the container level or elsewhere in the stack.

Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.


