Containers are quickly moving from the test lab to production environments, and many enterprise executives are facing the same problem that has accompanied virtually every other technology advancement through the ages: Changes to one piece of the data environment affect the performance of others, and not always in a good way.
In this case, containerized workloads, while highly effective for emerging digital transformation initiatives, may have difficulty integrating with non-containerized workloads, or at the very least, the non-container-optimized portions of legacy infrastructure. While this is likely to be a temporary issue in the age of software-defined infrastructure, the fact remains that containers can produce some short-term headaches if not provisioned and deployed properly.
One issue is storage, or more specifically, persistent storage. According to a recent study by ClusterHQ, integration with persistent storage infrastructure is now the top barrier to container implementation among early adopters. With issues like security and isolation already behind them, the leading container pioneers are now trying to bring storage performance up to match container speed and flexibility. This is trickier than it sounds because container lifespans are all over the map, with some containers lasting months and others disappearing within seconds. The obvious solution is to deploy faster storage media in the data center, but that takes time and money, and IT is under pressure to push containers into production-level environments now.
This is where companies like Portworx hope to make a difference. The firm has a new container management solution called PX-Enterprise that promises to meet the needs of containerized app development by distributing persistent storage at the node level. The system essentially converts commodity x86 infrastructure into a converged storage node, which can be scaled across multiple clusters and automatically provisioned by any Docker scheduler. In this way, you maintain scalable, persistent storage while preserving key storage policies, such as IOPS and availability, at the container level. The system also provides highly granular container snapshots and replication.
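To make the idea concrete, here is a rough sketch of what node-level persistent storage looks like from the Docker side: a stateful service requests a replicated volume through a vendor's volume plugin, and the scheduler can reattach that volume wherever the container lands. The driver name (`pxd`) and the option keys below are illustrative assumptions about how such a plugin is configured, not confirmed Portworx syntax:

```yaml
# docker-compose sketch: a database backed by a plugin-provisioned,
# replicated volume instead of node-local disk (hypothetical options)
version: "2"
services:
  db:
    image: postgres:9.6
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: pxd             # assumed vendor volume-plugin name
    driver_opts:
      size: "20"            # assumed: volume size in GiB
      repl: "2"             # assumed: replicas across storage nodes
      io_priority: "high"   # assumed: IOPS/performance class
```

The point of a layout like this is that the storage policy travels with the volume definition: if the container is rescheduled to another node, the same replicated volume can be reattached rather than the database state being lost with the host.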
Containers also pose a management problem as they move beyond the single-tenant environments of the test bed and into the multi-tenant world of full production. A California developer called ContainerX is offering what it calls the world's first multi-tenant Container as a Service (CaaS) solution, pulling end-to-end container management under a single pane of glass. The system is built around two patented technologies, Elastic Container Clusters and Container Pools, which enable auto-scaling of CPU, memory and other resources across bare-metal, virtual and cloud-based infrastructure. It also offers VM-like control functionality for Docker management, so DevOps teams can get up to speed quickly, and it integrates into legacy environments with zero-day setup, seamless day-N operations and other enterprise-grade management tools.
Container adoption is also difficult to manage because no one is really sure how disruptive the technology will be, says InfoWorld's Serdar Yegulalp. A company called New Relic has been tracking container deployments for over a year, and while overall usage has nearly doubled, implementation patterns are puzzling, to say the least. For one thing, long-running containers are lasting longer, nearly two weeks on average, while short-lived ones are getting shorter, dropping the average lifespan from 13 hours to just over 9. This suggests that most organizations are using containers for the build process but not for production, which raises the possibility that containers will act more as supplements to virtual machines than as replacements.
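The averaging effect behind those numbers is easy to reproduce: a handful of long-running service containers mixed with a large number of short-lived build containers pulls the fleet-wide mean lifespan down sharply. The fleet mix below is hypothetical, chosen purely to illustrate the arithmetic, and is not New Relic's data:

```python
# Hypothetical fleet mix showing how short-lived build containers
# drag down the average lifespan, even as long-lived ones get longer.

HOURS_PER_DAY = 24

# 10 long-running service containers lasting ~2 weeks each
long_lived = [14 * HOURS_PER_DAY] * 10

# 500 short-lived build containers lasting ~5 minutes each
short_lived = [5 / 60] * 500

fleet = long_lived + short_lived
avg_hours = sum(fleet) / len(fleet)
print(f"average lifespan: {avg_hours:.1f} hours")
```

Even though the long-lived containers last two weeks, the sheer count of five-minute build containers dominates the mean, which is why a growing build-pipeline workload can push the fleet average down while production containers are simultaneously living longer.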
Ultimately, the entire enterprise will have to pivot toward container-based workloads, and this will have ramifications not just for technology but for processes, business models and even the enterprise organizational structure. This means container management will have to be a top concern going forward, before self-service capabilities and automation start using the technology in ways that disrupt existing operations.
The ripple effect of one container is like a tiny pebble tossed in a pond, but imagine the pond after it has been inundated with 100,000 pebbles.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.