
    Containers: Strengthening IoT Infrastructures

    There’s a general consensus that containers are poised to become the dominant artifact employed to run software on Internet of Things (IoT) platforms. As always, the devil in achieving that goal any time soon is in the details.

    Understanding Containers

    Containers enable developers to encapsulate software in a lightweight artifact that can run anywhere. That’s crucial because IoT environments are made up of a broad range of hardware running a wide variety of operating systems. Attempting to package software uniquely for each platform is cost prohibitive.

    However, getting containers to run on an IoT platform is one thing; managing fleets of IoT platforms loaded with containerized applications is quite another. At the most basic level, IT teams need to appreciate that IoT platforms come in all shapes and sizes, ranging from a hyperconverged infrastructure (HCI) platform deployed at the network edge to a device that requires a special type of tiny container because it runs on a platform with limited memory.

    The good news is that the cost of hardware continues to decline, thanks to platforms such as inexpensive Raspberry Pi boards that can run any type of container. That is critical because IoT applications will most often be built using Docker Engine or CRI-O, a lightweight runtime optimized for Kubernetes clusters that implements the Kubernetes Container Runtime Interface (CRI) using Open Container Initiative (OCI)-compatible runtimes. Additionally, larger IoT platforms will need the container orchestration capabilities enabled by Kubernetes. The two distributions of Kubernetes most likely to be employed on IoT platforms are K3s and MicroK8s.
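    To make those tradeoffs concrete, here is a minimal, purely illustrative Python sketch that maps a device’s memory budget to a plausible runtime and orchestrator pairing. The thresholds, names, and pairings are assumptions made for illustration, not vendor sizing guidance.

```python
# Illustrative sketch only: the memory thresholds below are assumptions,
# not official requirements for Docker, CRI-O, K3s, or MicroK8s.

def suggest_stack(memory_mb: int) -> dict:
    """Return a hypothetical container stack for a device's memory budget."""
    if memory_mb < 64:
        # Too constrained for Docker or CRI-O; a "tiny" container format,
        # such as the one Nubix proposes, would be needed.
        return {"runtime": "tiny-container", "orchestrator": None}
    if memory_mb < 1024:
        # Small edge device: a lightweight Kubernetes distribution such as
        # K3s or MicroK8s, paired with CRI-O, keeps overhead low.
        return {"runtime": "cri-o", "orchestrator": "k3s-or-microk8s"}
    # Larger edge/HCI platforms can run a full container stack.
    return {"runtime": "docker-or-cri-o", "orchestrator": "kubernetes"}

print(suggest_stack(32))
print(suggest_stack(512))
print(suggest_stack(8192))
```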

    Also read: APM Platforms are Driving Digital Business Transformation

    K3s and MicroK8s

    Originally developed by Rancher Labs, which has since been acquired by SUSE, K3s is now being advanced under the Cloud Native Computing Foundation (CNCF), which also oversees development of the larger parent Kubernetes project, also known as K8s.

    MicroK8s, meanwhile, is being advanced by Canonical, which has integrated this distribution of Kubernetes with other tools for packaging applications such as snap. The most recent release reduced the MicroK8s memory footprint by 32.5%. Canonical envisions organizations employing its snap tool to package, deliver, and update application binaries to platforms running MicroK8s, says Alex Chalkias, product manager at Canonical.

    However, as critical as Docker and CRI-O may become in IoT environments, there are still going to be instances where IoT hardware is too memory constrained to run either of those types of containers. Nubix is making a case for a class of “tiny” containers that are 100 times smaller than a Docker container.

    Also read: Red Hat Looks to BU to Advance Hybrid Cloud Research

    Managing Containers

    Regardless of the type of container employed, managing a mix of IoT platforms based on bare-metal hardware and virtual machines is going to be a major challenge. In fact, there is still much work to be done when it comes to managing Kubernetes clusters at distributed scale, says Jason McGee, vice president and CTO for IBM Cloud. “There are some real challenges managing Kubernetes in those environments,” he says.

    The CNCF has launched KubeEdge, an incubating project building a platform that extends Kubernetes to support networking, application deployment, and metadata synchronization between cloud and edge computing environments.

    The Eclipse Foundation, meanwhile, in collaboration with Red Hat and Edgeworx, is trying to drive adoption of ioFog, an open source project that provides agent software and a controller that make it easier to deploy container-based microservices on any edge computing platform.

    “Putting containers at the edge is just the starting point,” says Kilton Hopkins, project lead for ioFog.

    It’s also not practical to push application code from a central repository simultaneously across a fleet of distributed IoT platforms. Vendors across the software development spectrum are racing to build automation frameworks that will make it feasible for individual IoT platforms running Kubernetes to pull an image from a central repository whenever needed. 
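    The pull model described above can be sketched in a few lines of Python: each edge node compares the digest of its locally deployed image with the digest the central registry currently advertises, and pulls only when they differ. The registry here is simulated with a dictionary; a real agent would query an OCI registry’s manifest endpoint instead, and the names below are hypothetical.

```python
# Minimal sketch of pull-based image updates for edge nodes.
# The "registry" dict stands in for a real OCI image registry.

class EdgeNode:
    def __init__(self, name: str):
        self.name = name
        self.local_digest = None  # nothing deployed yet

    def reconcile(self, registry: dict, image: str) -> bool:
        """Pull the image only if the remote digest has changed."""
        remote = registry[image]
        if remote != self.local_digest:
            self.local_digest = remote  # stands in for the actual pull
            return True   # pulled a new image
        return False      # already up to date

registry = {"sensor-app": "sha256:aaa"}
node = EdgeNode("gateway-01")

assert node.reconcile(registry, "sensor-app") is True   # first pull
assert node.reconcile(registry, "sensor-app") is False  # no change
registry["sensor-app"] = "sha256:bbb"                   # new release published
assert node.reconcile(registry, "sensor-app") is True   # node pulls the update
```

    Because each node decides when to pull, a fleet of thousands of devices never has to receive a simultaneous push from the central repository.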

    “A golden dedicated image isn’t going to work,” says Keith Basil, vice president of cloud native infrastructure for SUSE.

    Of course, in the rush to innovate it’s worth remembering that a lot of work has already been done on deploying software to edge computing platforms, says Alex Ellis, founder of OpenFaaS, an open source serverless computing framework. In some cases, developers will be better off employing functions within the context of an event-driven architecture, Ellis notes. “There’s a lot of prior art,” he says.
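    As a sketch of the function-based alternative Ellis describes: instead of deploying a long-running container, an edge event such as a sensor reading triggers a short-lived function. The `def handle(req)` signature below follows the shape of OpenFaaS’s classic Python template, but the payload format, device names, and alert threshold are illustrative assumptions.

```python
import json

# Hypothetical event payload and threshold; the handler signature mirrors
# the OpenFaaS classic Python template (a string request in, string out).
def handle(req: str) -> str:
    """Respond to a single sensor-reading event."""
    reading = json.loads(req)
    alert = reading.get("temperature_c", 0) > 75  # illustrative threshold
    return json.dumps({"device": reading.get("device"), "alert": alert})

# Local invocation, standing in for the platform's event gateway:
print(handle('{"device": "pump-7", "temperature_c": 81}'))
```

    The appeal of this model at the edge is that nothing runs between events, which suits devices with little memory to spare.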

    No matter how software comes to IoT platforms specifically and edge computing platforms in general, it’s clear much work remains before cloud-native technologies such as containers and Kubernetes clusters are routinely deployed. The challenge, and the opportunity, is to start laying the foundation for managing a highly distributed IoT application environment based on containerized software at unprecedented scale.

    Read next: Is Serverless Computing Ready to Go Mainstream?   

    Mike Vizard
    Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.
