Kubernetes is an open source container management platform designed to run enterprise-class, cloud-enabled and web-scalable IT workloads. It builds on the foundation Google laid over 15 years of running containerized applications in production.

Though their popularity is a mostly recent trend, the concept of containers has existed for over a decade. Mainstream Unix-based operating systems (OS), such as Solaris, FreeBSD and Linux, have long had built-in support for containers, but it was Docker that truly democratized them by making containers manageable and accessible to both development and IT operations teams. Docker demonstrated that containerization can drive the scalability and portability of applications. Developers and IT operations teams are turning to containers to package code and dependencies written in a variety of languages. Containers also play a crucial role in DevOps processes, having become an integral part of build automation and continuous integration/continuous deployment (CI/CD) pipelines.

The interest in containers led to the formation of the Open Container Initiative (OCI) to define standards for container runtimes and image formats. The industry is also witnessing various implementations of containers, such as LXD by Canonical, rkt by CoreOS, Windows Containers by Microsoft, CRI-O, which is being reviewed through the Kubernetes Incubator, and vSphere Integrated Containers by VMware.

While core implementations center around the life cycle of individual containers, production applications typically deal with workloads that have dozens of containers running across multiple hosts. The complex architecture dealing with multiple hosts and containers running in production environments demands a new set of management tools. Some of the popular solutions include Docker Datacenter, Kubernetes, and Mesosphere DC/OS.

Container orchestration has influenced traditional Platform as a Service (PaaS) architecture by providing an open and efficient model for packaging, deployment, isolation, service discovery, scaling and rolling upgrades. Most mainstream PaaS solutions have embraced containers, and there are new PaaS implementations that are built on top of container orchestration and management platforms. Customers have the choice of either deploying core container orchestration tools that are more aligned with IT operations, or a PaaS implementation that targets developers.

Figure 1: High-level architecture of a container orchestration engine.

The key takeaway is that container orchestration has impacted every aspect of modern software development and deployment. Kubernetes will play a crucial role in driving the adoption of containers in both enterprises and emerging startups.

Kubernetes Architecture

Like most distributed computing platforms, a Kubernetes cluster consists of at least one master and multiple compute nodes. The master is responsible for exposing the application program interface (API), scheduling deployments and managing the overall cluster. Each node runs a container runtime, such as Docker or rkt, along with an agent that communicates with the master. The node also runs additional components for logging, monitoring, service discovery and optional add-ons. Nodes are the workhorses of a Kubernetes cluster: they expose compute, networking and storage resources to applications. Nodes can be virtual machines (VMs) running in a cloud or bare metal servers running within the data center.

Figure 2: Kubernetes breaks down into multiple architectural components.

A pod is a collection of one or more containers and serves as Kubernetes’ core unit of management. Pods act as the logical boundary for containers sharing the same context and resources. This grouping mechanism makes up for the differences between containerization and virtualization by making it possible to run multiple dependent processes together. At runtime, pods can be scaled by creating replica sets, which ensure that the deployment always runs the desired number of pods.
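To make the pod abstraction concrete, here is a minimal sketch of a pod manifest with two cooperating containers that share an ephemeral volume; the names and images are illustrative, not from the original article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-agent   # hypothetical pod name
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.21        # illustrative image
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-agent          # illustrative sidecar that tails the shared logs
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}             # ephemeral volume shared by both containers
```

Both containers are scheduled together and share the same network namespace and volumes, which is what makes the pod the logical boundary for dependent processes.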

Replica sets deliver the required scale and availability by maintaining a pre-defined set of pods at all times. A single pod or a replica set can be exposed to internal or external consumers via services. Services enable the discovery of pods by associating a set of pods with specific criteria: pods are associated with services through key-value pairs called labels and selectors. Any new pod whose labels match the selector is automatically discovered by the service. This architecture provides a flexible, loosely coupled mechanism for service discovery.
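As a hedged sketch of this label-and-selector wiring, the manifests below use the apps/v1 ReplicaSet API (older clusters exposed replica sets under extensions/v1beta1); the names, labels and image are illustrative:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
spec:
  replicas: 3                # desired number of pods, maintained at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web             # every pod created from this template carries the label
    spec:
      containers:
      - name: web
        image: nginx:1.21
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                 # any pod with this label is discovered by the service
  ports:
  - port: 80
    targetPort: 80
```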

Figure 3: The master is responsible for exposing the API, scheduling the deployments and managing the overall cluster.

Definitions of Kubernetes objects, such as pods, replica sets and services, are submitted to the master. Based on the defined requirements and the availability of resources, the master schedules the pod on a specific node. The node pulls the images from the container image registry and coordinates with the local container runtime to launch the container.
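For illustration, the resource requests in a pod spec are the "defined requirements" the scheduler weighs against node capacity; the name, image and values below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker         # hypothetical workload
spec:
  containers:
  - name: worker
    image: busybox:1.36      # illustrative image, pulled from the registry by the node
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        cpu: 500m            # the scheduler only places the pod on a node with this much spare CPU
        memory: 256Mi
```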

etcd is an open source, distributed key-value database from CoreOS, which acts as the single source of truth (SSOT) for all components of the Kubernetes cluster. The master queries etcd to retrieve various parameters of the state of the nodes, pods and containers.

This architecture makes Kubernetes modular and scalable by creating an abstraction between the applications and the underlying infrastructure.

Figure 4: Nodes expose compute, networking and storage resources to applications.

Key Design Principles

Kubernetes is designed on the principles of scalability, availability, security and portability. It optimizes the cost of infrastructure by efficiently distributing the workload across available resources. This section will highlight some of the key attributes of Kubernetes.

Workload Scalability

Applications deployed in Kubernetes are packaged as microservices. These microservices are composed of multiple containers grouped as pods. Each container is designed to perform only one task. Pods can be composed of stateless containers or stateful containers. Stateless pods can easily be scaled on-demand or through dynamic auto-scaling. Kubernetes 1.4 supports horizontal pod auto-scaling, which automatically scales the number of pods in a replication controller based on CPU utilization. Future versions will support custom metrics for defining the auto-scale rules and thresholds.
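A minimal sketch of such an autoscaler using the stable autoscaling/v1 API; the target deployment name and thresholds are hypothetical, and API versions vary across Kubernetes releases:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                          # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # add pods when average CPU exceeds 70%
```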

Hosted Kubernetes running on Google Cloud also supports cluster auto-scaling: when pods can no longer be scheduled on the available nodes, Kubernetes coordinates with the underlying infrastructure to add nodes to the cluster.


An application that is architected on microservices, packaged as containers and deployed as pods can take advantage of the extreme scaling capabilities of Kubernetes. Though this is mostly applicable to stateless pods, Kubernetes is adding support for persistent workloads, such as NoSQL databases and relational database management systems (RDBMS), through pet sets; this will enable scaling stateful applications such as Cassandra clusters and MongoDB replica sets. This capability will bring elastic, stateless web tiers and persistent, stateful databases together to run on the same infrastructure.
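Pet sets were renamed StatefulSets in later Kubernetes releases; the sketch below uses the modern apps/v1 StatefulSet API to show the stable identity and per-replica storage this feature provides. Names, image and sizes are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra     # headless service that gives each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: cassandra:3.11          # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/lib/cassandra
  volumeClaimTemplates:                # each replica gets its own persistent volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```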

High Availability

Contemporary workloads demand availability at both the infrastructure and application levels. In clusters at scale, everything is prone to failure, which makes high availability for production workloads strictly necessary. While most container orchestration engines and PaaS offerings deliver application availability, Kubernetes is designed to tackle the availability of both infrastructure and applications.

On the application front, Kubernetes ensures high availability by means of replica sets, replication controllers and pet sets. Operators can declare the minimum number of pods that need to run at any given point in time. If a container or pod crashes due to an error, the declarative policy can bring the deployment back to the desired configuration. Stateful workloads can be configured for high availability through pet sets.

For infrastructure availability, Kubernetes supports a wide range of storage backends: distributed file systems such as network file system (NFS) and GlusterFS; block storage devices such as Amazon Elastic Block Store (EBS) and Google Compute Engine persistent disks; and specialized container storage plugins such as Flocker. Adding a reliable, available storage layer to Kubernetes ensures high availability of stateful workloads.
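As an example, an NFS-backed persistent volume might look like the following sketch; the server address and export path are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany            # NFS allows many pods to mount the volume read-write
  nfs:
    server: 10.0.0.5         # hypothetical NFS server
    path: /exports/data      # hypothetical export path
```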

Each component of a Kubernetes cluster, including etcd, the API server and the nodes, can be configured for high availability. Applications can take advantage of load balancers and health checks to ensure availability.

Security

Security in Kubernetes is configured at multiple levels. The API endpoints are secured through transport layer security (TLS), and users are authenticated using the most secure mechanism available to the cluster. Kubernetes clusters have two categories of users: service accounts managed directly by Kubernetes, and normal users assumed to be managed by an independent service. Service accounts are created automatically by the API server. Every operation that manages a process running within the cluster must be initiated by an authenticated user; this mechanism ensures the security of the cluster.
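A minimal sketch of a service account and a pod that runs under it; the names are illustrative:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-client           # hypothetical account for in-cluster API access
---
apiVersion: v1
kind: Pod
metadata:
  name: api-consumer
spec:
  serviceAccountName: app-client   # the pod authenticates to the API server as this account
  containers:
  - name: app
    image: nginx:1.21
```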

Applications deployed within a Kubernetes cluster can leverage the concept of secrets to securely access sensitive data. A secret is a Kubernetes object that contains a small amount of sensitive data, such as a password, token or key, and it reduces the risk of accidental exposure of that data. Values such as usernames and passwords are encoded in base64 before being stored within the cluster; note that base64 is an encoding, not encryption. Pods can access a secret at runtime through mounted volumes or environment variables. The caveat is that the secret is available to all users of the same cluster namespace.
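A hedged sketch of a secret and a pod that consumes it through an environment variable; the names and values are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # hypothetical name
type: Opaque
data:
  username: YWRtaW4=         # base64 of "admin"; encoding, not encryption
  password: czNjcjN0         # base64 of "s3cr3t"
---
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: DB_USERNAME      # injected from the secret at runtime
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
```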

To allow or restrict network traffic to pods, network policies can be applied to the deployment. A network policy in Kubernetes is a specification of how groups of pods are allowed to communicate with each other and with other network endpoints. This is useful for isolating pods in a multi-tier deployment that shouldn’t be exposed to other applications.
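The sketch below shows a network policy that only lets web-tier pods reach a database tier; the labels and port are illustrative, and older clusters exposed network policies under extensions/v1beta1 rather than networking.k8s.io/v1:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-web
spec:
  podSelector:
    matchLabels:
      tier: database         # the pods this policy protects
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: web          # only web-tier pods may connect
    ports:
    - protocol: TCP
      port: 5432             # illustrative PostgreSQL port
```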

Portability

Kubernetes is designed to offer freedom of choice when choosing operating systems, container runtimes, processor architectures, cloud platforms and PaaS.

A Kubernetes cluster can be configured on mainstream Linux distributions, including CentOS, CoreOS, Debian, Fedora, Red Hat Linux and Ubuntu. It can be deployed to run on local development machines; cloud platforms such as AWS, Azure and Google Cloud; virtualization environments based on KVM, vSphere and libvirt; and bare metal. Users can launch containers that run on Docker or rkt runtimes, and new container runtimes can be accommodated in the future.

Through federation, it’s also possible to mix and match clusters running across multiple cloud providers and on-premises. This brings hybrid cloud capabilities to containerized workloads. Customers can seamlessly move workloads from one deployment target to another. We will discuss the hybrid architecture in the next section.

InApps is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.