Kubernetes, often abbreviated as “K8s”, orchestrates containerized applications to run on a cluster of hosts. The K8s system automates the deployment and management of cloud native applications on on-premises infrastructure or public cloud platforms. It distributes application workloads across a Kubernetes cluster and handles dynamic container networking. Kubernetes also allocates storage and persistent volumes to running containers, provides automatic scaling, and works continuously to maintain the desired state of applications, giving them resiliency.
Kubernetes has many features that help orchestrate containers across multiple hosts, automate the management of K8s clusters, and improve infrastructure utilization. Important features include:
- Auto-scaling. Automatically scale containerized applications and their resources up or down based on usage
- Lifecycle management. Automate deployments and updates with the ability to:
  - Roll back to previous versions
  - Pause and resume a deployment
- Declarative model. Declare the desired state, and K8s works in the background to maintain that state and recover from any failures
- Resilience and self-healing. Automatic placement, restart, replication, and scaling give applications self-healing capabilities
- Persistent storage. Ability to mount and add storage dynamically
- Load balancing. Kubernetes supports a variety of internal and external load balancing options to address diverse needs
- DevSecOps support. DevSecOps integrates security throughout the container lifecycle, simplifies and automates container operations across clouds, and enables teams to deliver secure, high-quality software more quickly. Combining DevSecOps practices with Kubernetes improves developer productivity.
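Several of these features come together in a Deployment, the standard way to express the declarative model. This minimal sketch (the name `web` and the `nginx` image are illustrative placeholders) declares a desired state of three replicas, which Kubernetes then works to maintain:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # desired state: three Pod replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # placeholder image
          ports:
            - containerPort: 80
```

If a Pod crashes or a node fails, the Deployment controller restores the declared replica count, and `kubectl rollout undo deployment/web` returns to the previous version, matching the lifecycle-management feature above.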
Containers encapsulate an application in a form that’s portable and easy to deploy. The Kubernetes architecture is designed to run containerized applications. A Kubernetes cluster consists of at least one control plane and at least one worker node (typically a physical or virtual server). The control plane has two main responsibilities. It exposes the Kubernetes API through the API server and manages the nodes that make up the cluster. The control plane makes decisions about cluster management and detects and responds to cluster events.
The smallest unit of execution for an application running in Kubernetes is the Kubernetes Pod, which consists of one or more containers. Kubernetes Pods run on worker nodes.
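A Pod can also be defined directly in a manifest, though in practice Pods are usually created indirectly through a higher-level object such as a Deployment. A minimal, illustrative example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod      # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25  # placeholder image
```

When this manifest is applied, the scheduler assigns the Pod to a worker node, and the kubelet on that node starts its container.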
It’s important to know the names and functions of the major K8s components that are part of the control plane or that execute on Kubernetes nodes.
The control plane has four primary components used to control communications, manage nodes and keep track of the state of a Kubernetes cluster.
- kube-apiserver. As its name suggests, the kube-apiserver exposes the Kubernetes API.
- etcd. A key-value store where all data relating to the Kubernetes cluster is stored.
- kube-scheduler. Watches for newly created Kubernetes Pods with no assigned node and selects a node for them to run on, based on resource requirements, policies, and affinity specifications.
- kube-controller-manager. Runs the controller functions of the control plane, which are all compiled into a single binary.
A K8s node has three major components:
- kubelet. An agent that runs on each node and makes sure the necessary containers are running in a Kubernetes Pod.
- kube-proxy. A network proxy that runs on each node in a cluster to maintain network rules and allow communication.
- Container runtime. The software responsible for running containers. Kubernetes supports any runtime that adheres to the Kubernetes CRI (Container Runtime Interface).
Additional terms to be aware of include:
- Kubernetes service. A Kubernetes service is a logical abstraction for a group of Kubernetes Pods which all perform the same function. Kubernetes services are assigned unique addresses which stay the same even as pod instances come and go.
- Controller. Controllers ensure that the actual running state of the Kubernetes cluster is as close as possible to the desired state.
- Operator. Kubernetes Operators allow you to encapsulate domain-specific knowledge for an application similar to a run book. By automating application-specific tasks, Operators allow you to more easily deploy and manage applications on K8s.
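As a sketch of the Service abstraction described above, the following manifest (names and ports are illustrative) gives a stable address to all Pods carrying the label `app: web`, regardless of how often individual Pod instances are replaced:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc        # illustrative name
spec:
  selector:
    app: web           # matches Pods labeled app=web
  ports:
    - port: 80         # stable Service port
      targetPort: 8080 # container port on the selected Pods
```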
The Kubernetes platform has become popular because it provides a number of important advantages:
- Portability. Containers are portable across a range of environments, from virtual environments to bare metal. Kubernetes is supported by all major public clouds; as a result, you can run containerized applications on K8s across many different environments.
- Integration and extensibility. Kubernetes is extensible to work with the solutions you already rely on, including logging, monitoring, and alerting services. The Kubernetes community is working on a variety of open source solutions complementary to Kubernetes, creating a rich and fast-growing ecosystem.
- Cost efficiency. Kubernetes' inherent resource optimization, automated scaling, and flexibility to run workloads where they provide the most value help keep IT spending under control.
- Scalability. Cloud native applications scale horizontally. Kubernetes uses “auto-scaling,” spinning up additional container instances and scaling out automatically in response to demand.
- API-based. The fundamental fabric of Kubernetes is its REST API. Everything in the Kubernetes environment can be controlled through programming.
- Simplified CI/CD. CI/CD is a DevOps practice that automates building, testing and deploying applications to production environments. Enterprises are integrating Kubernetes and CI/CD to create scalable CI/CD pipelines that adapt dynamically to load.
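The auto-scaling behavior mentioned above is typically configured with a HorizontalPodAutoscaler. This illustrative manifest (assuming a Deployment named `web` exists and the cluster's metrics API is available) scales between 2 and 10 replicas based on CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # assumed existing Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale out above 70% average CPU
```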