A Kubernetes cluster is a set of nodes that run containerized applications. Containerizing an application packages it together with its dependencies and the services it needs to run. Containers are more lightweight and flexible than virtual machines, so Kubernetes clusters make applications easier to develop, move, and manage.
Kubernetes clusters allow containers to run across multiple machines and environments: virtual, physical, cloud-based, and on-premises. Unlike virtual machines, containers are not tied to a dedicated operating system; they share the host operating system and can run almost anywhere.
A Kubernetes cluster consists of one master node and a number of worker nodes. These nodes can be either physical computers or virtual machines, depending on the cluster.
The master node controls the state of the cluster, for example which applications are running and which container images they use. It is the origin of all task assignments, coordinating processes such as scheduling and scaling applications, maintaining the cluster's desired state, and rolling out updates.
The worker nodes are the components that run these applications. Worker nodes perform tasks assigned by the master node. They can either be virtual machines or physical computers, all operating as part of one system.
There must be a minimum of one master node and one worker node for a Kubernetes cluster to be operational. For production and staging, the cluster is distributed across multiple worker nodes. For testing, the components can all run on the same physical or virtual node.
A namespace is a way for a Kubernetes user to organize many different virtual clusters within a single physical cluster. Namespaces let users divide a cluster's resources among different teams via resource quotas. For this reason, they are ideal for complex projects or situations involving multiple teams.
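As a rough illustration, a team-scoped namespace paired with a resource quota might look like the manifest below. The namespace name and the quota values are hypothetical, not values prescribed by Kubernetes:

```yaml
# Hypothetical namespace for one team, with a quota that caps the CPU,
# memory, and pod count the team can consume in the shared cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a              # example name; typically one per team or project
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a         # the quota applies only inside this namespace
spec:
  hard:
    requests.cpu: "4"       # total CPU the namespace may request
    requests.memory: 8Gi    # total memory the namespace may request
    pods: "20"              # maximum number of pods in the namespace
```

Applying a manifest like this lets an administrator give each team its own slice of the physical cluster without standing up separate clusters.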
A Kubernetes cluster contains six main components: the API server, the scheduler, the controller manager, etcd (the cluster's key-value store), the kubelet, and the kube-proxy.
These six components can each run on Linux or as Docker containers. The master node runs the API server, scheduler, and controller manager, and the worker nodes run the kubelet and kube-proxy.
To work with a Kubernetes cluster, you must first determine its desired state. The desired state of a Kubernetes cluster defines many operational elements, including which applications and workloads should be running, which container images they use, which resources should be made available to them, and how many replicas of each should be maintained.
To define a desired state, you write JSON or YAML files, called manifests, that specify the type of application to run and the number of replicas needed to keep the system healthy.
Developers set this desired state through the Kubernetes API, either with the command-line interface (kubectl) or by calling the API directly. The master node then communicates the desired state to the worker nodes via the API.
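For example, a minimal Deployment manifest declaring a desired state of five replicas might look like the sketch below. The names, labels, and the nginx image are illustrative assumptions, not requirements:

```yaml
# Hypothetical Deployment declaring the desired state: five identical
# replicas of a container built from the public nginx image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 5                 # the desired number of identical pods
  selector:
    matchLabels:
      app: web-app            # must match the pod template's labels
  template:                   # pod template the replicas are created from
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25   # example container image to run in each replica
          ports:
            - containerPort: 80
```

Submitting this file with kubectl apply (or an equivalent API call) records the desired state with the API server; from then on, the control plane works to keep the cluster's actual state in line with it.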
Kubernetes automatically manages clusters to align with their desired state through the Kubernetes control plane. Responsibilities of a Kubernetes control plane include scheduling cluster activity and registering and responding to cluster events.
The Kubernetes control plane runs continuous control loops to ensure that the cluster’s actual state matches its desired state. For example, if you deploy an application to run with five replicas, and one of them crashes, the Kubernetes control plane will register this crash and deploy an additional replica so that the desired state of five replicas is maintained.
Much of this automation is driven by the Pod Lifecycle Event Generator, or PLEG, which watches for container state changes on each node. Automatic tasks can include restarting containers that fail, replacing pods when a node goes down, and adjusting the number of running replicas to match the desired state.
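As a small, concrete illustration of this kind of automation, the sketch below adds a liveness probe to a hypothetical pod. If the probe stops succeeding, the kubelet registers the event and restarts the container without operator involvement; the pod name, image, and probe settings are assumptions for the example:

```yaml
# Hypothetical pod: the kubelet probes the container over HTTP and
# automatically restarts it if the probe fails, keeping the actual
# state aligned with the desired state.
apiVersion: v1
kind: Pod
metadata:
  name: web-app-probe-demo
spec:
  containers:
    - name: web
      image: nginx:1.25          # example image; the root path returns 200
      livenessProbe:
        httpGet:
          path: /                # endpoint checked by the kubelet
          port: 80
        initialDelaySeconds: 5   # wait before the first probe
        periodSeconds: 10        # probe every ten seconds
```

The control plane applies the same reconciliation logic at the cluster level, for example by replacing a crashed replica from the earlier Deployment example.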
You can create and deploy a Kubernetes cluster on either physical or virtual machines. New users are advised to start by creating a cluster with Minikube, an open-source tool compatible with Linux, Mac, and Windows operating systems. Minikube creates and deploys a simple, streamlined cluster that contains only one node.
In addition, you can use Kubernetes patterns to automate how your cluster is managed and scaled. Kubernetes patterns facilitate the reuse of cloud-based architectures for container-based applications. While Kubernetes provides a number of useful APIs, it does not supply guidelines for how to incorporate these tools into a larger system. Kubernetes patterns offer a consistent way to access and reuse existing Kubernetes architectures: instead of creating these structures yourself, you can tap into a reusable set of Kubernetes cluster blueprints.
Once you understand what containers and Kubernetes are, the next step is to learn how the two work together.