Kubernetes: Container Orchestration Explained

To learn about Kubernetes, we first need to understand what containerization and containers are. If you are new to containers, read the following blog first:

Containerization and Docker

What is Kubernetes?

Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications. To understand this better, let's take an example:

Imagine you’re running a lemonade business. At first, you just have one stand, and everything is easy to manage. But as your business grows, you open more stands in different locations. Managing all these stands—making sure each has enough lemons, sugar, and cups—becomes overwhelming. You might even lose track of which stand needs what. Kubernetes is like a smart manager for your lemonade business. In the world of computers, instead of lemonade stands, we have applications running as containers. These containers need resources like CPU, memory, and storage to run smoothly. Kubernetes helps manage these applications and their resources.
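In practice, “managing these applications” means you describe the state you want in a manifest and Kubernetes keeps reality matching it. Here is a minimal sketch of such a manifest for our lemonade app; the names and the image (lemonade-app, example/lemonade:1.0) are placeholders, not a real application:

```yaml
# A minimal Deployment: you declare the desired state (3 copies of a
# container image) and Kubernetes continuously works to match it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lemonade-app              # hypothetical application name
spec:
  replicas: 3                     # desired number of running copies
  selector:
    matchLabels:
      app: lemonade
  template:
    metadata:
      labels:
        app: lemonade
    spec:
      containers:
        - name: lemonade
          image: example/lemonade:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

You hand this file to the cluster (for example with `kubectl apply -f deployment.yaml`), and from that point on Kubernetes acts as the “smart manager”: it starts three copies, watches them, and restores them if anything breaks.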

Why do we need Kubernetes?

There are four major problems that Kubernetes solves for your application.

  1. Single host

    Let us say you are running your application as Docker containers on a single host. If the Docker daemon goes down, your containers stop working. Likewise, if one container suddenly starts consuming more of the host’s resources, it affects the performance of the other containers. Kubernetes works as a cluster of multiple nodes, so even if a machine goes down or resources run short, the entire system doesn’t fail, and performance stays high because resources are managed across multiple nodes.

  2. Auto Healing

    If a container or resource goes down or stops working for any reason, Kubernetes automatically replaces it with a new one or reschedules it on another node, ensuring everything keeps running smoothly (see the liveness-probe sketch after this list).

  3. Auto Scaling

    Let us say thousands of customers usually access your application, and 10 CPUs are enough to handle the load. On some occasions, though, your application may get hundreds of thousands of customers. You could provision hundreds of new CPUs permanently to cover those peaks, but then the resources sit underutilized most of the time and the cost of operations gets too high. Kubernetes automatically scales up or down depending on the load on the application (see the autoscaler sketch after this list).

  4. Enterprise

    To host enterprise-level applications, there are lots of things to take care of, such as load balancing, firewalls, automation, and API gateways. Kubernetes provides all of these and many other capabilities.
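To make auto healing (point 2) concrete, here is a minimal sketch of a liveness probe: the kubelet periodically checks the container and restarts it if the check fails. The `/healthz` path, port, and timings are assumptions for illustration:

```yaml
# Auto healing via a liveness probe: if the HTTP check fails, the
# kubelet restarts the container automatically.
apiVersion: v1
kind: Pod
metadata:
  name: lemonade-pod
spec:
  containers:
    - name: lemonade
      image: example/lemonade:1.0   # placeholder image
      livenessProbe:
        httpGet:
          path: /healthz            # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 5      # wait before the first check
        periodSeconds: 10           # check every 10 seconds
```

On top of this, a Deployment like the one shown earlier keeps the replica count steady: if a pod dies or its node goes down, the missing replica is recreated, possibly on another node.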
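And for auto scaling (point 3), here is a sketch of a HorizontalPodAutoscaler that grows or shrinks the Deployment based on CPU load; the replica bounds and the 70% threshold are illustrative choices, not recommendations:

```yaml
# Auto scaling: add replicas when average CPU crosses 70%, remove
# them when load drops, always staying between 2 and 10 copies.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: lemonade-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: lemonade-app        # the placeholder Deployment from earlier
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```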

Kubernetes architecture

Kubernetes has two main components: the control plane (the brain) and the data plane (the hands).

Control Plane

  • API server:

    This is like the receptionist. Anyone who wants to talk to Kubernetes (you, a tool, or another program) goes through the API server; it is the single entry point to the cluster. All the components of the control plane, as well as the kubelet on each worker node, communicate with the API server.

  • etcd:

    etcd is a key-value store. It holds all the information about the cluster’s resources and their states.

  • Controller Manager:

    The Controller Manager compares the current state of the cluster with the desired state stored in etcd. With the help of its controllers, it makes sure that if any resources go down or are missing, they get replaced.

  • Scheduler:

    The scheduler decides which pods will run on which node, based on the resources available on each node and the resources a pod requires (see the resource-request sketch after this list).

  • Cloud Controller Manager:

    Manages the communication and resource creation between Kubernetes and the cloud provider.
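To see what “available and required resources” means to the scheduler, here is a sketch of a pod spec with resource requests and limits; the numbers are illustrative. The scheduler only places this pod on a node that still has at least 250m of CPU and 128Mi of memory unreserved:

```yaml
# Resource requests are what the scheduler uses to pick a node;
# limits are the ceiling enforced once the container is running.
apiVersion: v1
kind: Pod
metadata:
  name: lemonade-pod
spec:
  containers:
    - name: lemonade
      image: example/lemonade:1.0   # placeholder image
      resources:
        requests:
          cpu: "250m"               # 0.25 CPU cores reserved
          memory: "128Mi"
        limits:
          cpu: "500m"               # hard cap at runtime
          memory: "256Mi"
```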

Data Plane

  • Kubelet:

    Think of the Kubelet as the manager of the worker node. It receives tasks from the control plane and makes sure they’re completed. It talks to the API server and reports the health of its node.

  • Container runtime:

    To run Java applications, we need a Java runtime. Similarly, to run containers, we need a container runtime. This is the engine that actually runs the containers. Examples include containerd and CRI-O (older clusters used Docker through a component called dockershim, which has since been removed).

  • Kube-Proxy:

    It is responsible for networking on each node. It ensures that requests reach the correct pods.
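The object that kube-proxy implements this routing for is the Service. Here is a minimal sketch, reusing the placeholder labels from earlier: the Service gets a stable virtual IP, and kube-proxy spreads incoming traffic across every pod matching the selector:

```yaml
# A Service: one stable address in front of all pods labeled
# app: lemonade; kube-proxy programs the routing on each node.
apiVersion: v1
kind: Service
metadata:
  name: lemonade-service
spec:
  selector:
    app: lemonade         # routes to pods carrying this label
  ports:
    - protocol: TCP
      port: 80            # port the Service exposes
      targetPort: 8080    # port the container listens on
```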

Working of Kubernetes

  1. You make a request to Kubernetes through the kube-apiserver, using an API call, the web UI, or the CLI (kubectl).

  2. The API Server validates your request and stores the desired state in etcd; the Scheduler then looks for worker nodes with enough resources.

  3. The Scheduler assigns the tasks to the nodes.

  4. The Kubelets on the chosen nodes start the containers.

  5. The Controller Manager keeps an eye on everything. If a resource goes down, it makes sure a substitute is created.

  6. The Kube-Proxy ensures customer requests always reach the right pods.
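Tying the flow together, here is the earlier placeholder Deployment annotated with which component acts on each part when you submit it:

```yaml
apiVersion: apps/v1
kind: Deployment                # validated and stored in etcd by the API server (steps 1-2)
metadata:
  name: lemonade-app
spec:
  replicas: 3                   # watched by the Controller Manager (step 5)
  selector:
    matchLabels:
      app: lemonade             # matched by Services, routed by kube-proxy (step 6)
  template:
    metadata:
      labels:
        app: lemonade
    spec:
      containers:
        - name: lemonade
          image: example/lemonade:1.0   # pulled and run by the kubelet (step 4)
          resources:
            requests:
              cpu: "250m"       # read by the Scheduler to pick a node (steps 2-3)
```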