Kubernetes Architecture

In this section we will look at the internals of Kubernetes and walk through the architectural components of the platform. Kubernetes is a complex system made up of many components that work together to form a highly available cluster.

Kubernetes is a powerful container orchestration tool with many features for running your infrastructure with ease and without downtime. The system is complex, and it takes time to master the technology behind it: there is a lot to learn about deploying, managing and configuring containerized applications. This section provides in-depth information on the architecture of a Kubernetes cluster.

Kubernetes is a powerful container orchestration technology originally developed by Google, and it can run any type of containerized application, including microservices, on a distributed cluster. The cluster consists of worker nodes managed by one or more master nodes. A Kubernetes cluster is highly scalable and provides capabilities such as automated deployment, automatic rollback in case of issues and self-healing. Application containers can be configured to run with a specified amount of CPU, RAM and other resources.
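
For example, CPU and memory requests and limits are declared directly in the Pod specification. The manifest below is only a minimal, illustrative sketch; the name, image and values are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: demo-app             # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx:1.25        # placeholder image; any container image would work
    resources:
      requests:              # minimum resources the scheduler must find on a node
        cpu: "250m"
        memory: "128Mi"
      limits:                # hard ceiling enforced on the running container
        cpu: "500m"
        memory: "256Mi"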

Kubernetes was developed to provide a solution for running large numbers of applications with ease. As said above, Kubernetes is a complex system, but it is designed to hide that complexity from the end user. It comes with an API Server that exposes REST APIs for end users to connect to the cluster, which makes it easy for developers and administrators to work with the Kubernetes server. Kubernetes can be installed on bare-metal machines running Linux or another operating system, on cloud platforms such as AWS, Azure and OpenStack, or on top of Apache Mesos.

What type of cluster is Kubernetes?

Kubernetes is based on a master-worker architecture, where the master is responsible for managing the cluster and the worker nodes actually run the workloads. The master connects to the external world through REST APIs, receives requests from admins and developers (to run, stop or restart workloads, and so on), and then internally instructs the worker nodes to carry out the required jobs. Clients never talk to the worker nodes directly; every activity on the cluster is performed through the master node.

A Kubernetes cluster has at least one master node and one or more worker nodes. A very small cluster can be set up with one master node and two worker nodes. In our tutorial series we will show you how to set up a minimal Kubernetes cluster for learning and for running our workloads.
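
One convenient way to stand up such a minimal cluster on a single machine is the kind tool, which runs each node as a container. The configuration below is only a sketch assuming kind is installed; node roles and counts can be adjusted as needed:

# kind cluster config: one control-plane (master) node and two worker nodes
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

Saved to a file, this config can be applied with: kind create cluster --config <file>.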

Types of nodes in Kubernetes cluster

As said above, there are two types of nodes in a Kubernetes cluster: the master node (control plane) and the worker nodes.

Master Node (Control Plane)

The master node, also known as the control plane, is responsible for managing the whole cluster. It interfaces with the external world through the REST API and then internally interacts with the other components of the cluster to accomplish the job.

Worker Node

The worker node is a powerful server that actually executes the jobs. For example, once the master node tells it to run the pod(s) of an application, it allocates the resources and runs the application pod(s). Clients never interact with the worker nodes directly. A cluster can have two or more worker nodes.

Kubernetes Cluster Architecture

Kubernetes consists of two types of nodes:

a) Master Node(s)
b) Worker Node(s)

Let's understand the roles and responsibilities of these two node types.

Master Node or Control Plane

The Master Node, also known as the Control Plane, is responsible for the proper running of the Kubernetes cluster. It makes the decisions for the cluster, for example scheduling jobs and coordinating with the worker nodes. The Master Node interacts with the outside world: for example, when an admin issues a command to run a pod, the master instructs the other components in the cluster to do their specific jobs to accomplish the assigned task. It also handles work like detecting and responding to cluster events and deploying new pods on the worker nodes. In case of a failure it instructs a worker node to run new pods, keeping the system highly available.

In a real installation of a Kubernetes cluster you can run the Master Node (Control Plane) components on separate machines in the network, but for simplicity and fast operation all the master node components are usually installed on the same machine.

There are four components in the Kubernetes master, namely the API Server, the Controller Manager, the Scheduler and etcd. All these components work together to run the Master Node of the Kubernetes cluster.

Let's understand the components of Master Node.

API Server: The API Server is the front end of the Kubernetes cluster; it provides the API through which the external world interacts with the Kubernetes server. The API Server is also considered the entry point to the cluster. External applications and admins use tools that connect to the Kubernetes API Server to give instructions to the cluster.

The Kubernetes API Server then connects with the worker nodes and other components to get the actual job done. Even communication between cluster components goes through the API Server. This service works as the central unit, connecting all the components by sending and receiving the instructions needed to perform the cluster's tasks.

An admin can also install the Dashboard client, which talks to the API Server, to see the status of jobs, scheduling, running pods, previously run pods and logs. You can use the kubectl command-line tool to connect to the API Server as well. As we proceed with the tutorial, we will show you the steps to work with the Kubernetes cluster.
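
Clients such as kubectl and the Dashboard locate and authenticate to the API Server through a kubeconfig file. The snippet below is a trimmed, illustrative sketch; the server address, names and credential paths are placeholders:

apiVersion: v1
kind: Config
clusters:
- name: demo-cluster                       # hypothetical cluster name
  cluster:
    server: https://192.168.1.10:6443      # address of the API Server (placeholder)
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: admin
  user:
    client-certificate: /path/to/admin.crt # placeholder credential files
    client-key: /path/to/admin.key
contexts:
- name: demo
  context:
    cluster: demo-cluster
    user: admin
current-context: demo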

Controller Manager: The Kubernetes Controller Manager is an important component of the cluster; it runs as a daemon and contains the core control loops that handle various functions of the cluster. The Controller Manager daemon is responsible for watching the state of the cluster. It includes the replication controller, the endpoints controller, the namespace controller and the service accounts controller. The Controller Manager connects to the API Server and observes the state of the cluster through these controllers.

In a nutshell, the Kubernetes Controller Manager watches the state of the cluster through the API Server and works to move the current state towards the desired state. For example, if a pod has failed, the relevant controller starts a new pod instance if required.
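
The "desired state" is what you declare in your manifests. In the illustrative sketch below (the name and image are placeholders), the Deployment asks for three replicas; if one pod fails, the controllers keep creating pods until three are running again:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo            # hypothetical name
spec:
  replicas: 3               # desired state: three identical pods
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: web
        image: nginx:1.25   # placeholder image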

Scheduler: The Kubernetes Scheduler is a very important component of the cluster; it watches for newly created Pods that have no Node assignment. The Scheduler finds the best Node for each Pod and binds the Pod to it, communicating with the Nodes through the API Server. It selects Nodes based on pre-defined scheduling principles such as resource requirements and placement constraints.

On the master node, kube-scheduler is the default scheduler. It is designed in such a way that developers can also write their own scheduler to match specific business requirements, but the default scheduler is sufficient for most scenarios.
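
A Pod can influence where it is scheduled, or even opt into a custom scheduler, directly in its spec. The sketch below is illustrative; the label, scheduler name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: scheduled-demo                  # hypothetical name
spec:
  schedulerName: my-custom-scheduler    # omit this line to use the default kube-scheduler
  nodeSelector:
    disktype: ssd                       # only nodes labelled disktype=ssd are considered
  containers:
  - name: app
    image: nginx:1.25                   # placeholder image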

etcd: etcd is an external component developed outside the Kubernetes project, but it is necessary for running a Kubernetes cluster. etcd is a distributed key-value store used by the cluster to hold its state. The other components do not talk to etcd directly; they go through the API Server, which reads and writes the cluster state in the etcd key-value store.

Worker Node

Worker Nodes in the Kubernetes cluster are responsible for running the pods, and these machines are typically more powerful than the Master Node. When you schedule a workload on the Kubernetes cluster, the Pods are created and run on the worker nodes.

The worker nodes should be powerful because they are responsible for running the containers, so they need enough processing power and RAM to handle the workload. You should therefore size the worker nodes based on your processing requirements so that they can meet your day-to-day workload.

The Worker Node in the Kubernetes cluster runs the Kubelet service, the Kube-proxy service and a container runtime. Let's understand these components of the Worker Node.

Kubelet Service: The Kubelet process is installed on each worker node and runs all the time. The Kubelet service talks to the cluster through the API Server and reports the node's status back to the cluster. It is responsible for executing tasks on the worker node: it listens for instructions from the API Server and then manages the containers on the worker node accordingly.
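
The kubelet itself is configured on each node through a KubeletConfiguration file. The fragment below is only a sketch with illustrative values; real installations (for example those created by kubeadm) generate this file for you:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- 10.96.0.10              # illustrative cluster DNS address
clusterDomain: cluster.local
maxPods: 110              # upper bound on pods this kubelet will run
failSwapOn: true          # refuse to start if swap is enabled on the node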

Kube-proxy Service: The Kube-proxy service runs on each Worker Node. The Kube-proxy component is responsible for seamless communication between services within the cluster.

Kube-proxy is a networking component that is installed on each worker node, typically as a DaemonSet. It implements the low-level rules that allow traffic to reach pods from both inside and outside the Kubernetes cluster. So, when a request arrives for a Service, kube-proxy forwards it to one of the underlying pods.
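
kube-proxy does this routing for Service objects like the illustrative one below; the name, labels and ports are placeholders. Traffic hitting the Service's port is load-balanced across all pods that match the selector:

apiVersion: v1
kind: Service
metadata:
  name: web-demo-svc       # hypothetical name
spec:
  selector:
    app: web-demo          # matches pods labelled app=web-demo, e.g. the Deployment sketched earlier
  ports:
  - port: 80               # port exposed by the Service inside the cluster
    targetPort: 80         # port the pods are listening on
  type: ClusterIP          # internal-only; NodePort or LoadBalancer would expose it externally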

Container Runtime: A container runtime (such as containerd or CRI-O) is also installed on all the nodes of the Kubernetes cluster; it is required on both the master nodes and the worker nodes, because the kubelet uses it to actually start and stop containers.
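
Kubernetes talks to the runtime through the Container Runtime Interface (CRI). If a node is configured with more than one runtime handler, a pod can pick one via a RuntimeClass; the sketch below is purely illustrative and the handler name is a placeholder that must match a runtime actually configured on the node:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed               # hypothetical RuntimeClass name
handler: runsc                  # placeholder handler configured on the node's runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-demo          # hypothetical name
spec:
  runtimeClassName: sandboxed   # run this pod with the runtime selected above
  containers:
  - name: app
    image: nginx:1.25           # placeholder image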