Day 30-Kubernetes Architecture/90 Days of DevOps Challenge

What is Kubernetes? Write in your own words and why do we call it k8s?

Kubernetes is an open-source container orchestration platform initially developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It automates the deployment, scaling, and management of containerized applications, providing a robust and scalable infrastructure for running and managing container workloads.

Kubernetes simplifies the management of complex distributed systems by abstracting away the underlying infrastructure and providing a unified API and set of tools for managing containerized applications. It allows developers to package their applications into self-contained units called containers, which encapsulate the application code, dependencies, and configurations. These containers can then be deployed and managed efficiently across a cluster of machines.

The name "Kubernetes" originates from the Greek word for "helmsman" or "pilot," reflecting its purpose of steering and managing containerized applications. The abbreviation "k8s" is derived from the word "Kubernetes" itself. The "8" represents the eight letters between "K" and "s." It's a common convention to use this abbreviation in command-line tools, scripts, and discussions to refer to Kubernetes more concisely.

The "k8s" abbreviation was first used by the Kubernetes community as a way to shorten the name of the project and make it easier to reference in conversation and code. It's now widely used by developers, engineers, and other professionals who work with Kubernetes regularly.

What are the benefits of using k8s?

Here are some of the key benefits of Kubernetes:

  • Containerization: Kubernetes leverages containerization technology, such as Docker, to encapsulate applications and their dependencies into isolated, lightweight units called containers. Containers offer several advantages, including improved resource utilization, easy application packaging, and consistent behavior across different environments.

  • Scalability: Kubernetes enables effortless scalability of applications. It allows you to scale your microservices horizontally by adding or removing instances, known as pods, based on workload demands. This ensures that your application can handle increased traffic or higher resource requirements, improving performance and responsiveness; a minimal manifest illustrating this appears after this list.

  • High Availability: Kubernetes supports high availability by providing automated failover and load balancing mechanisms. It can automatically restart failed containers, replace unhealthy instances, and distribute traffic across healthy instances. This ensures that your application remains available even in the event of infrastructure or container failures. This helps reduce downtime and improve reliability.

  • Resource Efficiency: Kubernetes optimizes resource allocation and utilization through its advanced scheduling capabilities. It intelligently distributes containers across nodes based on resource availability and workload requirements. This helps maximize the utilization of computing resources, minimizing waste and reducing costs.

  • Self-Healing: Kubernetes has self-healing capabilities which means it automatically detects and addresses issues within the application environment. If a container or node fails, Kubernetes can reschedule containers onto healthy nodes. It can also replace failed instances and even perform automated rolling updates without interrupting the overall application availability.

  • Portability: Kubernetes offers portability, allowing applications to be easily moved between different environments, such as on-premises data centers, public clouds, or hybrid setups. Its container-centric approach ensures that applications and their dependencies are bundled together. This reduces the chances of compatibility issues and enables seamless deployment across diverse infrastructure platforms.

  • DevOps Enablement: Kubernetes fosters collaboration between development and operations teams by providing a unified platform for application deployment and management. It enables developers to define application configurations as code using Kubernetes manifests, allowing for version-controlled, repeatable deployments.

  • Large Community: Kubernetes has a large, vibrant community of developers, operators, and users dedicated to improving and advancing the platform. The community is open and inclusive, welcoming contributions from anyone who wants to get involved, and its work has made Kubernetes the popular and widely used container orchestration platform it is today.

  • Multi-cloud Capability: Kubernetes provides a powerful and flexible foundation for multi-cloud deployments. By abstracting away the underlying infrastructure, Kubernetes allows you to deploy and manage containerized applications across multiple cloud providers without worrying about vendor lock-in or compatibility issues.

  • CI/CD Automation: Operations teams can leverage Kubernetes to automate deployment workflows, monitor application health, and implement continuous integration and delivery (CI/CD) pipelines.
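
To make the scalability, self-healing, and manifests-as-code points concrete, here is a minimal sketch of a Deployment manifest. The names and image (web, nginx:1.25) are illustrative placeholders, not taken from any particular project:

```yaml
# deployment.yaml -- a hypothetical Deployment; names and image are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired number of pod instances; Kubernetes maintains this count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25    # placeholder container image
          ports:
            - containerPort: 80
```

Applying it with kubectl apply -f deployment.yaml creates the pods; kubectl scale deployment web --replicas=5 scales it horizontally, and if a pod fails, the Deployment's controller replaces it automatically, which is the self-healing behavior described above.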

Its benefits include improved scalability, high availability, resource efficiency, self-healing capabilities, portability, and support for implementing DevOps, Cloud, and DevSecOps practices. By leveraging Kubernetes, organizations can streamline application deployment and operations, increase productivity, and deliver more reliable and resilient applications.

Architecture of Kubernetes

The architecture of Kubernetes is based on a master-worker model, where the master node controls and manages the cluster, while the worker nodes run and execute the application workloads. Let's explore the key components of the Kubernetes architecture:

  1. Master Node:

    The master node is the most vital component of the Kubernetes architecture. It is the entry point for all administrative tasks. For fault tolerance, a cluster can run more than one master node.

    The master node has various components, such as:

    • API Server: Acts as the primary control plane component that exposes the Kubernetes API, which allows users and other components to interact with the cluster.

    • etcd: A distributed key-value store that serves as the cluster's backing store for all configuration data, ensuring high availability and consistency.

    • Scheduler: Responsible for assigning pods (the smallest deployable units in Kubernetes) to worker nodes based on resource availability, constraints, and other policies.

    • Controller Manager: Manages various controllers that handle different aspects of the cluster, such as node management, replication, and endpoints.

    • Cloud Controller Manager (optional): Integrates with cloud provider APIs to manage cloud-specific resources like load balancers, storage, and networking.

  2. Worker Node: A node is a worker machine that performs the requested tasks assigned by the control plane/master node.

    The worker node consists of the kubelet, the agent required to run pods; kube-proxy, which maintains network rules and enables communication; and the container runtime software that runs the containers.

    • Kubelet: Runs on each worker node and is responsible for managing and maintaining the state of the node. It communicates with the master node and ensures that the containers specified in the pods are running and healthy.

    • Container Runtime: Kubernetes supports various container runtimes, such as Docker, containerd, or CRI-O, which are responsible for pulling and running container images.

    • Kube Proxy: Manages network connectivity between pods and provides services such as load balancing and network routing.

  3. Networking:

    • Pod Networking: Each pod gets its own IP address, and containers within a pod share the same network namespace, allowing them to communicate with each other over localhost.

    • Service Networking: Kubernetes assigns a virtual IP address (ClusterIP) to a service, which acts as a stable endpoint for accessing a set of pods. Services enable load balancing and service discovery within the cluster.

  4. Add-Ons:

    • DNS: Provides a DNS service for resolving the names of other services or pods within the cluster.

    • Dashboard: A web-based graphical user interface for managing and monitoring the Kubernetes cluster.

    • Ingress Controller: Manages external access to services within the cluster by exposing HTTP and HTTPS routes.

Overall, the Kubernetes architecture is designed to be highly scalable, fault-tolerant, and extensible. It enables the efficient management of containerized applications across a distributed cluster of nodes.
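
As a rough illustration (assuming kubectl is configured against a running cluster), these components and the networking model can be inspected with a few standard commands:

```bash
# List the nodes in the cluster (control plane and workers)
kubectl get nodes -o wide

# The control plane components (API server, scheduler, controller manager, etcd)
# typically run as pods in the kube-system namespace
kubectl get pods -n kube-system

# Each pod gets its own IP address; Services get a stable ClusterIP
kubectl get pods -o wide
kubectl get services
```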

What is a Control Plane?

The Kubernetes control plane manages clusters and resources such as worker nodes and pods. The control plane receives information such as cluster activity, internal and external requests, and more.

Based on these factors, the control plane moves the cluster's resources from their current state to the desired state. The control plane can run across multiple machines in a cluster, making applications fault-tolerant and providing high availability for processing requests.

Components of the Kubernetes Control Plane:

The control plane consists of five significant components, each serving a specific purpose. These components work in synergy and ensure clusters are running optimally.

kube-apiserver:

kube-apiserver is the front end of the Kubernetes control plane: it exposes the Kubernetes API and serves as the access point for client requests that need Kubernetes resources to process a task. The API server can run as multiple instances, scaling horizontally with traffic and resource demand to maintain availability, performance, and resource utilization.
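
Since every interaction flows through the API server, it can be queried directly. A small sketch, assuming a working kubectl context:

```bash
# kubectl is a thin client over the API server's REST API
kubectl get --raw /api/v1/namespaces/default/pods

# Equivalent raw HTTP access through a local authenticating proxy
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods
```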

kube-scheduler:

kube-scheduler is responsible for scheduling and assigning pods to nodes based on constraints such as the following (a sketch of how some of these appear in a pod spec follows this list):

  • Time-sensitivity of a request (deadlines)

  • Restrictions due to policies

  • Data locality

  • Inter-workload interference

  • Hardware and software requirements
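
Here is a minimal sketch of a pod spec expressing such constraints; the disktype label and the image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: constrained-pod        # illustrative name
spec:
  nodeSelector:
    disktype: ssd              # hardware constraint via a (hypothetical) node label
  containers:
    - name: app
      image: nginx:1.25        # placeholder image
      resources:
        requests:
          cpu: "250m"          # the scheduler only places the pod on a node with this much free CPU
          memory: "128Mi"
```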

kube-controller-manager:

A controller generally monitors and tracks the functioning of one or more Kubernetes resources. It watches the desired state declared in a resource's spec and, with the help of kube-apiserver, works to drive the actual state toward it. Depending on the type of resource being monitored, the controller differs. A few examples include:

  • Node Controller: Tracks node status, onboards new nodes, and determines whether a node is responsive. Based on this, it keeps pods assigned to a specific node or reschedules them onto a different, healthy node.

  • Job Controller: Watches for new Job objects (one-time tasks) and, when they appear, creates pods to run those tasks to completion. An example Job manifest appears below.

  • EndpointSlice Controller: An EndpointSlice is a resource that represents a group of network endpoints, typically belonging to the same service. The EndpointSlice Controller is responsible for creating and managing EndpointSlice resources in the cluster.

  • ServiceAccount Controller: Creates default ServiceAccounts for new namespaces and reconciles them with the actual state of the cluster.

Usually, all controllers are compiled into one binary and run as one process. This reduces operational complexity and optimizes controller performance.
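
For example, the Job controller's behavior can be observed with a minimal Job manifest (names and image are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-task                    # illustrative name
spec:
  template:
    spec:
      containers:
        - name: task
          image: busybox:1.36            # placeholder image
          command: ["sh", "-c", "echo done"]
      restartPolicy: Never               # Jobs require Never or OnFailure
```

When this manifest is applied, the Job controller creates a pod to run the task to completion and records the outcome in the Job's status.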

cloud-controller-manager:

The cloud controller manager is responsible for interacting with cloud-specific APIs and resources. It is designed to abstract the differences between various cloud providers and to provide a common interface for managing cloud-specific resources within a Kubernetes cluster.

Like the kube-controller-manager, the cloud-controller-manager runs several types of controllers:

  • Node Controller: Checks whether a node in the cloud is still responding. If a node stops responding, the controller checks with the cloud provider to determine whether the node has been deleted; if it has, the controller removes the corresponding Node object from the cluster.

  • Route Controller: Creates and manages routes within the cloud infrastructure for containers across nodes to communicate with each other.

  • Service Controller: Creates and manages cloud load balancers for Kubernetes Service resources of type LoadBalancer.

etcd:

etcd is the data store that contains all key-value pairs necessary to determine the current and desired state of the system. In essence, etcd stores all cluster data from which the API server can collect and decide how to bridge the current and desired state.
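
As an illustration, on a self-managed control plane where you have direct access to an etcd member (the certificate paths below are typical kubeadm locations and may differ on your cluster), the keys Kubernetes stores can be listed with etcdctl:

```bash
# List the keys under /registry, where Kubernetes persists cluster state
# (assumes the etcd v3 API; endpoint and certificate paths are illustrative)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only
```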

These five components comprise the control plane and interact with other cluster resources, such as worker nodes, pods, and services, to handle requests and keep the application running.

How the Kubernetes Control Plane Works with the Rest of the Architecture

The control plane provides instructions to the worker nodes responsible for executing them and performing the relevant functions. The worker node comprises three major components: the kubelet, kube-proxy, and container runtime. Together, the three handle the incoming requests that the control plane forwards to them.

The control plane interacts with the worker nodes through the kubelet agent. kube-proxy implements part of the Kubernetes Service concept: it ensures that traffic to and from pods adheres to the cluster's network rules, and it routes or reroutes traffic based on those rules. The third component is the container runtime, which runs the containers. Besides these, there are other add-ons that the control plane interacts with and leverages to perform specific tasks and handle certain requests. The sketch below shows one way to observe what kube-proxy implements.
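
A small sketch, assuming a configured kubectl (the iptables command in the comment requires shell access to a node running kube-proxy in iptables mode):

```bash
# Services define the routing rules; kube-proxy programs them on every node
kubectl get services

# Endpoints list the pod IPs behind each service that kube-proxy balances across
kubectl get endpoints

# On a node using the iptables proxy mode, the programmed rules are visible with:
#   sudo iptables -t nat -L KUBE-SERVICES
```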

What is the difference between kubectl and kubelet?

kubectl and kubelet are two distinct components in a Kubernetes cluster that serve different purposes:

  1. kubectl:
    kubectl is a command-line interface (CLI) tool used to interact with the Kubernetes cluster. It acts as a client to the Kubernetes API server, allowing users, administrators, and developers to manage and control the cluster. Some key features and functionalities of kubectl include:

    • Deploying and managing resources: kubectl enables users to create, update, and delete Kubernetes resources such as pods, services, deployments, and namespaces.

    • Inspecting cluster state: kubectl provides commands to view the current state of the cluster, including retrieving information about running pods, services, nodes, and other resources.

    • Interacting with the cluster: kubectl allows users to execute commands within pods, copy files to and from pods, and access logs and other diagnostic information.

    • Managing cluster configuration: kubectl can be used to manage authentication, context, and cluster configuration for multiple Kubernetes clusters.

In summary, kubectl is the primary tool for managing and interacting with a Kubernetes cluster from the command line or through scripts and automation.
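
A few representative kubectl commands for the capabilities above (resource and context names are illustrative):

```bash
# Deploying and managing resources
kubectl apply -f deployment.yaml              # create or update from a manifest
kubectl delete pod my-pod                     # delete a resource

# Inspecting cluster state
kubectl get pods -o wide
kubectl describe node my-node

# Interacting with the cluster
kubectl logs my-pod
kubectl exec -it my-pod -- sh
kubectl cp my-pod:/var/log/app.log ./app.log

# Managing configuration and contexts
kubectl config get-contexts
kubectl config use-context my-cluster
```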

  2. kubelet:
    kubelet is an agent that runs on each worker node in the Kubernetes cluster. It is responsible for managing and maintaining the state of the node and ensuring that the containers specified in the pods are running and healthy. Key responsibilities of the kubelet include:

    • Pod Management: The kubelet monitors the state of pods assigned to its node. It interacts with the control plane components, retrieves the pod specifications, and ensures that the requested containers are running and meet the specified criteria (e.g., resource limits, health checks).

    • Container runtime interaction: The kubelet communicates with the container runtime (e.g., Docker, containerd) to start, stop, and manage container instances on the node.

    • Node status reporting: The kubelet provides regular updates to the control plane about the node's status, including resource availability, health, and runtime conditions.

To summarize, kubectl is the client-side tool used to interact with the Kubernetes cluster, whereas kubelet is the agent that runs on each worker node and manages the containers and pods on that node.
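
To illustrate the kubelet's health checking, here is a minimal sketch of a pod with a liveness probe, which the kubelet on the node evaluates (the name, image, and probe path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-pod             # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25        # placeholder image
      livenessProbe:           # the kubelet runs this check and restarts the container on failure
        httpGet:
          path: /              # illustrative health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```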

Explain the role of the API server:

The API server is a key component of the control plane in a Kubernetes cluster. It acts as the central control point and primary interface for interacting with the cluster. The API server exposes the Kubernetes API, which allows users, administrators, and other components to communicate with the cluster.

Here are the primary roles and responsibilities of the API server:

  1. API Endpoint:
    The API server serves as the endpoint for all communication with the Kubernetes cluster. Clients, including kubectl and other Kubernetes components, send requests to the API server to perform various operations on the cluster, such as creating or updating resources, retrieving cluster information, or issuing commands.

  2. Authentication and Authorization:
    The API server handles authentication and authorization of requests made to the cluster. It verifies the identity of the client making the request and checks whether the client has the necessary permissions to perform the requested operation. This ensures that only authorized users and components can access and modify the cluster's resources.

  3. Validation and Admission Control:
    The API server validates incoming requests to ensure they comply with the cluster's configuration and policies. It performs checks to verify that the requested resources are well-formed and that the specified values are within acceptable ranges. Additionally, the API server supports admission control plugins that allow custom validation and mutation logic to be applied to incoming requests.

  4. Resource Management:
    The API server manages the cluster's resources and state by storing and updating information about the cluster's objects, such as pods, services, deployments, and namespaces. It persists this information in the cluster's backing store, typically etcd. When clients make requests to create, update, or delete resources, the API server validates the requests, updates the object's state, and ensures consistency across the cluster.

  5. Cluster-wide Coordination:
    The API server acts as a centralized coordinator for various cluster-wide operations. It ensures that changes made by different clients are properly coordinated and synchronized. For example, when a client requests the creation of a pod, the API server coordinates with the scheduler to determine which node should run the pod and updates the desired state accordingly.

  6. Custom Resource Definitions (CRDs):
    The API server supports Custom Resource Definitions, which allow users to define their own custom resource types and controllers. CRDs extend the Kubernetes API to include user-defined resources, enabling the creation of custom controllers that manage and reconcile the state of these resources.

Overall, the API server plays a crucial role in managing the Kubernetes cluster. It provides a unified and standardized interface for interacting with the cluster, handles authentication and authorization, validates and stores cluster state, and coordinates cluster-wide operations. The API server is vital for managing the lifecycle of Kubernetes resources and ensuring the cluster operates according to the desired configuration and policies.
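
As a hedged sketch of points 2 and 6 above: authorization decisions can be probed with kubectl auth can-i (for example, kubectl auth can-i create deployments --as=jane, where jane is an illustrative user), and a minimal CustomResourceDefinition registers a new resource type with the API server. The group and kind below are illustrative:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com            # must be <plural>.<group>
spec:
  group: example.com                   # illustrative API group
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```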