
Recapping Kubernetes Basics

This post will recap some of the basics of Kubernetes to lay the foundations for upcoming posts.

The recap will include:

  1. What does the Control Plane consist of?
  2. What is within a Node?
  3. What is a Pod?
  4. What is a Deployment?
  5. What is a Service?

Prerequisites

  1. Basic familiarity with running Kubernetes on your local machine. This post is meant to recap some of the basic concepts.

The Control Plane

The Control Plane manages, monitors, plans and schedules nodes on a Kubernetes cluster.

It is made up of the following:

App | Does
etcd | Key-value store for critical cluster information
kube-scheduler | Places containers (Pods) onto the proper nodes
kube-controller-manager | Ensures cluster components are in the proper state
kube-apiserver | An exposed API server that lets you communicate with the cluster

It is the Control Plane that brings the cluster to a desired state.

Anytime you want to make a change to the cluster, you make that change through the Control Plane. This is done by interacting with the master node via the Kubernetes API that the Kube API Server exposes. We can interact with this API using kubectl.
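For example, with a local cluster (minikube, kind, Docker Desktop, etc.) you can confirm you are talking to the API server with a couple of kubectl commands; the output below is only illustrative and will differ on your machine:

$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443

$ kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   5d    v1.27.3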


What is within a Node?

You can use the acronym NPC to remember how the order of encapsulation works for a Node.

Letter | Meaning
N | Node
P | Pod
C | Container

A Node consists of a number of Pods which in turn consist of a number of Containers.

NPC diagram

The Node needs a container runtime engine (Docker, containerd, CRI-O, frakti); this is the software required to run the container images.

Something also needs to run on the node to communicate with the Control Plane: that is the kubelet, which runs on each node of the cluster.

As for routing traffic to Pods, this is handled by kube-proxy. kube-proxy is a network proxy that runs on each node in the cluster and maintains network rules on the nodes. These rules allow network communication to your Pods from inside or outside of your cluster.
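To see some of this on your own cluster, kubectl get nodes -o wide reports the kubelet version and the container runtime each node is using (the node name, versions and IP below are placeholders, and some columns are trimmed):

$ kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP    CONTAINER-RUNTIME
minikube   Ready    control-plane   5d    v1.27.3   192.168.49.2   docker://24.0.4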

What is a Pod?

Containers that we build are deployed to a Kubernetes object named a Pod. We cannot directly deploy a container to the cluster itself.

Pods are the smallest unit of execution in Kubernetes. They are composed of one or more containers.

Pods themselves are ephemeral by design. If a pod fails, the scheduler can schedule the creation of a new replica of that pod to continue operations.

Each Pod has a unique IP address that other Pods within the cluster can use to communicate with it.

Apps can be scaled by spinning up more Pods, each running another instance of the app container.
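As a minimal sketch (the Pod name and image here are only illustrative), a single-container Pod can be declared like this:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.14.2
      ports:
        - containerPort: 80

In practice you rarely create bare Pods like this; you usually let a Deployment manage them, which is covered next.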

You can read more about Pods here.

What is a Deployment?

Deployments are a way to describe the desired state of your application.

The Deployment Controller changes the actual state to the desired state at a controlled rate.

You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

Some use cases:

  1. Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see if it succeeds or not.
  2. Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
  3. Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the revision of the Deployment.
  4. Scale up the Deployment to facilitate more load.
  5. Pause the Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to start a new rollout.
  6. Use the status of the Deployment as an indicator that a rollout has stuck.
  7. Clean up older ReplicaSets that you don't need anymore.

An example of a deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80

This Deployment will create 3 replicated Pods running the nginx container (as denoted by .spec.replicas).

The container itself is denoted at .spec.template.spec.containers[0].

The .spec.selector field defines how the Deployment finds which Pods to manage. In this specification, we select a label that is defined in the Pod template (app: nginx).

We could create this deployment on our cluster by running:

$ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   0/3     0            0           1s
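Once the Deployment exists, several of the use cases listed earlier map directly onto kubectl commands; a few examples against this Deployment (the new image tag is only illustrative):

$ kubectl rollout status deployment/nginx-deployment                 # watch the rollout progress
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1   # update the PodTemplateSpec, creating a new ReplicaSet
$ kubectl rollout undo deployment/nginx-deployment                   # roll back to the previous revision
$ kubectl scale deployment/nginx-deployment --replicas=5             # scale up to handle more load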

A more sophisticated example:

# The deployment object
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  # Manages pods with label `app: nginx` and manages the replicaset defined within
  labels:
    app: nginx
# Creates a replicaset of 3
spec:
  replicas: 3
  selector:
    # This replicaset will manage pods with label `app: nginx`
    matchLabels:
      app: nginx
  # This takes care of rolling updates
  minReadySeconds: 10
  strategy:
    rollingUpdate:
      maxSurge: 1
      # Update must be done in a way that at least 3 pods are running
      maxUnavailable: 0
  template:
    metadata:
      # Where the label is denoted for the app container
      labels:
        app: nginx
    # Creates a pod with the app container nginx:1.14.2
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80

The comments throughout the YAML configuration explain how it works.

What is a Service?

Consider the example where a Pod goes down within a Node and a ReplicaSet brings it back up: we know the new Pod will have a new IP address. How does the rest of the cluster know where to find it?

Services are a way to expose a set of Pods behind a single, stable endpoint: rather than connecting to individual Pod IPs, clients connect to the Service, which distributes traffic across the Pods behind it.

Each Service has a name. For example, there might be a Service that helps web server Pods connect to database Pods. At the same time, there might be a frontend Service that connects users to the web servers.

The Service discovers new Pods and keeps distributing traffic correctly as Pods come and go.

How does it work? We tell the Service which Pods to route traffic to using a selector (for example app: frontend).
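A minimal sketch of such a Service (the name and ports are illustrative): it selects Pods labelled app: frontend and forwards traffic from port 80 on the Service to port 8080 on the Pods:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80         # port the Service listens on
      targetPort: 8080 # port the selected Pods listen on

With no type specified, this Service defaults to ClusterIP, the first of the types below.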

A Service can be one of a few different types:

Type | Description
ClusterIP | The default type of Service, only accessible from within the cluster. If you are outside the cluster and want to access the Service, you cannot.
NodePort | Accessible from outside the cluster via a cluster-wide port. We pick (or are assigned) a port number, and the combination of a node IP and that node port reaches the Pods from outside the cluster.
LoadBalancer | Cloud-specific implementation (AWS vs Google Cloud vs Azure) that provisions an external load balancer.

In production, the LoadBalancer type is the most common; NodePort is rarely used in practice to expose applications.

A LoadBalancer is accessed from outside the cluster, has a DNS name and also includes features such as SSL termination, Web Application Firewall (WAF) integration, access logs, health checks, etc.
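As a sketch, exposing the same Pods externally is mostly a matter of setting the type; cloud-provider-specific annotations (for SSL certificates, access logs and so on) are omitted here:

apiVersion: v1
kind: Service
metadata:
  name: frontend-public
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080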

Summary

Today's post recapped some of the important definitions and terms in Kubernetes. We will be referencing these terms throughout the upcoming posts as we dive deeper into Kubernetes and EKS.

