Kubernetes Basics: Pods, Nodes, Containers, Deployments and Clusters
Anton Putra

Kubernetes won the Container Orchestration War.
If you are a developer, a DevOps engineer, or an SRE, you need to know at least the basics of how Kubernetes operates.
In this video, we will go over the basic concepts such as pods, nodes, containers, deployments, and clusters.
Also, we will touch on ingresses and load balancers.
Let's start with a Node, which is the smallest unit of computing hardware in a Kubernetes cluster.
It is a single machine where your applications will run.
It can be a physical server in a data center or a virtual machine in the public cloud such as AWS or GCP.
You can even build a Kubernetes cluster from multiple Raspberry Pis.
Thinking of a machine as a Node allows us to insert a level of abstraction.
Now you don't need to worry about any single server in your cluster or its unique characteristics, such as how much memory or CPU it has.
Instead, you can delegate the decision of where to deploy your service to Kubernetes based on the spec that you provide.
Also, if something happens with a single node, it can be easily replaced, and Kubernetes will take care of the load distribution for you.
Sometimes it can be helpful to work with individual servers, but it's not the Kubernetes way. In general, it should not matter to you or to your application where it runs.
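To make that concrete, here is a minimal sketch of a pod spec with resource requests; the name, image, and numbers are placeholders, but the requests section is what the scheduler uses to pick a suitable node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app               # illustrative name
spec:
  containers:
    - name: my-app
      image: nginx:1.25      # any container image works here
      resources:
        requests:            # what the scheduler matches against node capacity
          cpu: 500m          # half a CPU core
          memory: 256Mi
        limits:              # hard caps enforced at runtime
          cpu: "1"
          memory: 512Mi
```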
Multiple nodes are combined into a node pool.
When you deploy a service, Kubernetes will inspect the individual nodes for you and select one based on the available CPU, memory, and other characteristics. If, for some reason, that node fails, Kubernetes will make sure that your application is rescheduled and stays healthy.
You can have multiple node pools (sometimes called instance groups) in your cluster.
For example, you can have ten nodes with high CPU and low memory to run CPU-intensive tasks, and another node pool would have high memory and low CPU.
In the cloud, it's very common to separate node pools into on-demand nodes and spot nodes, which are much cheaper but can be reclaimed at any moment.
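Pinning a workload to a particular pool is typically done with node labels and a nodeSelector. In this sketch, the `pool: high-cpu` label is an assumption; cloud providers attach their own pool labels, and you can add custom ones when creating a pool:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-intensive-task
spec:
  nodeSelector:
    pool: high-cpu              # hypothetical label on the high-CPU node pool
  containers:
    - name: worker
      image: my-batch-job:1.0   # illustrative image
```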
Since applications running on your cluster aren't guaranteed to run on a specific node, you cannot use the local disk to save any data.
If the application saves something on the local file system and then is relocated to another node, the file will no longer be there.
That's why you should only use the local disk as a temporary location, such as for a cache.
To store data permanently, Kubernetes uses Persistent Volumes.
While the CPU and memory resources of all nodes are pooled and managed by the Kubernetes cluster, persistent file storage is not.
Instead, local or cloud drives can be attached to the cluster as a Persistent Volume.
You can think about it as plugging an external hard drive into the cluster.
Persistent Volumes provide a file system that can be mounted to the cluster without being associated with any particular node.
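As a rough sketch, you claim storage with a PersistentVolumeClaim and mount it into a pod; the size, names, and image below are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce          # mountable by one node at a time
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
    - name: app
      image: busybox:1.36    # illustrative image
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data   # files written here outlive the pod
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim
```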
To run an application on the Kubernetes cluster, you need to package it as a Linux container.
Containerization allows you to create self-contained Linux execution environments.
Any application and all of its dependencies can be bundled into a single image and then easily distributed.
Anyone can download the image and deploy it on their infrastructure with minimal setup required.
Usually, creating Docker images is a part of the CI/CD pipeline.
You check out the code, run some unit tests and then build an image.
You can run multiple applications in a single container, but you should limit yourself to one process per container if possible. It's better to have a lot of small containers than one large one.
If the container has a tight focus, updates are easier to deploy, and issues are easier to debug.
Kubernetes doesn't run containers directly; instead, it wraps one or more containers into a higher-level structure called a pod.
Any containers in the same pod will share the same resources and local network.
Containers can easily communicate with other containers in the same pod as though they were on the same machine while maintaining a degree of isolation from others.
Pods are used as the unit of replication in Kubernetes.
If your application needs to be scaled up, you simply increase the number of pods.
Kubernetes can be configured to automatically scale up and down your application based on the load.
You can use CPU, memory, or even custom metrics such as the number of requests to the application.
Usually, you would run multiple copies of the same application to avoid downtime if something happens to a single node.
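As a sketch, Kubernetes expresses such a scaling policy with a HorizontalPodAutoscaler; it targets a deployment (which we cover below), and the name and thresholds here are assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app             # hypothetical deployment to scale
  minReplicas: 2             # keep at least two copies for availability
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above 70% average CPU
```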
Just as a container can run multiple processes, a pod can hold multiple containers. However, since pods are scaled up and down as a unit, all containers in a pod must scale together, regardless of their individual needs. This can lead to wasted resources and an expensive bill. To avoid that, pods should remain as small as possible, typically holding a main container and its tightly coupled helper containers, usually called sidecars.
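A minimal sketch of such a pod; the sidecar here simply polls the main container over the shared localhost network, and all names and images are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web              # main container
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: sidecar          # hypothetical helper container
      image: busybox:1.36
      # Reaches the main container via the pod's shared localhost network.
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 60; done"]
```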
Pods are the basic unit of computation in Kubernetes, but they are not typically created directly in the cluster. Instead, Kubernetes provides another level of abstraction: the Deployment.
A deployment's primary purpose is to declare how many replicas of a pod should be running at a time.
When a deployment is added to the cluster, it will automatically spin up the requested number of pods and then monitor them.
If a pod fails, the deployment will automatically re-create it.
Using a deployment, you don't have to deal with pods manually.
You can just declare the desired state of the system, and it will be managed for you automatically.
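Here is a minimal deployment sketch; the names, labels, and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # desired number of pod copies
  selector:
    matchLabels:
      app: my-app            # must match the pod template labels
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25  # illustrative image
          ports:
            - containerPort: 80
```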
By now, we have learned about some core components in Kubernetes.
We can run the application in the cluster with a deployment, but how can we expose our service to the internet?
By default, Kubernetes provides isolation between pods and the outside world.
If you want to communicate with a service running in a pod, you have to open up a channel for communication.
There are multiple ways to expose your service.
If you want to expose the application directly, you can use a Service of type LoadBalancer. It maps one application to one load balancer.
In this case, you can use almost any kind of protocol: TCP, UDP, gRPC, WebSockets, and others.
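A sketch of such a service; the selector must match your pod labels, and the names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer         # asks the cloud provider for a load balancer
  selector:
    app: my-app              # routes to pods carrying this label
  ports:
    - port: 80               # port exposed by the load balancer
      targetPort: 80         # port the container listens on
```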
Another popular method is the Ingress controller.
There are many different ingress controllers available for Kubernetes, each with different capabilities.
When using an ingress controller, you share a single load balancer between all your services and use subdomains or paths to direct traffic to a particular application within the cluster.
Ingresses only support the HTTP and HTTPS protocols, and they are more complicated to set up and maintain over time than simple load balancers.
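As a sketch, a single ingress can route two hostnames to different services; the hosts, service names, and ingress class below are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx    # hypothetical installed controller
  rules:
    - host: app.example.com  # subdomain routed to the web app
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
    - host: api.example.com  # second subdomain, second service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api   # hypothetical second service
                port:
                  number: 80
```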
If you want more videos like this, subscribe to my channel.
Thank you for watching, and I'll see you in the next video.