Before containers became a thing, applications were deployed on VMs or directly on the OS. This approach had numerous problems and limitations, including cost, efficiency, scalability, and portability, that demanded a new, flexible way of deploying software, leading to the birth of containers and then Kubernetes.
Containers were in use before Kubernetes existed and became even more widely adopted with the creation of Docker, which was initially released in 2013. Docker was great and solved many problems teams faced at the time, including microservice architecture, scaling, and continuous deployment, but managing containers at scale remained a problem. Google had been deploying containers at large scale using some pre-Kubernetes tools that gave birth to Kubernetes, and that's another story on its own. In just five years, Kubernetes has been adopted by all major cloud vendors like AWS, IBM, Oracle, and Azure, with some vendors tweaking it to create their own versions. Kubernetes now has a vibrant community with good support and documentation, and it supports different container engines, including rkt and Docker.
A Kubernetes setup consists of a cluster made up of a master node and worker nodes that work together to run your workloads and manage the state of the cluster while communicating over a network managed by Kubernetes. The setup also includes a few addons used for inter-component communication, management, and logging. The master manages the state of the cluster while the worker nodes run your applications in containers. The master and worker nodes can be VMs or actual servers.
The master runs processes that control the other nodes. These processes provide the API for interacting with the cluster, watch and change cluster state, and handle scheduling while keeping track of resource requirements, status, and policies for each workload.
The workers run the workloads, or applications, represented in the cluster state. The container images for your workloads are stored in a container registry like gcr.io or docker.io. The containers run on an internal network and are exposed to external networks using load balancers.
As you can see, the master node is accessible via the Kubernetes REST API, CLI tools, or the Dashboard (which itself uses the REST API), while the worker nodes run containers that are started and stopped by the master and exposed outside the internal network via a load balancer.
The master node runs four primary services:

- kube-apiserver - exposes the Kubernetes API and is the front end of the control plane.
- etcd - a consistent, highly available key-value store used as the backing store for all cluster data.
- kube-scheduler - watches for newly created workloads with no assigned node and selects a node for them to run on.
- kube-controller-manager - runs the controllers that watch the cluster state and work to move it toward the desired state.
Each non-master node runs two services and a container engine:

- kubelet - an agent that makes sure containers are running in a Pod as specified.
- kube-proxy - a network proxy that maintains network rules and enables communication to your workloads.
- A container engine such as Docker or rkt that actually runs the containers.
And the cluster setup usually includes addons like:

- DNS - cluster DNS for service discovery.
- Dashboard - a web-based UI for managing the cluster.
- Monitoring and logging - for recording cluster-level metrics and collecting container logs.
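On a running cluster, many of these control-plane and addon components show up as pods in the kube-system namespace; a hedged sketch (the exact pod names vary by distribution and version):

```shell
# List the system pods: apiserver, etcd, scheduler,
# controller-manager, DNS, and friends typically appear here
kubectl get pods --namespace kube-system
```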
Kubernetes has several benefits, including all the benefits of using containers. Imagine you have a very complex application that requires real-time monitoring, a microservice architecture, continuous deployment, load balancing, security, auto-scaling, seamless multi-cloud deployment, and more than I can exhaust here; Kubernetes makes it super easy to achieve all of that. So in a nutshell, here are some of the benefits of using Kubernetes:
As much as Kubernetes can be used to deploy any kind of application, I would like to point out where Kubernetes works best.
Now that we have understood Kubernetes and how it works internally, we will look at the components Kubernetes exposes to us to make a cluster work. I have found learning by doing to be the best way to learn, so moving forward we will actually start using Kubernetes and interacting with it.
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop for users looking to try out Kubernetes or develop with it day-to-day. This matters because running a full Kubernetes cluster in a local environment is resource-consuming and has a lot of setup and configuration overhead that is better left to a staging or production environment. Both Kubernetes and Minikube expose the tools needed to interact with a cluster, and Minikube supports almost all the Kubernetes features we may want to learn. Anything not supported on Minikube we will learn on cloud-managed services and on the Kubernetes cluster that we will create.
To install Minikube, just check out this link. If the link is broken, browse to the official Kubernetes docs.
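Once installed, a typical first session looks like the following (a sketch; it assumes Minikube and kubectl are already on your PATH and that virtualization is available on your machine):

```shell
# Start a single-node Kubernetes cluster inside a local VM
minikube start

# Confirm kubectl can reach the new cluster
kubectl cluster-info

# List the nodes; the single Minikube node should report Ready
kubectl get nodes

# Tear the cluster down when you are done
minikube stop
```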
Kubectl is a command-line utility for interacting with the Kubernetes API while deploying applications, monitoring, and so on. Kubectl basically manipulates the objects that represent the state of the cluster. We will see this in action in the next part of this series.
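A few common kubectl invocations, as a sketch (they assume a reachable cluster; my-pod is a hypothetical pod name):

```shell
kubectl get pods             # list pods in the current namespace
kubectl get services         # list services
kubectl describe pod my-pod  # detailed state and events for one pod
kubectl logs my-pod          # container logs from that pod
```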
Now let's look at the entities that represent the state of a cluster. Kubernetes Objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster, and it is these objects that we manipulate to change the cluster. Specifically, they can describe the properties of workloads, services, and other object types. Let's look at the significance of two of these properties that we will be using moving forward.
Spec - This property represents the characteristics of the object as we desire them.
Status - This property describes the actual state of the object, and is supplied and updated by the Kubernetes system.
For example, a Kubernetes Deployment is an object that can represent an application running on your cluster. When you create the Deployment, you might set the Deployment spec to specify that you want three replicas of the application to be running. The Kubernetes system reads the Deployment spec and starts three instances of your desired application, updating the status to match your spec. If any of those instances should fail (a status change), the Kubernetes system responds to the difference between spec and status by making a correction, in this case starting a replacement instance.
When using the kubectl command-line utility, we mostly provide the object representation in a .yaml file. Kubectl converts the .yaml to JSON when calling the Kubernetes API. Let's see an example of a .yaml file.
```yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
Don't worry about what the above .yaml means or what Replicas are, as we will talk about them until you start dreaming about them.
While creating a k8s object representation, there are some required fields that must be specified, and these include:
apiVersion - Which version of the Kubernetes API you're using to create this object.

kind - What kind of object you want to create.

metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace.

spec - The precise format of the object spec is different for every Kubernetes object and contains nested fields specific to that object. For example, the spec format for a Pod can be found here, and the spec format for a Deployment can be found here.
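To make the four required fields concrete, here is a minimal Pod manifest (a sketch; the name nginx-pod is just illustrative):

```yaml
apiVersion: v1    # API version for core objects like Pod
kind: Pod         # the type of object to create
metadata:
  name: nginx-pod # a name unique within the namespace
spec:             # the desired state, specific to the Pod type
  containers:
  - name: nginx
    image: nginx:1.7.9
```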
Let's now look at ways to manage objects inside a cluster.
As we have seen above, when managing or changing the state of a cluster, we can use the Dashboard, the kubectl command-line utility, or the REST API directly to apply changes or delete objects. For example, to apply the above Deployment example, we can use:
kubectl create -f our_deployment.yaml
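Beyond create, kubectl offers a few other object-management commands worth knowing; a hedged sketch (it assumes the manifest file above and a reachable cluster):

```shell
# Declarative: create or update objects to match the file
kubectl apply -f our_deployment.yaml

# Inspect the resulting Deployment and its pods
kubectl get deployments
kubectl get pods

# Remove the objects described in the file
kubectl delete -f our_deployment.yaml
```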
You can read more about object management here, and I recommend you do, as it will speed up your learning.
We have come to the end of this part. I hope you have now seen the big picture of where K8s fits in the DevOps chain and what cool things it can do. Let's meet again and start deploying some workloads. I love cloud native 🚀 🚀 🚀 🚀
Here are my other articles related to this one.
I am still working on more Kubernetes tutorials, and you can check out my blog for more.
Thank you for finding the time to read my post. I hope you found it helpful and insightful. I enjoy creating content like this for knowledge sharing, my own mastery, and reference.
If you want to contribute, you can do any or all of the following 😉. It will go a long way! Thanks again and Cheers!