
Kubernetes on a Bare-Metal Setup.

Posted Thursday, December 26th, 2019

Tags: Kubernetes, Docker, Linux

This is a setup walkthrough and assumes a lot of prior knowledge. If you are new to Kubernetes and would like to learn about it first, read my previous post introducing Kubernetes. The goal of this post is to set up a single-master, three-node Kubernetes cluster with a working Pod network, a network LoadBalancer and storage provisioning. Sit back and relax, because this is going to feel great. It certainly did for me, as I have been trying to set up K8s for longer than I can remember. I did this as part of learning Kubernetes; I could use a managed Kubernetes offering, but the cost of running my learning lab on environments like GKE is too high for me.

Setup Considerations

1. Cluster Topology

There are three main topologies a K8s cluster can have. Which one to choose depends on the purpose of the cluster and your availability needs.

  • Single Node Cluster - Not recommended in production. One of the key reasons to use Kubernetes is its ability to handle node failures, so building a single-node cluster defeats much of the point of using Kubernetes. If you primarily want the benefits of containerization, consider just installing Docker on your node or VM and running your services with a restart policy (see the sketch after this list). If you need a single-node setup for learning, Minikube is your best option.
  • Single Master Multi Node Cluster - The most common topology. It has one master and many nodes. The master is still a single point of failure, but since your workloads don't run on the master, the idea is to tolerate node failures. This is what this post will focus on.
  • High Availability Mode - Has more than one master and many nodes. The setup behind this one is more involved at this point, but you can read about it here and set it up. I will do a future guide on this one too.
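As an aside, here is roughly what the single-node, Docker-only alternative could look like. This is a minimal sketch; the container name, port mapping and image are just placeholders.

# Run a service directly under Docker with a restart policy so it comes
# back after crashes and host reboots - no Kubernetes involved.
# "my-service", the port mapping and the nginx image are placeholders.
docker run -d --name my-service --restart unless-stopped -p 8080:80 nginx:stable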

2. Master-Node System Requirements

For this, I rented a bare-metal server with 24GB RAM and a 4-core Intel CPU and created four VMs on top of Proxmox as listed below. According to the K8s docs, 2GB RAM for the master and 1GB RAM per node is enough, so mine was more than sufficient. You can still use this tutorial to set up more than 3 nodes; you only need to add the extra servers to my commands.

  • Master - Ubuntu 18.04 LTS - 4GB RAM
  • Node 1 - Ubuntu 18.04 LTS - 4GB RAM
  • Node 2 - Ubuntu 18.04 LTS - 4GB RAM
  • Node 3 - Ubuntu 18.04 LTS - 4GB RAM

3. Pod Network

There are several supported options for cluster networking, see here. I decided to go with Calico after reading this article.

So you see, Calico came out all smiles, and I wanted to avoid network trouble because I don't particularly enjoy debugging networks :).

4. Storage Volumes

Kubernetes also supports many storage addons to choose from; they are listed and explained here. I initially decided to go with StorageOS, and it worked after the initial setup. On the second setup, I hit an issue where Pods with PVCs were stuck in a pending state with no events, even though the PVCs were perfectly created and mounted. I decided to check out other options and went with Longhorn.

I will include the procedure for both, as the StorageOS issue could have been my own silly mistake that you may not encounter.

5. Network Load Balancer

Kubernetes does not come with a network LB. Cloud providers like Google's GKE, Azure Kubernetes Service or Amazon's EKS have network LoadBalancers that any Kubernetes cluster created through their K8s services plugs into automatically, so there is zero setup for this. For bare-metal Kubernetes this was a big problem, because without a network LB it is hard to expose services to the outside. The only option I have found is MetalLB, a young project, and that is what I will implement here. MetalLB is still young, and if you intend to use it in production, you are advised to read this. This is a quote from the creators:

MetalLB is a young project. You should treat it as a beta system. The project maturity page explains what that implies.

6. Container Engine

It is also good to know that you can choose which container engine to use with Kubernetes. Docker is of course the most popular choice here, but there are many other options you could use. For me, Docker is just awesome.

Setting up the cluster.

Disable swap

Disable swap on all hosts that will form part of your cluster. This is because the kubelet will not run by default while swap is enabled.

sudo swapoff -a

Open /etc/fstab and comment out the line that mounts swap, so it stays disabled after a reboot.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda2 during installation
UUID=ee12733a-1b01-44fb-bdad-0081d1e6e3f2 /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda1 during installation
# UUID=b2a408dc-1465-4e4f-9d4a-92c5f954cebd none            swap    sw              0       0
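If you prefer not to edit the file by hand, a one-liner like the following should do the same thing; it simply comments out any fstab entry of type swap and keeps a backup of the original file.

# Comment out every swap entry in /etc/fstab (writes a .bak backup first).
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab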

Disable Firewall

Disable ufw if installed. You could allow the necessary ports through the firewall, but it is hard to know up front every port that K8s will need, so it is easier to disable the firewall and open specific ports later once you know what to target. I did this to save time.

sudo ufw disable

Configure hostnames and IP

Give each host a static IP and a hostname, and map all the hostnames on every host so the nodes can reach each other by name. In my case z-server-1 points to 192.168.122.100 and is the master. Open /etc/hosts and edit as below on each VM, changing the hostnames and IPs to match your scenario.

127.0.0.1	localhost

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.122.100 z-server-1
192.168.122.101 z-server-2
192.168.122.104 z-server-3
192.168.122.105 z-server-4
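For the static IPs themselves, Ubuntu 18.04 uses netplan. Below is a minimal sketch for the master; the file name, interface name (ens18), gateway and DNS address are assumptions you will need to adjust to your environment.

# /etc/netplan/01-static.yaml (hypothetical file name)
network:
  version: 2
  ethernets:
    ens18:                          # assumed interface name - check with `ip addr`
      dhcp4: false
      addresses: [192.168.122.100/24]
      gateway4: 192.168.122.1       # assumed gateway
      nameservers:
        addresses: [192.168.122.1]  # assumed DNS server

Apply the change with sudo netplan apply.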

Configure iptables

Create /etc/sysctl.d/k8s.conf if it does not exist and add the lines below. This makes bridged traffic visible to iptables, which allows several network-related things inside K8s to work, like NAT for LBs and Services.

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Run sysctl --system as root to apply.
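On some installs these settings only take effect once the br_netfilter kernel module is loaded, so it may be worth loading it explicitly before applying:

# Load the bridge netfilter module and make it persist across reboots.
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf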

Installing packages.

I used Kubeadm, a tool by the Kubernetes maintainers that bootstraps a cluster following best practices. We also install Docker as the container engine, along with kubelet and kubectl.

Here is an entire script to install all packages in Ubuntu.

sudo apt-get remove docker docker-engine docker.io containerd runc

sudo apt-get update

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common -y

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo apt-key fingerprint 0EBFCD88

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io -y

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

sudo apt install kubeadm kubelet kubectl -y
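It is also common practice, and recommended in the kubeadm install docs, to pin these packages so an unattended upgrade doesn't bump your cluster components unexpectedly:

# Prevent apt from upgrading the Kubernetes packages automatically
sudo apt-mark hold kubeadm kubelet kubectl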

Setup master

Initialize the master with the network you set up your cluster in.

sudo kubeadm init --pod-network-cidr=192.168.122.0/24

Note the output; it contains the token needed for joining other nodes to the master. Copy it and keep it somewhere safe so you can find it when it is time to join the nodes.

To start using your cluster, run the following as a regular user to add the cluster access config that kubectl needs.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install a Pod Network (Calico)

Next, on our Kubernetes master we need to install a Pod network. As discussed earlier, we will be using Calico. Here are the commands to run on your Kubernetes master to install Calico. First, let's install etcd, which Calico uses to store cluster-related data.

kubectl apply -f https://raw.githubusercontent.com/zemuldo/k8s-setup/master/etcd/manifest.yml

Install Calico. I copied the release at the time to my GitHub to keep the state of my install for easy bootstrapping. I used Calico v3.10, but you can also use the yaml from their docs.

kubectl apply -f https://raw.githubusercontent.com/zemuldo/k8s-setup/master/calico/v3.10/manifest.yml
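It can take a minute or two for the Calico pods to pull their images and start. Something like the following lets you watch them come up and confirm the master goes Ready once the pod network is working:

# Watch the Calico and other system pods until they are all Running
kubectl get pods -n kube-system -w

# The master should report STATUS Ready once the pod network is up
kubectl get nodes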

Join the Nodes

Now paste the join command that was generated by the master init command on each of the three nodes to join them to the cluster.

sudo kubeadm join <master-cluster-ip>:6443 --token <something.token> --discovery-token-ca-cert-hash sha256:<hash>

My command was

kubeadm join 192.168.122.101:6443 --token dvv9an.gsrke1g821nmntam \
    --discovery-token-ca-cert-hash sha256:7bdbe8d5ef0a72d74ce770f5ea446cf95bcf61d1321cda5e6c0d62d0db988589

A note here: for nodes to join the cluster, some state, such as the system time, must be in sync. I applied a snapshot to one of my VMs and the join command kept failing until I resynced the time.

Also, the token generated by the master init command may expire. Run the command below on the master to generate a new one if you get invalid token errors.

sudo kubeadm token create --print-join-command
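Back on the master, you can confirm that all three nodes have joined and gone Ready:

# All four machines (master + 3 nodes) should show up with STATUS Ready
kubectl get nodes -o wide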

Dashboard (on the master)

The next thing to do is to set up the Dashboard on our cluster. I used the latest version of the dashboard at the time; the current version should work just fine.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml

Expose the dashboard on the master node using kubectl proxy and access it over SSH from your local machine.

kubectl proxy --accept-hosts='^*$'

Set up an SSH tunnel to access the dashboard on the master node. Replace z-server-1 and the user with your master hostname or IP and your username. The command below forwards local port 8002 to port 8001 on the master over SSH.

ssh -fNTL localhost:8002:127.0.0.1:8001 <user>@z-server-1

Visit http://localhost:8002/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ in your browser to see the dashboard.

Create an admin service account called k8s-admin

kubectl --namespace kube-system create serviceaccount k8s-admin
kubectl create clusterrolebinding k8s-admin --serviceaccount=kube-system:k8s-admin --clusterrole=cluster-admin

Get a token and use it to log in. Copy the token printed by the command below and use it to log in to the dashboard.

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep k8s-admin | awk '{print $1}')

You should then land on the dashboard overview for your cluster.

Network LoadBalancer

Here we install MetalLB, a software network load-balancer implementation that lets us expose services outside of K8s.

To install MetalLB, I used version v0.8.3, the latest at the time.

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml

This deploys MetalLB into the cluster, but we still have to create a ConfigMap for MetalLB with the range of IP addresses it may allocate and the protocol to use. It supports different protocols for integrating into the network, but layer2 is straightforward and that's what I set up.

Here is my ConfigMap. Apply one that matches your scenario.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.122.150-192.168.122.250
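To check that MetalLB is handing out addresses, you can create a throwaway Service of type LoadBalancer and see whether it gets an EXTERNAL-IP from the pool above. The deployment name and nginx image below are just placeholders for illustration.

# Create a test deployment and expose it through a LoadBalancer service
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80

# EXTERNAL-IP should be filled in from the 192.168.122.150-250 pool
kubectl get svc lb-test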

Cluster Storage

Longhorn

Longhorn was pretty easy to set up. All I did was run the command below, which creates a number of objects in your cluster, including a storage class called longhorn, ready for PVC provisioning.

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml

The Longhorn UI is accessible through a LoadBalancer-type service that by now should have an IP (provisioned by MetalLB) and be reachable on port 80.

kubectl -n longhorn-system get svc

In my case the dashboard is accessible via http://192.168.122.151 and already shows a few provisioned PVCs.
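To try it out, a PVC that requests the longhorn storage class should get a volume provisioned automatically. The claim name and size below are just examples.

kubectl apply -f - <<END
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-test-pvc   # example name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi          # example size
END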

StorageOS

Now let's install StorageOS, the other cluster storage addon I tried. I used the official guide and the StorageOS operator to set it up.

Install the StorageOS operator using the following yaml manifest.

kubectl create -f https://github.com/storageos/cluster-operator/releases/download/1.5.1/storageos-operator.yaml

Before deploying a StorageOS cluster, create a Secret defining the StorageOS API Username and Password in base64 encoding.

The API username and password are used to create the default StorageOS admin account which can be used with the StorageOS CLI and to login to the StorageOS GUI. The account defined in the secret is also used by Kubernetes to authenticate against the StorageOS API when installing with the native driver.

kubectl create -f - <<END
apiVersion: v1
kind: Secret
metadata:
  name: "storageos-api"
  namespace: "storageos-operator"
  labels:
    app: "storageos"
type: "kubernetes.io/storageos"
data:
  # echo -n '<secret>' | base64
  apiUsername: c3RvcmFnZW9z
  apiPassword: c3RvcmFnZW9z
END

Define a new StorageOS cluster.

kubectl create -f - <<END
apiVersion: "storageos.com/v1"
kind: StorageOSCluster
metadata:
  name: "z-storageos"
  namespace: "storageos-operator"
spec:
  secretRefName: "storageos-api" # Reference the Secret created in the previous step
  secretRefNamespace: "storageos-operator"  # Namespace of the Secret
  k8sDistro: "kubernetes"
  images:
    nodeContainer: "storageos/node:1.5.1" # StorageOS version
  resources:
    requests:
      memory: "512Mi"
END

After the cluster pods are ready, create a new storage class.

kubectl create -f - <<END
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/storageos
parameters:
  pool: default
  description: Kubernetes volume
  fsType: ext4
  adminSecretNamespace: default
  adminSecretName: storageos-secret
END

Finally, let's go back to the terminal and make the new StorageClass we just created the default one, so that subsequent deployments that need storage don't have to reference it explicitly:

kubectl patch storageclass fast -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
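With fast set as the default class, a PVC that does not specify a storageClassName should now be provisioned by StorageOS. A quick example, with the claim name and size as placeholders:

kubectl apply -f - <<END
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storageos-test-pvc   # example name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi           # example size; no storageClassName, so the default (fast) is used
END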

That completes the StorageOS installation. There is also a StorageOS dashboard, which you can access by running the following.

kubectl --namespace storageos port-forward svc/storageos 5705

Again, set up an SSH tunnel to port 5705 on the master host and bind it to local port 5700 as below.

ssh -fNTL localhost:5700:127.0.0.1:5705 <user>@z-server-1

Now just visit http://localhost:5700. The default username and password should be 'storageos/storageos', and you should land on the StorageOS dashboard.

And we are done! I hope this has been helpful to you. If you would like to see more stuff like this, you can check out my blog. I will update this with next steps as I work on other related tutorials. Cheers!