What is Kubeadm?
In this post I’m going to introduce kubeadm and show how to implement it using Vagrant in your own environment. Kubeadm is a tool with which you can create a minimum viable Kubernetes cluster that conforms to best practices. It is an excellent tool for setting up a working Kubernetes cluster in little time, and it follows the configuration best practices for a Kubernetes cluster. With kubeadm you can create a production-like cluster locally on a workstation for development and testing purposes; it makes the whole process easy by running a series of prechecks to ensure that the server has all the essential components and configuration to run Kubernetes. In summary, it is a simple way to try out Kubernetes, possibly for the first time, or a way for existing users to automate setting up a cluster and testing their applications. According to the official documentation:
“You can install and use kubeadm on various machines: your laptop, a set of cloud servers, a Raspberry Pi, and more. Whether you’re deploying into the cloud or on-premises, you can integrate kubeadm into provisioning systems such as Ansible or Terraform.”
Kubeadm is developed and maintained by the official Kubernetes community.
Are there alternatives to kubeadm?
There are several options developers can use to run a local Kubernetes cluster, such as minikube, kind, and k3s, all of which let you create a cluster in a few minutes. If you are a complete beginner, or a developer who only needs to deploy an application or run tests against applications in Kubernetes during CI/CD, I strongly recommend using one of them.
The minikube alternative is usually the first Kubernetes technology people find when they want to begin. It is a very simple solution to install on your laptop, and it is designed for learning and testing. Minikube requires minimal resources to run, so it is easy to install on your laptop and it is the easiest way to start familiarizing yourself with the kubectl command line. The drawback of this solution is that it is not possible to add other nodes, and consequently you cannot discover how the architecture really works or the full power of Kubernetes. So, if you want a deep understanding of how a production-like cluster works, I strongly recommend one of the other alternatives.
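For illustration, a minimal minikube session looks like this (assuming minikube and kubectl are already installed):
# Start a single-node cluster; the driver is auto-detected
minikube start
# Interact with it using kubectl
kubectl get nodes
# Tear it down when finished
minikube delete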
The kind alternative is another tool to deploy a Kubernetes cluster locally. Its specificity is that the cluster is deployed inside Docker containers. This solution lets you deploy any cluster topology you want, such as one master and several workers. These clusters are very easy to deploy and can be described in a very simple YAML file, as sketched below.
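As a sketch, a one-master, two-worker kind cluster can be described and created like this (assuming kind is installed; the config file name is arbitrary):
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
kind create cluster --config kind-config.yaml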
The k3s alternative is a lightweight Kubernetes distribution developed by Rancher. It was created for production use on small servers, IoT appliances, etc. It is a very good alternative to kubeadm if your laptop is limited, as you can test Kubernetes on smaller VMs.
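For reference, k3s can be installed with its official script:
curl -sfL https://get.k3s.io | sh -
# k3s bundles kubectl; verify the node is up
sudo k3s kubectl get nodes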
The kubeadm alternative is the best if you want to deploy a cluster with several nodes and discover the full potential of Kubernetes, but you need more computing resources. It brings a full production-level architecture. If you are preparing for Kubernetes certifications like CKA, CKAD, or CKS, you can use local kubeadm clusters to practice for the exam: bootstrapping and upgrading a cluster, and learning to troubleshoot its different components.
If you would like to deploy Kubernetes in the cloud, you can use kops, kubespray, or even kubeadm-based deployment methods.
In summary, all of these technologies are a good way to begin with Kubernetes. You just have to make a choice based on your objectives and resources, but among these alternatives I strongly recommend kubeadm, because it will give you a great overview of what a Kubernetes cluster can do.
Kubeadm Setup Prerequisites
In our case, we are going to use one master node and two worker nodes. If you do not have enough resources, you can adjust the amounts via the Vagrantfile. For kubeadm, the minimum is two nodes (one master and one worker node); if you want to add more nodes, you can also do that via variables in the Vagrantfile. My Vagrantfile is available on my GitHub and you can clone it as you wish. The master node should have a minimum of 2 vCPU and 2 GB RAM; for the worker nodes, a minimum of 1 vCPU and 2 GB RAM is recommended.
The port requirements, as listed in the official Kubernetes documentation, are:
- Master node (inbound TCP): 6443 (Kubernetes API server), 2379-2380 (etcd server client API), 10250 (kubelet API), 10257 (kube-controller-manager), 10259 (kube-scheduler)
- Worker nodes (inbound TCP): 10250 (kubelet API), 30000-32767 (NodePort Services)
The Vagrant File
To run the environment on our local machine we are going to use Vagrant to automate the VM provisioning. In our Vagrantfile we automate only the minimal steps, because the main objective of this post is to show, step by step, how to configure kubeadm for learning purposes. Of course, it is possible to automate everything via Vagrant.
If you are new to Vagrant, there are many tutorials on the Internet, including the official documentation. For now, let’s walk through the basics of the file.
NUM_MASTER_NODE = 1
NUM_WORKER_NODE = 2
IP_NW = "192.168.5."
MASTER_IP_START = 10
NODE_IP_START = 20

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.box_check_update = false

  # Provision Master Nodes
  (1..NUM_MASTER_NODE).each do |i|
    config.vm.define "master" do |node|
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubernetes-master"
        vb.memory = 2048
        vb.cpus = 2
      end
      node.vm.hostname = "master"
      node.vm.network :private_network, ip: IP_NW + "#{MASTER_IP_START + i}"
      node.vm.provision "setup-hosts", type: "shell", path: "ubuntu/vagrant/setup-hosts.sh" do |s|
        s.args = ["enp0s8"]
      end
      node.vm.provision "setup-dns", type: "shell", path: "ubuntu/update-dns.sh"
      node.vm.provision "install-docker", type: "shell", path: "ubuntu/install-docker.sh"
      node.vm.provision "allow-bridge-nf-traffic", type: "shell", path: "ubuntu/allow-bridge-nf-traffic.sh"
    end
  end

  # Provision Worker Nodes
  (1..NUM_WORKER_NODE).each do |i|
    config.vm.define "worker-#{i}" do |node|
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubernetes-worker-#{i}"
        vb.memory = 2048
        vb.cpus = 1
      end
      node.vm.hostname = "worker-#{i}"
      node.vm.network :private_network, ip: IP_NW + "#{NODE_IP_START + i}"
      node.vm.provision "setup-hosts", type: "shell", path: "ubuntu/vagrant/setup-hosts.sh" do |s|
        s.args = ["enp0s8"]
      end
      node.vm.provision "setup-dns", type: "shell", path: "ubuntu/update-dns.sh"
      node.vm.provision "install-docker", type: "shell", path: "ubuntu/install-docker.sh"
      node.vm.provision "allow-bridge-nf-traffic", type: "shell", path: "ubuntu/allow-bridge-nf-traffic.sh"
    end
  end
end
In summary, this file specifies the resources for the master and worker nodes. As written it provisions one master and two workers, but you can change that according to your needs (NUM_MASTER_NODE = 1 and NUM_WORKER_NODE = 2). Remember the minimum number of nodes required (one master and one worker).
There are auxiliary files that perform some setup tasks on the machines. See the files setup-hosts.sh, update-dns.sh, install-docker.sh and allow-bridge-nf-traffic.sh.
Vagrant gives us simple automation to bring up and tear down Kubernetes clusters on demand on your local workstation. You can bring up the environment with the vagrant up command:
vagrant up
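Once the boxes are up, standard Vagrant commands let you inspect and access them:
vagrant status          # list the VMs and their state
vagrant ssh master      # open a shell on the master node
vagrant destroy -f      # tear everything down when done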
Steps to setup a Kubernetes Cluster using kubeadm
These are the steps we will follow in this post:
- Install kubeadm, kubelet and kubectl on the master and worker nodes
- Set up and initialize kubeadm on the master node
- Install the Weave network plugin
- Join the worker nodes to the Kubernetes master node
- Deploy a sample app and validate the cluster
Install Kubeadm & Kubelet & Kubectl on master and worker nodes
For kubeadm to work properly, you need to disable swap on all the nodes using the following command.
sudo swapoff -a
Important: if you do not switch off swap here, you must pass the option --ignore-preflight-errors=Swap to the kubeadm init command.
Remember to remove the swap entry from /etc/fstab to make sure swap stays disabled after a reboot.
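One way to do that (a sketch that simply comments out any fstab line containing “swap”), followed by a quick check that swap is off:
sudo sed -i '/ swap / s/^/#/' /etc/fstab
free -h   # the Swap line should show 0B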
We already installed Docker via Vagrant, so we only need to add the Docker daemon configuration to use systemd as the cgroup driver:
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
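After writing this file, restart Docker so the new cgroup driver takes effect, and confirm it:
sudo systemctl restart docker
docker info | grep -i cgroup   # should report: Cgroup Driver: systemd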
Now install the following dependencies and download the repository signing key:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
Then add the Kubernetes apt repository:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Now let’s install kubelet, kubeadm and kubectl at the latest version; if you want a specific Kubernetes version, you can specify it instead.
sudo apt-get update
sudo apt-get install -y kubeadm kubectl kubelet
If you want to install a specific version, first search for the available versions:
apt update
apt-cache madison kubeadm
Then install them:
sudo apt-get install -y kubelet=1.21.0-00 kubectl=1.21.0-00 kubeadm=1.21.0-00
Don’t forget to hold the packages to prevent upgrades.
sudo apt-mark hold kubelet kubeadm kubectl
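You can confirm the installed versions on each node:
kubeadm version -o short
kubelet --version
kubectl version --client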
Setup kubeadm on Master Node
Here we need to set up environment variables to prevent kubeadm from binding to Vagrant’s default NAT interface.
IPADDR="192.168.5.11"   # replace with the IP of your master node as defined in your Vagrantfile
NODENAME=$(hostname -s)
sudo kubeadm init --apiserver-advertise-address=$IPADDR --apiserver-cert-extra-sans=$IPADDR --pod-network-cidr=10.100.0.0/16 --node-name $NODENAME
Let’s explain some of the options here:
- --apiserver-advertise-address is the IP address the API server advertises it is listening on. If not set, the default network interface is used; this is why we set the IPADDR environment variable.
- --apiserver-cert-extra-sans specifies extra Subject Alternative Names (SANs) for the API server serving certificate.
- --pod-network-cidr specifies the range of IP addresses for the pod network.
- --node-name specifies the node name.
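If kubeadm init fails partway through (for example on a preflight check), you can wipe the partial state and try again:
sudo kubeadm reset -f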
Now, if everything ran successfully, you can set up the kubectl command to interact with your cluster. This can be done on the master or on any Linux machine.
On Master you should use:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
Verify that you can now interact with your cluster from the master:
kubectl cluster-info
Kubernetes control plane is running at https://192.168.5.11:6443
CoreDNS is running at https://192.168.5.11:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-8b5l4         0/1     Pending   0          18h
kube-system   coredns-558bd4d5db-hgrn2         0/1     Pending   0          18h
kube-system   etcd-master                      1/1     Running   0          18h
kube-system   kube-apiserver-master            1/1     Running   0          18h
kube-system   kube-controller-manager-master   1/1     Running   0          18h
kube-system   kube-proxy-pq9ks                 1/1     Running   0          18h
kube-system   kube-scheduler-master            1/1     Running   0          18h
Note that in the previous output the coredns Pods are in the Pending state. Look at the corresponding deployment:
kubectl get deploy -n kube-system
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   0/2     2            0           19h
Indeed, it is not ready, but don’t worry: this behavior is expected until a network plugin is installed. Kubeadm does not configure any network plugin; you need to install one of your choice. In our lab we will use the Weave network plugin. Execute the following command to install it:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Once it is installed, you can check that all pods are working:
vagrant@master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-8b5l4         1/1     Running   0          19h
kube-system   coredns-558bd4d5db-hgrn2         1/1     Running   0          19h
kube-system   etcd-master                      1/1     Running   0          19h
kube-system   kube-apiserver-master            1/1     Running   0          19h
kube-system   kube-controller-manager-master   1/1     Running   0          19h
kube-system   kube-proxy-pq9ks                 1/1     Running   0          19h
kube-system   kube-scheduler-master            1/1     Running   0          19h
kube-system   weave-net-rncxm                  2/2     Running   1          60s
Join Worker Nodes To Kubernetes Master Node
After installing kubeadm, kubelet and kubectl on all nodes, we now have to join the worker nodes to the master node. To accomplish that, we use the kubeadm join command. This command was printed in the output when you ran kubeadm init. Don’t worry if you lost it; you can retrieve it at any time. Go to the master node and run the following command:
kubeadm token create --print-join-command
kubeadm join 192.168.5.11:6443 --token w698cw.80hhjgcn7h4lzpn3 --discovery-token-ca-cert-hash sha256:8e6fee693ffb014380180455db7f7a12832be89786579d6a9fd6c76cf199c30d
Now go to the worker nodes and run the command printed above as root:
vagrant@worker-1:~$ sudo kubeadm join 192.168.5.11:6443 --token w698cw.80hhjgcn7h4lzpn3 --discovery-token-ca-cert-hash sha256:8e6fee693ffb014380180455db7f7a12832be89786579d6a9fd6c76cf199c30d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster
Now, if you run the following command on the master node, you will see that the nodes have joined the cluster, and your Kubernetes cluster set up via kubeadm is complete.
vagrant@master:~$ kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master     Ready    control-plane,master   22h     v1.21.1
worker-1   Ready    <none>                 2m32s   v1.21.1
worker-2   Ready    <none>                 89s     v1.21.1
Deploy an application to validate our Kubernetes cluster
Now let’s get hands-on and put our Kubernetes cluster to work running applications. We will run a simple nginx application and expose it via a Service.
vagrant@master:/etc/kubernetes/manifests$ kubectl create deploy nginx-deploy --image=nginx --replicas=3 --dry-run=client -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deploy
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
Now let’s actually create the deployment and see on which node each pod runs. By default the master node has a taint, so pods are not scheduled there unless you add a toleration to your resource (pod or deployment) for that taint, or remove the taint from the master node.
vagrant@master:/etc/kubernetes/manifests$ kubectl create deploy nginx-deploy --image=nginx --replicas=3
deployment.apps/nginx-deploy created
vagrant@master:/etc/kubernetes/manifests$ kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   3/3     3            3           10s
vagrant@master:/etc/kubernetes/manifests$ kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP          NODE       NOMINATED NODE   READINESS GATES
nginx-deploy-8588f9dfb-4st4b   1/1     Running   0          18s   10.46.0.2   worker-2   <none>           <none>
nginx-deploy-8588f9dfb-77p9h   1/1     Running   0          18s   10.32.0.4   worker-1   <none>           <none>
nginx-deploy-8588f9dfb-qwk8g   1/1     Running   0          18s   10.32.0.5   worker-1   <none>           <none>
If you would like to remove the taint, you can use these commands:
vagrant@master:/etc/kubernetes/manifests$ kubectl describe node master | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule
vagrant@master:/etc/kubernetes/manifests$ kubectl taint node master node-role.kubernetes.io/master:NoSchedule-
node/master untainted
vagrant@master:/etc/kubernetes/manifests$ kubectl describe node master | grep -i taint
Taints: <none>
vagrant@master:/etc/kubernetes/manifests$
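If you later want to restore the default behavior, re-add the taint:
kubectl taint node master node-role.kubernetes.io/master:NoSchedule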
Now let’s generate our Service manifest as YAML:
vagrant@master:~$ kubectl expose deploy nginx-deploy --port=80 --name=service-deploy --type=NodePort --dry-run=client -o yaml > svc.yaml
vagrant@master:~$ cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deploy
  name: service-deploy
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deploy
  type: NodePort
status:
  loadBalancer: {}
Now let’s apply the generated file and access the deployed nginx:
vagrant@master:~$ kubectl apply -f svc.yaml
service/service-deploy created
vagrant@master:~$ kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP        27h
service-deploy   NodePort    10.99.225.136   <none>        80:32111/TCP   3m34s
vagrant@master:~$ kubectl describe svc service-deploy
Name:                     service-deploy
Namespace:                default
Labels:                   app=nginx-deploy
Annotations:              <none>
Selector:                 app=nginx-deploy
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.225.136
IPs:                      10.99.225.136
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32111/TCP
Endpoints:                10.32.0.4:80,10.32.0.5:80,10.46.0.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Test the exposed service:
vagrant@master:~$ curl http://10.99.225.136:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
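Because the Service is of type NodePort, it is also reachable from your host machine on any node’s IP at the allocated port (32111 in this run; the worker IPs come from our Vagrantfile):
curl http://192.168.5.21:32111
curl http://192.168.5.22:32111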
If you would like to know more about what happens under the hood and what the kubeadm init command does, with the aim of sharing knowledge on Kubernetes cluster best practices, I strongly recommend reading this page of the Kubernetes official documentation: Implementation details.
That’s all for now. I hope this post was useful.