Installing Kubernetes - The Hard Way

Swapneil Kumar Dash
16 min read · Jun 1, 2021


In this blog I am going to explain how I set up a simple 4-node (2 master and 2 worker) cluster on my local machine. It was really fun to do, gave me a lot of experience and confidence to play around with this technology, and is really helpful for my preparation for the CK(A, AD and S) trio.

So let's get started with the setup.

PS: All of the setup was done on a Windows host machine, but the same can be done on Linux or macOS as well.

High Level Architecture for the setup:

Planned HLD

Pre-Requisites:

  1. Minimum 8 GB RAM on the machine, to allocate 1 GB RAM and 1 CPU to each node. As per the above diagram, 5 VMs will be set up: 4 for the K8s nodes and 1 for the load balancer.
  2. VirtualBox and Vagrant need to be installed on the host machine. Here is the link to download VirtualBox and here is the one for Vagrant.

Steps for Setup:

Step 1:

Follow the steps below to bring up the VMs:
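
A minimal sketch of this step, assuming the Vagrant configuration comes from the kubernetes-the-hard-way repository referenced at the end of this post (the exact repository layout may differ):

git clone https://github.com/mmumshad/kubernetes-the-hard-way.git
cd kubernetes-the-hard-way/vagrant

# Bring up all 5 VMs (2 masters, 2 workers and 1 load balancer)
vagrant up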

Observe that the setup of the above-mentioned architecture starts automatically, as shown:

vagrant setup for K8 nodes and Load balancer

Step 2:

  • Once the setup is complete, we can see 5 VMs installed and running in our VirtualBox window as shown:
List of running VM’s
  • Now we can log in to any of the VMs using vagrant ssh and the individual private key that Vagrant has created for each VM, as shown:
SSH on to individual boxes using vagrant private keys
  • Now, for easy access to all the other nodes from one master node (to push and pull files), let's add the public key of the master-1 VM to the authorized_keys file of the other VMs, as shown below:
SSH public key of master-1 node added to other nodes

(Use the command ssh-keygen on the master-1 node to generate a public-private key pair, and add the public key to the authorized keys of the other VMs at .ssh/authorized_keys.)
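
A minimal sketch of that key distribution (the default key paths are assumptions):

# On master-1: generate a key pair (accept the defaults)
ssh-keygen

# Print the public key so it can be copied
cat ~/.ssh/id_rsa.pub

# On each of the other nodes (master-2, worker-1, worker-2, loadbalancer):
# append the public key printed above to the authorized_keys file
echo "<master-1 public key>" >> ~/.ssh/authorized_keys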

  • Now install kubectl on the master-1 node using the below commands:

wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

Step 3:

Here we will generate the required SSL certificates for all the master node components: the Kubernetes CA certificate, the admin certificate (for the K8s admin user), the controller manager client certificate, the scheduler client certificate, etc.

  • Generate the CA certificate and key for the K8s cluster, which can then be used to generate the additional TLS certificates:

# Create private key for CA
openssl genrsa -out ca.key 2048

sudo sed -i '0,/RANDFILE/{s/RANDFILE/\#&/}' /etc/ssl/openssl.cnf

# Create CSR using the private key
openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr

# Self sign the csr using its own private key
openssl x509 -req -in ca.csr -signkey ca.key -CAcreateserial -out ca.crt -days 1000

CA Certificate and key generation
  • Now we will generate the certificate and key for the K8s admin user:

# Generate private key for admin user
openssl genrsa -out admin.key 2048

# Generate CSR for admin user. Note the OU.
openssl req -new -key admin.key -subj "/CN=admin/O=system:masters" -out admin.csr

# Sign certificate for admin user using CA servers private key
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt -days 1000

SSL key and cert generation for admin user
  • Now we generate the certificate and key for the individual components, as shown:
Cert and Key for kube-controller-manager
Cert and Key for kube-proxy
Cert and Key for Kube-Scheduler
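
The commands for these client certificates are only shown in the screenshots above; they follow the same pattern as the admin user, with the component name as the CN. A sketch for kube-controller-manager (kube-proxy and kube-scheduler are analogous, with CN=system:kube-proxy and CN=system:kube-scheduler):

openssl genrsa -out kube-controller-manager.key 2048
openssl req -new -key kube-controller-manager.key -subj "/CN=system:kube-controller-manager" -out kube-controller-manager.csr
openssl x509 -req -in kube-controller-manager.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kube-controller-manager.crt -days 1000
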
  • For generating the certificate and key for the kube-apiserver, we need to create a custom openssl.cnf file, since other components (the other master nodes, the load balancer, etc.) interact with the kube-apiserver and their addresses must appear as subject alternative names. Hence, below are the commands that were used:

cat > openssl.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.96.0.1
IP.2 = 192.168.5.11
IP.3 = 192.168.5.12
IP.4 = 192.168.5.30
IP.5 = 127.0.0.1
EOF

openssl genrsa -out kube-apiserver.key 2048
openssl req -new -key kube-apiserver.key -subj "/CN=kube-apiserver" -out kube-apiserver.csr -config openssl.cnf
openssl x509 -req -in kube-apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kube-apiserver.crt -extensions v3_req -extfile openssl.cnf -days 1000

Cert and key generation for kubeapi server
  • The process for generating the key and cert for etcd is similar, as shown:
ETCD key and cert generation
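
The etcd commands are only in the screenshot; a sketch of what they look like, assuming a small openssl-etcd.cnf that lists the master node IPs and 127.0.0.1 as alternative names (the file name and exact SAN list are assumptions based on the pattern above):

cat > openssl-etcd.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.5.11
IP.2 = 192.168.5.12
IP.3 = 127.0.0.1
EOF

openssl genrsa -out etcd-server.key 2048
openssl req -new -key etcd-server.key -subj "/CN=etcd-server" -out etcd-server.csr -config openssl-etcd.cnf
openssl x509 -req -in etcd-server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out etcd-server.crt -extensions v3_req -extfile openssl-etcd.cnf -days 1000
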
  • Service account key and certificate generation (the K8s controller manager requires a key pair to sign the generated service account tokens, hence this step):

openssl genrsa -out service-account.key 2048
openssl req -new -key service-account.key -subj "/CN=service-accounts" -out service-account.csr
openssl x509 -req -in service-account.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out service-account.crt -days 1000

Service account cert and key generation

Now distribute this set of certificates and keys to the other master node (master-2) via scp or another channel.

Step 4:

In this section we are going to set up the kubeconfig files that are used by the clients (kube-controller-manager, kube-scheduler, admin user, kube-proxy, etc.) to communicate with the kube-apiserver.

  • Set up the load balancer address that will be used within the config file:

LOADBALANCER_ADDRESS=192.168.5.30

  • Generate a kubeconfig file for kube-proxy service:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://${LOADBALANCER_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.crt \
--client-key=kube-proxy.key \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}

Kubeconfig file for kube-proxy
  • Generate a kubeconfig file for kube-controller-manager service:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.crt \
--client-key=kube-controller-manager.key \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}

kubeconfig file for kube-controller-manager
  • Generate kubeconfig file for kube-scheduler:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.crt \
--client-key=kube-scheduler.key \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}

Kubeconfig file for kube-scheduler
  • Generate a kubeconfig file for the admin user:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
--client-certificate=admin.crt \
--client-key=admin.key \
--embed-certs=true \
--kubeconfig=admin.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig
}

kubeconfig file for admin user
  • Now we can see the 4 kubeconfig files created. We need to ship the kube-proxy.kubeconfig file to the worker nodes and the remaining ones to each master node (see the scp example below).
List of kube-config files created
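
A quick sketch of that distribution with scp, assuming the hostnames from the Vagrant setup and the SSH access configured earlier:

# From master-1: copy kube-proxy.kubeconfig to the worker nodes
scp kube-proxy.kubeconfig worker-1:~/
scp kube-proxy.kubeconfig worker-2:~/

# Copy the remaining kubeconfig files to the second master
scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig master-2:~/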

Step 5:

  • Now we will create an encryption key which we will use to encrypt data at rest in etcd. For this demo setup, the encryption key is created using the below command:

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

  • Now we create the encryption config file (encryption-config.yaml) using the encryption key, as shown below:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: oXg7R/sphKdrK4a86XtyZiAp3/VZuYUzpIalbcTlS7s=
      - identity: {}

Encryption config file creation

Once the file is created, it needs to be distributed to all the other master nodes in the cluster.

Step 6:

In this section we are going to install the etcd service by downloading and installing the binary. Remember to follow the below steps on all the master nodes (master-1 and master-2).

  • Download the etcd binary as shown below:

wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"

  • Now extract and install the etcd server and the etcdctl command line utility using the below commands:

{
tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/
}

etcd service and etcdctl download and install
  • Configure the etcd service by creating folders and copying the certificates and keys related to etcd into them:

{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo cp ca.crt etcd-server.key etcd-server.crt /etc/etcd/
}

INTERNAL_IP=<internal IP address of the master node> (in my case 192.168.5.11 for master-1 and 192.168.5.12 for master-2)

ETCD_NAME=$(hostname -s)

  • Now create the etcd.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/etcd-server.crt \\
--key-file=/etc/etcd/etcd-server.key \\
--peer-cert-file=/etc/etcd/etcd-server.crt \\
--peer-key-file=/etc/etcd/etcd-server.key \\
--trusted-ca-file=/etc/etcd/ca.crt \\
--peer-trusted-ca-file=/etc/etcd/ca.crt \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster master-1=https://192.168.5.11:2380,master-2=https://192.168.5.12:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Setup ETCD systemd service
  • Now start the etcd server using below set of commands:

{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}

Start the ETCD Service

To validate that etcd is working fine, we run the below command using the etcdctl utility:

sudo ETCDCTL_API=3 etcdctl member list --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/etcd-server.crt --key=/etc/etcd/etcd-server.key

This command gives details of all the master nodes in the cluster:

Testing for proper functioning of etcd service

Step 7: (All the steps mentioned below are followed on both master nodes)

In this step we are going to install the control plane components.

  • To begin with, let's create a folder on the master nodes to store all the config files:

sudo mkdir -p /etc/kubernetes/config

  • Now download the individual service binaries (kube-apiserver, kubectl, kube-controller-manager, kube-scheduler):

wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl"

Download master node components
  • Install each of the components as shown below:
Install master node components
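
The install step itself is in the screenshot; it essentially makes the binaries executable and moves them into the PATH:

{
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
}
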
  • Once the installation is complete, we move to the configuration phase, starting with the configuration of the kube-apiserver as shown below:

{
sudo mkdir -p /var/lib/kubernetes/

sudo cp ca.crt ca.key kube-apiserver.crt kube-apiserver.key \
service-account.key service-account.crt \
etcd-server.key etcd-server.crt \
encryption-config.yaml /var/lib/kubernetes/
}

Configuration of kube-api server

Now create the kube-apiserver.service systemd unit file using the below configuration:

systemd file for kube-api server
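
The unit file itself is only visible in the image; a sketch of what it typically contains for this setup (the flag values, in particular the service CIDR and the admission plugin list, are assumptions based on the referenced guide):

# Set to 192.168.5.12 on master-2
INTERNAL_IP=192.168.5.11

cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=2 \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.crt \\
--enable-admission-plugins=NodeRestriction,ServiceAccount \\
--enable-bootstrap-token-auth=true \\
--etcd-cafile=/var/lib/kubernetes/ca.crt \\
--etcd-certfile=/var/lib/kubernetes/etcd-server.crt \\
--etcd-keyfile=/var/lib/kubernetes/etcd-server.key \\
--etcd-servers=https://192.168.5.11:2379,https://192.168.5.12:2379 \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \\
--kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
--kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
--service-account-key-file=/var/lib/kubernetes/service-account.crt \\
--service-cluster-ip-range=10.96.0.0/24 \\
--tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \\
--tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
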
  • Now configure the kube-controller-manager by first moving the kube-controller-manager kubeconfig file to the /var/lib/kubernetes folder:

sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/

  • Now create the kube-controller-manager.service systemd unit file using the below configuration:
systemd file for kube-controller-manager
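
Again, the unit file is in the screenshot; a sketch of it (the service CIDR is an assumption from the referenced guide):

cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \\
--cluster-signing-key-file=/var/lib/kubernetes/ca.key \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.crt \\
--service-account-private-key-file=/var/lib/kubernetes/service-account.key \\
--service-cluster-ip-range=10.96.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
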
  • Now configure the kube-scheduler by first moving the kube-scheduler kubeconfig file to the /var/lib/kubernetes folder:

sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/

  • Now create the kube-scheduler.service systemd unit file using the below configuration:
systemd file for kube-scheduler
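
A sketch of the scheduler unit file along the same lines:

cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \\
--address=127.0.0.1 \\
--leader-elect=true \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
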
  • Now it's time to start the services:

{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}

Starting the services
  • Now we validate that all components are working fine using the below kubectl command:

kubectl get componentstatuses --kubeconfig admin.kubeconfig

Health check for all components

Step 8:

Now it's time to set up the load balancer using HAProxy:

  • The first step involves installing HAProxy:

sudo apt-get update && sudo apt-get install -y haproxy

Install HA Proxy
  • Now create a configuration file for HAProxy and restart the service:

cat <<EOF | sudo tee /etc/haproxy/haproxy.cfg
frontend kubernetes
    bind 192.168.5.30:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master-1 192.168.5.11:6443 check fall 3 rise 2
    server master-2 192.168.5.12:6443 check fall 3 rise 2
EOF

sudo service haproxy restart

create a config and start the HA proxy service
  • Validating access to the Kubernetes API via the load balancer IP (see the check below):
Load balancer validation
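
A quick way to perform that check from any node (-k skips CA verification; alternatively pass --cacert ca.crt):

curl https://192.168.5.30:6443/version -k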

Step 9:

Now that the control plane components have been set up, we move on to setting up the worker node components, i.e. kubelet and kube-proxy:

  • We begin by creating the client cert and key for the worker nodes as shown below:
SSL cert and key for worker node 1
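
The commands are in the screenshot; a sketch of the pattern, run on master-1 and assuming worker-1's IP is 192.168.5.21. The CN and O values matter here, because the kubelet authenticates as system:node:worker-1 in the system:nodes group:

cat > openssl-worker-1.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = worker-1
IP.1 = 192.168.5.21
EOF

openssl genrsa -out worker-1.key 2048
openssl req -new -key worker-1.key -subj "/CN=system:node:worker-1/O=system:nodes" -out worker-1.csr -config openssl-worker-1.cnf
openssl x509 -req -in worker-1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out worker-1.crt -extensions v3_req -extfile openssl-worker-1.cnf -days 1000
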
  • Now generate a kubeconfig file for the first worker node using a similar set of commands:
Kubeconfig file for worker node -1
  • Now send the certificates and the config file to the worker-1 node using scp:

scp ca.crt worker-1.crt worker-1.key worker-1.kubeconfig worker-1:~/

Transfer certificate and config to worker-1 from master-1 using scp
  • Now move to the worker-1 node and download the binaries for kubectl, kubelet and kube-proxy:

wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubelet

Download service binaries on worker node
  • Now we install the downloaded binaries as shown below:
Install the binaries
  • Now we start configuring the kubelet service by creating a kubelet-config.yaml file, as shown below:
Kubelet-config.yaml file creation
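
The file contents are in the screenshot; a sketch of a typical kubelet-config.yaml for this setup (the clusterDNS address and resolvConf path are assumptions that should match the service CIDR and the worker VM's OS), after moving the certificates and kubeconfig into place:

{
sudo mkdir -p /var/lib/kubelet /var/lib/kubernetes /var/lib/kube-proxy
sudo mv ${HOSTNAME}.key ${HOSTNAME}.crt /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.crt /var/lib/kubernetes/
}

cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.crt"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.96.0.10"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
EOF
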
  • Now create a systemd service for the kubelet using the above config file:

cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--tls-cert-file=/var/lib/kubelet/${HOSTNAME}.crt \\
--tls-private-key-file=/var/lib/kubelet/${HOSTNAME}.key \\
--network-plugin=cni \\
--register-node=true \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

systemd file for Kubelet service
  • Next we set up kube-proxy, for which we create a kube-proxy config file and reference it in the systemd file for the kube-proxy service as shown:
Create Kube-proxy systemd file
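
The screenshot shows both files; a sketch of what they look like (the kube-proxy mode is an assumption):

sudo mkdir -p /var/lib/kube-proxy
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
EOF

cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
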
  • Now start the services using below set of commands:
Start the services
  • Now validate that the worker node is detected by kubectl from the master node:
worker node detected using kubectl

Step 10:

This step focuses on setting up the other worker node, but this time making use of the TLS bootstrapping method, which is a more scalable way of setting up worker nodes at enterprise level.

  • Pre-requisites:

kube-apiserver: ensure that bootstrap token based authentication is enabled on the kube-apiserver:

--enable-bootstrap-token-auth=true

kube-controller-manager: the certificate requests are ultimately signed by the kube-controller-manager, which therefore requires the CA certificate and key to perform these operations:

--cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \\
--cluster-signing-key-file=/var/lib/kubernetes/ca.key

  • Now download and install the respective binaries for worker node-2:

wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubelet

Download and Install worker node binaries
  • Now move the CA certificate to the respective folder:

sudo mv ca.crt /var/lib/kubernetes/

Copy the CA cert to respective folders
  • Now we will create the bootstrap token to be used by the nodes (kubelets) to invoke the Certificates API. The token is created from master node 1 using the below YAML file:

cat > bootstrap-token-07401b.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-07401b
  namespace: kube-system

# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token generated by 'kubeadm init'."

  # Token ID and secret. Required.
  token-id: 07401b
  token-secret: f395accd246ae52d

  # Expiration. Optional. (This date must be in the future, else the worker node will fail to connect to the API server to fetch certificates.)
  expiration: 2021-12-12T03:22:11Z

  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"

  # Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
  auth-extra-groups: system:bootstrappers:worker
EOF

Create a YAML file with a Secret object of type bootstrap.kubernetes.io/token
Create the object using kubectl
  • Next we associate the group we created above with the system:node-bootstrapper ClusterRole. This ClusterRole gives the group enough permissions to bootstrap the kubelet:

kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers

Create clusterrolebinding object
  • Now we authorize the nodes in the bootstrappers group to have their CSRs approved automatically (see the command sketch below):
Create a clusterrolebinding to add worker nodes to approve CSR
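
The command in the screenshot should be along these lines (the binding name is an assumption); it binds the built-in clusterrole that lets node client certificate requests from the bootstrappers group be auto-approved:

kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
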
  • Now we create a new clusterrolebinding for auto-renewal of worker node certificates when they expire:

kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes

Create a clusterrolebinding to add worker nodes to auto renew of expired certificate
  • Now we configure the second worker node for TLS bootstrapping using the token we generated, by creating a bootstrap kubeconfig file on the worker-2 node as shown:
bootstrap config file
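
A sketch of how such a bootstrap kubeconfig is usually built with the token created above (the cluster and user names are arbitrary; the file path matches the kubelet unit file below):

sudo kubectl config set-cluster bootstrap --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
--server=https://192.168.5.30:6443 \
--certificate-authority=/var/lib/kubernetes/ca.crt

sudo kubectl config set-credentials kubelet-bootstrap --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
--token=07401b.f395accd246ae52d

sudo kubectl config set-context bootstrap --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
--user=kubelet-bootstrap --cluster=bootstrap

sudo kubectl config use-context bootstrap --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig
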
  • Now create a kubelet config file as shown:
Kubelet config file
  • Now we configure the kubelet service systemd file:

cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
--bootstrap-kubeconfig="/var/lib/kubelet/bootstrap-kubeconfig" \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--cert-dir=/var/lib/kubelet/pki/ \\
--rotate-certificates=true \\
--network-plugin=cni \\
--register-node=true \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

kubelet.service systemd file
  • Now we configure the kube-proxy service by creating a kube-proxy-config file and referencing it in the systemd service file, as shown below:
Setup of kube-proxy service
  • Now start the services as shown below:
start the services
  • Once that is done, validate that both services are running fine using service (kubelet/kube-proxy) status and, in case of any issue, try debugging with journalctl -u (kubelet/kube-proxy) | tail -n 100
  • Once everything is fine, switch to the master-1 node to check whether any CSR requests have arrived, and approve them if they were not auto-approved, as shown below:

kubectl certificate approve <csr name>

  • Once everything is done we can see details of both the worker nodes from kubectl as shown:
list of worker nodes

Step 11:

  • Here we will generate a kubeconfig file for the kubectl command line utility based on the admin user credentials.

{
KUBERNETES_LB_ADDRESS=192.168.5.30

kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://${KUBERNETES_LB_ADDRESS}:6443

kubectl config set-credentials admin \
--client-certificate=admin.crt \
--client-key=admin.key

kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin

kubectl config use-context kubernetes-the-hard-way
}

Kubeconfig file to point to load balancer for connecting to API server

Step 12:

Here we deploy the network plugin to provision pod networking; for this use case we choose the Weave plugin.

  • Download the CNI plugins required for Weave on each of the worker nodes (worker-1 and worker-2):

wget https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz

Download weave plugin on worker nodes
  • Extract the downloaded tar as shown:
Extracting the tar file
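
A sketch of that extraction, assuming the standard CNI binary directory /opt/cni/bin:

sudo mkdir -p /opt/cni/bin
sudo tar -xzvf cni-plugins-amd64-v0.7.5.tgz --directory /opt/cni/bin/
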
  • Now run the below command on the master node (only once) to deploy the Weave network:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

  • Now validate the Ready state of the nodes:
Ready state of nodes

Step 13:

Now we need to give the kube-apiserver permission to access the kubelet service on each worker node, by adding a cluster role and binding that cluster role to the kube-apiserver user.

  • Create a cluster role to access the kubelet API:

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF

Create a cluster role for Kubelet api access
  • Bind the role to kube-apiserver user using clusterrolebinding:

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: system:kube-apiserver
EOF

bind cluster role to kubeapi-server user

Step 14:

Now we set up the CoreDNS add-on, which provides DNS-based service discovery:

  • Deploy the coredns cluster add-on:

kubectl apply -f https://raw.githubusercontent.com/mmumshad/kubernetes-the-hard-way/master/deployments/coredns.yaml

Deploy the CoreDNS addon
  • Now that all the setup is done, we can test the cluster by performing some basic operations like creating deployments, pods, secrets and services, as shown below:
Perform basic Kubernetes operations on the cluster
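
A sketch of the kind of smoke test performed here (the object names are arbitrary):

# Create a secret, then a deployment, and expose it as a NodePort service
kubectl create secret generic my-secret --from-literal="mykey=mydata"
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port 80 --type NodePort

# Verify that everything comes up
kubectl get pods -o wide
kubectl get svc nginx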

Conclusion:

It was really fun setting up a cluster from scratch, and I got to learn a lot from it, along with troubleshooting skills which will be helpful during the exam. This is a must-do for anyone who wants to understand the core of Kubernetes, though it is not mandatory for the CK(A, AD, S) trio.

References:

I followed the guide provided by Mumshad here for the whole setup, which is very descriptive and clear to understand.

The YouTube video series for the same is also available here.
