Installing Kubernetes - The Hard Way

Planned HLD
  1. Minimum 8 GB RAM on the host machine, so that 1 GB RAM and 1 CPU can be allocated to each node. As per the above diagram, 5 VMs have to be set up: 4 for the K8s nodes and 1 for the load balancer
  2. VirtualBox and Vagrant need to be installed on the host machine. Here is the link to download VirtualBox and here is the one for Vagrant.
vagrant setup for K8 nodes and Load balancer
  • Once the setup is complete, we can see 5 VMs installed and running in our VirtualBox window as shown:
List of running VM’s
  • Now we can log in to any of the VMs using vagrant ssh and the individual private key that it has created for each VM, as shown:
SSH on to individual boxes using vagrant private keys
  • Now, for easy access to all the other nodes from one master node for pushing and pulling files, let us add the public key of the master-1 VM to the authorized_keys file of the other VMs as shown below:
SSH public key of master-1 node added to other nodes
  • Now install kubectl on the master-1 node using below commands:
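For reference, the install is along these lines (the kubectl release version below is only an example; pick the one matching your cluster):

  wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl
  chmod +x kubectl
  sudo mv kubectl /usr/local/bin/
  kubectl version --client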
  • Generate the CA Certificate and key for K8 cluster that can be used to generate additional TLS certificates:
CA Certificate and key generation
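A minimal sketch using openssl (the file names and the 1000-day validity are illustrative):

  # Create the CA private key
  openssl genrsa -out ca.key 2048
  # Create a CSR for the CA with a recognisable CN
  openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr
  # Self-sign the CA certificate
  openssl x509 -req -in ca.csr -signkey ca.key -out ca.crt -days 1000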
  • Now we will generate the certificate and key for admin user for K8:
SSL key and cert generation for admin user
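The admin certificate places the user in the system:masters group through the O field of its subject, which is what grants cluster-admin rights via the default RBAC bindings; a sketch:

  openssl genrsa -out admin.key 2048
  openssl req -new -key admin.key -subj "/CN=admin/O=system:masters" -out admin.csr
  openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt -days 1000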
  • Now we generate a certificate and key for the individual components as shown:
Cert and Key for kube-controller-manager
Cert and Key for kube-proxy
Cert and Key for Kube-Scheduler
  • For generating the certificate and key for the kube-apiserver, we need to make some changes in the openssl.cnf file, since other components interact with the kube-apiserver, including the other master servers and the load balancer. Hence the below command was used:
Cert and key generation for kubeapi server
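A sketch of the config and commands; the IP addresses in alt_names (service cluster IP, the two master IPs, the load balancer IP) are assumptions based on this Vagrant layout and must match your own network:

  # openssl.cnf
  [req]
  req_extensions = v3_req
  distinguished_name = req_distinguished_name
  [req_distinguished_name]
  [v3_req]
  basicConstraints = CA:FALSE
  keyUsage = nonRepudiation, digitalSignature, keyEncipherment
  subjectAltName = @alt_names
  [alt_names]
  DNS.1 = kubernetes
  DNS.2 = kubernetes.default
  DNS.3 = kubernetes.default.svc
  DNS.4 = kubernetes.default.svc.cluster.local
  IP.1 = 10.96.0.1
  IP.2 = 192.168.5.11
  IP.3 = 192.168.5.12
  IP.4 = 192.168.5.30
  IP.5 = 127.0.0.1

  # Generate the key, CSR and signed certificate using the config above
  openssl genrsa -out kube-apiserver.key 2048
  openssl req -new -key kube-apiserver.key -subj "/CN=kube-apiserver" \
    -out kube-apiserver.csr -config openssl.cnf
  openssl x509 -req -in kube-apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out kube-apiserver.crt -extensions v3_req -extfile openssl.cnf -days 1000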
  • The process for generating the key and cert for etcd is similar, as shown:
ETCD key and cert generation
  • Service account key and certificate generation (the K8s controller manager requires a key pair to generate signed service account tokens, hence this step is needed):
Service account cert and key generation
  • Set up the load balancer address that will be used within the kubeconfig files:
  • Generate a kubeconfig file for kube-proxy service:
Kubeconfig file for kube-proxy
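A sketch of the kube-proxy kubeconfig; the load balancer address is an assumption from this Vagrant layout and the cluster name is arbitrary:

  LOADBALANCER_ADDRESS=192.168.5.30   # example value

  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://${LOADBALANCER_ADDRESS}:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.crt \
    --client-key=kube-proxy.key \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

The kubeconfigs for the controller manager, scheduler and admin user in the next steps follow the same four commands with their own certificates; the components that run on the master itself can point at https://127.0.0.1:6443 instead of the load balancer.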
  • Generate a kubeconfig file for kube-controller-manager service:
kubeconfig file for kube-controller-manager
  • Generate kubeconfig file for kube-scheduler:
Kubeconfig file for kube-scheduler
  • Generate a kubeconfig file for the admin user:
kubeconfig file for admin user
  • Now we can see 4 config files getting created; we need to ship the kube-proxy kubeconfig file onto the worker nodes and the remaining ones onto each master node.
List of kube-config files created
  • Now we will create an encryption key which we will use to encrypt the data in ETCD service. For this demo setup encryption key is created using below command:
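One option is a random 32-byte value, base64 encoded (this is only a sketch):

  ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)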
  • Now we create an encryption config file using the encryption key as shown below:
Encryption config file creation
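A sketch of encryption-config.yaml, assuming the aescbc provider and the key generated above (older clusters use kind EncryptionConfig as shown; recent releases use EncryptionConfiguration from apiserver.config.k8s.io/v1):

  kind: EncryptionConfig
  apiVersion: v1
  resources:
    - resources:
        - secrets
      providers:
        - aescbc:
            keys:
              - name: key1
                secret: <base64 key from the previous step>
        - identity: {}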
  • Download the etcd binary as shown below:
  • Now extract and install the etcd server and the etcdctl command line utility using the below commands:
ETCD server and ETCDCTL download and install
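For reference (the etcd version is only an example):

  wget -q https://github.com/etcd-io/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz
  tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
  sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/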
  • Configure the etcd service by creating folders to copy the certificates and keys related to etcd into:
  • Now create the etcd.service systemd unit file:
Setup ETCD systemd service
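A sketch of /etc/systemd/system/etcd.service for master-1; the member name, certificate paths and the two peer addresses in --initial-cluster are assumptions from this Vagrant layout:

  [Unit]
  Description=etcd
  Documentation=https://github.com/etcd-io/etcd

  [Service]
  ExecStart=/usr/local/bin/etcd \
    --name master-1 \
    --cert-file=/etc/etcd/etcd-server.crt \
    --key-file=/etc/etcd/etcd-server.key \
    --peer-cert-file=/etc/etcd/etcd-server.crt \
    --peer-key-file=/etc/etcd/etcd-server.key \
    --trusted-ca-file=/etc/etcd/ca.crt \
    --peer-trusted-ca-file=/etc/etcd/ca.crt \
    --peer-client-cert-auth \
    --client-cert-auth \
    --initial-advertise-peer-urls https://192.168.5.11:2380 \
    --listen-peer-urls https://192.168.5.11:2380 \
    --listen-client-urls https://192.168.5.11:2379,https://127.0.0.1:2379 \
    --advertise-client-urls https://192.168.5.11:2379 \
    --initial-cluster-token etcd-cluster-0 \
    --initial-cluster master-1=https://192.168.5.11:2380,master-2=https://192.168.5.12:2380 \
    --initial-cluster-state new \
    --data-dir=/var/lib/etcd
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target

On master-2 the same file is used with its own member name and IP.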
  • Now start the etcd server using below set of commands:
Start the ETCD Service
Testing for proper functioning of etcd service
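Roughly:

  sudo systemctl daemon-reload
  sudo systemctl enable etcd
  sudo systemctl start etcd

  # Verify that both members have joined the cluster
  sudo ETCDCTL_API=3 etcdctl member list \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ca.crt \
    --cert=/etc/etcd/etcd-server.crt \
    --key=/etc/etcd/etcd-server.key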
  • To begin with, let's create a folder on the master node to store all the config files:
  • Now download the individual service binaries (kube-apiserver, kubectl, kube-controller-manager, kube-scheduler):
Download master node components
  • Install each of the components as shown below:
Install master node components
  • Once the installation is complete we move to configuration phase where we start with configuration of kube-api server as shown below:
Configuration of kube-api server
systemd file for kube-api server
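A trimmed sketch of /etc/systemd/system/kube-apiserver.service; the flag list is not exhaustive, and the addresses, service CIDR and file paths are assumptions from this setup:

  [Unit]
  Description=Kubernetes API Server
  Documentation=https://github.com/kubernetes/kubernetes

  [Service]
  ExecStart=/usr/local/bin/kube-apiserver \
    --advertise-address=192.168.5.11 \
    --allow-privileged=true \
    --apiserver-count=2 \
    --authorization-mode=Node,RBAC \
    --client-ca-file=/var/lib/kubernetes/ca.crt \
    --enable-admission-plugins=NodeRestriction,ServiceAccount \
    --enable-bootstrap-token-auth=true \
    --etcd-cafile=/var/lib/kubernetes/ca.crt \
    --etcd-certfile=/var/lib/kubernetes/etcd-server.crt \
    --etcd-keyfile=/var/lib/kubernetes/etcd-server.key \
    --etcd-servers=https://192.168.5.11:2379,https://192.168.5.12:2379 \
    --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
    --kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \
    --kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \
    --kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \
    --service-account-key-file=/var/lib/kubernetes/service-account.crt \
    --service-cluster-ip-range=10.96.0.0/24 \
    --tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \
    --tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \
    --v=2
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target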
  • Now configure the kube-controller-manager by first moving the kube-controller-manager kubeconfig file to the /var/lib/kubernetes folder:
  • Now create the kube-controller-manager.service systemd file using below configuration:
systemd file for kube-controller-manager
  • Now configure kube-scheduler by first moving kube-scheduler kubeconfig file to /var/lib/kubernetes folder:
  • Now create the kube-scheduler.service systemd file using below configuration:
systemd file for kube-scheduler
  • Now it's time to start the services:
Starting the services
  • Now we validate if all components are working fine by using below kubectl command:
Health check for all components
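The check is along these lines, run from the master with the admin kubeconfig generated earlier; a healthy control plane reports the scheduler, controller manager and the etcd members as Healthy:

  kubectl get componentstatuses --kubeconfig admin.kubeconfig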
  • The first step involves installing HAProxy:
Install HA Proxy
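On the loadbalancer VM (an Ubuntu box in this setup):

  sudo apt-get update && sudo apt-get install -y haproxy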
  • Now create a configuration file for the HA Proxy and restart the service:
create a config and start the HA proxy service
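A minimal sketch of /etc/haproxy/haproxy.cfg, assuming the load balancer listens on its own IP on port 6443 and forwards TCP traffic to the two masters (all addresses are assumptions from this Vagrant layout):

  frontend kubernetes
      bind 192.168.5.30:6443
      option tcplog
      mode tcp
      default_backend kubernetes-master-nodes

  backend kubernetes-master-nodes
      mode tcp
      balance roundrobin
      option tcp-check
      server master-1 192.168.5.11:6443 check fall 3 rise 2
      server master-2 192.168.5.12:6443 check fall 3 rise 2

After saving the file, restart the service with sudo systemctl restart haproxy.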
  • Validating access to kubernetes API via load balancer IP:
Load balancer validation
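For example (with -k to skip certificate verification, or pass --cacert ca.crt instead; the IP is the load balancer address assumed above):

  curl https://192.168.5.30:6443/version -k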
  • We begin by creating client cert and key for worker nodes as shown below:
SSL cert and key for worker node 1
  • Now generate a kubeconfig file for first worker node using set of commands:
Kubeconfig file for worker node -1
  • Now send the certificates and config file to the worker-1 node using scp:
Transfer certificate and config to worker-1 from master-1 using scp
  • Now move to worker-1 node and download the binaries for kubectl, kubelet, kube-proxy:
Download service binaries on worker node
  • Now we install the downloaded binaries as shown below:
Install the binaries
  • Now we start with configuring the kubelet service by creating a kubelet-config.yaml file:
Kubelet-config.yaml file creation
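A sketch of kubelet-config.yaml for worker-1; the cluster DNS address, cluster domain and certificate paths are assumptions consistent with the rest of this setup:

  kind: KubeletConfiguration
  apiVersion: kubelet.config.k8s.io/v1beta1
  authentication:
    anonymous:
      enabled: false
    webhook:
      enabled: true
    x509:
      clientCAFile: /var/lib/kubernetes/ca.crt
  authorization:
    mode: Webhook
  clusterDomain: cluster.local
  clusterDNS:
    - 10.96.0.10
  resolvConf: /run/systemd/resolve/resolv.conf
  tlsCertFile: /var/lib/kubelet/worker-1.crt
  tlsPrivateKeyFile: /var/lib/kubelet/worker-1.key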
  • Now create a systemd service for kubelet using the above config file
systemd file for Kubelet service
  • Next we setup the kube-proxy for which we create a kube-proxy config file and use that in the systemd file for kube-proxy service as shown:
Create Kube-proxy systemd file
  • Now start the services using below set of commands:
Start the services
  • Now validate if the worker node is being detected by kubectl from master node:
worker node detected using kubectl
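For example, from master-1:

  kubectl get nodes --kubeconfig admin.kubeconfig

The node will typically show NotReady at this point, since pod networking has not been installed yet.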
  • Pre-requisites:
  • Now download and install the respective binaries for worker node-2:
Download and Install worker node binaries
  • Now move the CA certificate to the respective folder:
Copy the CA cert to respective folders
  • Now we will create the bootstrap token to be used by the nodes (kubelets) to invoke the Certificates API. The token will be created from master node 1 using the below yaml file:
Create a yaml file with a secret object of type bootstrap.kubernetes.io/token
create the object using kubectl
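A sketch of the secret; the token-id and token-secret values are placeholders (6 and 16 characters respectively), the secret name must be bootstrap-token-<token-id>, and the expiration must lie in the future:

  apiVersion: v1
  kind: Secret
  metadata:
    name: bootstrap-token-07401b
    namespace: kube-system
  type: bootstrap.kubernetes.io/token
  stringData:
    description: "Bootstrap token for worker nodes"
    token-id: 07401b
    token-secret: f395accd246ae52d
    expiration: "2031-03-10T03:22:11Z"
    usage-bootstrap-authentication: "true"
    usage-bootstrap-signing: "true"
    auth-extra-groups: system:bootstrappers:worker

Save it (for example as bootstrap-token.yaml) and create it with kubectl create -f bootstrap-token.yaml.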
  • Next we associate the group we created before to the system:node-bootstrapper ClusterRole. This ClusterRole gives the group enough permissions to bootstrap the kubelet:
Create clusterrolebinding object
  • Now we will authorize worker nodes to approve CSRs:
Create a clusterrolebinding to add worker nodes to approve CSR
  • Now we create a new clusterrolebinding for auto-renewal of worker node certificates when they expire:
Create a clusterrolebinding to add worker nodes to auto renew of expired certificate
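The three bindings sketched below map the bootstrap group to the built-in ClusterRoles for creating CSRs, having client certificates auto-approved, and renewing them; the binding names are arbitrary and the group names are assumptions matching the token created above:

  kubectl create clusterrolebinding create-csrs-for-bootstrapping \
    --clusterrole=system:node-bootstrapper \
    --group=system:bootstrappers

  kubectl create clusterrolebinding auto-approve-csrs-for-group \
    --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
    --group=system:bootstrappers

  kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
    --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
    --group=system:nodes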
  • Now we configure the second worker for TLS bootstrapping using the token we generated, by creating a bootstrap config file on the worker-2 node as shown:
bootstrap config file
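A sketch of the bootstrap kubeconfig on worker-2; the token must match the bootstrap secret created earlier and the server address is the load balancer IP assumed above:

  sudo kubectl config set-cluster bootstrap \
    --server=https://192.168.5.30:6443 \
    --certificate-authority=/var/lib/kubernetes/ca.crt \
    --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig
  sudo kubectl config set-credentials kubelet-bootstrap \
    --token=07401b.f395accd246ae52d \
    --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig
  sudo kubectl config set-context bootstrap \
    --user=kubelet-bootstrap \
    --cluster=bootstrap \
    --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig
  sudo kubectl config use-context bootstrap --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig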
  • Now create a kubelet config file as shown:
Kubelet config file
  • Now we configure the kubelet service systemd file:
kubelet.service systemd file
  • Now we configure the Kube-proxy service by creating a kube-proxy-config file and using its reference in the systemd service file as shown below:
Setup of kube-proxy service
  • Now start the services as shown below:
start the services
  • Once this is done, validate that both the services are running fine using service (kubelet/kube-proxy) status, and in case of any issue, try debugging using journalctl -u (kubelet/kube-proxy) | tail -n 100
  • Once everything is fine, switch to the master-1 node to check whether any CSR request has arrived, and approve it if it was not auto-approved, as shown below:
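Roughly:

  kubectl get csr
  # Approve any request still in Pending state, for example:
  kubectl certificate approve <csr-name>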
  • Once everything is done we can see details of both the worker nodes from kubectl as shown:
list of worker nodes
  • Here we will generate a kubeconfig file for the kubectl command line utility based on the admin user credentials.
Kubeconfig file to point to load balancer for connecting to API server
  • Download the CNI Plugins required for weave on each of the worker nodes — worker-1 and worker-2:
Download weave plugin on worker nodes
  • Extract the downloaded tar as shown:
Extracting the tar file
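For reference (the CNI plugins release version is only an example):

  wget https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz
  sudo mkdir -p /opt/cni/bin
  sudo tar -xzvf cni-plugins-amd64-v0.7.5.tgz --directory /opt/cni/bin/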
  • Now run below command on the master node once to deploy weave network:
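At the time of writing, Weave documented the following one-liner for this; check the current Weave Net documentation before relying on it:

  kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"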
  • Now validate the ready state of the other nodes:
Ready state of nodes
  • Create a cluster role to access the kubelet API:
Create a cluster role for Kubelet api access
  • Bind the role to kube-apiserver user using clusterrolebinding:
bind cluster role to kubeapi-server user
  • Deploy the coredns cluster add-on:
Deploy the CoreDNS addon
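A sketch, assuming coredns.yaml is the add-on manifest that ships with the guide's repository:

  kubectl apply -f coredns.yaml
  # Verify the CoreDNS pods come up in kube-system
  kubectl get pods -l k8s-app=kube-dns -n kube-system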
  • Now that the entire setup is done, we can test our cluster by performing some basic operations like creating deployments, pods, secrets, services etc., as shown below:
Perform basic Kubernetes operations on the cluster
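A few illustrative smoke tests (names and images are arbitrary):

  # Deployment and service
  kubectl create deployment nginx --image=nginx
  kubectl expose deployment nginx --port=80 --type=NodePort
  kubectl get pods,svc

  # Secret
  kubectl create secret generic demo-secret --from-literal=password=s3cr3t
  kubectl get secret demo-secret -o yaml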
