Deploy Kubernetes on vSphere

January 8, 2020 | By Eric Shanks

If you're struggling to deploy Kubernetes (k8s) clusters, you're not alone. There are many ways to deploy Kubernetes, and the settings differ depending on which cloud provider you're using. This post focuses on installing Kubernetes on vSphere with kubeadm. By the end of this post, you should have what you need to manually deploy k8s on Ubuntu in a vSphere environment.

Prerequisites

Before we start configuring and deploying Kubernetes, we need to ensure we have the proper environment set up. My lab consists of three Ubuntu virtual machines used as control plane (master) nodes, and five additional machines used as worker nodes. Your environment can vary in the number of nodes, but I've chosen three control plane nodes to show the high availability configuration. My environment looks roughly like this diagram:

Note that having multiple control plane nodes requires a load balancer to distribute traffic to these VMs. The load balancer should have a health check on port 6443 on the control plane nodes, along with a VIP. I'm using a Kemp load balancer in my lab, where I've created a VIP and a corresponding DNS name for this VIP.

  • VIP – 10.10.50.170:6443
  • DNS Name – k8s.hollow.local

These Kubernetes virtual machines should be placed in their own VM folder in vSphere. I created a folder named "kubernetes" for this post.

Also, the virtual machines that will be used for the Kubernetes cluster need an advanced setting changed on each VM: the disk.EnableUUID parameter must be set to true. This can be automated, of course, but this post focuses on the manual settings so you can see what's happening.

Right-click the k8s node in vCenter and choose Edit Settings. From there, go to the VM Options tab and scroll down to find the "Edit Configuration" link in the Configuration Parameters settings.

Find the disk.EnableUUID parameter and set its value to true (add the parameter if it isn't already listed).
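If you'd rather script this step, here's a minimal sketch using the govc CLI. It assumes govc is installed and pointed at your vCenter via the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables; the VM names are placeholders for your own environment.

```bash
# Sketch: set disk.EnableUUID=TRUE on each k8s VM via govc ExtraConfig.
# VM names below are hypothetical; replace with your own.
for vm in k8s-master1 k8s-master2 k8s-master3 k8s-worker1 k8s-worker2; do
  govc vm.change -vm "$vm" -e disk.EnableUUID=TRUE
done
```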

vSphere Permissions

Some components of Kubernetes work better when they're tied to a cloud provider such as vSphere or AWS. For example, when you create a Persistent Volume, it sure would be nice to have the cloud provider provision storage for that volume, wouldn't it? Before we can couple Kubernetes with vSphere, we need to set up some permissions.

We will need to create a few roles. The privileges to grant to each role (as documented for the vSphere Cloud Provider) are listed below:

  • manage-k8s-node-vms – Resource > Assign virtual machine to resource pool; Virtual machine > Configuration > Add existing disk, Add new disk, Add or remove device, Remove disk; Virtual machine > Inventory > Create new, Remove
  • manage-k8s-volumes – Datastore > Allocate space; Datastore > Low level file operations
  • k8s-system-read-and-spbm-profile-view – Profile-driven storage > Profile-driven storage view

Once these roles have been created with the appropriate privileges assigned, they must be added to the proper entity in vCenter and associated with a user. In my case, I'm using a service account called "k8s-vcp".

Be sure to add the appropriate role to the correct entity and user. The table below shows how they should be assigned.

| Role to be Assigned                    | Entity                                                    | Propagate? |
|----------------------------------------|-----------------------------------------------------------|------------|
| manage-k8s-node-vms                    | Cluster, Hosts, k8s nodes VM Folder                       | Yes        |
| manage-k8s-volumes                     | Datastore where new volumes will be created               | No         |
| k8s-system-read-and-spbm-profile-view  | vCenter                                                   | No         |
| Read-Only (pre-created)                | Datacenter, Datastore, Cluster, Datastore Storage Folder  | No         |

For example, in the vSphere cluster where my k8s VMs live, I've added the "manage-k8s-node-vms" role to my k8s-vcp user and set it to propagate. Do this for each role in the table above.

Create vSphere Config File

Now that we've got our permissions set up, we need to create a file with some login information in it. This config file is used by the k8s control plane to interact with vSphere. Think of it this way: we need to tell the k8s control plane where our VMs live, which datastores to use, and which user to make those calls as.

Create a vsphere.conf file and fill it out from the template below, substituting your own information as appropriate. For details on how this file can be constructed (for example, how to configure multiple vCenters), see the vSphere Cloud Provider documentation on VMware.github.io.
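Here's a sketch of the in-tree vSphere Cloud Provider config format. All of the values (the vCenter address, datacenter, datastore, resource pool path, folder, and credentials) are placeholders to replace with your own details:

```ini
# Hypothetical example values; replace everything with your environment's details.
[Global]
user = "k8s-vcp@vsphere.local"
password = "YourPasswordHere"
port = "443"
insecure-flag = "1"

[VirtualCenter "vcenter.hollow.local"]
datacenters = "HollowLab"

[Workspace]
server = "vcenter.hollow.local"
datacenter = "HollowLab"
default-datastore = "datastore1"
resourcepool-path = "HollowCluster/Resources"
folder = "kubernetes"

[Disk]
scsicontrollertype = pvscsi

[Network]
public-network = "VM Network"
```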

Create a Kubeadm Config File

Now we're pretty much done with the vSphere side and ready to start working on the Kubernetes pieces. The first step is creating the kubeadm.conf file. This file contains instructions on how to set up the control plane components when we use kubeadm to bootstrap them. There are a ton of options here, so we won't go into all of them. The important piece to note is that we need to pass the vsphere.conf file as a parameter to several components in this kubeadm configuration. Be sure to update the IP addresses and DNS names for your load balancer here as well.
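Below is a sketch of what that configuration can look like. It assumes the vsphere.conf file lives at /etc/kubernetes/vsphere.conf; the kubeadm API version, Kubernetes version, and control plane endpoint (k8s.hollow.local) are examples to adjust for your own environment:

```yaml
# Sketch of a kubeadm config wiring in the vSphere cloud provider.
# Adjust kubernetesVersion, controlPlaneEndpoint, and paths for your environment.
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "vsphere"
    cloud-config: "/etc/kubernetes/vsphere.conf"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
controlPlaneEndpoint: "k8s.hollow.local:6443"
apiServer:
  extraArgs:
    cloud-provider: "vsphere"
    cloud-config: "/etc/kubernetes/vsphere.conf"
  extraVolumes:
    - name: cloud
      hostPath: "/etc/kubernetes/vsphere.conf"
      mountPath: "/etc/kubernetes/vsphere.conf"
controllerManager:
  extraArgs:
    cloud-provider: "vsphere"
    cloud-config: "/etc/kubernetes/vsphere.conf"
  extraVolumes:
    - name: cloud
      hostPath: "/etc/kubernetes/vsphere.conf"
      mountPath: "/etc/kubernetes/vsphere.conf"
```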

Kubernetes VM Setup

Now we can start installing components on the Ubuntu VMs before we deploy the cluster. Do the following on all virtual machines that will be part of your Kubernetes cluster.

Disable swap
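Kubernetes requires swap to be off. One common way to do this on Ubuntu is shown below; the sed pattern is a convenience, so check your /etc/fstab first:

```bash
# Turn off swap immediately, and comment out the swap line so it stays off.
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```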

Install Kubelet, Kubeadm, and Kubectl
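At the time of writing, the Kubernetes packages for Ubuntu came from the packages.cloud.google.com apt repository (it has since been deprecated in favor of pkgs.k8s.io, so check the current docs before copying this):

```bash
# Add the Kubernetes apt repository and install the cluster components.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# Hold the packages so routine upgrades don't unexpectedly bump the cluster version.
sudo apt-mark hold kubelet kubeadm kubectl
```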

Install Docker and change the cgroup driver to systemd.
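One way to do this with Ubuntu's docker.io package is below; the daemon.json approach to switching the cgroup driver is a common pattern, and you'd adjust it if you install Docker from Docker's own repositories:

```bash
# Install Docker and switch its cgroup driver to systemd to match the kubelet.
sudo apt-get update
sudo apt-get install -y docker.io
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```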

Once you've completed the steps above, copy the vsphere.conf and kubeadm.conf files to /etc/kubernetes/ on your control plane VMs.

Once you've placed the kubeadm.conf and vsphere.conf files in the /etc/kubernetes directory, we need to update the configuration of the kubelet service so that it knows about the vSphere environment as well.

Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add two configuration options: the cloud provider flag and the path to the cloud config file.
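One way to wire this in is to append the flags to the kubelet's startup arguments in the drop-in. This is a sketch; the exact line you edit varies by kubeadm version, so look at how the drop-in builds the kubelet's arguments:

```ini
# Append the vSphere flags to the kubelet's arguments, for example via an
# Environment line in the 10-kubeadm.conf drop-in (exact placement may vary):
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=vsphere --cloud-config=/etc/kubernetes/vsphere.conf"
```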

When you’re done, recycle the kubelet service.
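A typical way to do that:

```bash
# Reload systemd so it picks up the drop-in change, then restart the kubelet.
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```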

Bootstrap the First K8s Control Plane Node

The time has come to set up the cluster. Log in to one of your control plane nodes; it will become the first master in the cluster. We'll run the kubeadm initialization with the kubeadm.conf file that we created earlier and placed in the /etc/kubernetes directory:
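```bash
# Bootstrap the first control plane node using our kubeadm config.
sudo kubeadm init --config /etc/kubernetes/kubeadm.conf
```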

It may take a bit for the process to complete. kubeadm init ensures that the api-server, controller-manager, and etcd container images are downloaded, and it creates the cluster certificates, which you should find in the /etc/kubernetes/pki directory.

When the process is done, you should receive instructions on how to add additional control plane nodes and worker nodes.

Add Additional Control Plane Nodes

We can now take the information provided by our init command and run the kubeadm join command from its output on the other two control plane nodes.

Before we add those additional control plane nodes, you'll need to copy the contents of the pki directory to the other control plane nodes. This is needed because the new nodes require those certificates to authenticate with the existing control plane node.
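A minimal sketch of that copy is below. The host name and user are hypothetical, the pki files are root-owned so you'll need to run this with appropriate privileges, and the files must ultimately land under /etc/kubernetes/pki/ on the new node:

```bash
# Sketch: copy the shared cluster certificates to an additional control plane
# node (repeat for each one). Create ~/pki and ~/pki/etcd on the target first.
scp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key \
    /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub \
    /etc/kubernetes/pki/front-proxy-ca.crt /etc/kubernetes/pki/front-proxy-ca.key \
    ubuntu@k8s-master2:~/pki/
scp /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.key \
    ubuntu@k8s-master2:~/pki/etcd/
# Then, on k8s-master2, move the files into /etc/kubernetes/pki/ (and pki/etcd/).
```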

Your instructions are going to be different, but the control plane join command will look something like the one below.
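This is illustrative only; the token and certificate hash are placeholders, and kubeadm prints the real values at the end of the init:

```bash
# Join an additional control plane node; your token and hash will differ.
sudo kubeadm join k8s.hollow.local:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane
```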

When you're done with your additional control plane nodes, you should see a success message with some instructions on setting up the KUBECONFIG file, which we'll cover later.

Join Worker Nodes to the Cluster

At this point we should have three control plane nodes working in our cluster. Let's add the worker nodes now, using the other kubeadm join command presented to us after setting up our first control plane node.

Again, yours will be different, but it will look something like this:
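(The token and hash below are placeholders rather than the real values from my output.)

```bash
# Join a worker node to the cluster; your token and hash will differ.
sudo kubeadm join k8s.hollow.local:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```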

Run the command on each of your worker nodes.

Setup KUBECONFIG

Log back into your first Kubernetes control plane node, and we'll set up KUBECONFIG so we can issue some commands against our cluster and ensure that it's working properly.

Run the following to configure your KUBECONFIG file for use:
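```bash
# Copy the admin kubeconfig into your home directory and take ownership of it
# (these are the standard steps kubeadm prints at the end of the init).
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```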

When you’re done, you can run:
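```bash
# List the cluster nodes and their status.
kubectl get nodes
```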

We can see here that the cluster has been created, but the nodes show a NotReady status. This is because we're missing a CNI (the pod network).

Deploy a CNI

There are a variety of network plugins (CNIs) that could be deployed. For this simple example I've used Calico. Simply apply the Calico manifest from one of your nodes.
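Something like the following; the manifest URL and version change between Calico releases, so check the Calico documentation for the current one:

```bash
# Deploy Calico as the cluster's pod network. The URL/version is an example.
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
```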

When you’re done, you should have a working cluster.

Summary

At this point you should have a very basic Kubernetes cluster up and running, and you should be able to use storage classes backed by your vSphere environment. Your next steps could be building cool things on Kubernetes, tinkering with the build to use different CNIs and container runtimes, or automating the whole thing! Good luck!