ClusterAPI Demystified

November 4, 2019 | By Eric Shanks

Deploying Kubernetes clusters may be the biggest hurdle in learning Kubernetes, and managing those clusters over time presents its own challenges. ClusterAPI is a project designed to ease this burden and make the deployment and management of Kubernetes clusters simpler.

The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes.

kubernetes-sigs/cluster-api

This post dives into ClusterAPI to investigate how it works and how you can use it.

Logical Architecture

Let’s take a look at what ClusterAPI might look like in your environment once it’s fully configured.

ClusterAPI uses a management cluster and a series of components deployed within it (we’ll discuss those a bit later) to manage many “workload” clusters across different providers. Think of it this way: the management cluster builds workload clusters for you on vSphere, AWS, and other platforms. All you have to do is apply a desired state configuration file to the management cluster, and it does the rest.

Yeah, that’s pretty neat.

How Does it Work?

OK, so we have a management cluster set up. How do we use it to build our workload clusters? Well, we get to leverage the power of control loops, which have been discussed here. The ClusterAPI setup can be broken into three phases.

1 – Install ClusterAPI into the Management Cluster

First things first: your management cluster needs some components installed so that a plain old Kubernetes cluster can become a CAPI-enabled management cluster. This is done by applying a Kubernetes manifest that defines those components. The process is documented here: https://cluster-api.sigs.k8s.io/tasks/installation.html

The install manifest will install the following components:

  • namespace – cluster-api-system
  • custom resource definition – clusters
  • custom resource definition – machinedeployments
  • custom resource definition – machines
  • custom resource definition – machinesets
  • role – CAPI leader election
  • cluster role – CAPI manager
  • role binding – CAPI leader election
  • deployment – CAPI controller manager

Together, these components make up the ClusterAPI control loops.
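Applying that manifest is a single kubectl command. The release version below is illustrative (v0.2.x was current at the time of writing), so check the project's releases page for the current one:

```shell
# Turn a plain Kubernetes cluster into a CAPI-enabled management cluster
# by applying the ClusterAPI components manifest. The version number is
# an example -- substitute the current release.
kubectl apply -f https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.2.7/cluster-api-components.yaml
```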

2 – Install Bootstrap Components

The bootstrap components are responsible for turning infrastructure nodes (servers, VMs, etc.) into Kubernetes nodes through the use of cloud-init. Again, to deploy the bootstrap components, just apply the manifest to the management cluster. When the bootstrap components are deployed, the following resources will be created in the management cluster.

  • namespace – cabpk-system
  • custom resource definition – kubeadmconfigs.bootstrap
  • custom resource definition – kubeadmconfigtemplates.bootstrap
  • role – cabpk-leader-election-role
  • cluster role – cabpk-manager-role
  • cluster role – cabpk-proxy-role
  • role binding – cabpk-leader-election-rolebinding
  • cluster role binding – cabpk-manager-rolebinding
  • cluster role binding – cabpk-proxy-rolebinding
  • service – cabpk-controller-manager
  • deployment – cabpk-controller-manager

At this point, the components necessary to configure workload clusters are ready to go. The next piece is needed to actually deploy the infrastructure resources that the bootstrap components will run on.

3 – Install Infrastructure Components

Now that the ClusterAPI and bootstrap components are installed, the next step is to install the infrastructure components. These are specific to the cloud provider you’ll be installing workload clusters on, which means you’ll need the right infrastructure components for the particular cloud you plan to use.

AWS

If you plan to deploy workload clusters on AWS, the following resources will be deployed in the management cluster:

  • namespace – capa-system
  • custom resource definition – awsclusters.infrastructure
  • custom resource definition – awsmachines.infrastructure
  • custom resource definition – awsmachinetemplates.infrastructure
  • role – capa-leader-election-role
  • cluster role – capa-manager-role
  • cluster role – capa-proxy-role
  • role binding – capa-leader-election-rolebinding
  • cluster role binding – capa-manager-rolebinding
  • cluster role binding – capa-proxy-rolebinding
  • secret – capa-manager-bootstrap-credentials
  • service – capa-controller-manager-metrics-service
  • deployment – capa-controller-manager

vSphere

If you plan to deploy vSphere workload clusters then the following components will be deployed in the management cluster:

  • namespace – capv-system
  • custom resource definition – vsphereclusters.infrastructure
  • custom resource definition – vspheremachines.infrastructure
  • custom resource definition – vspheremachinetemplates.infrastructure
  • role – capv-leader-election-role
  • cluster role – capv-manager-role
  • cluster role – capv-proxy-role
  • role binding – capv-leader-election-rolebinding
  • cluster role binding – capv-manager-rolebinding
  • cluster role binding – capv-proxy-rolebinding
  • service – capv-controller-manager-metrics-service
  • deployment – capv-controller-manager

Deploy a Workload Cluster

At this point, our management cluster is ClusterAPI-enabled, with control loops patiently waiting for some instructions. The next step is to provide some configurations to the management cluster and let those control loops spring into action and deploy our clusters.

To do this, we want to provide several more Kubernetes manifests specific to the target cloud provider. Here is a look at some examples for a vSphere cluster.

vSphere Manifests

First the Cluster manifest that describes our Kubernetes cluster object to be created:
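As a rough sketch of what that looks like (the API versions match the v1alpha2 era of the project, and names like workload-cluster-1 and vcenter.example.com are placeholders, not values from my environment):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: workload-cluster-1
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  # Points at the provider-specific cluster object below
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: VSphereCluster
    name: workload-cluster-1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: VSphereCluster
metadata:
  name: workload-cluster-1
spec:
  # vCenter endpoint the vSphere provider should talk to (placeholder)
  server: vcenter.example.com
```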

Next, the control plane object which will define our Kubernetes control-plane nodes and configurations:
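A minimal sketch of a control-plane node in the v1alpha2 style — a Machine object that ties a bootstrap config to a provider-specific VM definition (all names and the Kubernetes version are placeholders):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: workload-cluster-1-controlplane-0
  labels:
    cluster.x-k8s.io/cluster-name: workload-cluster-1
    cluster.x-k8s.io/control-plane: "true"
spec:
  version: v1.16.2
  # Bootstrap config: how cloud-init/kubeadm should initialize this node
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: workload-cluster-1-controlplane-0
  # Infrastructure: the vSphere VM that backs this machine
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: VSphereMachine
    name: workload-cluster-1-controlplane-0
```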

And then finally, the machine deployment, which lists the worker nodes for this cluster:
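A MachineDeployment works much like a regular Kubernetes Deployment, but it stamps out Machines instead of Pods. A hedged sketch (v1alpha2 API versions; names, replica count, and version are placeholders):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: workload-cluster-1-md-0
  labels:
    cluster.x-k8s.io/cluster-name: workload-cluster-1
spec:
  replicas: 2   # desired number of worker nodes
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: workload-cluster-1
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: workload-cluster-1
    spec:
      version: v1.16.2
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
          kind: KubeadmConfigTemplate
          name: workload-cluster-1-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: VSphereMachineTemplate
        name: workload-cluster-1-md-0
```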

The end result of applying these manifests to the management cluster is that CAPI will deploy our new workload cluster components as we’ve specified. The below screenshot is the result of applying these manifests to the management cluster to create a workload cluster in my vSphere environment.

AWS Manifests

The AWS manifests are very similar to the vSphere manifests, but of course the underlying infrastructure is different, so they must be modified a bit. Here are the AWS manifests used in my cluster. NOTE: these manifests include some info from my environment, like my SSH key name, so you can’t use them as-is.

Kubernetes Cluster Objects:
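The shape mirrors the vSphere case, only the infrastructure object changes. An illustrative sketch (v1alpha2 API versions; the cluster name, region, and key pair name are placeholders, not my actual values):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: aws-workload-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: aws-workload-cluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: aws-workload-cluster
spec:
  region: us-east-1        # placeholder region
  sshKeyName: my-ssh-key   # placeholder EC2 key pair name
```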

Control Plane Manifest:
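Again a hedged sketch: a Machine paired with an AWSMachine instead of a VSphereMachine (instance type, names, and versions are placeholders):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: aws-workload-cluster-controlplane-0
  labels:
    cluster.x-k8s.io/cluster-name: aws-workload-cluster
    cluster.x-k8s.io/control-plane: "true"
spec:
  version: v1.16.2
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: aws-workload-cluster-controlplane-0
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSMachine
    name: aws-workload-cluster-controlplane-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
metadata:
  name: aws-workload-cluster-controlplane-0
spec:
  instanceType: t3.large   # placeholder instance type
  sshKeyName: my-ssh-key   # placeholder EC2 key pair name
```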

Machine Deployments:
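And the worker nodes follow the same MachineDeployment pattern, just pointing at an AWSMachineTemplate. An illustrative sketch with placeholder values:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: aws-workload-cluster-md-0
  labels:
    cluster.x-k8s.io/cluster-name: aws-workload-cluster
spec:
  replicas: 2   # desired number of worker nodes
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: aws-workload-cluster
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: aws-workload-cluster
    spec:
      version: v1.16.2
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
          kind: KubeadmConfigTemplate
          name: aws-workload-cluster-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: AWSMachineTemplate
        name: aws-workload-cluster-md-0
```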

Once those manifests were applied to my management cluster, my AWS account started spinning up a workload cluster.

Management Cluster Objects

At this point, a pair of clusters has been deployed, and we can see this by running the kubectl get clusters command against our management Kubernetes cluster.

And if we look at our machines object through the Kubernetes API we can see our machines that are deployed.
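For example (the cluster names shown here are placeholders, so your output will reflect whatever you named your own clusters):

```shell
# List the workload clusters known to the management cluster
kubectl get clusters

# List the machines (control plane and worker nodes) backing those clusters
kubectl get machines
```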

The cool part here is that we can modify our manifests and let the Kubernetes control loops reconcile our clusters. Think about how simple that makes adding or removing nodes!

In fact, this blog post is over and I should spin down my AWS nodes so I don’t have to keep paying for them when I’m not using those resources. I’ll just delete the resources out of my management cluster and I’ll spin them up again later.

Bootstrapping Management Clusters

I know what you’re thinking: this solution builds Kubernetes clusters for me, but I need a Kubernetes cluster built before I can even use it. Well, you’re right, but there is a workaround for getting your initial management cluster set up.

Right from our own laptop, we can use the “kind” project that I’ve written about previously to deploy our management components.

From your laptop, you’d install a tool like clusterctl, which leverages kind to build a Kubernetes cluster inside Docker containers on your laptop. The ClusterAPI components discussed above are deployed into this temporary cluster, which in turn builds your management cluster and then “pivots” (moves) the components from the kind cluster to the management cluster. Basically, we spin up a temporary cluster on our laptop, it deploys a management cluster and copies the components over, and from there we manage our workload clusters. The process for setting up a management cluster looks similar to the diagram below.

Summary

ClusterAPI will let you manage your Kubernetes clusters via desired state configuration manifests just like the applications you deploy on top of these clusters, giving users a familiar experience. The project provides a quick way to stand up, modify, and delete clusters in your environment to alleviate the headaches around cluster management.