ClusterAPI Demystified
November 4, 2019
Deploying Kubernetes clusters may be the biggest hurdle in learning Kubernetes, and it remains one of the bigger challenges in managing it. ClusterAPI is a project designed to ease this burden and make deploying and managing Kubernetes clusters simpler.
The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes.
kubernetes-sigs/cluster-api
This post dives into ClusterAPI to investigate how it works and how you can use it.
Logical Architecture
Let’s take a look at what ClusterAPI looks like in your environment once it is fully configured.
ClusterAPI uses a management cluster and a series of components deployed within it (we’ll discuss those a bit later) to manage many “workload” clusters across different providers. Think of it this way: the management cluster builds workload clusters for you across vSphere, AWS, and other platforms. All you have to do is apply a desired-state configuration file to the management cluster, and it does the rest.
Yeah, that’s pretty neat.
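As a rough sketch of that workflow (the file names here are hypothetical placeholders), you describe the cluster you want in a manifest, apply it to the management cluster, and let the controllers do the rest:
# Hypothetical example: apply a desired-state manifest to the management cluster
kubectl --kubeconfig=management.kubeconfig apply -f my-workload-cluster.yaml
# Then watch the management cluster reconcile it into a real workload cluster
kubectl --kubeconfig=management.kubeconfig get clusters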
How Does it Work?
OK, so we have a management cluster set up. How do we use it to build our workload clusters? Well, we get to leverage the power of control loops, which have been discussed here. The ClusterAPI setup can be broken into three phases.
1 – Install ClusterAPI into the Management Cluster
First things first, your management cluster needs some components installed so that a plain old Kubernetes cluster can become a CAPI-enabled management cluster. This is done by applying a Kubernetes manifest that defines those components. The process is documented at https://cluster-api.sigs.k8s.io/tasks/installation.html
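As a sketch (the release version and asset URL below are examples from the v1alpha2 era; grab the current ones from the installation page above), this boils down to a single kubectl apply against the management cluster:
# Example only – substitute the release listed in the ClusterAPI installation docs
kubectl apply -f https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.2.7/cluster-api-components.yaml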
Applying this manifest installs the following components:
- namespace – cluster-api-system
- custom resource definition – clusters
- custom resource definition – machinedeployments
- custom resource definition – machines
- custom resource definition – machinesets
- role – CAPI leader election
- cluster role – CAPI manager
- role binding – CAPI leader election
- deployment – CAPI controller manager
The components above make up the ClusterAPI control loops in the management cluster.
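If you want to confirm the control loops are in place, a quick look at the CRDs and the controller pod should do it (the namespace here matches the list above):
kubectl get crds | grep cluster.x-k8s.io      # clusters, machines, machinesets, machinedeployments
kubectl get pods -n cluster-api-system        # the CAPI controller manager should be Running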
2 – Install Bootstrap Components
The bootstrap components are responsible for turning infrastructure nodes (servers, VMs, etc.) into Kubernetes nodes through the use of cloud-init. Again, to deploy the bootstrap components, just apply the manifest to the management cluster; a sketch of that apply is shown below. Once the bootstrap components are deployed, the resources listed after the example will exist in the management cluster.
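As before, this is only an illustration; the release URL and version are examples, so use whatever asset the ClusterAPI documentation references for your version:
# Example only – apply the kubeadm bootstrap provider (CABPK) components
kubectl apply -f https://github.com/kubernetes-sigs/cluster-api-bootstrap-provider-kubeadm/releases/download/v0.1.5/bootstrap-components.yaml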
- namespace – cabpk-system
- custom resource definition – kubeadmconfigs.bootstrap
- custom resource definition – kubeadmconfigtemplates.bootstrap
- role – cabpk-leader-election-role
- cluster role – cabpk-manager-role
- cluster role – cabpk-proxy-role
- role binding – cabpk-leader-election-rolebinding
- cluster role binding – cabpk-manager-rolebinding
- cluster role binding – cabpk-proxy-rolebinding
- service – cabpk-controller-manager
- deployment – cabpk-controller-manager
At this point, the components necessary to configure workload clusters are ready to go. The next piece is what actually deploys the infrastructure resources that the bootstrap components will run against.
3 – Install Infrastructure Components
Now that the ClusterAPI and bootstrap components are installed, the next step is to install the infrastructure components. These are specific to the cloud provider you’ll be creating workload clusters on, which means you’ll need the right infrastructure components for the particular cloud you plan to install clusters on.
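Each provider publishes its own components manifest (commonly named infrastructure-components.yaml), and applying it follows the same pattern as before. The exact download location varies by provider, so treat this as a sketch:
# Example only – apply the provider-specific infrastructure components (CAPA for AWS, CAPV for vSphere)
kubectl apply -f infrastructure-components.yaml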
AWS
If you plan to deploy workload clusters on AWS, the following resources will be created in the management cluster:
- namespace – capa-system
- custom resource definition – awsclusters.infrastructure
- custom resource definition – awsmachines.infrastructure
- custom resource definition – awsmachinetemplates.infrastructure
- role – capa-leader-election-role
- cluster role – capa-manager-role
- cluster role – capa-proxy-role
- role binding – capa-leader-election-rolebinding
- cluster role binding – capa-manager-rolebinding
- cluster role binding – capa-proxy-rolebinding
- secret – capa-manager-bootstrap-credentials
- service – capa-controller-manager-metrics-service
- deployment – capa-controller-manager
vSphere
If you plan to deploy vSphere workload clusters, then the following components will be deployed in the management cluster (a quick verification example follows the list):
- namespace – capv-system
- custom resource definition – vsphereclusters.infrastructure
- custom resource definition – vspheremachines.infrastructure
- custom resource definition – vspheremachinetemplates.infrastructure
- role – capv-leader-election-role
- cluster role – capv-manager-role
- cluster role – capv-proxy-role
- role binding – capv-leader-election-rolebinding
- cluster role binding – capv-manager-rolebinding
- cluster role binding – capv-proxy-rolebinding
- service – capv-controller-manager-metrics-service
- deployment – capv-controller-manager
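With the provider components applied (this check assumes the vSphere provider; swap capv-system for capa-system if you installed the AWS provider), a quick look confirms the controller is up and the new CRDs exist:
kubectl get pods -n capv-system      # or: kubectl get pods -n capa-system
kubectl get crds | grep infrastructure.cluster.x-k8s.io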
Deploy a Workload Cluster
At this point our management cluster is ClusterAPI enabled, with control loops patiently waiting for some instructions. The next step is to provide some configuration to the management cluster and have those loops spring into action and deploy our clusters.
To do this, we want to provide several more Kubernetes manifests specific to the target cloud provider. Here is a look at some examples for a vSphere cluster.
vSphere Manifests
First, the Cluster manifest, which describes our Kubernetes cluster object to be created:
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
name: vsphere-cluster-1
namespace: default
spec:
clusterNetwork:
pods:
cidrBlocks:
- 100.96.0.0/11
serviceDomain: cluster.local
services:
cidrBlocks:
- 100.64.0.0/13
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: VSphereCluster
name: vsphere-cluster-1
namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: VSphereCluster
metadata:
name: vsphere-cluster-1
namespace: default
spec:
cloudProviderConfiguration:
global:
insecure: true
secretName: cloud-provider-vsphere-credentials
secretNamespace: kube-system
network:
name: VMs-170
providerConfig:
cloud:
controllerImage: gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.0.0
storage:
attacherImage: quay.io/k8scsi/csi-attacher:v1.1.1
controllerImage: gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1
livenessProbeImage: quay.io/k8scsi/livenessprobe:v1.1.0
metadataSyncerImage: gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1
nodeDriverImage: gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1
provisionerImage: quay.io/k8scsi/csi-provisioner:v1.2.1
registrarImage: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
virtualCenter:
10.10.50.11:
datacenters: HollowLab
workspace:
datacenter: HollowLab
datastore: Synology02-NFS01
folder: kubernetes
resourcePool: 'HollowCluster/Resources/capv-workload'
server: 10.10.50.11
server: 10.10.50.11
Next, the control plane objects, which define our Kubernetes control-plane node and its configuration:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
name: vsphere-cluster-1-controlplane-0
namespace: default
spec:
clusterConfiguration:
apiServer:
extraArgs:
cloud-provider: external
controllerManager:
extraArgs:
cloud-provider: external
imageRepository: k8s.gcr.io
initConfiguration:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
cloud-provider: external
name: '{{ ds.meta_data.hostname }}'
preKubeadmCommands:
- hostname "{{ ds.meta_data.hostname }}"
- echo "::1 ipv6-localhost ipv6-loopback" >/etc/hosts
- echo "127.0.0.1 localhost {{ ds.meta_data.hostname }}" >>/etc/hosts
- echo "{{ ds.meta_data.hostname }}" >/etc/hostname
users:
- name: capv
sshAuthorizedKeys:
- ssh-rsa OMITTED
sudo: ALL=(ALL) NOPASSWD:ALL
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
labels:
cluster.x-k8s.io/cluster-name: vsphere-cluster-1
cluster.x-k8s.io/control-plane: "true"
name: vsphere-cluster-1-controlplane-0
namespace: default
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
name: vsphere-cluster-1-controlplane-0
namespace: default
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: VSphereMachine
name: vsphere-cluster-1-controlplane-0
namespace: default
version: 1.15.3
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: VSphereMachine
metadata:
labels:
cluster.x-k8s.io/cluster-name: vsphere-cluster-1
cluster.x-k8s.io/control-plane: "true"
name: vsphere-cluster-1-controlplane-0
namespace: default
spec:
datacenter: HollowLab
diskGiB: 50
memoryMiB: 2048
network:
devices:
- dhcp4: true
dhcp6: false
networkName: VMs-170
numCPUs: 2
template: ubuntu-1804-kube-v1.15.3
And then finally, the machine deployment, which describes the worker nodes for this cluster:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfigTemplate
metadata:
name: vsphere-cluster-1-md-0
namespace: default
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
cloud-provider: external
name: '{{ ds.meta_data.hostname }}'
preKubeadmCommands:
- hostname "{{ ds.meta_data.hostname }}"
- echo "::1 ipv6-localhost ipv6-loopback" >/etc/hosts
- echo "127.0.0.1 localhost {{ ds.meta_data.hostname }}" >>/etc/hosts
- echo "{{ ds.meta_data.hostname }}" >/etc/hostname
users:
- name: capv
sshAuthorizedKeys:
- ssh-rsa OMITTED
sudo: ALL=(ALL) NOPASSWD:ALL
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
labels:
cluster.x-k8s.io/cluster-name: vsphere-cluster-1
name: vsphere-cluster-1-md-0
namespace: default
spec:
replicas: 3
selector:
matchLabels:
cluster.x-k8s.io/cluster-name: vsphere-cluster-1
template:
metadata:
labels:
cluster.x-k8s.io/cluster-name: vsphere-cluster-1
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfigTemplate
name: vsphere-cluster-1-md-0
namespace: default
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: VSphereMachineTemplate
name: vsphere-cluster-1-md-0
namespace: default
version: 1.15.3
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: VSphereMachineTemplate
metadata:
name: vsphere-cluster-1-md-0
namespace: default
spec:
template:
spec:
datacenter: HollowLab
diskGiB: 50
memoryMiB: 2048
network:
devices:
- dhcp4: true
dhcp6: false
networkName: VMs-170
numCPUs: 2
template: ubuntu-1804-kube-v1.15.3
The end result of applying these manifests to the management cluster is that CAPI deploys our new workload cluster just as we’ve specified. The screenshot below shows the result of doing exactly that to create a workload cluster in my vSphere environment.
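For reference, applying those manifests and pulling the new cluster’s kubeconfig looks roughly like the following; the file names are placeholders, and the kubeconfig secret name and data key follow the usual ClusterAPI convention (<cluster-name>-kubeconfig with the kubeconfig under the value key), so verify against your own management cluster:
# Hypothetical file names – apply the cluster, control plane, and machine deployment manifests
kubectl apply -f vsphere-cluster.yaml -f vsphere-controlplane.yaml -f vsphere-machinedeployment.yaml
# Retrieve the workload cluster's kubeconfig from the secret CAPI creates (name/key per convention)
kubectl get secret vsphere-cluster-1-kubeconfig -o jsonpath='{.data.value}' | base64 -d > vsphere-cluster-1.kubeconfig
kubectl --kubeconfig=vsphere-cluster-1.kubeconfig get nodes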
AWS Manifests
The AWS manifests are very similar to the vSphere manifests, but of course the underlying infrastructure is different, so they must be modified a bit. Here are the AWS manifests used for my cluster. NOTE: these manifests contain information from my environment, such as my SSH key name, so you can’t use them as is.
Kubernetes Cluster Objects:
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
name: aws-cluster-1
spec:
clusterNetwork:
pods:
cidrBlocks: ["192.168.0.0/16"]
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
name: aws-cluster-1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
name: aws-cluster-1
spec:
region: us-east-2
sshKeyName: vmc-cna-admin
Control Plane Manifest:
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
name: aws-cluster-1-controlplane-0
labels:
cluster.x-k8s.io/control-plane: "true"
cluster.x-k8s.io/cluster-name: "aws-cluster-1"
spec:
version: 1.15.3
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
name: aws-cluster-1-controlplane-0
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
name: aws-cluster-1-controlplane-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
metadata:
name: aws-cluster-1-controlplane-0
spec:
instanceType: t2.medium
ami:
id: ami-0ca9e222761de069b
iamInstanceProfile: "control-plane.cluster-api-provider-aws.sigs.k8s.io"
sshKeyName: "vmc-cna-admin"
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
name: aws-cluster-1-controlplane-0
spec:
initConfiguration:
nodeRegistration:
name: '{{ ds.meta_data.hostname }}'
kubeletExtraArgs:
cloud-provider: aws
clusterConfiguration:
apiServer:
extraArgs:
cloud-provider: aws
controllerManager:
extraArgs:
cloud-provider: aws
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
name: aws-cluster-1-controlplane-1
labels:
cluster.x-k8s.io/control-plane: "true"
cluster.x-k8s.io/cluster-name: "aws-cluster-1"
spec:
version: 1.15.3
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
name: aws-cluster-1-controlplane-1
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
name: aws-cluster-1-controlplane-1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
metadata:
name: aws-cluster-1-controlplane-1
spec:
instanceType: t2.medium
ami:
id: ami-0ca9e222761de069b
iamInstanceProfile: "control-plane.cluster-api-provider-aws.sigs.k8s.io"
sshKeyName: "vmc-cna-admin"
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
name: aws-cluster-1-controlplane-1
spec:
joinConfiguration:
controlPlane: {}
nodeRegistration:
name: '{{ ds.meta_data.hostname }}'
kubeletExtraArgs:
cloud-provider: aws
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
name: aws-cluster-1-controlplane-2
labels:
cluster.x-k8s.io/control-plane: "true"
cluster.x-k8s.io/cluster-name: "aws-cluster-1"
spec:
version: 1.15.3
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
name: aws-cluster-1-controlplane-2
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
name: aws-cluster-1-controlplane-2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
metadata:
name: aws-cluster-1-controlplane-2
spec:
instanceType: t2.medium
ami:
id: ami-0ca9e222761de069b
iamInstanceProfile: "control-plane.cluster-api-provider-aws.sigs.k8s.io"
sshKeyName: "vmc-cna-admin"
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
name: aws-cluster-1-controlplane-2
spec:
joinConfiguration:
controlPlane: {}
nodeRegistration:
name: '{{ ds.meta_data.hostname }}'
kubeletExtraArgs:
cloud-provider: aws
Machine Deployments:
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
name: aws-cluster-1-md-0
labels:
cluster.x-k8s.io/cluster-name: aws-cluster-1
nodepool: nodepool-0
spec:
replicas: 3
selector:
matchLabels:
cluster.x-k8s.io/cluster-name: aws-cluster-1
nodepool: nodepool-0
template:
metadata:
labels:
cluster.x-k8s.io/cluster-name: aws-cluster-1
nodepool: nodepool-0
spec:
version: 1.15.3
bootstrap:
configRef:
name: aws-cluster-1-md-0
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfigTemplate
infrastructureRef:
name: aws-cluster-1-md-0
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachineTemplate
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachineTemplate
metadata:
name: aws-cluster-1-md-0
spec:
template:
spec:
instanceType: t2.medium
ami:
id: ami-0ca9e222761de069b
iamInstanceProfile: "nodes.cluster-api-provider-aws.sigs.k8s.io"
sshKeyName: "vmc-cna-admin"
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfigTemplate
metadata:
name: aws-cluster-1-md-0
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
name: '{{ ds.meta_data.hostname }}'
kubeletExtraArgs:
cloud-provider: aws
Once those manifests were applied to my management cluster, my AWS account started spinning up a workload cluster.
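For reference, and assuming the manifests above were saved to files named like the ones in the cleanup commands later in this post, the apply itself was a one-liner:
kubectl apply -f aws-cluster.yaml -f controlplane.yaml -f machinedeployment.yaml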
Management Cluster Objects
At this point, a pair of clusters is deployed, and we can see this by running the kubectl get clusters command against our management Kubernetes cluster.
And if we look at the Machine objects through the Kubernetes API, we can see the machines that have been deployed.
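The commands themselves are nothing exotic:
kubectl get clusters      # shows vsphere-cluster-1 and aws-cluster-1
kubectl get machines      # shows the control-plane and worker machines for each cluster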
The cool part here is that we can modify our manifests and let the Kubernetes control loops reconcile our clusters. Think about what that means when we need to add or remove nodes!
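For example, scaling the AWS worker pool is just a matter of changing the replicas count on the MachineDeployment defined earlier and letting the control loop reconcile; a patch like this is one way to do it:
# Bump the AWS worker pool from 3 to 5 nodes and let the controllers do the rest
kubectl patch machinedeployment aws-cluster-1-md-0 --type merge -p '{"spec":{"replicas":5}}'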
In fact, this blog post is just about over, and I should spin down my AWS nodes so I don’t keep paying for resources I’m not using. I’ll just delete the objects from my management cluster and spin them up again later.
kubectl delete -f machinedeployment.yaml
kubectl delete -f controlplane.yaml
kubectl delete -f aws-cluster.yaml
Bootstrapping Management Clusters
I know what you’re thinking: this solution builds Kubernetes clusters for me, but I need a Kubernetes cluster built before I can even use it. Well, you’re right, but there is a workaround for that if you need to get your management cluster set up.
Right from our own laptop, we can use the “kind” project that I’ve written about previously to deploy our management components.
From your laptop, you’d install a tool like clusterctl, which leverages kind to build a Kubernetes cluster inside Docker containers on your machine. The ClusterAPI components discussed above are deployed into this cluster, and it in turn builds our management cluster and “pivots” (or moves) the components from the kind cluster to the management cluster. Basically, we spin up a temporary cluster on our laptop, which deploys a management cluster and copies the components to it so we can then manage our workload clusters from there. The process for setting up a management cluster looks similar to the diagram below.
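A minimal sketch of that bootstrap flow, assuming you already have kind and kubectl installed (the cluster name is arbitrary), looks something like this:
# Spin up a temporary local cluster with kind; this becomes the bootstrap cluster
kind create cluster --name capi-bootstrap
# From here, install the ClusterAPI, bootstrap, and infrastructure components into the kind
# cluster (as covered earlier), let it build the long-lived management cluster, then pivot.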
Summary
ClusterAPI lets you manage your Kubernetes clusters via desired-state configuration manifests, just like the applications you deploy on top of those clusters, giving users a familiar experience. The project provides a quick way to stand up, modify, and delete clusters in your environment and alleviates many of the headaches around cluster management.