Using YTT to Customize TKG Deployments
November 9, 2020

If you’ve worked with Kubernetes for very long, you’ve surely run into the need to manage YAML files. There are a bunch of options out there, each with its own benefits and drawbacks. One of these tools is called ytt and comes as part of the Carvel tools (formerly k14s).
If you’re working with the Tanzu Kubernetes Grid (TKG) product from VMware, you’re likely to be using ytt to manage your TKG YAML manifests. This post aims to help you get started with using ytt for your own customizations.
How Does YTT Work?
Before we get too far, you’re going to need to know some basics about how ytt works. There are a ton of things ytt can do to update YAML manifests, so we’ll only cover a few that are commonly used by TKG. If you want to learn more, check out the Carvel tools page and its examples, which you can play with on your own.
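Before we get to overlays, here’s a minimal sketch of ytt’s basic templating mechanics (hypothetical files, not part of TKG): ytt code lives in YAML comments that start with #@, and a file annotated with #@data/values declares the variables a template can reference.

#! template.yaml -- a plain YAML file with ytt annotations
#@ load("@ytt:data", "data")
---
greeting: #@ "hello, " + data.values.name

#! values.yaml -- a separate file declaring the variables
#@data/values
---
name: world

Running ytt -f template.yaml -f values.yaml renders greeting: hello, world.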
Overlays
One of the main things that TKG uses ytt for is overlays. An overlay file describes how another YAML manifest should be altered. In this way, we can take a normal YAML manifest, such as a Kubernetes Deployment file, and change the “base” template with what’s in the overlay template. TKG uses overlays to alter its default deployment files. This can be really handy for adding your own customizations to TKG deployments, like web proxies or custom registries, which we’ll see later.
Here’s our first example of a basic substitution, using some non-technical files. The file below is the “base” template, which we’ll alter with our overlay files.
---
pizzatype: thin
---
apiVersion: v1
kind: meal
metadata:
  time: morning
spec:
  meal: breakfast
  items:
    drinks: coffee
    food: pancakes
---
apiVersion: v2
kind: meal
metadata:
  time: noon
spec:
  meal: lunch
  items:
    drinks: tea
    food: hotdog
    condiments:
    - ketchup
We’ll also create a values file which will be used to store some custom variables that we might want to reference later on. That values file can be seen below.
#@data/values
---
pizza: deep dish
breakfast_side: bacon
breakfast_food: waffles
breakfast_drinks: coffee
condiments: #@ ["mustard", "onions", "pickle spear", "tomatoes", "celery salt", "relish"]
Our first job will be to create an overlay file that changes the pizzatype entry in the base template. The goal is to replace the pizzatype value of thin with the correct value of deep dish.
Here is our first overlay file, shown after these notes. Let’s take a look at what each line does:
- Line 1 – Loads the overlay module for ytt
- Line 2 – Loads the data values from our variables file
- Line 3 – Finds a section in our base template, in this case looking for a key/value pair of pizzatype: thin
- Line 5 – Tells ytt to replace the value found on the line below
- Line 6 – Sets pizzatype to the value retrieved from the values file, which is deep dish
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")
#@overlay/match by=overlay.subset({"pizzatype": "thin"})
---
#@overlay/replace
pizzatype: #@ data.values.pizza
Let’s look at the results of running these files through ytt. I installed the ytt binary and ran ytt -f values.yaml -f overlay.yaml -f meals.yaml.
As you can see from the results below, the key pizzatype has a new value. Now it’s properly set to deep dish.
pizzatype: deep dish
---
apiVersion: v1
kind: meal
metadata:
  time: morning
spec:
  meal: breakfast
  items:
    drinks: coffee
    food: pancakes
---
apiVersion: v2
kind: meal
metadata:
  time: noon
spec:
  meal: lunch
  items:
    drinks: tea
    food: hotdog
    condiments:
    - ketchup
Let’s keep going. Next, we’ll take a look at the breakfast section, which looks more like a Kubernetes YAML structure, just to get familiar with it. In this case, we need to do a couple of things: replace pancakes with waffles, and add a side of bacon.
Let’s update the existing overlay file so it looks like the code below.
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")

#@overlay/match by=overlay.subset({"pizzatype": "thin"})
---
#@overlay/replace
pizzatype: #@ data.values.pizza

#@overlay/match by=overlay.subset({"spec":{"meal":"breakfast"}})
---
spec:
  items:
    #@overlay/match missing_ok=True
    side: #@ data.values.breakfast_side
    food: waffles
    drinks: coffee
Notice that in the overlay file above, we have a new match condition, where we’re looking for {"spec":{"meal":"breakfast"}} in our base template. If ytt finds this section, we update it with our new values. You can see that food should be waffles, and a new item named side is added, with its value pulled from the values file. Also notice that food was hardcoded in the overlay and didn’t come from the values file.
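The #@overlay/match missing_ok=True annotation is doing real work here: by default, every key an overlay names must already exist in the base document, and ytt errors out if one doesn’t. Since side isn’t in the base template, we need the annotation. Here’s a minimal, self-contained sketch of the same idea (hypothetical files):

#! base.yaml
---
a: 1

#! add-key.yaml -- an overlay that adds a key the base doesn't have
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.all
---
#@overlay/match missing_ok=True
b: 2

Running ytt -f base.yaml -f add-key.yaml produces a document with both a: 1 and b: 2; remove the missing_ok annotation and ytt exits with a matching error instead.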
The result of running the ytt command (ytt -f values.yaml -f overlay.yaml -f meals.yaml) is:
pizzatype: deep dish
---
apiVersion: v1
kind: meal
metadata:
  time: morning
spec:
  meal: breakfast
  items:
    drinks: coffee
    food: waffles
    side: bacon
---
apiVersion: v2
kind: meal
metadata:
  time: noon
spec:
  meal: lunch
  items:
    drinks: tea
    food: hotdog
    condiments:
    - ketchup
And now one last example, where we’ll update lunch. The lunch items from our base template have food: hotdog, which is fine Chicago food, but the condiments are clearly the wrong values. In this case, we’ll update our overlay file to pass a list of values for condiments.
Here is our updated overlay file.
#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")

#@overlay/match by=overlay.subset({"pizzatype": "thin"})
---
#@overlay/replace
pizzatype: #@ data.values.pizza

#@overlay/match by=overlay.subset({"spec":{"meal":"breakfast"}})
---
spec:
  items:
    #@overlay/match missing_ok=True
    side: #@ data.values.breakfast_side
    food: waffles
    drinks: coffee

#@overlay/match by=overlay.subset({"spec":{"items":{"food":"hotdog"}}}), expects="0+"
---
spec:
  items:
    food: hotdog
    #@overlay/replace
    condiments: #@ data.values.condiments
In this case, the overlay has a new matching object, {"spec":{"items":{"food":"hotdog"}}}, which says that our overlay will affect items where the food is hotdog, but nothing else. You’ll also notice the , expects="0+" argument, which tells ytt not to error out if it can’t find a hotdog. Think about it: this routine isn’t needed if the food type is a hamburger or something. We only want to update the condiments if the food item is a hotdog. Here is the result:
pizzatype: deep dish
---
apiVersion: v1
kind: meal
metadata:
  time: morning
spec:
  meal: breakfast
  items:
    drinks: coffee
    food: waffles
    side: bacon
---
apiVersion: v2
kind: meal
metadata:
  time: noon
spec:
  meal: lunch
  items:
    drinks: tea
    food: hotdog
    condiments:
    - mustard
    - onions
    - pickle spear
    - tomatoes
    - celery salt
    - relish
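As a side note, expects accepts a few forms beyond "0+". Here’s a quick sketch of the common variants, using a hypothetical match criterion (the TKG overlays later in this post use expects="1+"):

#! Match exactly one document -- the default when expects is omitted
#@overlay/match by=overlay.subset({"kind": "meal"})

#! Match at least one document; error only if none are found
#@overlay/match by=overlay.subset({"kind": "meal"}), expects="1+"

#! Match zero or more documents; never error on a missing match
#@overlay/match by=overlay.subset({"kind": "meal"}), expects="0+"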
Tanzu Kubernetes Grid Customizations
Now that you’ve had a crash course in ytt, you can use some of your new skills, along with some additional help from the Carvel tools site, to customize a TKG deployment. This has been written about before, such as in this post from Chip Zoller, where he used ytt to add a custom image registry. Below are some additional customizations that I’ve commonly built for customers, and that you may find useful when deploying Tanzu Kubernetes Grid clusters in your own environment.
Custom Image Registry Settings
The file below was created in the providers/infrastructure-[cloud]/ytt folder to specify a custom image registry for the containerd daemon. You would need to substitute your own image registry settings within the file.
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"}), expects="1+"
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    #@overlay/match missing_ok=True
    files:
    - path: /etc/containerd/config.toml
      content: |
        version = 2
        [plugins]
          [plugins."io.containerd.grpc.v1.cri"]
            sandbox_image = "registry.tkg.vmware.run/pause:3.2"
            [plugins."io.containerd.grpc.v1.cri".registry]
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
                [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.hollow.local"]
                  endpoint = ["http://harbor.hollow.local"]

#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}), expects="1+"
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      files:
      - path: /etc/containerd/config.toml
        content: |
          version = 2
          [plugins]
            [plugins."io.containerd.grpc.v1.cri"]
              sandbox_image = "registry.tkg.vmware.run/pause:3.2"
              [plugins."io.containerd.grpc.v1.cri".registry]
                [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
                  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.hollow.local"]
                    endpoint = ["http://harbor.hollow.local"]
Set Web Proxies for TKG Control Plane Nodes
The file below was created in the providers/infrastructure-[cloud]/ytt folder. You would need to update the proxy and no_proxy rules for your own environment.
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"}), expects="1+"
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    #@overlay/match missing_ok=True
    preKubeadmCommands:
    #! Add HTTP_PROXY to containerd configuration file
    - echo '[Service]' > /etc/systemd/system/containerd.service.d/http-proxy.conf
    - echo 'Environment="HTTP_PROXY=http://10.0.4.168:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    - echo 'Environment="NO_PROXY=.hollow.local,169.254.169.254,localhost,127.0.0.1,kubernetes.default.svc,.svc,.amazonaws.com,10.0.0.0/8,10.96.0.0/12,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    - echo 'PROXY_ENABLED="yes"' > /etc/sysconfig/proxy
    - echo 'HTTP_PROXY="http://10.0.4.168:3128"' >> /etc/sysconfig/proxy
    - echo 'NO_PROXY=".hollow.local,169.254.169.254,localhost,127.0.0.1,kubernetes.default.svc,.svc,.amazonaws.com,10.0.0.0/8,10.96.0.0/12,100.96.0.0/11"' >> /etc/sysconfig/proxy
    - systemctl daemon-reload
    - systemctl restart containerd
Set Web Proxies for TKG Worker Nodes
The file below was created in the providers/infrastructure-[cloud]/ytt folder. You would need to update the proxy and no_proxy rules for your own environment.
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}), expects="1+"
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      preKubeadmCommands:
      #! Add HTTP_PROXY to containerd configuration file
      - echo '[Service]' > /etc/systemd/system/containerd.service.d/http-proxy.conf
      - echo 'Environment="HTTP_PROXY=http://10.0.4.168:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      - echo 'Environment="NO_PROXY=.hollow.local,169.254.169.254,localhost,127.0.0.1,kubernetes.default.svc,.svc,.amazonaws.com,10.0.0.0/8,10.96.0.0/12,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      - echo 'PROXY_ENABLED="yes"' > /etc/sysconfig/proxy
      - echo 'HTTP_PROXY="http://10.0.4.168:3128"' >> /etc/sysconfig/proxy
      - echo 'NO_PROXY=".hollow.local,169.254.169.254,localhost,127.0.0.1,kubernetes.default.svc,.svc,.amazonaws.com,10.0.0.0/8,10.96.0.0/12,100.96.0.0/11"' >> /etc/sysconfig/proxy
      - systemctl daemon-reload
      - systemctl restart containerd
Using Internal Load Balancers within AWS
Sometimes you don’t want to expose your TKG cluster with a public IP address. To deploy internal load balancers instead, this configuration can be added to your providers/infrastructure-aws/ytt folder.
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"AWSCluster"})
---
#! Use a private load balancer instead of the default load balancers
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSCluster
spec:
  #@overlay/match missing_ok=True
  controlPlaneLoadBalancer:
    scheme: internal
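One way to sanity-check an overlay like this before deploying is to run it through ytt against a stub of the manifest it targets. Here’s a hypothetical minimal stub (the real AWSCluster manifest has many more fields):

#! stub-awscluster.yaml -- hypothetical input for local testing
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSCluster
spec:
  region: us-east-1

Running ytt -f stub-awscluster.yaml -f internal-lb.yaml (substituting whatever filename you gave the overlay) should return the stub with spec.controlPlaneLoadBalancer.scheme: internal merged in, which confirms the match criteria line up before TKG ever consumes the overlay.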
Summary
This was a long post, and it required you to learn a little ytt, but now you should have a good start on customizing your Tanzu Kubernetes Grid deployments. Cluster API does a great job of deploying Kubernetes clusters, but there are always customizations you’ll need to make to integrate them into your environment. I hope the primer and examples from this post are enough to get you on your way to building your own customizations.