<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Kubernetes on The IT Hollow</title>
    <link>https://theithollow.com/tags/kubernetes/</link>
    <description>Recent content in Kubernetes on The IT Hollow</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Wed, 22 Sep 2021 13:31:53 +0000</lastBuildDate>
    <atom:link href="https://theithollow.com/tags/kubernetes/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Configure a Private Registry for Tanzu Kubernetes Clusters</title>
      <link>https://theithollow.com/2021/09/22/configure-a-private-registry-for-tanzu-kubernetes-clusters/</link>
      <pubDate>Wed, 22 Sep 2021 13:31:53 +0000</pubDate>
      <guid>https://theithollow.com/2021/09/22/configure-a-private-registry-for-tanzu-kubernetes-clusters/</guid>
      <description>&lt;p&gt;A really common task after deploying a Kubernetes cluster is to configure it to use a container registry where the container images are stored. A Tanzu Kubernetes Cluster (TKC) is no exception to this rule. vSphere 7 with Tanzu comes with an embedded Harbor registry that can be used, but in many cases you already have your own container registry, so you&amp;rsquo;d like to continue using that instead.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vSphere 7 with Tanzu Updates</title>
      <link>https://theithollow.com/2021/05/13/vsphere-7-with-tanzu-updates/</link>
      <pubDate>Thu, 13 May 2021 14:45:54 +0000</pubDate>
      <guid>https://theithollow.com/2021/05/13/vsphere-7-with-tanzu-updates/</guid>
      <description>&lt;p&gt;At some point, you&amp;rsquo;ll be faced with an upgrade request. Maybe it&amp;rsquo;s for new Kubernetes features, new security patches, or just to maintain your support. A vSphere 7 with Tanzu deployment has several components that may need to be updated, most of which can be updated independently of one another. In this post we&amp;rsquo;ll walk through an update to vSphere, then update the Supervisor namespace, and then finally the Tanzu Kubernetes cluster.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Customize vSphere 7 with Tanzu Guest Clusters</title>
      <link>https://theithollow.com/2021/02/01/customize-vsphere-7-with-tanzu-guest-clusters/</link>
      <pubDate>Mon, 01 Feb 2021 15:29:00 +0000</pubDate>
      <guid>https://theithollow.com/2021/02/01/customize-vsphere-7-with-tanzu-guest-clusters/</guid>
      <description>&lt;p&gt;Kubernetes clusters can come in many shapes and sizes. Over the past 18 months I&amp;rsquo;ve deployed quite a few Kubernetes clusters for customers, but these clusters all have different requirements. What image registry am I connecting to? Do we need to configure proxies? Will we need to install new certificates to the nodes? Do we need to tweak some containerd configurations? During many of my customer engagements the answer to the above questions is &amp;ldquo;yes.&amp;rdquo;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Resizing Tanzu Kubernetes Grid Cluster Nodes</title>
      <link>https://theithollow.com/2020/12/09/resizing-tanzu-kubernetes-grid-cluster-nodes/</link>
      <pubDate>Wed, 09 Dec 2020 15:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/12/09/resizing-tanzu-kubernetes-grid-cluster-nodes/</guid>
      <description>&lt;p&gt;Have you ever missed the mark when trying to properly size a Kubernetes environment? Maybe the requirements changed, maybe there were wrong assumptions, or maybe the project took off and it just needs more resources. Under normal circumstances, I might suggest building a new Tanzu Kubernetes Grid (TKG) cluster and re-deploying your apps. Unfortunately, as much as I want to treat Kubernetes clusters as ephemeral, they can&amp;rsquo;t always be treated this way. If you need to resize your TKG nodes without re-deploying a new cluster, then keep reading.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Using YTT to Customize TKG Deployments</title>
      <link>https://theithollow.com/2020/11/09/using-ytt-to-customize-tkg-deployments/</link>
      <pubDate>Mon, 09 Nov 2020 15:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/11/09/using-ytt-to-customize-tkg-deployments/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve worked with Kubernetes for very long, you&amp;rsquo;ve surely run into a need to manage YAML files. There are a bunch of options out there with their own benefits and drawbacks. One of these tools is called &lt;code&gt;ytt&lt;/code&gt; and comes as part of the &lt;a href=&#34;https://carvel.dev/&#34;&gt;Carvel&lt;/a&gt; tools (formerly k14s).&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re working with the Tanzu Kubernetes Grid product from VMware, you&amp;rsquo;re likely to be using &lt;code&gt;ytt&lt;/code&gt; to manage your TKG YAML manifests. This post aims to help you get started with using &lt;code&gt;ytt&lt;/code&gt; for your own customizations.&lt;/p&gt;
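&lt;p&gt;As a minimal sketch of the idea (the data values and names here are hypothetical, not taken from the TKG templates themselves), a &lt;code&gt;ytt&lt;/code&gt; template reads values from a data-values file and stamps them into plain YAML:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#! config.yaml
#@ load(&#34;@ytt:data&#34;, &#34;data&#34;)
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: #@ data.values.name
data:
  environment: #@ data.values.environment

#! values.yaml
#@data/values
---
name: tkg-example
environment: lab&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Rendering would then be something like &lt;code&gt;ytt -f config.yaml -f values.yaml&lt;/code&gt;.&lt;/p&gt;</description>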
    </item>
    <item>
      <title>Ingress Routing - TKG Clusters</title>
      <link>https://theithollow.com/2020/09/15/ingress-routing-tkg-clusters/</link>
      <pubDate>Tue, 15 Sep 2020 14:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/09/15/ingress-routing-tkg-clusters/</guid>
      <description>&lt;p&gt;If you have been following &lt;a href=&#34;https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-getting-started-guide/&#34;&gt;the series&lt;/a&gt; so far, you should have a TKG guest cluster in your lab now. The next step is to show how to deploy a simple application and access it through a web browser. This is a pretty trivial task for most Kubernetes operators, but it&amp;rsquo;s a good idea to know what&amp;rsquo;s happening in NSX to make these applications available. We&amp;rsquo;ll walk through that in this post.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploying Tanzu Kubernetes Clusters on vSphere 7</title>
      <link>https://theithollow.com/2020/09/09/deploying-tanzu-kubernetes-clusters-on-vsphere-7/</link>
      <pubDate>Wed, 09 Sep 2020 14:15:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/09/09/deploying-tanzu-kubernetes-clusters-on-vsphere-7/</guid>
      <description>&lt;p&gt;This post will focus on deploying Tanzu Kubernetes Grid (TKG) clusters in your vSphere 7 with Tanzu environment. These TKG clusters are the individual Kubernetes clusters that can be shared with teams for their development purposes.&lt;/p&gt;
&lt;p&gt;I know what you&amp;rsquo;re thinking. Didn&amp;rsquo;t we already create a Kubernetes cluster when we set up our Supervisor cluster? The short answer is yes. However, the Supervisor cluster is a unique Kubernetes cluster that probably shouldn&amp;rsquo;t be used for normal workloads. We&amp;rsquo;ll discuss this in more detail in a follow-up post. For now, let&amp;rsquo;s focus on how to create them, and later we&amp;rsquo;ll discuss when to use them vs. the Supervisor cluster.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Replace vSphere 7 with Tanzu Certificates</title>
      <link>https://theithollow.com/2020/08/31/replace-vsphere-7-with-tanzu-certificates/</link>
      <pubDate>Mon, 31 Aug 2020 14:45:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/08/31/replace-vsphere-7-with-tanzu-certificates/</guid>
      <description>&lt;p&gt;When setting up your vSphere 7 with Tanzu environment, it&amp;rsquo;s a good idea to replace the default certificate shipped from VMware with your own certificate. This is a good security practice to ensure that your credentials are protected during logins, and nobody likes to see those pesky certificate warnings in their browsers anyway, am I right?&lt;/p&gt;
&lt;h2 id=&#34;create-and-trust-certificate-authority&#34;&gt;Create and Trust Certificate Authority&lt;/h2&gt;
&lt;p&gt;This section of the blog post covers creating a root certificate. In many situations, you won&amp;rsquo;t need to do this since your organization probably already has a certificate authority that can be used to sign certificates as needed. Since I&amp;rsquo;m doing this in a lab, I&amp;rsquo;m going to create a root certificate and make sure my workstation trusts this cert first. After this, we can use the root certificate to sign our vSphere 7 certificates.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Connecting to a Supervisor Namespace</title>
      <link>https://theithollow.com/2020/08/24/connecting-to-a-supervisor-namespace/</link>
      <pubDate>Mon, 24 Aug 2020 14:15:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/08/24/connecting-to-a-supervisor-namespace/</guid>
      <description>&lt;p&gt;In this post we&amp;rsquo;ll finally connect to our Supervisor Cluster Namespace through the Kubernetes cli and run some commands for the first time.&lt;/p&gt;
&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/2020/08/17/creating-supervisor-namespaces/&#34;&gt;last post&lt;/a&gt; we created a namespace within the Supervisor Cluster and assigned some resource allocations and permissions for our example development user. Now it&amp;rsquo;s time to access that namespace so that real work can be done using the platform.&lt;/p&gt;
&lt;p&gt;First, log in to vCenter again with the &lt;a href=&#34;mailto:administrator@vsphere.local&#34;&gt;administrator@vsphere.local&lt;/a&gt; account and navigate to the namespace that was previously created. You should see a similar screen where we configured our permissions. In the &lt;code&gt;Status&lt;/code&gt; tile, click one of the links to either open the page in a browser or copy the URL and open it in a browser yourself.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Creating Supervisor Namespaces</title>
      <link>https://theithollow.com/2020/08/17/creating-supervisor-namespaces/</link>
      <pubDate>Mon, 17 Aug 2020 14:15:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/08/17/creating-supervisor-namespaces/</guid>
      <description>&lt;p&gt;Congratulations, you&amp;rsquo;ve deployed the Workload Management components for your vSphere 7 cluster. If you&amp;rsquo;ve been following along with the series so far, you&amp;rsquo;ll have left off with a workload management cluster created and ready for you to begin configuring your cluster for use with Kubernetes.&lt;/p&gt;
&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2020/08/namespaces0-3.png&#34;/&gt; 
&lt;/figure&gt;

&lt;p&gt;The next step in the process is to create a namespace. Before we do that, it&amp;rsquo;s probably useful to recap what a namespace is used for.&lt;/p&gt;
&lt;h2 id=&#34;namespaces-the-theory&#34;&gt;Namespaces the Theory&lt;/h2&gt;
&lt;p&gt;Depending on your past experiences, a namespace will likely seem familiar to you in some fashion. If you have a Kubernetes background, you&amp;rsquo;ll be familiar with namespaces as a way to set permissions for a group of users (or a project, etc.) and for assigning resources. Alternatively, if you have a vSphere background, you&amp;rsquo;re used to using things like Resource Pools to set resource allocation.&lt;/p&gt;
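&lt;p&gt;In upstream Kubernetes terms, those same two ideas (access and resource allocation) are usually expressed as a Namespace with a ResourceQuota (and RoleBindings) scoped to it. A minimal sketch, with hypothetical names and sizes:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: hollow-dev            # hypothetical namespace name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: hollow-dev-quota
  namespace: hollow-dev
spec:
  hard:
    requests.cpu: &#34;4&#34;         # total CPU the namespace may request
    limits.memory: 8Gi        # total memory limit across all pods
    pods: &#34;20&#34;&lt;/code&gt;&lt;/pre&gt;</description>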
    </item>
    <item>
      <title>vSphere 7 with Tanzu - Getting Started Guide</title>
      <link>https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-getting-started-guide/</link>
      <pubDate>Tue, 14 Jul 2020 14:16:18 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-getting-started-guide/</guid>
      <description>&lt;p&gt;VMware released the new version of vSphere with functionality to build and manage Kubernetes clusters. This series details how to deploy, configure, and use a lab running vSphere 7 with Kubernetes enabled.&lt;/p&gt;
&lt;p&gt;The instructions within this post are broken out into sections. vSphere 7 requires prerequisites at the vSphere level as well as a full NSX-T deployment. Follow these steps in order to build your own vSphere 7 with Kubernetes lab and start using Kubernetes built right into vSphere.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Enable Workload Management</title>
      <link>https://theithollow.com/2020/07/14/enable-workload-management/</link>
      <pubDate>Tue, 14 Jul 2020 13:44:36 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/enable-workload-management/</guid>
      <description>&lt;p&gt;This post focuses on enabling the workload management components for vSphere 7 with Kubernetes. It is assumed that the vSphere environment is already in place and the NSX-T configuration has been deployed.&lt;/p&gt;
&lt;p&gt;To enable workload management, log in to your vCenter as the &lt;a href=&#34;mailto:administrator@vsphere.local&#34;&gt;administrator@vsphere.local&lt;/a&gt; account. Then, in the Menu, select Workload Management.&lt;/p&gt;
&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2020/07/image-40.png&#34;/&gt; 
&lt;/figure&gt;

&lt;p&gt;Within the Workload Management screen, click the &lt;code&gt;ENABLE&lt;/code&gt; button.&lt;/p&gt;
&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2020/07/image-30-1024x409.png&#34;/&gt; 
&lt;/figure&gt;

&lt;p&gt;The first screen in the wizard will list your compatible vSphere clusters. These clusters must have HA and DRS enabled in fully automated mode. If you are missing clusters, make sure you have ESXi hosts on version 7 with HA and DRS enabled. You&amp;rsquo;ll also need a Distributed Switch on version 7 for these clusters.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vSphere 7 with Kubernetes Environment and Prerequisites</title>
      <link>https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-environment-and-prerequisites/</link>
      <pubDate>Tue, 14 Jul 2020 13:42:33 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-environment-and-prerequisites/</guid>
      <description>&lt;p&gt;This post describes the lab environment we&amp;rsquo;ll be working with to build our vSphere 7 with Kubernetes lab and additional prerequisites that you&amp;rsquo;ll need to be aware of before starting. This is not the only topology that would work for vSphere 7 with Kubernetes, but it is a robust homelab that would mimic many production deployments except for the HA features. For example, we&amp;rsquo;ll only install one (singular) NSX Manager for the lab, where a production environment would have three.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Tier-0 Gateway</title>
      <link>https://theithollow.com/2020/07/14/tier-0-gateway/</link>
      <pubDate>Tue, 14 Jul 2020 13:39:41 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/tier-0-gateway/</guid>
      <description>&lt;p&gt;This post will review the deployment and configuration of a Tier-0 gateway to provide north/south routing into the NSX-T overlay networks.&lt;/p&gt;
&lt;p&gt;The Tier-0 (T0) gateway is where we&amp;rsquo;ll finally connect our new NSX-T backed overlay segments to the physical network through an NSX-T Edge which was previously deployed.&lt;/p&gt;
&lt;p&gt;The Tier-0 gateway will connect directly to a physical VLAN and on the other side to our T1 router deployed in the previous post. From there, we should have all the plumbing we need to route to our hosts and begin using NSX-T to do some cooler stuff. In the end, the network topology will look something like this:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Tier-1 Gateway and NSX Segments</title>
      <link>https://theithollow.com/2020/07/14/tier-1-gateway-and-nsx-segments/</link>
      <pubDate>Tue, 14 Jul 2020 13:36:56 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/tier-1-gateway-and-nsx-segments/</guid>
      <description>&lt;p&gt;This post will focus on deploying our first NSX Gateway/Router and setting up our overlay segments. Before you can start these steps, the Edge nodes should be up and running so that they can support the Tier-1 gateways.&lt;/p&gt;
&lt;p&gt;NSX uses two types of routers/gateways. We&amp;rsquo;ll start by using a Tier-1 (T1) router. These routers are usually used to pass traffic between NSX overlay segments. We could create NSX segments without any routers, but we would need a router to pass traffic between these segments, so we will create a T1 router first.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploy NSX-T Edge Nodes</title>
      <link>https://theithollow.com/2020/07/14/deploy-nsx-t-edge-nodes/</link>
      <pubDate>Tue, 14 Jul 2020 13:26:22 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/deploy-nsx-t-edge-nodes/</guid>
      <description>&lt;p&gt;NSX-T Edge nodes are used for security and gateway services that can&amp;rsquo;t be run on the distributed routers in use by NSX-T. These edge nodes do things like North/South routing, load balancing, DHCP, VPN, NAT, etc. If you want to use &lt;code&gt;Tier0&lt;/code&gt; or &lt;code&gt;Tier1&lt;/code&gt; routers, you will need to have at least one edge node deployed. These edge nodes provide a place to run services like the Tier0 routers. When you first deploy an edge, it&amp;rsquo;s like an empty shell of a VM until these services are needed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>NSX Pools, Zones, and Nodes Setup</title>
      <link>https://theithollow.com/2020/07/14/nsx-pools-zones-and-nodes-setup/</link>
      <pubDate>Tue, 14 Jul 2020 13:23:46 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/nsx-pools-zones-and-nodes-setup/</guid>
      <description>&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/2020/07/14/nsx-installation/&#34;&gt;previous post&lt;/a&gt; we deployed an NSX Manager. Now it&amp;rsquo;s time to start configuring NSX so that we can build cool routes, firewall zones, segments, and all the other NSX goodies. And even if we don&amp;rsquo;t want to build some of these things, we&amp;rsquo;ll need this setup for vSphere 7 with Kubernetes.&lt;/p&gt;
&lt;h2 id=&#34;add-an-ip-pool&#34;&gt;Add an IP Pool&lt;/h2&gt;
&lt;p&gt;The first thing we&amp;rsquo;ll set up is an IP Pool. As you might guess, an IP Pool is just a group of IP Addresses that we can use for things. Specifically, we&amp;rsquo;ll use these IP Addresses to assign Tunnel Endpoints (called TEPs, previously called VTEPs in NSX-V parlance) to each of our ESXi hosts that are participating in the NSX Overlay networks. The TEP becomes the point at which encapsulation and decapsulation take place on each of the ESXi hosts. Think of it this way: when encapsulated traffic needs to be routed to a VM on a host, what IP Address do we need to send the traffic to so that it can reach that VM? That address is the TEP. We need to set up a TEP on each host, and the IP Addresses for these TEPs come from an IP Pool. Since I have three hosts and expect to deploy one edge node, I&amp;rsquo;ll need a TEP Pool with at least four IP Addresses. Size your environment appropriately.&lt;/p&gt;</description>
    </item>
    <item>
      <title>NSX Installation</title>
      <link>https://theithollow.com/2020/07/14/nsx-installation/</link>
      <pubDate>Tue, 14 Jul 2020 13:18:52 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/nsx-installation/</guid>
      <description>&lt;p&gt;This post will focus on getting the NSX-T Manager deployed and minimally configured in the lab. NSX-T is a prerequisite for configuring vSphere 7 with Kubernetes as of the time of this writing.&lt;/p&gt;
&lt;h2 id=&#34;deploy-the-nsx-manager&#34;&gt;Deploy the NSX Manager&lt;/h2&gt;
&lt;p&gt;The first step in our build is to deploy the NSX Manager from an OVA template into our lab. The NSX Manager is the brains of the solution and what you&amp;rsquo;ll be interacting with as a user. Each time you configure a route, segment, firewall rule, etc., you&amp;rsquo;ll be communicating with the NSX Manager. Download and deploy the OVA into your vSphere lab.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes Validating Admission Controllers</title>
      <link>https://theithollow.com/2020/05/26/kubernetes-validating-admission-controllers/</link>
      <pubDate>Tue, 26 May 2020 15:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/05/26/kubernetes-validating-admission-controllers/</guid>
      <description>&lt;p&gt;Hey! Who deployed this container in our shared Kubernetes cluster without putting resource limits on it? Why don&amp;rsquo;t we have any labels on these containers so we can report for charge back purposes? Who allowed this image to be used in our production cluster?&lt;/p&gt;
&lt;p&gt;If any of the questions above sound familiar, it&amp;rsquo;s probably time to learn about Validating Admission Controllers.&lt;/p&gt;
&lt;h2 id=&#34;validating-admission-controllers---the-theory&#34;&gt;Validating Admission Controllers - The Theory&lt;/h2&gt;
&lt;p&gt;Admission Controllers are used as roadblocks before objects are deployed to a Kubernetes cluster. The examples from the section above are common rules that companies might want to enforce before objects get pushed into a production Kubernetes cluster. These admission controllers can be custom code that you&amp;rsquo;ve written yourself or a third-party admission controller. A common open-source project that manages admission control rules is &lt;a href=&#34;https://www.openpolicyagent.org/&#34;&gt;Open Policy Agent (OPA)&lt;/a&gt;.&lt;/p&gt;
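&lt;p&gt;As a rough sketch of the plumbing only (the webhook name, service, and rules here are hypothetical), a validating webhook is registered with the API server so that matching requests get sent to your policy service before they are persisted:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: require-labels                  # hypothetical webhook name
webhooks:
- name: require-labels.example.com
  admissionReviewVersions: [&#34;v1&#34;]
  sideEffects: None
  failurePolicy: Fail                   # reject the request if the webhook is down
  rules:
  - apiGroups: [&#34;apps&#34;]
    apiVersions: [&#34;v1&#34;]
    operations: [&#34;CREATE&#34;, &#34;UPDATE&#34;]
    resources: [&#34;deployments&#34;]
  clientConfig:
    service:
      namespace: policy                 # hypothetical namespace running the policy service
      name: policy-webhook
      path: /validate&lt;/code&gt;&lt;/pre&gt;</description>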
    </item>
    <item>
      <title>Kubernetes Liveness and Readiness Probes</title>
      <link>https://theithollow.com/2020/05/18/kubernetes-liveness-and-readiness-probes/</link>
      <pubDate>Mon, 18 May 2020 14:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/05/18/kubernetes-liveness-and-readiness-probes/</guid>
      <description>&lt;p&gt;Just because a container is in a running state, does not mean that the process running within that container is functional. We can use Kubernetes Readiness and Liveness probes to determine whether an application is ready to receive traffic or not.&lt;/p&gt;
&lt;h2 id=&#34;liveness-and-readiness-probes---the-theory&#34;&gt;Liveness and Readiness Probes - The Theory&lt;/h2&gt;
&lt;p&gt;On each node of a Kubernetes cluster there is a Kubelet running, which manages the pods on that particular node. It&amp;rsquo;s responsible for getting images pulled down to the node, reporting the node&amp;rsquo;s health, and restarting failed containers. But how does the Kubelet know if there is a failed container?&lt;/p&gt;
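&lt;p&gt;As a minimal sketch (the image, paths, and timings are just placeholders), both probes are declared on the container spec: the liveness probe tells the Kubelet when to restart a container, and the readiness probe tells it when to stop sending the pod traffic.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx:1.19               # placeholder image
    ports:
    - containerPort: 80
    livenessProbe:                  # restart the container if this fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:                 # remove the pod from Service endpoints if this fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 5&lt;/code&gt;&lt;/pre&gt;</description>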
    </item>
    <item>
      <title>Kubernetes Pod Auto-scaling</title>
      <link>https://theithollow.com/2020/05/04/kubernetes-pod-auto-scaling/</link>
      <pubDate>Mon, 04 May 2020 14:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/05/04/kubernetes-pod-auto-scaling/</guid>
      <description>&lt;p&gt;You&amp;rsquo;ve built your Kubernetes cluster(s). You&amp;rsquo;ve built your apps in containers. You&amp;rsquo;ve architected your services so that losing a single instance doesn&amp;rsquo;t cause an outage. And you&amp;rsquo;re ready for cloud scale. You deploy your application and are waiting to sit back and &amp;ldquo;profit.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;When your application spins up and starts taking on load, you are able to change the number of replicas to handle the additional load, but what about the promises of cloud and scaling? Wouldn&amp;rsquo;t it be better to deploy the application and let the platform scale it automatically?&lt;/p&gt;
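&lt;p&gt;That is what the Horizontal Pod Autoscaler does. A hedged sketch (the Deployment name and thresholds are hypothetical; older clusters use the &lt;code&gt;autoscaling/v2beta2&lt;/code&gt; API instead):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                       # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70      # add replicas when average CPU passes 70%&lt;/code&gt;&lt;/pre&gt;</description>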
    </item>
    <item>
      <title>Kubernetes Resource Requests and Limits</title>
      <link>https://theithollow.com/2020/04/20/kubernetes-resource-requests-and-limits/</link>
      <pubDate>Mon, 20 Apr 2020 15:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/04/20/kubernetes-resource-requests-and-limits/</guid>
      <description>&lt;p&gt;Containerizing applications and running them on Kubernetes doesn&amp;rsquo;t mean we can forget all about resource utilization. Our thought process may have changed because we can much more easily scale-out our application as demand increases, but many times we need to consider how our containers might fight with each other for resources. Resource Requests and Limits can be used to help stop the &amp;ldquo;noisy neighbor&amp;rdquo; problem in a Kubernetes Cluster.&lt;/p&gt;
&lt;h2 id=&#34;resource-requests-and-limits---the-theory&#34;&gt;Resource Requests and Limits - The Theory&lt;/h2&gt;
&lt;p&gt;Kubernetes uses the concept of a &amp;ldquo;Resource Request&amp;rdquo; and a &amp;ldquo;Resource Limit&amp;rdquo; when defining how many resources a container within a pod should receive. Let&amp;rsquo;s look at each of these topics on its own, starting with resource requests.&lt;/p&gt;
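&lt;p&gt;For orientation, a minimal sketch of where these land in a pod spec (the image and sizes are placeholders): the request is what the scheduler reserves for the container, and the limit is the ceiling it is allowed to consume.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: limits-demo
spec:
  containers:
  - name: app
    image: nginx:1.19               # placeholder image
    resources:
      requests:                     # used for scheduling decisions
        cpu: 250m
        memory: 128Mi
      limits:                       # enforced at runtime
        cpu: 500m
        memory: 256Mi&lt;/code&gt;&lt;/pre&gt;</description>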
    </item>
    <item>
      <title>In-tree vs Out-of-tree Kubernetes Cloud Providers</title>
      <link>https://theithollow.com/2020/04/14/in-tree-vs-out-of-tree-kubernetes-cloud-providers/</link>
      <pubDate>Tue, 14 Apr 2020 14:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/04/14/in-tree-vs-out-of-tree-kubernetes-cloud-providers/</guid>
      <description>&lt;p&gt;VMware offers a Kubernetes Cloud Provider that allows Kubernetes (k8s) administrators to manage parts of the vSphere infrastructure by interacting with the Kubernetes Control Plane. Why is this needed? Well, being able to spin up some new virtual disks and attach them to your k8s cluster is especially useful when your pods need access to persistent storage, for example.&lt;/p&gt;
&lt;p&gt;The cloud providers (AWS, vSphere, Azure, GCE) obviously differ between vendors. Each cloud provider has different functionality that might be exposed in some way to the Kubernetes control plane. For example, Amazon Web Services provides a load balancer that can be configured with k8s on demand if you are using the AWS provider, but vSphere does not (unless you&amp;rsquo;re using NSX).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploying Tanzu Kubernetes Grid Management Clusters - vSphere</title>
      <link>https://theithollow.com/2020/04/06/deploying-tanzu-kubernetes-grid-management-clusters-vsphere/</link>
      <pubDate>Mon, 06 Apr 2020 14:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/04/06/deploying-tanzu-kubernetes-grid-management-clusters-vsphere/</guid>
      <description>&lt;p&gt;VMware recently released the 1.0 release of Tanzu Kubernetes Grid (TKG) which aims at decreasing the difficulty of deploying conformant Kubernetes clusters across infrastructure. This post demonstrates how to use TKG to deploy a management cluster to vSphere.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re not familiar with TKG yet, you might be curious about what a Management Cluster is. The management cluster is used to manage one to many workload clusters. The management cluster is used to spin up VMs on different cloud providers, and lay down the Kubernetes bits on those VMs, thus creating new clusters for applications to be built on top of. TKG is built upon the &lt;a href=&#34;https://github.com/kubernetes-sigs/cluster-api&#34;&gt;ClusterAPI project&lt;/a&gt;, so &lt;a href=&#34;https://theithollow.com/2019/11/04/clusterapi-demystified/&#34;&gt;this post&lt;/a&gt; pretty accurately describes the architecture that TKG uses.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Tanzu Mission Control - Access Policies</title>
      <link>https://theithollow.com/2020/03/10/tanzu-mission-control-access-policies/</link>
      <pubDate>Tue, 10 Mar 2020 18:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/03/10/tanzu-mission-control-access-policies/</guid>
      <description>&lt;p&gt;Controlling access to a Kubernetes cluster is an ongoing activity that must be done in conjunction with developer needs and is often maintained by operations or security teams. Tanzu Mission Control (TMC) can help us set up and manage these access policies across fleets of Kubernetes clusters, making everyone&amp;rsquo;s life a little bit easier.&lt;/p&gt;
&lt;h2 id=&#34;setup-users&#34;&gt;Setup Users&lt;/h2&gt;
&lt;p&gt;Before we can assign permissions to a user or group, we need to have a user or group to assign these permissions to. By logging into the VMware Cloud Services portal (cloud.VMware.com) and going to the Identity and Access Management tab, we can create and invite new users. You can see I&amp;rsquo;ve created a user.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Tanzu Mission Control - Attach Clusters</title>
      <link>https://theithollow.com/2020/03/10/tanzu-mission-control-attach-clusters/</link>
      <pubDate>Tue, 10 Mar 2020 18:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/03/10/tanzu-mission-control-attach-clusters/</guid>
      <description>&lt;p&gt;What do you do if you&amp;rsquo;ve already provisioned some Kubernetes clusters before you got Tanzu Mission Control? Or maybe you&amp;rsquo;re inheriting some new clusters through an acquisition? Or a new team came on board and was using their own installation? Whatever the case, Tanzu Mission Control will let you manage a conformant Kubernetes cluster, but you must first attach it.&lt;/p&gt;
&lt;h2 id=&#34;attach-an-existing-cluster&#34;&gt;Attach An Existing Cluster&lt;/h2&gt;
&lt;p&gt;For this example, I&amp;rsquo;ll be attaching a pre-existing Kubernetes cluster on vSphere infrastructure. This cluster was deployed via kubeadm as documented in this previous article about deploying &lt;a href=&#34;https://theithollow.com/2020/01/08/deploy-kubernetes-on-vsphere/&#34;&gt;Kubernetes on vSphere&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Tanzu Mission Control - Cluster Upgrade</title>
      <link>https://theithollow.com/2020/03/10/tanzu-mission-control-cluster-upgrade/</link>
      <pubDate>Tue, 10 Mar 2020 18:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/03/10/tanzu-mission-control-cluster-upgrade/</guid>
      <description>&lt;p&gt;Kubernetes releases a new minor version every quarter and updating your existing clusters can be a chore. With updates coming at you pretty quickly and new functionality being added all the time, having a way to upgrade your clusters is a must, especially if you are managing multiple clusters. Tanzu Mission Control can take the pain out of upgrading these clusters.&lt;/p&gt;
&lt;p&gt;It should be mentioned that the cluster upgrade procedure only works for clusters that were previously deployed through Tanzu Mission Control. If an existing cluster is attached to TMC after deployment, these cluster lifecycle steps won&amp;rsquo;t work.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Tanzu Mission Control - Conformance Tests</title>
      <link>https://theithollow.com/2020/03/10/tanzu-mission-control-conformance-tests/</link>
      <pubDate>Tue, 10 Mar 2020 18:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/03/10/tanzu-mission-control-conformance-tests/</guid>
      <description>&lt;p&gt;No matter what flavor of Kubernetes you&amp;rsquo;re using, the cluster should have some high level of common functionality with the upstream version. To ensure this is the case, Kubernetes conformance tests can validate your clusters. These tests are run by Sonobuoy, which is an open-source community standard. Tanzu Mission Control can run these tests on your clusters to ensure this conformance. They are a great way to make sure your cluster was installed and configured correctly and is operating properly.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Tanzu Mission Control - Deploying Clusters</title>
      <link>https://theithollow.com/2020/03/10/tanzu-mission-control-deploying-clusters/</link>
      <pubDate>Tue, 10 Mar 2020 18:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/03/10/tanzu-mission-control-deploying-clusters/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve written about deploying clusters in the past, but if you are a TMC customer, those steps can be skipped altogether. TMC will let us deploy a Kubernetes cluster and add it to management, all from the GUI or CLI.&lt;/p&gt;
&lt;p&gt;For this example, I&amp;rsquo;ll create a new Kubernetes cluster within my AWS account. Before we setup the cluster, we need to configure access to our AWS Account so that TMC can manage resources for us.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Tanzu Mission Control - Namespace Management</title>
      <link>https://theithollow.com/2020/03/10/tanzu-mission-control-namespace-management/</link>
      <pubDate>Tue, 10 Mar 2020 18:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/03/10/tanzu-mission-control-namespace-management/</guid>
      <description>&lt;p&gt;When we need to segment resources within a Kubernetes cluster, we often use a &lt;a href=&#34;https://theithollow.com/2019/02/06/kubernetes-namespaces/&#34;&gt;namespace&lt;/a&gt;. Namespaces can be excellent resources to create a boundary for either networking, role based access, or simply for organizational purposes. It may be common to have some standard namespaces across all of your clusters. Maybe you have corporate monitoring standards and the tools live in a specific namespace, or you always have an ingress namespace that&amp;rsquo;s off limits to developers or something. Managing namespaces across clusters could be tedious, but Tanzu Mission Control lets us manage these namespaces centrally from the TMC console.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Tanzu Mission Control - Resize Clusters</title>
      <link>https://theithollow.com/2020/03/10/tanzu-mission-control-resize-clusters/</link>
      <pubDate>Tue, 10 Mar 2020 18:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/03/10/tanzu-mission-control-resize-clusters/</guid>
      <description>&lt;p&gt;A pretty common task that a Kubernetes administrator must do is to resize the cluster. We need more nodes to handle more workloads, or we&amp;rsquo;ve overprovisioned a cluster and are trying to save costs. This usually took some custom automation scripts, such as a node autoscaler, or it was done manually based on a request.&lt;/p&gt;
&lt;p&gt;Tanzu Mission Control can resize our cluster very simply from the TMC portal.&lt;/p&gt;
&lt;h2 id=&#34;scale-out-a-cluster&#34;&gt;Scale Out a Cluster&lt;/h2&gt;
&lt;p&gt;Within the TMC Portal, find the cluster that needs to be resized. Within the cluster screen, find the &amp;ldquo;Node pools&amp;rdquo; menu. Node pools define the worker nodes that are part of the Kubernetes cluster that&amp;rsquo;s been deployed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Use a Private Registry with Kubernetes</title>
      <link>https://theithollow.com/2020/03/03/use-a-private-registry-with-kubernetes/</link>
      <pubDate>Tue, 03 Mar 2020 21:09:07 +0000</pubDate>
      <guid>https://theithollow.com/2020/03/03/use-a-private-registry-with-kubernetes/</guid>
      <description>&lt;p&gt;Most of the blog posts I write about Kubernetes have examples using publicly available images from public image registries like DockerHub or Google Container Registry. But in the real world, companies use private registries for storing their container images. There are a number of reasons why you might want to do this, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Custom code is inside the container such as business logic or other intellectual property.&lt;/li&gt;
&lt;li&gt;On-premises private repos provide solutions to bandwidth or firewall restrictions.&lt;/li&gt;
&lt;li&gt;Custom scanning software is being integrated for vulnerability management.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In this post, we&amp;rsquo;ll set up our Kubernetes cluster to be able to use a private container registry.&lt;/p&gt;
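&lt;p&gt;The usual building block is an image pull secret referenced from the pod spec. A minimal sketch (the registry address, credentials, and image are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Secret created ahead of time with something like:
#   kubectl create secret docker-registry regcred \
#     --docker-server=registry.example.com \
#     --docker-username=svc-pull --docker-password=...
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  imagePullSecrets:
  - name: regcred                   # the docker-registry secret above
  containers:
  - name: app
    image: registry.example.com/team/app:1.0   # placeholder private image&lt;/code&gt;&lt;/pre&gt;</description>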
    </item>
    <item>
      <title>Highly Available Envoy Proxies for the Kubernetes Control Plane</title>
      <link>https://theithollow.com/2020/02/24/highly-available-envoy-proxies-for-the-kubernetes-control-plane/</link>
      <pubDate>Mon, 24 Feb 2020 15:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/02/24/highly-available-envoy-proxies-for-the-kubernetes-control-plane/</guid>
      <description>&lt;p&gt;Recently I was tasked with setting up some virtual machines to be used as a load balancer for a Kubernetes cluster. The environment we were deploying our Kubernetes cluster into didn&amp;rsquo;t have a load balancer available, so we thought we&amp;rsquo;d just throw some Envoy proxies on some VMs to do the job. This post will show you how the following tasks were completed:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Deploy Envoy on a pair of CentOS 7 virtual machines.&lt;/li&gt;
&lt;li&gt;Configure Envoy with health checks for the Kubernetes Control Plane.&lt;/li&gt;
&lt;li&gt;Install keepalived on both servers to manage failover.&lt;/li&gt;
&lt;li&gt;Configure keepalived to fail over if a server goes offline, or the Envoy service is not started.&lt;/li&gt;
&lt;/ol&gt;
&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2020/02/image-61-1024x495.png&#34;/&gt; 
&lt;/figure&gt;

&lt;h2 id=&#34;deploy-envoy&#34;&gt;Deploy Envoy&lt;/h2&gt;
&lt;p&gt;The first step will be to set up a pair of CentOS 7 servers. I&amp;rsquo;ve used virtual servers for this post, but bare metal would work the same. Also, similar steps could be used if you prefer Debian as your Linux flavor.&lt;/p&gt;
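&lt;p&gt;For orientation, here is a rough sketch of the Envoy side only (v3 API; the control-plane addresses are placeholders and keepalived is configured separately): a static TCP proxy listening on 6443 and health-checking the API servers behind it.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# /etc/envoy/envoy.yaml -- minimal sketch
static_resources:
  listeners:
  - name: k8s_api
    address:
      socket_address: { address: 0.0.0.0, port_value: 6443 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          &#34;@type&#34;: type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: k8s_api
          cluster: control_plane
  clusters:
  - name: control_plane
    type: STRICT_DNS
    connect_timeout: 5s
    health_checks:
    - timeout: 2s
      interval: 10s
      healthy_threshold: 2
      unhealthy_threshold: 2
      tcp_health_check: {}
    load_assignment:
      cluster_name: control_plane
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 192.168.10.21, port_value: 6443 }   # placeholder
        - endpoint:
            address:
              socket_address: { address: 192.168.10.22, port_value: 6443 }   # placeholder&lt;/code&gt;&lt;/pre&gt;</description>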
    </item>
    <item>
      <title>Kubernetes HA on vSphere</title>
      <link>https://theithollow.com/2020/01/27/kubernetes-ha-on-vsphere/</link>
      <pubDate>Mon, 27 Jan 2020 15:15:12 +0000</pubDate>
      <guid>https://theithollow.com/2020/01/27/kubernetes-ha-on-vsphere/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been on the operations side of the IT house, you know that one of your primary job functions is to ensure High Availability (HA) of production workloads. This blog post focuses on making sure applications deployed on a vSphere Kubernetes cluster will be highly available.&lt;/p&gt;
&lt;h2 id=&#34;the-control-plane&#34;&gt;The Control Plane&lt;/h2&gt;
&lt;p&gt;Ok, before we talk about workloads, we should discuss the Kubernetes Control plane components. When we deploy Kubernetes on virtual machines, we have to make sure that the brains of the Kubernetes cluster will continue working even if there is a hardware failure. The first step is to make sure that your control plane components are deployed on different physical (ESXi) hosts. This can be done with a vSphere Host Affinity Rule to keep k8s VMs pinned to groups of hosts or anti-affinity rules to make sure two control plane nodes aren&amp;rsquo;t placed on the same host. After this is done, your Load Balancer should be configured to point to your k8s control plane VMs and a health check is configured for the /healthz path.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Active Directory Authentication for Kubernetes Clusters</title>
      <link>https://theithollow.com/2020/01/21/active-directory-authentication-for-kubernetes-clusters/</link>
      <pubDate>Tue, 21 Jan 2020 15:15:24 +0000</pubDate>
      <guid>https://theithollow.com/2020/01/21/active-directory-authentication-for-kubernetes-clusters/</guid>
      <description>&lt;p&gt;You&amp;rsquo;ve stood up your Kubernetes (k8s) cluster and are really looking forward to all of your coworkers deploying containers on it. How will you get everyone logged in? Creating local service accounts and distributing KUBECONFIG files (securely), seems like a real chore. This post will show how you can use Active Directory authentication for Kubernetes Clusters.&lt;/p&gt;
&lt;p&gt;This post will use two projects, &lt;a href=&#34;https://github.com/dexidp/dex&#34;&gt;dex&lt;/a&gt; and &lt;a href=&#34;https://github.com/heptiolabs/gangway&#34;&gt;gangway&lt;/a&gt;, to perform the authentication against LDAP and return the Kubernetes login information to the user&amp;rsquo;s browser. The end result will look something like the screen below. The authenticated user will receive instructions on installing the client and setting up certificates for authentication.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploy Kubernetes on AWS</title>
      <link>https://theithollow.com/2020/01/13/deploy-kubernetes-on-aws/</link>
      <pubDate>Mon, 13 Jan 2020 15:15:39 +0000</pubDate>
      <guid>https://theithollow.com/2020/01/13/deploy-kubernetes-on-aws/</guid>
      <description>&lt;p&gt;The way you deploy Kubernetes (k8s) on AWS will be similar to how it was done in a &lt;a href=&#34;https://theithollow.com/2020/01/08/deploy-kubernetes-on-vsphere/&#34;&gt;previous post on vSphere&lt;/a&gt;. You still set up nodes, and you still deploy kubeadm and kubectl, but there are a few differences when you change your cloud provider. For instance, on AWS we can use the LoadBalancer resource against the k8s API and have AWS provision an elastic load balancer for us. These features take a few extra tweaks in AWS.&lt;/p&gt;
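&lt;p&gt;As a quick sketch of that idea (the selector and ports are placeholders), once the AWS cloud provider is wired up, a Service of type LoadBalancer is enough to get an ELB created:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer                # the AWS cloud provider provisions an ELB for this
  selector:
    app: web                        # placeholder label on the backing pods
  ports:
  - port: 80
    targetPort: 8080&lt;/code&gt;&lt;/pre&gt;</description>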
    </item>
    <item>
      <title>Deploy Kubernetes on vSphere</title>
      <link>https://theithollow.com/2020/01/08/deploy-kubernetes-on-vsphere/</link>
      <pubDate>Wed, 08 Jan 2020 15:00:04 +0000</pubDate>
      <guid>https://theithollow.com/2020/01/08/deploy-kubernetes-on-vsphere/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;re struggling to deploy Kubernetes (k8s) clusters, you&amp;rsquo;re not alone. There are a bunch of different ways to deploy Kubernetes and there are different settings depending on what cloud provider you&amp;rsquo;re using. This post will focus on installing Kubernetes on vSphere with Kubeadm. At the end of this post, you should have what you need to manually deploy k8s in a vSphere environment on Ubuntu.&lt;/p&gt;
&lt;h2 id=&#34;prerequisites&#34;&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; This tutorial uses the &amp;ldquo;in-tree&amp;rdquo; cloud provider for vSphere. This is not the preferred method for deployment going forward. More details can be found &lt;a href=&#34;https://cloud-provider-vsphere.sigs.k8s.io/concepts/in_tree_vs_out_of_tree.html&#34;&gt;here&lt;/a&gt; for reference.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Jobs and CronJobs</title>
      <link>https://theithollow.com/2019/12/16/kubernetes-jobs-and-cronjobs/</link>
      <pubDate>Mon, 16 Dec 2019 15:05:36 +0000</pubDate>
      <guid>https://theithollow.com/2019/12/16/kubernetes-jobs-and-cronjobs/</guid>
      <description>&lt;p&gt;Sometimes we need to run a container to do a specific task, and when it&amp;rsquo;s completed, we want it to quit. Many containers are deployed and continuously run, such as a web server. But other times we want to accomplish a single task and then quit. This is where a Job is a good choice.&lt;/p&gt;
&lt;h2 id=&#34;jobs-and-cronjobs---the-theory&#34;&gt;Jobs and CronJobs - The Theory&lt;/h2&gt;
&lt;p&gt;Perhaps we need to run a batch process on demand. Maybe we built an automation routine for something and want to kick it off through the use of a container. We can do this by submitting a job to the Kubernetes API. Kubernetes will run the job to completion and then quit.&lt;/p&gt;
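&lt;p&gt;A minimal sketch of such a Job (the image and command are placeholders):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 3                   # retry a failed pod up to 3 times
  template:
    spec:
      restartPolicy: Never          # Jobs require Never or OnFailure
      containers:
      - name: task
        image: busybox:1.32         # placeholder image
        command: [&#34;echo&#34;, &#34;batch task complete&#34;]&lt;/code&gt;&lt;/pre&gt;</description>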
    </item>
    <item>
      <title>Jetstack Cert-Manager</title>
      <link>https://theithollow.com/2019/12/02/jetstack-cert-manager/</link>
      <pubDate>Mon, 02 Dec 2019 15:05:06 +0000</pubDate>
      <guid>https://theithollow.com/2019/12/02/jetstack-cert-manager/</guid>
      <description>&lt;p&gt;One of my least favorite parts of computers is dealing with certificate creation. In fact, ya know those tweets about what you&amp;rsquo;d tweet if you were kidnapped and didn&amp;rsquo;t want to tip off the kidnappers?&lt;/p&gt;
&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2019/11/certs-tweet-1024x293.png&#34;/&gt; 
&lt;/figure&gt;

&lt;p&gt;Yeah, I&amp;rsquo;d tweet about how I love working with certificates. They are just not a fun thing for me. So when I found a new project where I needed certificates created, I was not really excited.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Pod Security Policies</title>
      <link>https://theithollow.com/2019/11/19/kubernetes-pod-security-policies/</link>
      <pubDate>Tue, 19 Nov 2019 15:05:04 +0000</pubDate>
      <guid>https://theithollow.com/2019/11/19/kubernetes-pod-security-policies/</guid>
      <description>&lt;p&gt;Securing and hardening our Kubernetes clusters is a must-do activity. We need to remember that containers are still just processes running on the host machines. Sometimes these processes can get more privileges on the Kubernetes node than they should if you don&amp;rsquo;t properly set up some pod security. This post explains how this could be done for your own clusters.&lt;/p&gt;
&lt;h2 id=&#34;pod-security-policies---the-theory&#34;&gt;Pod Security Policies - The Theory&lt;/h2&gt;
&lt;p&gt;Pod Security Policies are designed to limit what can be run on a Kubernetes cluster. Typical things that you might want to limit are: pods that have privileged access, pods with access to the host network, and pods that have access to the host processes, just to name a few. Remember that a container isn&amp;rsquo;t as isolated as a VM so we should take care to ensure our containers aren&amp;rsquo;t adversely affecting our nodes&amp;rsquo; health and security.&lt;/p&gt;
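&lt;p&gt;For a feel of the shape of one, here is a hedged sketch of a restrictive policy (the name and allowed volume types are hypothetical; note that PodSecurityPolicy has since been deprecated and removed in newer Kubernetes releases):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example          # hypothetical policy name
spec:
  privileged: false                 # no privileged containers
  hostNetwork: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                          # only these volume types are allowed
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim&lt;/code&gt;&lt;/pre&gt;</description>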
    </item>
    <item>
      <title>Modularized Kubernetes Environments with Jenkins</title>
      <link>https://theithollow.com/2019/11/11/modularized-kubernetes-environments-with-jenkins/</link>
      <pubDate>Mon, 11 Nov 2019 15:05:36 +0000</pubDate>
      <guid>https://theithollow.com/2019/11/11/modularized-kubernetes-environments-with-jenkins/</guid>
      <description>&lt;p&gt;There are a myriad of ways to deploy Kubernetes clusters these days.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kelseyhightower/kubernetes-the-hard-way&#34;&gt;Kubernetes the Hard Way&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://theithollow.com/2019/11/04/clusterapi-demystified/&#34;&gt;Cluster API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/&#34;&gt;Kubeadm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes-sigs/kubespray&#34;&gt;Kubespray&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/kubernetes/kops&#34;&gt;kops&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Those are just a few of the ways and I&amp;rsquo;m sure you&amp;rsquo;ll have a favorite. But for the work I&amp;rsquo;ve been doing lately, I don&amp;rsquo;t want to spend a bunch of time cloning repos, updating configs, running Ansible scripts, and the like, just to get another clean Kubernetes cluster in my lab to break. So, I took the individual parts of a Kubernetes build and created a list of ordered jobs in my Jenkins server.&lt;/p&gt;</description>
    </item>
    <item>
      <title>ClusterAPI Demystified</title>
      <link>https://theithollow.com/2019/11/04/clusterapi-demystified/</link>
      <pubDate>Mon, 04 Nov 2019 15:05:05 +0000</pubDate>
      <guid>https://theithollow.com/2019/11/04/clusterapi-demystified/</guid>
      <description>&lt;p&gt;Deploying Kubernetes clusters may be the biggest hurdle in learning Kubernetes and one of the challenges in managing Kubernetes. ClusterAPI is a project designed to ease this burden and make the management and deployment of Kubernetes clusters simpler.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes.&lt;/p&gt;
&lt;p&gt;kubernetes-sigs/cluster-api&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This post is designed to dive into ClusterAPI to investigate how it works, and how you can use it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Network Policies</title>
      <link>https://theithollow.com/2019/10/21/kubernetes-network-policies/</link>
      <pubDate>Mon, 21 Oct 2019 14:05:08 +0000</pubDate>
      <guid>https://theithollow.com/2019/10/21/kubernetes-network-policies/</guid>
      <description>&lt;p&gt;In the traditional server world, we&amp;rsquo;ve gone to great lengths to ensure that we can micro-segment our servers instead of relying on a few select firewalls at strategically defined chokepoints. What do we do in the container world though? This is where network policies come into play.&lt;/p&gt;
&lt;h2 id=&#34;network-policies---the-theory&#34;&gt;Network Policies - The Theory&lt;/h2&gt;
&lt;p&gt;In a default deployment of a Kubernetes cluster, all of the pods deployed on the nodes can communicate with each other. Some security folks might not like to hear that, but never fear: we have ways to limit the communication between pods, and they&amp;rsquo;re called network policies.&lt;/p&gt;
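&lt;p&gt;A minimal sketch of one (labels and ports are placeholders): only pods labeled &lt;code&gt;role: frontend&lt;/code&gt; may reach the &lt;code&gt;app: web&lt;/code&gt; pods on port 80.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web                      # pods this policy applies to
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend            # only these pods may connect
    ports:
    - protocol: TCP
      port: 80&lt;/code&gt;&lt;/pre&gt;</description>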
    </item>
    <item>
      <title>A Kind Way to Learn Kubernetes</title>
      <link>https://theithollow.com/2019/10/07/a-kind-way-to-learn-kubernetes/</link>
      <pubDate>Mon, 07 Oct 2019 14:10:13 +0000</pubDate>
      <guid>https://theithollow.com/2019/10/07/a-kind-way-to-learn-kubernetes/</guid>
      <description>&lt;p&gt;I&amp;rsquo;m not going to lie to you, as of the time of this writing, maybe the biggest hurdle to learning Kubernetes is getting a cluster stood up. Right now there are a myriad of ways to stand up a cluster, but none of them are really straightforward yet. If you&amp;rsquo;re interested in learning how Kubernetes works, and just want to set up a basic cluster to poke around in, this post is for you.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Desired State and Control Loops</title>
      <link>https://theithollow.com/2019/09/16/kubernetes-desired-state-and-control-loops/</link>
      <pubDate>Mon, 16 Sep 2019 14:05:30 +0000</pubDate>
      <guid>https://theithollow.com/2019/09/16/kubernetes-desired-state-and-control-loops/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve just gotten started with Kubernetes, you might be curious to know how the desired state is achieved. Think about it: you pass a YAML file to the API server and magically stuff happens. Not only that, but when disaster strikes (e.g. a pod crashes) Kubernetes also makes it right again so that it matches the desired state.&lt;/p&gt;
&lt;p&gt;The mechanism that allows Kubernetes to enforce this desired state is the control loop. The basics of this are pretty simple. A control loop can be thought of in three stages.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes Visually - With VMware Octant</title>
      <link>https://theithollow.com/2019/08/20/kubernetes-visually-with-vmware-octant/</link>
      <pubDate>Tue, 20 Aug 2019 14:10:35 +0000</pubDate>
      <guid>https://theithollow.com/2019/08/20/kubernetes-visually-with-vmware-octant/</guid>
      <description>&lt;p&gt;I don&amp;rsquo;t know about you, but I learn things best when I have a visual to reference. Many of my posts in this blog are purposefully built with visuals, not only because I think it&amp;rsquo;s helpful for the readers to &amp;ldquo;get the picture&amp;rdquo;, but also because that&amp;rsquo;s how I learn.&lt;/p&gt;
&lt;p&gt;Kubernetes can feel like a daunting technology to start learning, especially since you&amp;rsquo;ll be working with code and the command line for virtually all of it. That can be a scary proposition to an operations person who is trying to break into something brand new. But last week I was introduced to a project from VMware called &lt;a href=&#34;https://github.com/vmware/octant&#34;&gt;Octant&lt;/a&gt;, which helps visualize what&amp;rsquo;s actually going on in our Kubernetes cluster.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - DaemonSets</title>
      <link>https://theithollow.com/2019/08/13/kubernetes-daemonsets/</link>
      <pubDate>Tue, 13 Aug 2019 14:10:04 +0000</pubDate>
      <guid>https://theithollow.com/2019/08/13/kubernetes-daemonsets/</guid>
      <description>&lt;p&gt;DaemonSets can be a really useful tool for managing the health and operation of the pods within a Kubernetes cluster. In this post we&amp;rsquo;ll explore a use case for a DaemonSet, why we need them, and an example in the lab.&lt;/p&gt;
&lt;h2 id=&#34;daemonsets---the-theory&#34;&gt;DaemonSets - The Theory&lt;/h2&gt;
&lt;p&gt;DaemonSets are actually pretty easy to explain. A DaemonSet is a Kubernetes construct that ensures a pod is running on every node (where eligible) in a cluster. This means that if we were to create a DaemonSet on our six-node cluster (3 masters, 3 workers), the DaemonSet would schedule the defined pods on each of the nodes for a total of six pods. Now, this assumes there are either no &lt;a href=&#34;https://theithollow.com/?p=9736&#34;&gt;taints on the nodes, or there are tolerations&lt;/a&gt; on the DaemonSets.&lt;/p&gt;
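&lt;p&gt;A minimal sketch of one (the name and image are placeholders; the toleration is what lets the pods land on the masters as well):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent                  # hypothetical agent name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      tolerations:                  # allow scheduling onto master nodes too
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: agent
        image: k8s.gcr.io/pause:3.2 # placeholder image&lt;/code&gt;&lt;/pre&gt;</description>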
    </item>
    <item>
      <title>Sysdig Secure 2.4 Announced</title>
      <link>https://theithollow.com/2019/08/06/sysdig-secure-2-4-announced/</link>
      <pubDate>Tue, 06 Aug 2019 13:17:20 +0000</pubDate>
      <guid>https://theithollow.com/2019/08/06/sysdig-secure-2-4-announced/</guid>
      <description>&lt;p&gt;Today Sysdig announced a new update to their Cloud Native Visibility and Security Platform, with the release of Sysdig Secure 2.4.&lt;/p&gt;
&lt;p&gt;The new version of the Secure product includes some pretty nifty enhancements.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Runtime profiling with machine learning -&lt;/strong&gt; New containers will be profiled after deployment to give insights into the processes, file system activity, networking and system calls. Once the profiling is complete, these profiles can be used to create policy sets for the expected behavior. Sysdig also offers a confidence level of the profile. Consistent behavior generates a higher confidence level, whereas variable behavior results in a lower one.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Taints and Tolerations</title>
      <link>https://theithollow.com/2019/07/29/kubernetes-taints-and-tolerations/</link>
      <pubDate>Mon, 29 Jul 2019 14:15:22 +0000</pubDate>
      <guid>https://theithollow.com/2019/07/29/kubernetes-taints-and-tolerations/</guid>
      <description>&lt;p&gt;One of the best things about Kubernetes is that I don&amp;rsquo;t have to think about which piece of hardware my container will run on when I deploy it. The Kubernetes scheduler can make that decision for me. This is great until I actually DO care about what node my container runs on. This post will examine one solution to pod placement, through taints and tolerations.&lt;/p&gt;
&lt;h2 id=&#34;taints---the-theory&#34;&gt;Taints - The Theory&lt;/h2&gt;
&lt;p&gt;Suppose we had a Kubernetes cluster where we didn&amp;rsquo;t want any pods to run on a specific node. You might need to do this for a variety of reasons, such as:&lt;/p&gt;
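&lt;p&gt;A quick sketch of both halves (the taint key and value are hypothetical): the taint goes on the node, and a matching toleration goes on the pods that are allowed back onto it.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Taint applied ahead of time with something like:
#   kubectl taint nodes node1 dedicated=gpu:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  tolerations:
  - key: dedicated                  # must match the taint key
    operator: Equal
    value: gpu
    effect: NoSchedule
  containers:
  - name: app
    image: nginx:1.19               # placeholder image&lt;/code&gt;&lt;/pre&gt;</description>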
    </item>
    <item>
      <title>Test Your Kubernetes Cluster Conformance</title>
      <link>https://theithollow.com/2019/07/16/test-your-kubernetes-cluster-conformance/</link>
      <pubDate>Tue, 16 Jul 2019 13:56:08 +0000</pubDate>
      <guid>https://theithollow.com/2019/07/16/test-your-kubernetes-cluster-conformance/</guid>
      <description>&lt;p&gt;You&amp;rsquo;ve been dabbling in the world of Kubernetes for a while now and have probably noticed there are a whole lot of vendors packaging their own version of Kubernetes.&lt;/p&gt;
&lt;p&gt;You might be having a fun time comparing the upstream Kubernetes version vs. the packaged versions put out by Red Hat, VMware, and others. But how do we know that those packaged versions are supporting the required APIs so that all Kubernetes clusters have the same baseline of features?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Monitoring Kubernetes with Sysdig Monitor</title>
      <link>https://theithollow.com/2019/06/23/monitoring-kubernetes-with-sysdig-monitor/</link>
      <pubDate>Sun, 23 Jun 2019 14:10:02 +0000</pubDate>
      <guid>https://theithollow.com/2019/06/23/monitoring-kubernetes-with-sysdig-monitor/</guid>
      <description>&lt;p&gt;Any system that&amp;rsquo;s going to be deployed for the enterprise needs to have at least a basic level of monitoring in place to manage it. Kubernetes is no exception to this rule. When we, as a community, underwent the shift from physical servers to virtual infrastructure, we didn&amp;rsquo;t ignore the new VMs and just keep monitoring the hardware; we had to come up with new products to monitor our infrastructure. &lt;a href=&#34;https://sysdig.com/&#34;&gt;Sysdig&lt;/a&gt; is building these new solutions for the Kubernetes world.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Helm</title>
      <link>https://theithollow.com/2019/06/10/kubernetes-helm/</link>
      <pubDate>Mon, 10 Jun 2019 14:02:52 +0000</pubDate>
      <guid>https://theithollow.com/2019/06/10/kubernetes-helm/</guid>
      <description>&lt;p&gt;The Kubernetes series has now ventured into some non-native k8s discussions. Helm is a relatively common tool used in the industry and it makes sense to talk about why that is. This post covers the basics of Helm so we can make our own evaluations about its use in our Kubernetes environment.&lt;/p&gt;
&lt;h2 id=&#34;helm---the-theory&#34;&gt;Helm - The Theory&lt;/h2&gt;
&lt;p&gt;So what is Helm? In the simplest terms, it&amp;rsquo;s a package manager for Kubernetes.&lt;br&gt;
Think of it this way: Helm is to Kubernetes as yum/apt is to Linux. Yeah, sounds pretty neat now, doesn&amp;rsquo;t it?&lt;/p&gt;
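&lt;p&gt;To make that concrete, a Helm chart is mostly just metadata plus templated manifests. Purely as a sketch (the chart name and values here are hypothetical, not from the post), the two files at the heart of a chart look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Chart.yaml - identifies the package
apiVersion: v2            # v2 is the Helm 3 chart format; Helm 2 charts use v1
name: hollow-app
version: 0.1.0

# values.yaml - default settings the chart templates consume, overridable at install time
replicaCount: 2
image:
  repository: nginx
  tag: stable
# Installed with something like: helm install my-release ./hollow-app
&lt;/code&gt;&lt;/pre&gt;</description>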
    </item>
    <item>
      <title>Kubernetes - Pod Backups</title>
      <link>https://theithollow.com/2019/06/03/kubernetes-pod-backups/</link>
      <pubDate>Mon, 03 Jun 2019 14:00:58 +0000</pubDate>
      <guid>https://theithollow.com/2019/06/03/kubernetes-pod-backups/</guid>
      <description>&lt;p&gt;The focus of this post is on pod-based backups, but the same approach applies to Deployments, replica sets, etc. This is not a post about how to back up your Kubernetes cluster itself, including things like etcd, but rather the resources that have been deployed on the cluster. Pods are used as an example to walk through how we can take backups of our applications once they are deployed in a Kubernetes cluster.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - StatefulSets</title>
      <link>https://theithollow.com/2019/04/01/kubernetes-statefulsets/</link>
      <pubDate>Mon, 01 Apr 2019 14:20:48 +0000</pubDate>
      <guid>https://theithollow.com/2019/04/01/kubernetes-statefulsets/</guid>
      <description>&lt;p&gt;We love deployments and replica sets because they make sure that our containers are always in our desired state. If a container fails for some reason, a new one is created to replace it. But what do we do when the deployment order of our containers matters? For that, we look for help from Kubernetes StatefulSets.&lt;/p&gt;
&lt;h2 id=&#34;statefulsets---the-theory&#34;&gt;StatefulSets - The Theory&lt;/h2&gt;
&lt;p&gt;StatefulSets work much like a Deployment does. They contain identical container specs, but they guarantee an ordering for the deployment. Instead of all the pods being deployed at the same time, a StatefulSet deploys them sequentially: the first pod is deployed and ready before the next pod starts. (NOTE: it is possible to deploy pods in parallel if you need to, but that might confuse your understanding of StatefulSets for now, so ignore it.) Each of these pods has its own identity and is named with a stable, unique ordinal so that it can be referenced.&lt;/p&gt;
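&lt;p&gt;As a rough sketch (the names below are hypothetical; the post walks through its own example), a minimal StatefulSet looks a lot like a Deployment with a serviceName added:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web      # headless Service that gives each pod a stable network identity
  replicas: 3           # pods come up as web-0, web-1, web-2, in that order
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
&lt;/code&gt;&lt;/pre&gt;</description>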
    </item>
    <item>
      <title>Kubernetes - Cloud Providers and Storage Classes</title>
      <link>https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/</link>
      <pubDate>Wed, 13 Mar 2019 14:20:23 +0000</pubDate>
      <guid>https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/</guid>
      <description>&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/?p=9598&#34;&gt;previous post&lt;/a&gt; we covered Persistent Volumes (PV) and how we can use those volumes to store data that shouldn&amp;rsquo;t be deleted if a container is removed. The big problem with that post is that we have to manually create the volumes and persistent volume claims. It would sure be nice to have those volumes spun up automatically, wouldn&amp;rsquo;t it? Well, we can do that with a storage class. For a storage class to be really useful, we&amp;rsquo;ll have to tie our Kubernetes cluster in with our infrastructure provider, such as AWS, Azure, or vSphere. This coordination is done through a cloud provider.&lt;/p&gt;
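&lt;p&gt;Purely as a sketch, and assuming an AWS cloud provider (the post may use a different one), a storage class that dynamically provisions EBS volumes can be as small as this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs   # in-tree AWS EBS provisioner; Azure and vSphere have their own
parameters:
  type: gp2                          # EBS volume type created for each claim
&lt;/code&gt;&lt;/pre&gt;</description>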
    </item>
    <item>
      <title>Kubernetes - Persistent Volumes</title>
      <link>https://theithollow.com/2019/03/04/kubernetes-persistent-volumes/</link>
      <pubDate>Mon, 04 Mar 2019 15:00:51 +0000</pubDate>
      <guid>https://theithollow.com/2019/03/04/kubernetes-persistent-volumes/</guid>
      <description>&lt;p&gt;Containers are oftentimes short-lived. They might scale based on need, and they will redeploy when issues occur. This functionality is welcome, but sometimes we have state to worry about, and state is not meant to be short-lived. Kubernetes persistent volumes can help to resolve this discrepancy.&lt;/p&gt;
&lt;h2 id=&#34;volumes---the-theory&#34;&gt;Volumes - The Theory&lt;/h2&gt;
&lt;p&gt;In the Kubernetes world, persistent storage is broken down into two kinds of objects: a Persistent Volume (PV) and a Persistent Volume Claim (PVC). First, let&amp;rsquo;s tackle the Persistent Volume.&lt;/p&gt;
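&lt;p&gt;For a rough sketch of how the two objects relate (the names are hypothetical, and a hostPath volume is lab-only), a PV offers storage and a PVC requests it:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/pv-demo      # node-local path; fine for a lab, not for production
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # the claim binds to a PV that can satisfy this request
&lt;/code&gt;&lt;/pre&gt;</description>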
    </item>
    <item>
      <title>Kubernetes - Secrets</title>
      <link>https://theithollow.com/2019/02/25/kubernetes-secrets/</link>
      <pubDate>Mon, 25 Feb 2019 15:00:56 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/25/kubernetes-secrets/</guid>
      <description>&lt;p&gt;Secret, Secret, I&amp;rsquo;ve got a secret! OK, enough of the Styx lyrics, this is serious business. In the &lt;a href=&#34;https://theithollow.com/?p=9583&#34;&gt;previous post we used ConfigMaps&lt;/a&gt; to store a database connection string. That is probably not the best idea for something with a sensitive password in it. Luckily, Kubernetes provides a way to store sensitive configuration items, and it&amp;rsquo;s called a &amp;ldquo;secret&amp;rdquo;.&lt;/p&gt;
&lt;h2 id=&#34;secrets---the-theory&#34;&gt;Secrets - The Theory&lt;/h2&gt;
&lt;p&gt;The short answer to understanding secrets would be to think of a ConfigMap, which we have discussed in a &lt;a href=&#34;https://theithollow.com/?p=9583&#34;&gt;previous post&lt;/a&gt; in this &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;series&lt;/a&gt;, but with values that aren&amp;rsquo;t stored in clear text.&lt;/p&gt;
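&lt;p&gt;As a minimal sketch (the secret name, key, and password here are made up), a secret can be declared with stringData so you write plain values and the API stores them base64-encoded:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                 # written as plain text here, stored base64-encoded by the API
  db-password: S3cr3tPass
&lt;/code&gt;&lt;/pre&gt;</description>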
    </item>
    <item>
      <title>Kubernetes - ConfigMaps</title>
      <link>https://theithollow.com/2019/02/20/kubernetes-configmaps/</link>
      <pubDate>Wed, 20 Feb 2019 15:00:40 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/20/kubernetes-configmaps/</guid>
      <description>&lt;p&gt;Sometimes you need to pass additional configuration to your running containers. Kubernetes has an object to help with this, and this post covers it: the ConfigMap.&lt;/p&gt;
&lt;h2 id=&#34;configmaps---the-theory&#34;&gt;ConfigMaps - The Theory&lt;/h2&gt;
&lt;p&gt;Not all of our applications can be as simple as the basic nginx containers we&amp;rsquo;ve deployed earlier in &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;this series&lt;/a&gt;. In some cases, we need to pass configuration files, variables, or other information to our apps.&lt;/p&gt;
&lt;p&gt;The theory for this post is pretty simple: ConfigMaps store key/value pairs in an object that your containers can retrieve. This configuration data can make your applications more portable.&lt;/p&gt;
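&lt;p&gt;A minimal sketch of the idea (the name and keys below are hypothetical, not the post&amp;rsquo;s own example) looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:                       # simple key/value pairs
  LOG_LEVEL: info
  APP_MODE: production
# A pod could then surface these as environment variables with:
#   envFrom:
#   - configMapRef:
#       name: app-config
&lt;/code&gt;&lt;/pre&gt;</description>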
    </item>
    <item>
      <title>Kubernetes - DNS</title>
      <link>https://theithollow.com/2019/02/18/kubernetes-dns/</link>
      <pubDate>Mon, 18 Feb 2019 15:00:16 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/18/kubernetes-dns/</guid>
      <description>&lt;p&gt;DNS is a critical service in any system. Kubernetes is no different, but it implements its own domain naming system within your Kubernetes cluster. This post explores the details that you need to know to operate a k8s cluster properly.&lt;/p&gt;
&lt;h2 id=&#34;kubernetes-dns---the-theory&#34;&gt;Kubernetes DNS - The theory&lt;/h2&gt;
&lt;p&gt;I don&amp;rsquo;t want to dive into DNS too much since it&amp;rsquo;s a core service most should be familiar with. But at a really high level, DNS maps an easily remembered name such as &amp;ldquo;theithollow.com&amp;rdquo; to an IP address that might change. Every network has a DNS server, but Kubernetes implements its own DNS within the cluster to make connecting to containers a simple task.&lt;/p&gt;
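&lt;p&gt;For a quick sketch of what that buys you: a Service named web in the default namespace (a hypothetical name) resolves inside the cluster as web.default.svc.cluster.local, which you can verify with a throwaway pod:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  restartPolicy: Never
  containers:
  - name: lookup
    image: busybox            # busybox ships a small nslookup utility
    command:
    - nslookup
    - web.default.svc.cluster.local
&lt;/code&gt;&lt;/pre&gt;</description>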
    </item>
    <item>
      <title>Kubernetes - Ingress</title>
      <link>https://theithollow.com/2019/02/13/kubernetes-ingress/</link>
      <pubDate>Wed, 13 Feb 2019 15:00:46 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/13/kubernetes-ingress/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s time to look closer at how we access our containers from outside the Kubernetes cluster. We&amp;rsquo;ve talked about Services with NodePorts, LoadBalancers, etc., but a better way to handle ingress might be to use an ingress-controller to proxy our requests to the right backend service. This post will take us through how to integrate an ingress-controller into our Kubernetes cluster.&lt;/p&gt;
&lt;h2 id=&#34;ingress-controllers---the-theory&#34;&gt;Ingress Controllers - The Theory&lt;/h2&gt;
&lt;p&gt;Let&amp;rsquo;s first talk about why we&amp;rsquo;d want to use an ingress controller in the first place. Take an example web application like you might have for a retail store. That web application might have an index page at &amp;ldquo;&lt;a href=&#34;http://store-name.com/&#34;&gt;http://store-name.com/&lt;/a&gt;&amp;rdquo;, a shopping cart page at &amp;ldquo;&lt;a href=&#34;http://store-name.com/cart&#34;&gt;http://store-name.com/cart&lt;/a&gt;&amp;rdquo;, and an API URI at &amp;ldquo;&lt;a href=&#34;http://store-name.com/api&#34;&gt;http://store-name.com/api&lt;/a&gt;&amp;rdquo;. We could build all of these in a single container, but perhaps each of them becomes its own set of pods so that they can all scale out independently. If the API needs more resources, we can just increase the number of pods and nodes for the api service and leave the / and /cart services alone. It also lets multiple groups work on different parts simultaneously, but we&amp;rsquo;re starting to drift off the point, which hopefully you get by now.&lt;/p&gt;
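&lt;p&gt;As a sketch of where this is headed (the Service names below are hypothetical, and older clusters used the extensions/v1beta1 API), an ingress resource maps those paths to different backend Services:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: store-ingress
spec:
  rules:
  - host: store-name.com
    http:
      paths:
      - path: /api              # /cart would follow the same pattern with its own Service
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
&lt;/code&gt;&lt;/pre&gt;</description>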
    </item>
    <item>
      <title>Kubernetes - KUBECONFIG and Context</title>
      <link>https://theithollow.com/2019/02/11/kubernetes-kubeconfig-and-context/</link>
      <pubDate>Mon, 11 Feb 2019 15:00:26 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/11/kubernetes-kubeconfig-and-context/</guid>
      <description>&lt;p&gt;You&amp;rsquo;ve been working with Kubernetes for a while, and no doubt you now have lots of clusters and namespaces to deal with. This might be a good time to introduce Kubernetes KUBECONFIG files and contexts so you can more easily use all of these different resources.&lt;/p&gt;
&lt;h2 id=&#34;kubeconfig-and-context---the-theory&#34;&gt;KUBECONFIG and Context - The Theory&lt;/h2&gt;
&lt;p&gt;When you first set up your Kubernetes cluster, you created a config file, likely stored in your $HOME/.kube directory. This is the KUBECONFIG file, and it is used to store information about your connection to the Kubernetes cluster. When you use kubectl to execute commands, it gets the correct communication information from this KUBECONFIG file. This is why you would&amp;rsquo;ve needed to point the KUBECONFIG environment variable at this file (or leave it at the default $HOME/.kube/config location) so that it can be found and used correctly by the kubectl commands.&lt;/p&gt;
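&lt;p&gt;For orientation, here&amp;rsquo;s a stripped-down sketch of what that file contains (cluster names, endpoints, and users are hypothetical, and credentials are omitted):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Config
clusters:
- name: lab
  cluster:
    server: https://lab.example.com:6443
- name: prod
  cluster:
    server: https://prod.example.com:6443
users:
- name: admin
  user: {}                  # certificates/tokens omitted from this sketch
contexts:                   # a context ties a cluster, a user, and a default namespace together
- name: lab-admin
  context:
    cluster: lab
    user: admin
    namespace: default
- name: prod-admin
  context:
    cluster: prod
    user: admin
current-context: lab-admin
# Switch between them with: kubectl config use-context prod-admin
&lt;/code&gt;&lt;/pre&gt;</description>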
    </item>
    <item>
      <title>Kubernetes - Namespaces</title>
      <link>https://theithollow.com/2019/02/06/kubernetes-namespaces/</link>
      <pubDate>Wed, 06 Feb 2019 15:00:13 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/06/kubernetes-namespaces/</guid>
      <description>&lt;p&gt;In this post we&amp;rsquo;ll start exploring ways that you might be able to better manage your Kubernetes cluster for security or organizational purposes. Namespaces become a big piece of how your Kubernetes cluster operates and who sees what inside your cluster.&lt;/p&gt;
&lt;h2 id=&#34;namespaces---the-theory&#34;&gt;Namespaces - The Theory&lt;/h2&gt;
&lt;p&gt;The easiest way to think of a namespace is that it&amp;rsquo;s a logical separation of your Kubernetes cluster. Just like you might have segmented a physical server into several virtual servers, we can segment our Kubernetes cluster into namespaces. Namespaces are used to isolate resources within the control plane. For example, if we were to deploy a pod in each of two different namespaces, an administrator running the &amp;ldquo;get pods&amp;rdquo; command may only see the pods in one of the namespaces. The pods could still communicate with each other across namespaces, however.&lt;/p&gt;
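&lt;p&gt;Creating one is about as small as Kubernetes objects get; a quick sketch with a hypothetical name:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: team-a
# Work inside it by adding -n to your commands, for example:
#   kubectl get pods -n team-a
&lt;/code&gt;&lt;/pre&gt;</description>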
    </item>
    <item>
      <title>Kubernetes - Service Publishing</title>
      <link>https://theithollow.com/2019/02/05/kubernetes-service-publishing/</link>
      <pubDate>Tue, 05 Feb 2019 16:30:54 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/05/kubernetes-service-publishing/</guid>
      <description>&lt;p&gt;A critical part of deploying containers within a Kubernetes cluster is understanding how they use the network. In &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;previous posts&lt;/a&gt; we&amp;rsquo;ve deployed pods and services and were able to access them from a client such as a laptop, but how did that work exactly? I mean, we had a bunch of ports configured in our manifest files, so what do they mean? And what do we do if we have more than one pod that wants to use the same port, like 443 for HTTPS?&lt;/p&gt;
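&lt;p&gt;Those ports are easiest to see side by side. A sketch of a NodePort Service (the names and numbers are hypothetical) shows the three that usually cause confusion:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 443          # the port the Service exposes inside the cluster
    targetPort: 8443   # the container port traffic is forwarded to
    nodePort: 30443    # the port opened on every node for clients outside the cluster
&lt;/code&gt;&lt;/pre&gt;</description>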
    </item>
    <item>
      <title>Kubernetes - Endpoints</title>
      <link>https://theithollow.com/2019/02/04/kubernetes-endpoints/</link>
      <pubDate>Mon, 04 Feb 2019 15:00:02 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/04/kubernetes-endpoints/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s quite possible that you could have a Kubernetes cluster but never have to know what an endpoint is or does, even though you&amp;rsquo;re using them behind the scenes. Just in case you need to use one though, or if you need to do some troubleshooting, we&amp;rsquo;ll cover the basics of Kubernetes endpoints in this post.&lt;/p&gt;
&lt;h2 id=&#34;endpoints---the-theory&#34;&gt;Endpoints - The Theory&lt;/h2&gt;
&lt;p&gt;During the &lt;a href=&#34;https://theithollow.com/?p=9427&#34;&gt;post&lt;/a&gt; where we first learned about Kubernetes Services, we saw that we could use labels to match a frontend service with a backend pod automatically by using a selector. If any new pods had a specific label, the service would know how to send traffic to it. Well, the way that the service knows to do this is by adding this mapping to an endpoint. Endpoints track the IP addresses of the objects the service sends traffic to. When a service selector matches a pod label, that IP address is added to your endpoints, and if this is all you&amp;rsquo;re doing, you don&amp;rsquo;t really need to know much about endpoints. However, you can have Services where the endpoint is a server outside of your cluster or in a different namespace (which we haven&amp;rsquo;t covered yet).&lt;/p&gt;
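&lt;p&gt;That external-server case is where you manage endpoints by hand. A sketch (the name and IP address are hypothetical): a Service with no selector, paired with an Endpoints object that points at something outside the cluster.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: legacy-db
spec:
  ports:                 # no selector, so Kubernetes will not manage endpoints for this Service
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-db        # must match the Service name
subsets:
- addresses:
  - ip: 10.0.50.25       # the external server traffic should be sent to
  ports:
  - port: 5432
&lt;/code&gt;&lt;/pre&gt;</description>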
    </item>
    <item>
      <title>Kubernetes - Services and Labels</title>
      <link>https://theithollow.com/2019/01/31/kubernetes-services-and-labels/</link>
      <pubDate>Thu, 31 Jan 2019 15:00:54 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/31/kubernetes-services-and-labels/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been following &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;the series&lt;/a&gt;, you may be thinking that we&amp;rsquo;ve built ourselves a problem. You&amp;rsquo;ll recall that we&amp;rsquo;ve now learned about Deployments so that we can roll out new pods when we do upgrades, and replica sets can spin up new pods when one dies. Sounds great, but remember that each of those containers has a different IP address. Now, I know we haven&amp;rsquo;t accessed any of those pods yet, but you can imagine that it would be a real pain to have to go look up an IP address every time a pod was replaced, wouldn&amp;rsquo;t it? This post covers Kubernetes Services and how they are used to address this problem, and at the end of this post, we&amp;rsquo;ll access one of our pods &amp;hellip; finally.&lt;/p&gt;
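&lt;p&gt;The core of the trick is just a label selector. A sketch (the label and names are hypothetical): any pod carrying the matching label is picked up by the Service automatically, no matter how often it gets replaced.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: hollow-web
spec:
  selector:
    app: hollow-web      # matches pods labeled app: hollow-web, old or newly created
  ports:
  - port: 80
    targetPort: 80
&lt;/code&gt;&lt;/pre&gt;</description>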
    </item>
    <item>
      <title>Kubernetes - Deployments</title>
      <link>https://theithollow.com/2019/01/30/kubernetes-deployments/</link>
      <pubDate>Wed, 30 Jan 2019 15:01:37 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/30/kubernetes-deployments/</guid>
      <description>&lt;p&gt;After following the previous posts, we should feel pretty good about deploying our &lt;a href=&#34;https://theithollow.com/2019/01/21/kubernetes-pods/&#34;&gt;pods&lt;/a&gt; and ensuring they are highly available. We&amp;rsquo;ve learned about naked pods and then &lt;a href=&#34;https://theithollow.com/2019/01/28/kubernetes-replica-sets/&#34;&gt;replica sets&lt;/a&gt; to make those pods more HA, but what about when we need to create a new version of our pods? We don&amp;rsquo;t want to have an outage when our pods are replaced with a new version, do we? This is where &amp;ldquo;Deployments&amp;rdquo; come into play.&lt;/p&gt;
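&lt;p&gt;A sketch of what that looks like (hypothetical names; the rolling-update values are just one sensible choice): a Deployment wraps the pod template and replica count, and changing the image tag triggers a controlled rollout.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1     # keep at least two of the three pods serving during an update
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.17   # bumping this tag rolls out a new version without an outage
&lt;/code&gt;&lt;/pre&gt;</description>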
    </item>
    <item>
      <title>Kubernetes - Replica Sets</title>
      <link>https://theithollow.com/2019/01/28/kubernetes-replica-sets/</link>
      <pubDate>Mon, 28 Jan 2019 15:00:59 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/28/kubernetes-replica-sets/</guid>
      <description>&lt;p&gt;In a &lt;a href=&#34;https://theithollow.com/2019/01/21/kubernetes-pods/&#34;&gt;previous post&lt;/a&gt; we covered the use of pods and deployed some &amp;ldquo;naked pods&amp;rdquo; in our Kubernetes cluster. In this post we&amp;rsquo;ll expand our use of pods with Replica Sets.&lt;/p&gt;
&lt;h2 id=&#34;replica-sets---the-theory&#34;&gt;Replica Sets - The Theory&lt;/h2&gt;
&lt;p&gt;One of the biggest reasons that we don&amp;rsquo;t deploy naked pods in production is that they are not trustworthy. By this I mean that we can&amp;rsquo;t count on them to always be running. Kubernetes doesn&amp;rsquo;t ensure that a pod will continue running if it crashes. A pod could die for all kinds of reasons: the node it was running on failed, it ran out of resources, it was stopped for some reason, etc. If the pod dies, it stays dead until someone fixes it, which is not ideal; but with containers we should expect them to be short-lived anyway, so let&amp;rsquo;s plan for it.&lt;/p&gt;
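&lt;p&gt;A minimal sketch of that plan (with hypothetical names) is a ReplicaSet: declare how many copies you want, and the controller replaces any pod that dies.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3              # the controller keeps three matching pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
&lt;/code&gt;&lt;/pre&gt;</description>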
    </item>
    <item>
      <title>Kubernetes - Pods</title>
      <link>https://theithollow.com/2019/01/21/kubernetes-pods/</link>
      <pubDate>Mon, 21 Jan 2019 16:30:30 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/21/kubernetes-pods/</guid>
      <description>&lt;p&gt;We&amp;rsquo;ve got a Kubernetes cluster set up, and we&amp;rsquo;re ready to start deploying some applications. Before we can deploy any of our containers in a Kubernetes environment, we&amp;rsquo;ll need to understand a little bit about pods.&lt;/p&gt;
&lt;h2 id=&#34;pods---the-theory&#34;&gt;Pods - The Theory&lt;/h2&gt;
&lt;p&gt;In a Docker environment, the smallest unit you&amp;rsquo;d deal with is a container. In the Kubernetes world, you&amp;rsquo;ll work with pods, and a pod consists of one or more containers. You cannot deploy a bare container in Kubernetes; it has to be deployed within a pod.&lt;/p&gt;
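&lt;p&gt;As a minimal sketch (a single nginx container, with hypothetical names), the simplest pod manifest looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:             # one or more containers that share the pod network and storage
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;</description>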
    </item>
    <item>
      <title>Deploy Kubernetes Using Kubeadm - CentOS7</title>
      <link>https://theithollow.com/2019/01/14/deploy-kubernetes-using-kubeadm-centos7/</link>
      <pubDate>Mon, 14 Jan 2019 15:35:36 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/14/deploy-kubernetes-using-kubeadm-centos7/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve been wanting to have a playground to mess around with Kubernetes (k8s) deployments for a while and didn&amp;rsquo;t want to spend the money on a cloud solution like &lt;a href=&#34;https://aws.amazon.com/eks/?nc2=h_m1&#34;&gt;AWS Elastic Container Service for Kubernetes&lt;/a&gt; or &lt;a href=&#34;https://cloud.google.com/kubernetes-engine/&#34;&gt;Google Kubernetes Engine&lt;/a&gt;. While these hosted solutions provide additional features such as the ability to spin up a load balancer, they also cost money every hour they&amp;rsquo;re available, and I&amp;rsquo;m planning on leaving my cluster running. Also, from a learning perspective, there is no greater way to learn the underpinnings of a solution than having to deploy and manage it on your own. Therefore, I set out to deploy k8s in my vSphere home lab on some CentOS 7 virtual machines using Kubeadm. I found several articles on how to do this, but somehow I got off track a few times and thought another blog post with step-by-step instructions and screenshots would help others. Hopefully it helps you. Let&amp;rsquo;s begin.&lt;/p&gt;
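&lt;p&gt;Once the node prerequisites are in place, the control plane bootstrap itself boils down to kubeadm init. Purely as a sketch (the post walks through the full process, and the pod subnet here assumes a Flannel-style network), a small config file can drive that command:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# cluster-config.yaml, consumed with: kubeadm init --config cluster-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2   # older kubeadm releases used v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16           # matches the default Flannel pod network
&lt;/code&gt;&lt;/pre&gt;</description>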
    </item>
    <item>
      <title>How to Setup Amazon EKS with Mac Client</title>
      <link>https://theithollow.com/2018/07/31/how-to-setup-amazon-eks-with-mac-client/</link>
      <pubDate>Tue, 31 Jul 2018 14:06:02 +0000</pubDate>
      <guid>https://theithollow.com/2018/07/31/how-to-setup-amazon-eks-with-mac-client/</guid>
      <description>&lt;p&gt;We love Kubernetes. It&amp;rsquo;s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster that&amp;rsquo;s ready for your applications.&lt;/p&gt;</description>
    </item>
    <item>
      <title>How to Setup Amazon EKS with Windows Client</title>
      <link>https://theithollow.com/2018/07/30/how-to-setup-amazon-eks-with-windows-client/</link>
      <pubDate>Mon, 30 Jul 2018 16:05:09 +0000</pubDate>
      <guid>https://theithollow.com/2018/07/30/how-to-setup-amazon-eks-with-windows-client/</guid>
      <description>&lt;p&gt;We love Kubernetes. It&amp;rsquo;s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster that&amp;rsquo;s ready for your applications.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
