<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Kubernetes on The IT Hollow</title>
    <link>https://theithollow.com/categories/kubernetes/</link>
    <description>Recent content in Kubernetes on The IT Hollow</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Tue, 26 May 2020 15:05:00 +0000</lastBuildDate>
    <atom:link href="https://theithollow.com/categories/kubernetes/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Kubernetes Validating Admission Controllers</title>
      <link>https://theithollow.com/2020/05/26/kubernetes-validating-admission-controllers/</link>
      <pubDate>Tue, 26 May 2020 15:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/05/26/kubernetes-validating-admission-controllers/</guid>
      <description>&lt;p&gt;Hey! Who deployed this container in our shared Kubernetes cluster without putting resource limits on it? Why don&amp;rsquo;t we have any labels on these containers so we can report on them for chargeback purposes? Who allowed this image to be used in our production cluster?&lt;/p&gt;
&lt;p&gt;If any of the questions above sound familiar, it&amp;rsquo;s probably time to learn about Validating Admission Controllers.&lt;/p&gt;
&lt;h2 id=&#34;validating-admission-controllers---the-theory&#34;&gt;Validating Admission Controllers - The Theory&lt;/h2&gt;
&lt;p&gt;Admission Controllers are used as roadblocks before objects are deployed to a Kubernetes cluster. The examples from the section above are common rules that companies might want to enforce before objects get pushed into a production Kubernetes cluster. These admission controllers can be custom code that you&amp;rsquo;ve written yourself or a third-party admission controller. A common open-source project that manages admission control rules is &lt;a href=&#34;https://www.openpolicyagent.org/&#34;&gt;Open Policy Agent (OPA)&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes Liveness and Readiness Probes</title>
      <link>https://theithollow.com/2020/05/18/kubernetes-liveness-and-readiness-probes/</link>
      <pubDate>Mon, 18 May 2020 14:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/05/18/kubernetes-liveness-and-readiness-probes/</guid>
      <description>&lt;p&gt;Just because a container is in a running state does not mean that the process running within that container is functional. We can use Kubernetes Readiness and Liveness probes to determine whether an application is ready to receive traffic or not.&lt;/p&gt;
&lt;h2 id=&#34;liveness-and-readiness-probes---the-theory&#34;&gt;Liveness and Readiness Probes - The Theory&lt;/h2&gt;
&lt;p&gt;On each node of a Kubernetes cluster there is a Kubelet running which manages the pods on that particular node. It&amp;rsquo;s responsible for getting images pulled down to the node, reporting the node&amp;rsquo;s health, and restarting failed containers. But how does the Kubelet know if there is a failed container?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes Pod Auto-scaling</title>
      <link>https://theithollow.com/2020/05/04/kubernetes-pod-auto-scaling/</link>
      <pubDate>Mon, 04 May 2020 14:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/05/04/kubernetes-pod-auto-scaling/</guid>
      <description>&lt;p&gt;You&amp;rsquo;ve built your Kubernetes cluster(s). You&amp;rsquo;ve built your apps in containers. You&amp;rsquo;ve architected your services so that losing a single instance doesn&amp;rsquo;t cause an outage. And you&amp;rsquo;re ready for cloud scale. You deploy your application, ready to sit back and &amp;ldquo;profit.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;When your application spins up and starts taking on load, you can manually change the number of replicas to handle the additional load, but what about the promises of cloud and scaling? Wouldn&amp;rsquo;t it be better to deploy the application and let the platform scale it automatically?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes Resource Requests and Limits</title>
      <link>https://theithollow.com/2020/04/20/kubernetes-resource-requests-and-limits/</link>
      <pubDate>Mon, 20 Apr 2020 15:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/04/20/kubernetes-resource-requests-and-limits/</guid>
      <description>&lt;p&gt;Containerizing applications and running them on Kubernetes doesn&amp;rsquo;t mean we can forget all about resource utilization. Our thought process may have changed because we can much more easily scale out our application as demand increases, but many times we need to consider how our containers might fight with each other for resources. Resource Requests and Limits can be used to help stop the &amp;ldquo;noisy neighbor&amp;rdquo; problem in a Kubernetes Cluster.&lt;/p&gt;
&lt;h2 id=&#34;resource-requests-and-limits---the-theory&#34;&gt;Resource Requests and Limits - The Theory&lt;/h2&gt;
&lt;p&gt;Kubernetes uses the concept of a &amp;ldquo;Resource Request&amp;rdquo; and a &amp;ldquo;Resource Limit&amp;rdquo; when defining how many resources a container within a pod should receive. Let&amp;rsquo;s look at each of these topics on their own, starting with resource requests.&lt;/p&gt;</description>
    </item>
    <item>
      <title>In-tree vs Out-of-tree Kubernetes Cloud Providers</title>
      <link>https://theithollow.com/2020/04/14/in-tree-vs-out-of-tree-kubernetes-cloud-providers/</link>
      <pubDate>Tue, 14 Apr 2020 14:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/04/14/in-tree-vs-out-of-tree-kubernetes-cloud-providers/</guid>
      <description>&lt;p&gt;VMware offers a Kubernetes Cloud Provider that allows Kubernetes (k8s) administrators to manage parts of the vSphere infrastructure by interacting with the Kubernetes Control Plane. Why is this needed? Well, being able to spin up some new virtual disks and attach them to your k8s cluster is especially useful when your pods need access to persistent storage, for example.&lt;/p&gt;
&lt;p&gt;The cloud providers (AWS, vSphere, Azure, GCE) obviously differ between vendors. Each cloud provider has different functionality that might be exposed in some way to the Kubernetes control plane. For example, Amazon Web Services provides a load balancer that can be configured with k8s on demand if you are using the AWS provider, but vSphere does not (unless you&amp;rsquo;re using NSX).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploying Tanzu Kubernetes Grid Management Clusters - vSphere</title>
      <link>https://theithollow.com/2020/04/06/deploying-tanzu-kubernetes-grid-management-clusters-vsphere/</link>
      <pubDate>Mon, 06 Apr 2020 14:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/04/06/deploying-tanzu-kubernetes-grid-management-clusters-vsphere/</guid>
      <description>&lt;p&gt;VMware recently released the 1.0 release of Tanzu Kubernetes Grid (TKG) which aims at decreasing the difficulty of deploying conformant Kubernetes clusters across infrastructure. This post demonstrates how to use TKG to deploy a management cluster to vSphere.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re not familiar with TKG yet, you might be curious about what a Management Cluster is. The management cluster is used to manage one to many workload clusters. It spins up VMs on different cloud providers and lays down the Kubernetes bits on those VMs, thus creating new clusters for applications to be built on top of. TKG is built upon the &lt;a href=&#34;https://github.com/kubernetes-sigs/cluster-api&#34;&gt;ClusterAPI project&lt;/a&gt;, so &lt;a href=&#34;https://theithollow.com/2019/11/04/clusterapi-demystified/&#34;&gt;this post&lt;/a&gt; pretty accurately describes the architecture that TKG uses.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploy Kubernetes on AWS</title>
      <link>https://theithollow.com/2020/01/13/deploy-kubernetes-on-aws/</link>
      <pubDate>Mon, 13 Jan 2020 15:15:39 +0000</pubDate>
      <guid>https://theithollow.com/2020/01/13/deploy-kubernetes-on-aws/</guid>
      <description>&lt;p&gt;The way you deploy Kubernetes (k8s) on AWS will be similar to how it was done in a &lt;a href=&#34;https://theithollow.com/2020/01/08/deploy-kubernetes-on-vsphere/&#34;&gt;previous post on vSphere&lt;/a&gt;. You still set up nodes, and you still deploy kubeadm and kubectl, but there are a few differences when you change your cloud provider. For instance, on AWS we can use the LoadBalancer resource against the k8s API and have AWS provision an elastic load balancer for us. These features take a few extra tweaks in AWS.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploy Kubernetes on vSphere</title>
      <link>https://theithollow.com/2020/01/08/deploy-kubernetes-on-vsphere/</link>
      <pubDate>Wed, 08 Jan 2020 15:00:04 +0000</pubDate>
      <guid>https://theithollow.com/2020/01/08/deploy-kubernetes-on-vsphere/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;re struggling to deploy Kubernetes (k8s) clusters, you&amp;rsquo;re not alone. There are a bunch of different ways to deploy Kubernetes, and there are different settings depending on which cloud provider you&amp;rsquo;re using. This post will focus on installing Kubernetes on vSphere with kubeadm. At the end of this post, you should have what you need to manually deploy k8s in a vSphere environment on Ubuntu.&lt;/p&gt;
&lt;h2 id=&#34;prerequisites&#34;&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; This tutorial uses the &amp;ldquo;in-tree&amp;rdquo; cloud provider for vSphere. This is not the preferred method for deployment going forward. More details can be found &lt;a href=&#34;https://cloud-provider-vsphere.sigs.k8s.io/concepts/in_tree_vs_out_of_tree.html&#34;&gt;here&lt;/a&gt; for reference.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Jobs and CronJobs</title>
      <link>https://theithollow.com/2019/12/16/kubernetes-jobs-and-cronjobs/</link>
      <pubDate>Mon, 16 Dec 2019 15:05:36 +0000</pubDate>
      <guid>https://theithollow.com/2019/12/16/kubernetes-jobs-and-cronjobs/</guid>
      <description>&lt;p&gt;Sometimes we need to run a container to do a specific task, and when it&amp;rsquo;s completed, we want it to quit. Many containers, such as web servers, are deployed and run continuously. But other times we want to accomplish a single task and then quit. This is where a Job is a good choice.&lt;/p&gt;
&lt;h2 id=&#34;jobs-and-cronjobs---the-theory&#34;&gt;Jobs and CronJobs - The Theory&lt;/h2&gt;
&lt;p&gt;Perhaps we need to run a batch process on demand. Maybe we built an automation routine for something and want to kick it off through the use of a container. We can do this by submitting a job to the Kubernetes API. Kubernetes will run the job to completion, after which the container exits.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Pod Security Policies</title>
      <link>https://theithollow.com/2019/11/19/kubernetes-pod-security-policies/</link>
      <pubDate>Tue, 19 Nov 2019 15:05:04 +0000</pubDate>
      <guid>https://theithollow.com/2019/11/19/kubernetes-pod-security-policies/</guid>
      <description>&lt;p&gt;Securing and hardening our Kubernetes clusters is a must-do activity. We need to remember that containers are still just processes running on the host machines. Sometimes these processes can get more privileges on the Kubernetes node than they should if you don&amp;rsquo;t properly set up some pod security. This post explains how this could be done for your own clusters.&lt;/p&gt;
&lt;h2 id=&#34;pod-security-policies---the-theory&#34;&gt;Pod Security Policies - The Theory&lt;/h2&gt;
&lt;p&gt;Pod Security Policies are designed to limit what can be run on a Kubernetes cluster. Typical things that you might want to limit are pods that have privileged access, pods with access to the host network, and pods that have access to the host processes, just to name a few. Remember that a container isn&amp;rsquo;t as isolated as a VM, so we should take care to ensure our containers aren&amp;rsquo;t adversely affecting our nodes&amp;rsquo; health and security.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Network Policies</title>
      <link>https://theithollow.com/2019/10/21/kubernetes-network-policies/</link>
      <pubDate>Mon, 21 Oct 2019 14:05:08 +0000</pubDate>
      <guid>https://theithollow.com/2019/10/21/kubernetes-network-policies/</guid>
      <description>&lt;p&gt;In the traditional server world, we&amp;rsquo;ve gone to great lengths to ensure that we can micro-segment our servers instead of relying on a few select firewalls at strategically defined chokepoints. What do we do in the container world though? This is where network policies come into play.&lt;/p&gt;
&lt;h2 id=&#34;network-policies---the-theory&#34;&gt;Network Policies - The Theory&lt;/h2&gt;
&lt;p&gt;In a default deployment of a Kubernetes cluster, all of the pods deployed on the nodes can communicate with each other. Some security folks might not like to hear that, but never fear, we have ways to limit the communications between pods and they&amp;rsquo;re called network policies.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Desired State and Control Loops</title>
      <link>https://theithollow.com/2019/09/16/kubernetes-desired-state-and-control-loops/</link>
      <pubDate>Mon, 16 Sep 2019 14:05:30 +0000</pubDate>
      <guid>https://theithollow.com/2019/09/16/kubernetes-desired-state-and-control-loops/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve just gotten started with Kubernetes, you might be curious to know how the desired state is achieved. Think about it: you pass a YAML file to the API server and magically stuff happens. Not only that, but when disaster strikes (e.g. a pod crashes) Kubernetes also makes it right again so that it matches the desired state.&lt;/p&gt;
&lt;p&gt;The mechanism that allows Kubernetes to enforce this desired state is the control loop. The basics of this are pretty simple. A control loop can be thought of in three stages.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - DaemonSets</title>
      <link>https://theithollow.com/2019/08/13/kubernetes-daemonsets/</link>
      <pubDate>Tue, 13 Aug 2019 14:10:04 +0000</pubDate>
      <guid>https://theithollow.com/2019/08/13/kubernetes-daemonsets/</guid>
      <description>&lt;p&gt;DaemonSets can be a really useful tool for managing the health and operation of the pods within a Kubernetes cluster. In this post we&amp;rsquo;ll explore a use case for a DaemonSet, why we need them, and an example in the lab.&lt;/p&gt;
&lt;h2 id=&#34;daemonsets---the-theory&#34;&gt;DaemonSets - The Theory&lt;/h2&gt;
&lt;p&gt;DaemonSets are actually pretty easy to explain. A DaemonSet is a Kubernetes construct that ensures a pod is running on every node (where eligible) in a cluster. This means that if we were to create a DaemonSet on our six-node cluster (3 masters, 3 workers), the DaemonSet would schedule the defined pods on each of the nodes for a total of six pods. Now, this assumes there are either no &lt;a href=&#34;https://theithollow.com/?p=9736&#34;&gt;taints on the nodes, or there are tolerations&lt;/a&gt; on the DaemonSets.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Taints and Tolerations</title>
      <link>https://theithollow.com/2019/07/29/kubernetes-taints-and-tolerations/</link>
      <pubDate>Mon, 29 Jul 2019 14:15:22 +0000</pubDate>
      <guid>https://theithollow.com/2019/07/29/kubernetes-taints-and-tolerations/</guid>
      <description>&lt;p&gt;One of the best things about Kubernetes, is that I don&amp;rsquo;t have to think about which piece of hardware my container will run on when I deploy it. The Kubernetes scheduler can make that decision for me. This is great until I actually DO care about what node my container runs on. This post will examine one solution to pod placement, through taints and tolerations.&lt;/p&gt;
&lt;h2 id=&#34;taints---the-theory&#34;&gt;Taints - The Theory&lt;/h2&gt;
&lt;p&gt;Suppose we had a Kubernetes cluster where we didn&amp;rsquo;t want any pods to run on a specific node. You might need to do this for a variety of reasons, such as:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Helm</title>
      <link>https://theithollow.com/2019/06/10/kubernetes-helm/</link>
      <pubDate>Mon, 10 Jun 2019 14:02:52 +0000</pubDate>
      <guid>https://theithollow.com/2019/06/10/kubernetes-helm/</guid>
      <description>&lt;p&gt;The Kubernetes series has now ventured into some non-native k8s discussions. Helm is a relatively common tool used in the industry and it makes sense to talk about why that is. This post covers the basics of Helm so we can make our own evaluations about its use in our Kubernetes environment.&lt;/p&gt;
&lt;h2 id=&#34;helm---the-theory&#34;&gt;Helm - The Theory&lt;/h2&gt;
&lt;p&gt;So what is Helm? In the simplest terms, it&amp;rsquo;s a package manager for Kubernetes.&lt;br&gt;
Think of it this way: Helm is to Kubernetes as yum/apt is to Linux. Yeah, sounds pretty neat now, doesn&amp;rsquo;t it?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Pod Backups</title>
      <link>https://theithollow.com/2019/06/03/kubernetes-pod-backups/</link>
      <pubDate>Mon, 03 Jun 2019 14:00:58 +0000</pubDate>
      <guid>https://theithollow.com/2019/06/03/kubernetes-pod-backups/</guid>
      <description>&lt;p&gt;The focus of this post is on pod-based backups, but the same approach applies to Deployments, ReplicaSets, etc. This is not a post about how to back up your Kubernetes cluster, including things like etcd, but rather the resources that have been deployed on the cluster. Pods have been used as an example to walk through how we can take backups of our applications once deployed in a Kubernetes cluster.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Role Based Access</title>
      <link>https://theithollow.com/2019/05/20/kubernetes-role-based-access/</link>
      <pubDate>Mon, 20 May 2019 14:10:48 +0000</pubDate>
      <guid>https://theithollow.com/2019/05/20/kubernetes-role-based-access/</guid>
      <description>&lt;p&gt;As with all systems, we need to be able to secure a Kubernetes cluster so that everyone doesn&amp;rsquo;t have administrator privileges on it. I know this is a serious drag because no one wants to deal with a permission denied error when we try to get some work done, but permissions are important to ensure the safety of the system, especially when you have multiple groups accessing the same resources. We might need a way to keep those groups from stepping on each other&amp;rsquo;s work, and we can do that through role-based access controls.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - StatefulSets</title>
      <link>https://theithollow.com/2019/04/01/kubernetes-statefulsets/</link>
      <pubDate>Mon, 01 Apr 2019 14:20:48 +0000</pubDate>
      <guid>https://theithollow.com/2019/04/01/kubernetes-statefulsets/</guid>
      <description>&lt;p&gt;We love deployments and replica sets because they make sure that our containers are always in our desired state. If a container fails for some reason, a new one is created to replace it. But what do we do when the deployment order of our containers matters? For that, we look for help from Kubernetes StatefulSets.&lt;/p&gt;
&lt;h2 id=&#34;statefulsets---the-theory&#34;&gt;StatefulSets - The Theory&lt;/h2&gt;
&lt;p&gt;StatefulSets work much like a Deployment does. They contain identical container specs, but they ensure an order for the deployment. Instead of all the pods being deployed at the same time, StatefulSets deploy the containers in sequential order, where the first pod is deployed and ready before the next pod starts. (NOTE: it is possible to deploy pods in parallel if you need to, but this might confuse your understanding of StatefulSets for now, so ignore that.) Each of these pods has its own identity and is named with a unique ID so that it can be referenced.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Cloud Providers and Storage Classes</title>
      <link>https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/</link>
      <pubDate>Wed, 13 Mar 2019 14:20:23 +0000</pubDate>
      <guid>https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/</guid>
      <description>&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/?p=9598&#34;&gt;previous post&lt;/a&gt; we covered Persistent Volumes (PV) and how we can use those volumes to store data that shouldn&amp;rsquo;t be deleted if a container is removed. The big problem with that approach is that we have to manually create the volumes and persistent volume claims. It would sure be nice to have those volumes spun up automatically, wouldn&amp;rsquo;t it? Well, we can do that with a storage class. For a storage class to be really useful, we&amp;rsquo;ll have to tie our Kubernetes cluster in with an infrastructure provider such as AWS, Azure, or vSphere. This coordination is done through a cloud provider.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Persistent Volumes</title>
      <link>https://theithollow.com/2019/03/04/kubernetes-persistent-volumes/</link>
      <pubDate>Mon, 04 Mar 2019 15:00:51 +0000</pubDate>
      <guid>https://theithollow.com/2019/03/04/kubernetes-persistent-volumes/</guid>
      <description>&lt;p&gt;Containers are oftentimes short-lived. They might scale based on need, and will redeploy when issues occur. This functionality is welcomed, but sometimes we have state to worry about, and state is not meant to be short-lived. Kubernetes persistent volumes can help to resolve this discrepancy.&lt;/p&gt;
&lt;h2 id=&#34;volumes---the-theory&#34;&gt;Volumes - The Theory&lt;/h2&gt;
&lt;p&gt;In the Kubernetes world, persistent storage is broken down into two kinds of objects: a Persistent Volume (PV) and a Persistent Volume Claim (PVC). First, let&amp;rsquo;s tackle a Persistent Volume.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Secrets</title>
      <link>https://theithollow.com/2019/02/25/kubernetes-secrets/</link>
      <pubDate>Mon, 25 Feb 2019 15:00:56 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/25/kubernetes-secrets/</guid>
      <description>&lt;p&gt;Secret, Secret, I&amp;rsquo;ve got a secret! OK, enough of the Styx lyrics, this is serious business. In the &lt;a href=&#34;https://theithollow.com/?p=9583&#34;&gt;previous post we used ConfigMaps&lt;/a&gt; to store a database connection string. That is probably not the best idea for something with a sensitive password in it. Luckily, Kubernetes provides a way to store sensitive configuration items, and it&amp;rsquo;s called a &amp;ldquo;secret&amp;rdquo;.&lt;/p&gt;
&lt;h2 id=&#34;secrets---the-theory&#34;&gt;Secrets - The Theory&lt;/h2&gt;
&lt;p&gt;The short answer to understanding secrets would be to think of a ConfigMap, which we have discussed in a &lt;a href=&#34;https://theithollow.com/?p=9583&#34;&gt;previous post&lt;/a&gt; in this &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;series&lt;/a&gt;, but with values that aren&amp;rsquo;t stored in clear text.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - ConfigMaps</title>
      <link>https://theithollow.com/2019/02/20/kubernetes-configmaps/</link>
      <pubDate>Wed, 20 Feb 2019 15:00:40 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/20/kubernetes-configmaps/</guid>
      <description>&lt;p&gt;Sometimes you need to add additional configuration to your running containers. Kubernetes has an object to help with this, the ConfigMap, and this post will cover it.&lt;/p&gt;
&lt;h2 id=&#34;configmaps---the-theory&#34;&gt;ConfigMaps - The Theory&lt;/h2&gt;
&lt;p&gt;Not all of our applications can be as simple as the basic nginx containers we&amp;rsquo;ve deployed earlier in &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;this series&lt;/a&gt;. In some cases, we need to pass configuration files, variables, or other information to our apps.&lt;/p&gt;
&lt;p&gt;The theory for this post is pretty simple: ConfigMaps store key/value pair information in an object that can be retrieved by your containers. This configuration data can make your applications more portable.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - DNS</title>
      <link>https://theithollow.com/2019/02/18/kubernetes-dns/</link>
      <pubDate>Mon, 18 Feb 2019 15:00:16 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/18/kubernetes-dns/</guid>
      <description>&lt;p&gt;DNS is a critical service in any system. Kubernetes is no different, but it implements its own domain name system within your Kubernetes cluster. This post explores the details that you need to know to operate a k8s cluster properly.&lt;/p&gt;
&lt;h2 id=&#34;kubernetes-dns---the-theory&#34;&gt;Kubernetes DNS - The theory&lt;/h2&gt;
&lt;p&gt;I don&amp;rsquo;t want to dive into DNS too much since it&amp;rsquo;s a core service most should be familiar with. But at a really high level, DNS translates an easily remembered name such as &amp;ldquo;theithollow.com&amp;rdquo; into an IP address that might be changing. Every network has a DNS server, but Kubernetes implements its own DNS within the cluster to make connecting to containers a simple task.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Ingress</title>
      <link>https://theithollow.com/2019/02/13/kubernetes-ingress/</link>
      <pubDate>Wed, 13 Feb 2019 15:00:46 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/13/kubernetes-ingress/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s time to look closer at how we access our containers from outside the Kubernetes cluster. We&amp;rsquo;ve talked about Services with NodePorts, LoadBalancers, etc., but a better way to handle ingress might be to use an ingress-controller to proxy our requests to the right backend service. This post will take us through how to integrate an ingress-controller into our Kubernetes cluster.&lt;/p&gt;
&lt;h2 id=&#34;ingress-controllers---the-theory&#34;&gt;Ingress Controllers - The Theory&lt;/h2&gt;
&lt;p&gt;Let&amp;rsquo;s first talk about why we&amp;rsquo;d want to use an ingress controller in the first place. Take an example web application like you might have for a retail store. That web application might have an index page at &amp;ldquo;&lt;a href=&#34;http://store-name.com/&#34;&gt;http://store-name.com/&lt;/a&gt;&amp;rdquo;, a shopping cart page at &amp;ldquo;&lt;a href=&#34;http://store-name.com/cart&#34;&gt;http://store-name.com/cart&lt;/a&gt;&amp;rdquo;, and an API URI at &amp;ldquo;&lt;a href=&#34;http://store-name.com/api&#34;&gt;http://store-name.com/api&lt;/a&gt;&amp;rdquo;. We could build all of these in a single container, but perhaps each of those becomes its own set of pods so that they can all scale out independently. If the API needs more resources, we can just increase the number of pods and nodes for the api service and leave the / and the /cart services alone. It also allows multiple groups to work on different parts simultaneously, but we&amp;rsquo;re starting to drift off the point, which hopefully you get now.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - KUBECONFIG and Context</title>
      <link>https://theithollow.com/2019/02/11/kubernetes-kubeconfig-and-context/</link>
      <pubDate>Mon, 11 Feb 2019 15:00:26 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/11/kubernetes-kubeconfig-and-context/</guid>
      <description>&lt;p&gt;You&amp;rsquo;ve been working with Kubernetes for a while now, and no doubt you have lots of clusters and namespaces to deal with. This might be a good time to introduce Kubernetes KUBECONFIG files and context so you can more easily use all of these different resources.&lt;/p&gt;
&lt;h2 id=&#34;kubeconfig-and-context---the-theory&#34;&gt;KUBECONFIG and Context - The Theory&lt;/h2&gt;
&lt;p&gt;When you first set up your Kubernetes cluster you created a config file, likely stored in your $HOME/.kube directory. This is the KUBECONFIG file and it is used to store information about your connection to the Kubernetes cluster. When you use kubectl to execute commands, it gets the correct communication information from this KUBECONFIG file. This is why you would&amp;rsquo;ve needed to set the KUBECONFIG environment variable (or use the default $HOME/.kube/config location) so that the file could be found and used correctly by the kubectl commands.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Namespaces</title>
      <link>https://theithollow.com/2019/02/06/kubernetes-namespaces/</link>
      <pubDate>Wed, 06 Feb 2019 15:00:13 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/06/kubernetes-namespaces/</guid>
      <description>&lt;p&gt;In this post we&amp;rsquo;ll start exploring ways that you might be able to better manage your Kubernetes cluster for security or organizational purposes. Namespaces become a big piece of how your Kubernetes cluster operates and who sees what inside your cluster.&lt;/p&gt;
&lt;h2 id=&#34;namespaces---the-theory&#34;&gt;Namespaces - The Theory&lt;/h2&gt;
&lt;p&gt;The easiest way to think of a namespace is that it&amp;rsquo;s a logical separation of your Kubernetes Cluster. Just like you might have segmented a physical server into several virtual servers, we can segment our Kubernetes cluster into namespaces. Namespaces are used to isolate resources within the control plane. For example, if we were to deploy a pod in two different namespaces, an administrator running the &amp;ldquo;get pods&amp;rdquo; command may only see the pods in one of the namespaces. The pods could still communicate with each other across namespaces, however.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Service Publishing</title>
      <link>https://theithollow.com/2019/02/05/kubernetes-service-publishing/</link>
      <pubDate>Tue, 05 Feb 2019 16:30:54 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/05/kubernetes-service-publishing/</guid>
      <description>&lt;p&gt;A critical part of deploying containers within a Kubernetes cluster is understanding how they use the network. In &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;previous posts&lt;/a&gt; we&amp;rsquo;ve deployed pods and services and were able to access them from a client such as a laptop, but how did that work exactly? I mean, we had a bunch of ports configured in our manifest files, so what do they mean? And what do we do if we have more than one pod that wants to use the same port like 443 for https?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Endpoints</title>
      <link>https://theithollow.com/2019/02/04/kubernetes-endpoints/</link>
      <pubDate>Mon, 04 Feb 2019 15:00:02 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/04/kubernetes-endpoints/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s quite possible that you could have a Kubernetes cluster but never have to know what an endpoint is or does, even though you&amp;rsquo;re using them behind the scenes. Just in case you need to use one though, or if you need to do some troubleshooting, we&amp;rsquo;ll cover the basics of Kubernetes endpoints in this post.&lt;/p&gt;
&lt;h2 id=&#34;endpoints---the-theory&#34;&gt;Endpoints - The Theory&lt;/h2&gt;
&lt;p&gt;During the &lt;a href=&#34;https://theithollow.com/?p=9427&#34;&gt;post&lt;/a&gt; where we first learned about Kubernetes Services, we saw that we could use labels to match a frontend service with a backend pod automatically by using a selector. If any new pods had a specific label, the service would know how to send traffic to them. Well, the way that the service knows to do this is by adding this mapping to an endpoint. Endpoints track the IP addresses of the objects the service sends traffic to. When a service selector matches a pod label, that IP address is added to your endpoints, and if this is all you&amp;rsquo;re doing, you don&amp;rsquo;t really need to know much about endpoints. However, you can have Services where the endpoint is a server outside of your cluster or in a different namespace (which we haven&amp;rsquo;t covered yet).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Services and Labels</title>
      <link>https://theithollow.com/2019/01/31/kubernetes-services-and-labels/</link>
      <pubDate>Thu, 31 Jan 2019 15:00:54 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/31/kubernetes-services-and-labels/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been following &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;the series&lt;/a&gt;, you may be thinking that we&amp;rsquo;ve built ourselves a problem. You&amp;rsquo;ll recall that we&amp;rsquo;ve now learned about Deployments so that we can roll out new pods when we do upgrades, and replica sets can spin up new pods when one dies. Sounds great, but remember that each of those containers has a different IP address. Now, I know we haven&amp;rsquo;t accessed any of those pods yet, but you can imagine that it would be a real pain to have to go look up an IP address every time a pod was replaced, wouldn&amp;rsquo;t it? This post covers Kubernetes Services and how they are used to address this problem, and at the end we&amp;rsquo;ll access one of our pods &amp;hellip; finally.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Deployments</title>
      <link>https://theithollow.com/2019/01/30/kubernetes-deployments/</link>
      <pubDate>Wed, 30 Jan 2019 15:01:37 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/30/kubernetes-deployments/</guid>
      <description>&lt;p&gt;After following the previous posts, we should feel pretty good about deploying our &lt;a href=&#34;https://theithollow.com/2019/01/21/kubernetes-pods/&#34;&gt;pods&lt;/a&gt; and ensuring they are highly available. We&amp;rsquo;ve learned about naked pods and then &lt;a href=&#34;https://theithollow.com/2019/01/28/kubernetes-replica-sets/&#34;&gt;replica sets&lt;/a&gt; to make those pods more highly available, but what about when we need to roll out a new version of our pods? We don&amp;rsquo;t want an outage when our pods are replaced with a new version, do we? This is where &amp;ldquo;Deployments&amp;rdquo; come into play.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Replica Sets</title>
      <link>https://theithollow.com/2019/01/28/kubernetes-replica-sets/</link>
      <pubDate>Mon, 28 Jan 2019 15:00:59 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/28/kubernetes-replica-sets/</guid>
      <description>&lt;p&gt;In a &lt;a href=&#34;https://theithollow.com/2019/01/21/kubernetes-pods/&#34;&gt;previous post&lt;/a&gt; we covered the use of pods and deployed some &amp;ldquo;naked pods&amp;rdquo; in our Kubernetes cluster. In this post we&amp;rsquo;ll expand our use of pods with Replica Sets.&lt;/p&gt;
&lt;h2 id=&#34;replica-sets---the-theory&#34;&gt;Replica Sets - The Theory&lt;/h2&gt;
&lt;p&gt;One of the biggest reasons that we don&amp;rsquo;t deploy naked pods in production is that they aren&amp;rsquo;t trustworthy. By this I mean that we can&amp;rsquo;t count on them to always be running. Kubernetes doesn&amp;rsquo;t reschedule a naked pod if it dies. A pod could die for all kinds of reasons: the node it was running on failed, it ran out of resources, it was stopped for some reason, etc. If the pod dies, it stays dead until someone fixes it, which is not ideal. With containers we should expect them to be short lived anyway, so let&amp;rsquo;s plan for it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Getting Started with Kubernetes</title>
      <link>https://theithollow.com/2019/01/26/getting-started-with-kubernetes/</link>
      <pubDate>Sat, 26 Jan 2019 22:38:39 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/26/getting-started-with-kubernetes/</guid>
      <description>&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2019/01/kubernetesguide-1024x610.png&#34;/&gt; 
&lt;/figure&gt;

&lt;p&gt;The following posts are meant to get a beginner started with the process of understanding Kubernetes. They include basic level information to start understanding the concepts of the Kubernetes service and include both theory and examples.&lt;/p&gt;
&lt;p&gt;To follow along with the series, a Kubernetes cluster should be deployed, and admin permissions are needed to perform many of the steps. If you wish to follow along with each of the posts, a cluster with cloud provider integration may also be needed; in some cases we use a load balancer and elastic storage options.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Pods</title>
      <link>https://theithollow.com/2019/01/21/kubernetes-pods/</link>
      <pubDate>Mon, 21 Jan 2019 16:30:30 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/21/kubernetes-pods/</guid>
      <description>&lt;p&gt;We&amp;rsquo;ve got a Kubernetes cluster set up and we&amp;rsquo;re ready to start deploying some applications. Before we can deploy any of our containers in a Kubernetes environment, we&amp;rsquo;ll need to understand a little bit about pods.&lt;/p&gt;</description>
&lt;h2 id=&#34;pods---the-theory&#34;&gt;Pods - The Theory&lt;/h2&gt;
&lt;p&gt;In a Docker environment, the smallest unit you&amp;rsquo;d deal with is a container. In the Kubernetes world, you&amp;rsquo;ll work with a pod, and a pod consists of one or more containers. You cannot deploy a bare container in Kubernetes without it being deployed within a pod.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
