<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Containers on The IT Hollow</title>
    <link>https://theithollow.com/categories/containers/</link>
    <description>Recent content in Containers on The IT Hollow</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 20 May 2019 14:10:48 +0000</lastBuildDate>
    <atom:link href="https://theithollow.com/categories/containers/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Kubernetes - Role Based Access</title>
      <link>https://theithollow.com/2019/05/20/kubernetes-role-based-access/</link>
      <pubDate>Mon, 20 May 2019 14:10:48 +0000</pubDate>
      <guid>https://theithollow.com/2019/05/20/kubernetes-role-based-access/</guid>
      <description>&lt;p&gt;As with all systems, we need to be able to secure a Kubernetes cluster so that not everyone has administrator privileges on it. I know this is a serious drag, because no one wants to deal with a permission-denied error when trying to get some work done, but permissions are important to ensure the safety of the system, especially when multiple groups access the same resources. We need a way to keep those groups from stepping on each other&amp;rsquo;s work, and we can do that through role-based access controls.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - StatefulSets</title>
      <link>https://theithollow.com/2019/04/01/kubernetes-statefulsets/</link>
      <pubDate>Mon, 01 Apr 2019 14:20:48 +0000</pubDate>
      <guid>https://theithollow.com/2019/04/01/kubernetes-statefulsets/</guid>
      <description>&lt;p&gt;We love deployments and replica sets because they make sure that our containers are always in our desired state. If a container fails for some reason, a new one is created to replace it. But what do we do when the deployment order of our containers matters? For that, we look for help from Kubernetes StatefulSets.&lt;/p&gt;
&lt;h2 id=&#34;statefulsets---the-theory&#34;&gt;StatefulSets - The Theory&lt;/h2&gt;
&lt;p&gt;StatefulSets work much like Deployments do. They contain identical container specs, but they ensure an order for the deployment. Instead of all the pods being deployed at the same time, a StatefulSet deploys its pods sequentially: the first pod is deployed and ready before the next pod starts. (NOTE: it is possible to deploy pods in parallel if you need to, but that might confuse your understanding of StatefulSets for now, so ignore it.) Each of these pods has its own identity and is named with a unique ID so that it can be referenced.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Cloud Providers and Storage Classes</title>
      <link>https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/</link>
      <pubDate>Wed, 13 Mar 2019 14:20:23 +0000</pubDate>
      <guid>https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/</guid>
      <description>&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/?p=9598&#34;&gt;previous post&lt;/a&gt; we covered Persistent Volumes (PV) and how we can use those volumes to store data that shouldn&amp;rsquo;t be deleted if a container is removed. The big problem with that approach is that we have to manually create the volumes and persistent volume claims. It would sure be nice to have those volumes spun up automatically, wouldn&amp;rsquo;t it? Well, we can do that with a storage class. For a storage class to be really useful, we&amp;rsquo;ll have to tie our Kubernetes cluster in with our infrastructure provider, such as AWS, Azure, or vSphere. This coordination is done through a cloud provider.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Persistent Volumes</title>
      <link>https://theithollow.com/2019/03/04/kubernetes-persistent-volumes/</link>
      <pubDate>Mon, 04 Mar 2019 15:00:51 +0000</pubDate>
      <guid>https://theithollow.com/2019/03/04/kubernetes-persistent-volumes/</guid>
      <description>&lt;p&gt;Containers are often short lived. They might scale based on need, and they will be redeployed when issues occur. This functionality is welcome, but sometimes we have state to worry about, and state is not meant to be short lived. Kubernetes persistent volumes can help resolve this discrepancy.&lt;/p&gt;
&lt;h2 id=&#34;volumes---the-theory&#34;&gt;Volumes - The Theory&lt;/h2&gt;
&lt;p&gt;In the Kubernetes world, persistent storage is broken down into two kinds of objects: a Persistent Volume (PV) and a Persistent Volume Claim (PVC). First, let&amp;rsquo;s tackle the Persistent Volume.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Secrets</title>
      <link>https://theithollow.com/2019/02/25/kubernetes-secrets/</link>
      <pubDate>Mon, 25 Feb 2019 15:00:56 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/25/kubernetes-secrets/</guid>
      <description>&lt;p&gt;Secret, Secret, I&amp;rsquo;ve got a secret! OK, enough of the Styx lyrics; this is serious business. In the &lt;a href=&#34;https://theithollow.com/?p=9583&#34;&gt;previous post we used ConfigMaps&lt;/a&gt; to store a database connection string. That is probably not the best idea for something containing a sensitive password. Luckily, Kubernetes provides a way to store sensitive configuration items, and it&amp;rsquo;s called a &amp;ldquo;secret&amp;rdquo;.&lt;/p&gt;
&lt;h2 id=&#34;secrets---the-theory&#34;&gt;Secrets - The Theory&lt;/h2&gt;
&lt;p&gt;The short answer to understanding secrets is to think of a ConfigMap, which we discussed in a &lt;a href=&#34;https://theithollow.com/?p=9583&#34;&gt;previous post&lt;/a&gt; in this &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;series&lt;/a&gt;, but with values that aren&amp;rsquo;t stored in clear text.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - ConfigMaps</title>
      <link>https://theithollow.com/2019/02/20/kubernetes-configmaps/</link>
      <pubDate>Wed, 20 Feb 2019 15:00:40 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/20/kubernetes-configmaps/</guid>
      <description>&lt;p&gt;Sometimes you need to pass additional configuration to your running containers. Kubernetes has an object to help with this, and this post covers it: the ConfigMap.&lt;/p&gt;
&lt;h2 id=&#34;configmaps---the-theory&#34;&gt;ConfigMaps - The Theory&lt;/h2&gt;
&lt;p&gt;Not all of our applications can be as simple as the basic nginx containers we&amp;rsquo;ve deployed earlier in &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;this series&lt;/a&gt;. In some cases, we need to pass configuration files, variables, or other information to our apps.&lt;/p&gt;
&lt;p&gt;The theory for this post is pretty simple: ConfigMaps store key/value pairs in an object that can be retrieved by your containers. This configuration data can make your applications more portable.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Service Publishing</title>
      <link>https://theithollow.com/2019/02/05/kubernetes-service-publishing/</link>
      <pubDate>Tue, 05 Feb 2019 16:30:54 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/05/kubernetes-service-publishing/</guid>
      <description>&lt;p&gt;A critical part of deploying containers within a Kubernetes cluster is understanding how they use the network. In &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;previous posts&lt;/a&gt; we&amp;rsquo;ve deployed pods and services and were able to access them from a client such as a laptop, but how did that work, exactly? We had a bunch of ports configured in our manifest files, so what do they mean? And what do we do if more than one pod wants to use the same port, like 443 for HTTPS?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Endpoints</title>
      <link>https://theithollow.com/2019/02/04/kubernetes-endpoints/</link>
      <pubDate>Mon, 04 Feb 2019 15:00:02 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/04/kubernetes-endpoints/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s quite possible that you could have a Kubernetes cluster but never have to know what an endpoint is or does, even though you&amp;rsquo;re using them behind the scenes. Just in case you need to use one though, or if you need to do some troubleshooting, we&amp;rsquo;ll cover the basics of Kubernetes endpoints in this post.&lt;/p&gt;
&lt;h2 id=&#34;endpoints---the-theory&#34;&gt;Endpoints - The Theory&lt;/h2&gt;
&lt;p&gt;During the &lt;a href=&#34;https://theithollow.com/?p=9427&#34;&gt;post&lt;/a&gt; where we first learned about Kubernetes Services, we saw that we could use labels to match a frontend service with a backend pod automatically by using a selector. If any new pods had a specific label, the service would know how to send traffic to them. The way the service knows to do this is by adding that mapping to an endpoint. Endpoints track the IP addresses of the objects the service sends traffic to. When a service selector matches a pod label, that IP address is added to your endpoints, and if this is all you&amp;rsquo;re doing, you don&amp;rsquo;t really need to know much about endpoints. However, you can have Services where the endpoint is a server outside of your cluster or in a different namespace (which we haven&amp;rsquo;t covered yet).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Services and Labels</title>
      <link>https://theithollow.com/2019/01/31/kubernetes-services-and-labels/</link>
      <pubDate>Thu, 31 Jan 2019 15:00:54 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/31/kubernetes-services-and-labels/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been following &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;the series&lt;/a&gt;, you may be thinking that we&amp;rsquo;ve built ourselves a problem. You&amp;rsquo;ll recall that we&amp;rsquo;ve now learned about Deployments so that we can roll out new pods when we do upgrades, and replica sets can spin up new pods when one dies. Sounds great, but remember that each of those pods has a different IP address. Now, I know we haven&amp;rsquo;t accessed any of those pods yet, but you can imagine that it would be a real pain to have to go look up an IP address every time a pod was replaced, wouldn&amp;rsquo;t it? This post covers Kubernetes Services and how they address this problem, and at the end of this post, we&amp;rsquo;ll access one of our pods &amp;hellip; finally.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Deployments</title>
      <link>https://theithollow.com/2019/01/30/kubernetes-deployments/</link>
      <pubDate>Wed, 30 Jan 2019 15:01:37 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/30/kubernetes-deployments/</guid>
      <description>&lt;p&gt;After following the previous posts, we should feel pretty good about deploying our &lt;a href=&#34;https://theithollow.com/2019/01/21/kubernetes-pods/&#34;&gt;pods&lt;/a&gt; and ensuring they are highly available. We&amp;rsquo;ve learned about naked pods and then &lt;a href=&#34;https://theithollow.com/2019/01/28/kubernetes-replica-sets/&#34;&gt;replica sets&lt;/a&gt; to make those pods more highly available, but what about when we need to roll out a new version of our pods? We don&amp;rsquo;t want an outage while our pods are replaced with a new version, do we? This is where &amp;ldquo;Deployments&amp;rdquo; come into play.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Replica Sets</title>
      <link>https://theithollow.com/2019/01/28/kubernetes-replica-sets/</link>
      <pubDate>Mon, 28 Jan 2019 15:00:59 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/28/kubernetes-replica-sets/</guid>
      <description>&lt;p&gt;In a &lt;a href=&#34;https://theithollow.com/2019/01/21/kubernetes-pods/&#34;&gt;previous post&lt;/a&gt; we covered the use of pods and deployed some &amp;ldquo;naked pods&amp;rdquo; in our Kubernetes cluster. In this post we&amp;rsquo;ll expand our use of pods with Replica Sets.&lt;/p&gt;
&lt;h2 id=&#34;replica-sets---the-theory&#34;&gt;Replica Sets - The Theory&lt;/h2&gt;
&lt;p&gt;One of the biggest reasons we don&amp;rsquo;t deploy naked pods in production is that they are not trustworthy. By this I mean that we can&amp;rsquo;t count on them to always be running. Kubernetes doesn&amp;rsquo;t ensure that a naked pod will continue running if it dies. A pod could die for all kinds of reasons: the node it was running on failed, it ran out of resources, it was stopped for some reason, and so on. If the pod dies, it stays dead until someone fixes it, which is not ideal. But with containers we should expect them to be short lived anyway, so let&amp;rsquo;s plan for it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Pods</title>
      <link>https://theithollow.com/2019/01/21/kubernetes-pods/</link>
      <pubDate>Mon, 21 Jan 2019 16:30:30 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/21/kubernetes-pods/</guid>
      <description>&lt;p&gt;We&amp;rsquo;ve got a Kubernetes cluster set up and we&amp;rsquo;re ready to start deploying some applications. Before we can deploy any of our containers in a Kubernetes environment, we&amp;rsquo;ll need to understand a little bit about pods.&lt;/p&gt;
&lt;h2 id=&#34;pods---the-theory&#34;&gt;Pods - The Theory&lt;/h2&gt;
&lt;p&gt;In a Docker environment, the smallest unit you&amp;rsquo;d deal with is a container. In the Kubernetes world, you&amp;rsquo;ll work with a pod, and a pod consists of one or more containers. You cannot deploy a bare container in Kubernetes without it being deployed within a pod.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploy Kubernetes Using Kubeadm - CentOS7</title>
      <link>https://theithollow.com/2019/01/14/deploy-kubernetes-using-kubeadm-centos7/</link>
      <pubDate>Mon, 14 Jan 2019 15:35:36 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/14/deploy-kubernetes-using-kubeadm-centos7/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve been wanting a playground to mess around with Kubernetes (k8s) deployments for a while and didn&amp;rsquo;t want to spend the money on a cloud solution like &lt;a href=&#34;https://aws.amazon.com/eks/?nc2=h_m1&#34;&gt;AWS Elastic Container Service for Kubernetes&lt;/a&gt; or &lt;a href=&#34;https://cloud.google.com/kubernetes-engine/&#34;&gt;Google Kubernetes Engine&lt;/a&gt;. While these hosted solutions provide additional features, such as the ability to spin up a load balancer, they also cost money every hour they&amp;rsquo;re available, and I&amp;rsquo;m planning on leaving my cluster running. Also, from a learning perspective, there is no greater way to learn the underpinnings of a solution than having to deploy and manage it on your own. Therefore, I set out to deploy k8s in my vSphere home lab on some CentOS 7 virtual machines using Kubeadm. I found several articles on how to do this, but I got off track a few times and thought another blog post with step-by-step instructions and screenshots would help others. Hopefully it helps you. Let&amp;rsquo;s begin.&lt;/p&gt;</description>
    </item>
    <item>
      <title>How to Setup Amazon EKS with Mac Client</title>
      <link>https://theithollow.com/2018/07/31/how-to-setup-amazon-eks-with-mac-client/</link>
      <pubDate>Tue, 31 Jul 2018 14:06:02 +0000</pubDate>
      <guid>https://theithollow.com/2018/07/31/how-to-setup-amazon-eks-with-mac-client/</guid>
      <description>&lt;p&gt;We love Kubernetes. It&amp;rsquo;s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.&lt;/p&gt;</description>
    </item>
    <item>
      <title>How to Setup Amazon EKS with Windows Client</title>
      <link>https://theithollow.com/2018/07/30/how-to-setup-amazon-eks-with-windows-client/</link>
      <pubDate>Mon, 30 Jul 2018 16:05:09 +0000</pubDate>
      <guid>https://theithollow.com/2018/07/30/how-to-setup-amazon-eks-with-windows-client/</guid>
      <description>&lt;p&gt;We love Kubernetes. It&amp;rsquo;s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
