<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Containers on The IT Hollow</title>
    <link>https://theithollow.com/tags/containers/</link>
    <description>Recent content in Containers on The IT Hollow</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 04 Jan 2021 21:50:57 +0000</lastBuildDate>
    <atom:link href="https://theithollow.com/tags/containers/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Enable the Harbor Registry on vSphere 7 with Tanzu</title>
      <link>https://theithollow.com/2021/01/04/enable-the-harbor-registry-on-vsphere-7-with-tanzu/</link>
      <pubDate>Mon, 04 Jan 2021 21:50:57 +0000</pubDate>
      <guid>https://theithollow.com/2021/01/04/enable-the-harbor-registry-on-vsphere-7-with-tanzu/</guid>
      <description>&lt;p&gt;Your Kubernetes clusters are up and running on vSphere 7 with Tanzu and you can&amp;rsquo;t wait to get started on your first project. But before you get to that, you might want to enable the Harbor registry so that you can privately store your own container images and use them with your clusters. Luckily, in vSphere 7 with Tanzu, the Harbor project has been integrated into the solution. You just have to turn it on and set it up.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Role Based Access</title>
      <link>https://theithollow.com/2019/05/20/kubernetes-role-based-access/</link>
      <pubDate>Mon, 20 May 2019 14:10:48 +0000</pubDate>
      <guid>https://theithollow.com/2019/05/20/kubernetes-role-based-access/</guid>
      <description>&lt;p&gt;As with all systems, we need to be able to secure a Kubernetes cluster so that not everyone has administrator privileges on it. I know this is a serious drag because no one wants to deal with a permission denied error when we try to get some work done, but permissions are important to ensure the safety of the system, especially when multiple groups access the same resources. We need a way to keep those groups from stepping on each other&amp;rsquo;s work, and we can do that through role-based access controls.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Cloud Providers and Storage Classes</title>
      <link>https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/</link>
      <pubDate>Wed, 13 Mar 2019 14:20:23 +0000</pubDate>
      <guid>https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/</guid>
      <description>&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/?p=9598&#34;&gt;previous post&lt;/a&gt; we covered Persistent Volumes (PV) and how we can use those volumes to store data that shouldn&amp;rsquo;t be deleted if a container is removed. The big problem with that post is that we had to create the volumes and persistent volume claims manually. It would sure be nice to have those volumes spun up automatically, wouldn&amp;rsquo;t it? Well, we can do that with a storage class. For a storage class to be really useful, we&amp;rsquo;ll have to tie our Kubernetes cluster in with an infrastructure provider such as AWS, Azure, or vSphere. This coordination is done through a cloud provider.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Service Publishing</title>
      <link>https://theithollow.com/2019/02/05/kubernetes-service-publishing/</link>
      <pubDate>Tue, 05 Feb 2019 16:30:54 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/05/kubernetes-service-publishing/</guid>
      <description>&lt;p&gt;A critical part of deploying containers within a Kubernetes cluster is understanding how they use the network. In &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;previous posts&lt;/a&gt; we&amp;rsquo;ve deployed pods and services and were able to access them from a client such as a laptop, but how did that work exactly? I mean, we had a bunch of ports configured in our manifest files, so what do they mean? And what do we do if we have more than one pod that wants to use the same port, like 443 for HTTPS?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Endpoints</title>
      <link>https://theithollow.com/2019/02/04/kubernetes-endpoints/</link>
      <pubDate>Mon, 04 Feb 2019 15:00:02 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/04/kubernetes-endpoints/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s quite possible that you could have a Kubernetes cluster but never have to know what an endpoint is or does, even though you&amp;rsquo;re using them behind the scenes. Just in case you need to use one though, or if you need to do some troubleshooting, we&amp;rsquo;ll cover the basics of Kubernetes endpoints in this post.&lt;/p&gt;
&lt;h2 id=&#34;endpoints---the-theory&#34;&gt;Endpoints - The Theory&lt;/h2&gt;
&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/?p=9427&#34;&gt;post&lt;/a&gt; where we first learned about Kubernetes Services, we saw that we could use labels to match a frontend service with a backend pod automatically by using a selector. If any new pod has a matching label, the service knows how to send traffic to it. The way the service knows to do this is by adding this mapping to an endpoint. Endpoints track the IP addresses of the objects the service sends traffic to. When a service selector matches a pod label, that IP address is added to your endpoints, and if this is all you&amp;rsquo;re doing, you don&amp;rsquo;t really need to know much about endpoints. However, you can have services where the endpoint is a server outside of your cluster or in a different namespace (which we haven&amp;rsquo;t covered yet).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Services and Labels</title>
      <link>https://theithollow.com/2019/01/31/kubernetes-services-and-labels/</link>
      <pubDate>Thu, 31 Jan 2019 15:00:54 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/31/kubernetes-services-and-labels/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been following &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;the series&lt;/a&gt;, you may be thinking that we&amp;rsquo;ve built ourselves a problem. You&amp;rsquo;ll recall that we&amp;rsquo;ve now learned about Deployments so that we can roll out new pods when we do upgrades, and replica sets can spin up new pods when one dies. Sounds great, but remember that each of those containers has a different IP address. Now, I know we haven&amp;rsquo;t accessed any of those pods yet, but you can imagine that it would be a real pain to have to go look up an IP address every time a pod was replaced, wouldn&amp;rsquo;t it? This post covers Kubernetes Services and how they are used to address this problem, and at the end of this post, we&amp;rsquo;ll access one of our pods &amp;hellip; finally.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Deployments</title>
      <link>https://theithollow.com/2019/01/30/kubernetes-deployments/</link>
      <pubDate>Wed, 30 Jan 2019 15:01:37 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/30/kubernetes-deployments/</guid>
      <description>&lt;p&gt;After following the previous posts, we should feel pretty good about deploying our &lt;a href=&#34;https://theithollow.com/2019/01/21/kubernetes-pods/&#34;&gt;pods&lt;/a&gt; and ensuring they are highly available. We&amp;rsquo;ve learned about naked pods and then &lt;a href=&#34;https://theithollow.com/2019/01/28/kubernetes-replica-sets/&#34;&gt;replica sets&lt;/a&gt; to make those pods more highly available, but what about when we need to create a new version of our pods? We don&amp;rsquo;t want to have an outage when our pods are replaced with a new version, do we? This is where &amp;ldquo;Deployments&amp;rdquo; come into play.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Replica Sets</title>
      <link>https://theithollow.com/2019/01/28/kubernetes-replica-sets/</link>
      <pubDate>Mon, 28 Jan 2019 15:00:59 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/28/kubernetes-replica-sets/</guid>
      <description>&lt;p&gt;In a &lt;a href=&#34;https://theithollow.com/2019/01/21/kubernetes-pods/&#34;&gt;previous post&lt;/a&gt; we covered the use of pods and deployed some &amp;ldquo;naked pods&amp;rdquo; in our Kubernetes cluster. In this post we&amp;rsquo;ll expand our use of pods with Replica Sets.&lt;/p&gt;
&lt;h2 id=&#34;replica-sets---the-theory&#34;&gt;Replica Sets - The Theory&lt;/h2&gt;
&lt;p&gt;One of the biggest reasons that we don&amp;rsquo;t deploy naked pods in production is that they are not trustworthy. By this I mean that we can&amp;rsquo;t count on them to always be running. Kubernetes doesn&amp;rsquo;t ensure that a pod will continue running if it crashes. A pod could die for all kinds of reasons: the node it was running on failed, it ran out of resources, it was stopped, and so on. If the pod dies, it stays dead until someone fixes it, which is not ideal, but with containers we should expect them to be short-lived anyway, so let&amp;rsquo;s plan for it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Pods</title>
      <link>https://theithollow.com/2019/01/21/kubernetes-pods/</link>
      <pubDate>Mon, 21 Jan 2019 16:30:30 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/21/kubernetes-pods/</guid>
      <description>&lt;p&gt;We&amp;rsquo;ve got a Kubernetes cluster set up and we&amp;rsquo;re ready to start deploying some applications. Before we can deploy any of our containers in a Kubernetes environment, we&amp;rsquo;ll need to understand a little bit about pods.&lt;/p&gt;</description>
&lt;h2 id=&#34;pods---the-theory&#34;&gt;Pods - The Theory&lt;/h2&gt;
&lt;p&gt;In a Docker environment, the smallest unit you&amp;rsquo;d deal with is a container. In the Kubernetes world, you&amp;rsquo;ll work with a pod, which consists of one or more containers. You cannot deploy a bare container in Kubernetes without it being deployed within a pod.&lt;/p&gt;</description>
    </item>
    <item>
      <title>How to Setup Amazon EKS with Windows Client</title>
      <link>https://theithollow.com/2018/07/30/how-to-setup-amazon-eks-with-windows-client/</link>
      <pubDate>Mon, 30 Jul 2018 16:05:09 +0000</pubDate>
      <guid>https://theithollow.com/2018/07/30/how-to-setup-amazon-eks-with-windows-client/</guid>
      <description>&lt;p&gt;We love Kubernetes. It&amp;rsquo;s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Orchestrating Containers with Nirmata</title>
      <link>https://theithollow.com/2017/07/27/orchestrating-containers-nirmata/</link>
      <pubDate>Thu, 27 Jul 2017 15:06:19 +0000</pubDate>
      <guid>https://theithollow.com/2017/07/27/orchestrating-containers-nirmata/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://assets.theithollow.com/wp-content/uploads/2017/07/logo-white-200x43.png&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2017/07/logo-white-200x43-150x32.png&#34;&gt;&lt;/a&gt; I had high expectations for the sessions being presented during &lt;a href=&#34;http://techfieldday.com/event/cfd2/&#34;&gt;Cloud Field Day 2&lt;/a&gt; hosted by GestaltIT in Silicon Valley during the week of June 26th-28th. The first of the sessions presented was from a company that I hadn&amp;rsquo;t heard of before called &lt;a href=&#34;http://nirmata.io&#34;&gt;Nirmata&lt;/a&gt;. I had no idea what the company did, but after the session I found out the name is an Indo-Aryan word meaning Architect or Director which makes a lot of sense considering what they do.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
