<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Series on The IT Hollow</title>
    <link>https://theithollow.com/categories/series/</link>
    <description>Recent content in Series on The IT Hollow</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 24 Aug 2020 14:15:00 +0000</lastBuildDate>
    <atom:link href="https://theithollow.com/categories/series/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Connecting to a Supervisor Namespace</title>
      <link>https://theithollow.com/2020/08/24/connecting-to-a-supervisor-namespace/</link>
      <pubDate>Mon, 24 Aug 2020 14:15:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/08/24/connecting-to-a-supervisor-namespace/</guid>
      <description>&lt;p&gt;In this post we&amp;rsquo;ll finally connect to our Supervisor Cluster Namespace through the Kubernetes CLI and run some commands for the first time.&lt;/p&gt;
&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/2020/08/17/creating-supervisor-namespaces/&#34;&gt;last post&lt;/a&gt; we created a namespace within the Supervisor Cluster and assigned some resource allocations and permissions for our example development user. Now it&amp;rsquo;s time to access that namespace so that real work can be done using the platform.&lt;/p&gt;
&lt;p&gt;First, log in to vCenter again with the &lt;a href=&#34;mailto:administrator@vsphere.local&#34;&gt;administrator@vsphere.local&lt;/a&gt; account and navigate to the namespace that was previously created. You should see a screen similar to the one where we configured our permissions. In the &lt;code&gt;Status&lt;/code&gt; tile, click one of the links to either open the page directly in a browser or copy the URL and open it in a browser yourself.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Creating Supervisor Namespaces</title>
      <link>https://theithollow.com/2020/08/17/creating-supervisor-namespaces/</link>
      <pubDate>Mon, 17 Aug 2020 14:15:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/08/17/creating-supervisor-namespaces/</guid>
      <description>&lt;p&gt;Congratulations, you&amp;rsquo;ve deployed the Workload Management components for your vSphere 7 cluster. If you&amp;rsquo;ve been following along with the series so far, you&amp;rsquo;ll have left off with a workload management cluster created and ready for you to begin configuring your cluster for use with Kubernetes.&lt;/p&gt;
&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2020/08/namespaces0-3.png&#34;/&gt; 
&lt;/figure&gt;

&lt;p&gt;The next step in the process is to create a namespace. Before we do that, it&amp;rsquo;s probably useful to recap what a namespace is used for.&lt;/p&gt;
&lt;h2 id=&#34;namespaces-the-theory&#34;&gt;Namespaces - The Theory&lt;/h2&gt;
&lt;p&gt;Depending on your past experiences, a namespace will likely seem familiar to you in some fashion. If you have a Kubernetes background, you&amp;rsquo;ll be familiar with namespaces as a way to set permissions for a group of users (or a project, etc.) and for assigning resources. Alternatively, if you have a vSphere background, you&amp;rsquo;re used to using things like Resource Pools to set resource allocation.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vSphere 7 with Tanzu - Getting Started Guide</title>
      <link>https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-getting-started-guide/</link>
      <pubDate>Tue, 14 Jul 2020 14:16:18 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-getting-started-guide/</guid>
      <description>&lt;p&gt;VMware released the new version of vSphere with functionality to build and manage Kubernetes clusters. This series details how to deploy, configure, and use a lab running vSphere 7 with Kubernetes enabled.&lt;/p&gt;
&lt;p&gt;The instructions within this post are broken out into sections. vSphere 7 requires prerequisites at the vSphere level as well as a full NSX-T deployment. Follow these steps in order to build your own vSphere 7 with Kubernetes lab and start using Kubernetes built right into vSphere.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes Validating Admission Controllers</title>
      <link>https://theithollow.com/2020/05/26/kubernetes-validating-admission-controllers/</link>
      <pubDate>Tue, 26 May 2020 15:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/05/26/kubernetes-validating-admission-controllers/</guid>
      <description>&lt;p&gt;Hey! Who deployed this container in our shared Kubernetes cluster without putting resource limits on it? Why don&amp;rsquo;t we have any labels on these containers so we can report for charge back purposes? Who allowed this image to be used in our production cluster?&lt;/p&gt;
&lt;p&gt;If any of the questions above sound familiar, it&amp;rsquo;s probably time to learn about Validating Admission Controllers.&lt;/p&gt;
&lt;h2 id=&#34;validating-admission-controllers---the-theory&#34;&gt;Validating Admission Controllers - The Theory&lt;/h2&gt;
&lt;p&gt;Admission Controllers act as roadblocks before objects are deployed to a Kubernetes cluster. The examples from the section above are common rules that companies might want to enforce before objects get pushed into a production Kubernetes cluster. These admission controllers can be custom code that you&amp;rsquo;ve written yourself or a third-party admission controller. A common open-source project that manages admission control rules is &lt;a href=&#34;https://www.openpolicyagent.org/&#34;&gt;Open Policy Agent (OPA)&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes Liveness and Readiness Probes</title>
      <link>https://theithollow.com/2020/05/18/kubernetes-liveness-and-readiness-probes/</link>
      <pubDate>Mon, 18 May 2020 14:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/05/18/kubernetes-liveness-and-readiness-probes/</guid>
      <description>&lt;p&gt;Just because a container is in a running state does not mean that the process running within that container is functional. We can use Kubernetes Readiness and Liveness probes to determine whether an application is ready to receive traffic or not.&lt;/p&gt;
&lt;h2 id=&#34;liveness-and-readiness-probes---the-theory&#34;&gt;Liveness and Readiness Probes - The Theory&lt;/h2&gt;
&lt;p&gt;On each node of a Kubernetes cluster, there is a Kubelet running that manages the pods on that particular node. It&amp;rsquo;s responsible for getting images pulled down to the node, reporting the node&amp;rsquo;s health, and restarting failed containers. But how does the Kubelet know if there is a failed container?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes Pod Auto-scaling</title>
      <link>https://theithollow.com/2020/05/04/kubernetes-pod-auto-scaling/</link>
      <pubDate>Mon, 04 May 2020 14:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/05/04/kubernetes-pod-auto-scaling/</guid>
      <description>&lt;p&gt;You&amp;rsquo;ve built your Kubernetes cluster(s). You&amp;rsquo;ve built your apps in containers. You&amp;rsquo;ve architected your services so that losing a single instance doesn&amp;rsquo;t cause an outage. And you&amp;rsquo;re ready for cloud scale. You deploy your application and are waiting to sit back and &amp;ldquo;profit.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;When your application spins up and starts taking on load, you are able to change the number of replicas to handle the additional load, but what about the promises of cloud and scaling? Wouldn&amp;rsquo;t it be better to deploy the application and let the platform scale the application automatically?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes Resource Requests and Limits</title>
      <link>https://theithollow.com/2020/04/20/kubernetes-resource-requests-and-limits/</link>
      <pubDate>Mon, 20 Apr 2020 15:00:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/04/20/kubernetes-resource-requests-and-limits/</guid>
      <description>&lt;p&gt;Containerizing applications and running them on Kubernetes doesn&amp;rsquo;t mean we can forget all about resource utilization. Our thought process may have changed because we can much more easily scale-out our application as demand increases, but many times we need to consider how our containers might fight with each other for resources. Resource Requests and Limits can be used to help stop the &amp;ldquo;noisy neighbor&amp;rdquo; problem in a Kubernetes Cluster.&lt;/p&gt;
&lt;h2 id=&#34;resource-requests-and-limits---the-theory&#34;&gt;Resource Requests and Limits - The Theory&lt;/h2&gt;
&lt;p&gt;Kubernetes uses the concept of a &amp;ldquo;Resource Request&amp;rdquo; and a &amp;ldquo;Resource Limit&amp;rdquo; when defining how many resources a container within a pod should receive. Let&amp;rsquo;s look at each of these on its own, starting with resource requests.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploy Kubernetes on AWS</title>
      <link>https://theithollow.com/2020/01/13/deploy-kubernetes-on-aws/</link>
      <pubDate>Mon, 13 Jan 2020 15:15:39 +0000</pubDate>
      <guid>https://theithollow.com/2020/01/13/deploy-kubernetes-on-aws/</guid>
      <description>&lt;p&gt;The way you deploy Kubernetes (k8s) on AWS will be similar to how it was done in a &lt;a href=&#34;https://theithollow.com/2020/01/08/deploy-kubernetes-on-vsphere/&#34;&gt;previous post on vSphere&lt;/a&gt;. You still set up nodes, and you still deploy kubeadm and kubectl, but there are a few differences when you change your cloud provider. For instance, on AWS we can use the LoadBalancer resource against the k8s API and have AWS provision an elastic load balancer for us. These features take a few extra tweaks in AWS.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploy Kubernetes on vSphere</title>
      <link>https://theithollow.com/2020/01/08/deploy-kubernetes-on-vsphere/</link>
      <pubDate>Wed, 08 Jan 2020 15:00:04 +0000</pubDate>
      <guid>https://theithollow.com/2020/01/08/deploy-kubernetes-on-vsphere/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;re struggling to deploy Kubernetes (k8s) clusters, you&amp;rsquo;re not alone. There are a bunch of different ways to deploy Kubernetes, and there are different settings depending on which cloud provider you&amp;rsquo;re using. This post will focus on installing Kubernetes on vSphere with kubeadm. At the end of this post, you should have what you need to manually deploy k8s in a vSphere environment on Ubuntu.&lt;/p&gt;
&lt;h2 id=&#34;prerequisites&#34;&gt;Prerequisites&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; This tutorial uses the &amp;ldquo;in-tree&amp;rdquo; cloud provider for vSphere. This is not the preferred method for deployment going forward. More details can be found &lt;a href=&#34;https://cloud-provider-vsphere.sigs.k8s.io/concepts/in_tree_vs_out_of_tree.html&#34;&gt;here&lt;/a&gt; for reference.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Jobs and CronJobs</title>
      <link>https://theithollow.com/2019/12/16/kubernetes-jobs-and-cronjobs/</link>
      <pubDate>Mon, 16 Dec 2019 15:05:36 +0000</pubDate>
      <guid>https://theithollow.com/2019/12/16/kubernetes-jobs-and-cronjobs/</guid>
      <description>&lt;p&gt;Sometimes we need to run a container to do a specific task, and when it&amp;rsquo;s completed, we want it to quit. Many containers, such as a web server, are deployed and run continuously. But other times we want to accomplish a single task and then quit. This is where a Job is a good choice.&lt;/p&gt;
&lt;h2 id=&#34;jobs-and-cronjobs---the-theory&#34;&gt;Jobs and CronJobs - The Theory&lt;/h2&gt;
&lt;p&gt;Perhaps, we need to run a batch process on demand. Maybe we built an automation routine for something and want to kick it off through the use of a container. We can do this by submitting a job to the Kubernetes API. Kubernetes will run the job to completion and then quit.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Pod Security Policies</title>
      <link>https://theithollow.com/2019/11/19/kubernetes-pod-security-policies/</link>
      <pubDate>Tue, 19 Nov 2019 15:05:04 +0000</pubDate>
      <guid>https://theithollow.com/2019/11/19/kubernetes-pod-security-policies/</guid>
      <description>&lt;p&gt;Securing and hardening our Kubernetes clusters is a must-do activity. We need to remember that containers are still just processes running on the host machines. Sometimes these processes can get more privileges on the Kubernetes node than they should if you don&amp;rsquo;t properly set up some pod security. This post explains how this could be done for your own clusters.&lt;/p&gt;
&lt;h2 id=&#34;pod-security-policies---the-theory&#34;&gt;Pod Security Policies - The Theory&lt;/h2&gt;
&lt;p&gt;Pod Security Policies are designed to limit what can be run on a Kubernetes cluster. Typical things that you might want to limit are pods that have privileged access, pods with access to the host network, and pods that have access to the host processes, just to name a few. Remember that a container isn&amp;rsquo;t as isolated as a VM, so we should take care to ensure our containers aren&amp;rsquo;t adversely affecting our nodes&amp;rsquo; health and security.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Network Policies</title>
      <link>https://theithollow.com/2019/10/21/kubernetes-network-policies/</link>
      <pubDate>Mon, 21 Oct 2019 14:05:08 +0000</pubDate>
      <guid>https://theithollow.com/2019/10/21/kubernetes-network-policies/</guid>
      <description>&lt;p&gt;In the traditional server world, we&amp;rsquo;ve gone to great lengths to ensure that we can micro-segment our servers instead of relying on a few select firewalls at strategically defined chokepoints. What do we do in the container world though? This is where network policies come into play.&lt;/p&gt;
&lt;h2 id=&#34;network-policies---the-theory&#34;&gt;Network Policies - The Theory&lt;/h2&gt;
&lt;p&gt;In a default deployment of a Kubernetes cluster, all of the pods deployed on the nodes can communicate with each other. Some security folks might not like to hear that, but never fear, we have ways to limit the communications between pods and they&amp;rsquo;re called network policies.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Desired State and Control Loops</title>
      <link>https://theithollow.com/2019/09/16/kubernetes-desired-state-and-control-loops/</link>
      <pubDate>Mon, 16 Sep 2019 14:05:30 +0000</pubDate>
      <guid>https://theithollow.com/2019/09/16/kubernetes-desired-state-and-control-loops/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve just gotten started with Kubernetes, you might be curious to know how the desired state is achieved. Think about it: you pass a YAML file to the API server and magically stuff happens. Not only that, but when disaster strikes (e.g. a pod crashes) Kubernetes also makes it right again so that it matches the desired state.&lt;/p&gt;
&lt;p&gt;The mechanism that allows Kubernetes to enforce this desired state is the control loop. The basics are pretty simple. A control loop can be thought of in three stages.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - DaemonSets</title>
      <link>https://theithollow.com/2019/08/13/kubernetes-daemonsets/</link>
      <pubDate>Tue, 13 Aug 2019 14:10:04 +0000</pubDate>
      <guid>https://theithollow.com/2019/08/13/kubernetes-daemonsets/</guid>
      <description>&lt;p&gt;DaemonSets can be a really useful tool for managing the health and operation of the pods within a Kubernetes cluster. In this post we&amp;rsquo;ll explore a use case for a DaemonSet, why we need them, and an example in the lab.&lt;/p&gt;
&lt;h2 id=&#34;daemonsets---the-theory&#34;&gt;DaemonSets - The Theory&lt;/h2&gt;
&lt;p&gt;DaemonSets are actually pretty easy to explain. A DaemonSet is a Kubernetes construct that ensures a pod is running on every (eligible) node in a cluster. This means that if we were to create a DaemonSet on our six-node cluster (3 masters, 3 workers), the DaemonSet would schedule the defined pods on each of the nodes for a total of six pods. Now, this assumes there are either no &lt;a href=&#34;https://theithollow.com/?p=9736&#34;&gt;taints on the nodes, or there are tolerations&lt;/a&gt; on the DaemonSets.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Taints and Tolerations</title>
      <link>https://theithollow.com/2019/07/29/kubernetes-taints-and-tolerations/</link>
      <pubDate>Mon, 29 Jul 2019 14:15:22 +0000</pubDate>
      <guid>https://theithollow.com/2019/07/29/kubernetes-taints-and-tolerations/</guid>
      <description>&lt;p&gt;One of the best things about Kubernetes is that I don&amp;rsquo;t have to think about which piece of hardware my container will run on when I deploy it. The Kubernetes scheduler can make that decision for me. This is great until I actually DO care about what node my container runs on. This post will examine one solution to pod placement: taints and tolerations.&lt;/p&gt;
&lt;h2 id=&#34;taints---the-theory&#34;&gt;Taints - The Theory&lt;/h2&gt;
&lt;p&gt;Suppose we had a Kubernetes cluster where we didn&amp;rsquo;t want any pods to run on a specific node. You might need to do this for a variety of reasons, such as:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Helm</title>
      <link>https://theithollow.com/2019/06/10/kubernetes-helm/</link>
      <pubDate>Mon, 10 Jun 2019 14:02:52 +0000</pubDate>
      <guid>https://theithollow.com/2019/06/10/kubernetes-helm/</guid>
      <description>&lt;p&gt;The Kubernetes series has now ventured into some non-native k8s discussions. Helm is a relatively common tool used in the industry and it makes sense to talk about why that is. This post covers the basics of Helm so we can make our own evaluations about its use in our Kubernetes environment.&lt;/p&gt;
&lt;h2 id=&#34;helm---the-theory&#34;&gt;Helm - The Theory&lt;/h2&gt;
&lt;p&gt;So what is Helm? In the simplest terms, it&amp;rsquo;s a package manager for Kubernetes.&lt;br&gt;
Think of it this way: Helm is to Kubernetes as yum/apt is to Linux. Yeah, sounds pretty neat now, doesn&amp;rsquo;t it?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Pod Backups</title>
      <link>https://theithollow.com/2019/06/03/kubernetes-pod-backups/</link>
      <pubDate>Mon, 03 Jun 2019 14:00:58 +0000</pubDate>
      <guid>https://theithollow.com/2019/06/03/kubernetes-pod-backups/</guid>
      <description>&lt;p&gt;The focus of this post is on pod-based backups, but this could also apply to Deployments, replica sets, etc. This is not a post about how to back up your Kubernetes cluster, including things like etcd, but rather the resources that have been deployed on the cluster. Pods have been used as an example to walk through how we can take backups of our applications once deployed in a Kubernetes cluster.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Role Based Access</title>
      <link>https://theithollow.com/2019/05/20/kubernetes-role-based-access/</link>
      <pubDate>Mon, 20 May 2019 14:10:48 +0000</pubDate>
      <guid>https://theithollow.com/2019/05/20/kubernetes-role-based-access/</guid>
      <description>&lt;p&gt;As with all systems, we need to be able to secure a Kubernetes cluster so that everyone doesn&amp;rsquo;t have administrator privileges on it. I know this is a serious drag because no one wants to deal with a permission-denied error when we try to get some work done, but permissions are important to ensure the safety of the system, especially when you have multiple groups accessing the same resources. We might need a way to keep those groups from stepping on each other&amp;rsquo;s work, and we can do that through role-based access controls.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - StatefulSets</title>
      <link>https://theithollow.com/2019/04/01/kubernetes-statefulsets/</link>
      <pubDate>Mon, 01 Apr 2019 14:20:48 +0000</pubDate>
      <guid>https://theithollow.com/2019/04/01/kubernetes-statefulsets/</guid>
      <description>&lt;p&gt;We love deployments and replica sets because they make sure that our containers are always in our desired state. If a container fails for some reason, a new one is created to replace it. But what do we do when the deployment order of our containers matters? For that, we look for help from Kubernetes StatefulSets.&lt;/p&gt;
&lt;h2 id=&#34;statefulsets---the-theory&#34;&gt;StatefulSets - The Theory&lt;/h2&gt;
&lt;p&gt;StatefulSets work much like a Deployment does. They contain identical container specs but they ensure an order for the deployment. Instead of all the pods being deployed at the same time, StatefulSets deploy the containers in sequential order where the first pod is deployed and ready before the next pod starts. (NOTE: it is possible to deploy pods in parallel if you need them to, but this might confuse your understanding of StatefulSets for now, so ignore that.) Each of these pods has its own identity and is named with a unique ID so that it can be referenced.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Cloud Providers and Storage Classes</title>
      <link>https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/</link>
      <pubDate>Wed, 13 Mar 2019 14:20:23 +0000</pubDate>
      <guid>https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/</guid>
      <description>&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/?p=9598&#34;&gt;previous post&lt;/a&gt; we covered Persistent Volumes (PV) and how we can use those volumes to store data that shouldn&amp;rsquo;t be deleted if a container is removed. The big problem with that post is that we have to manually create the volumes and persistent volume claims. It would sure be nice to have those volumes spun up automatically, wouldn&amp;rsquo;t it? Well, we can do that with a storage class. For a storage class to be really useful, we&amp;rsquo;ll have to tie our Kubernetes cluster in with an infrastructure provider such as AWS, Azure, or vSphere. This coordination is done through a cloud provider.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Persistent Volumes</title>
      <link>https://theithollow.com/2019/03/04/kubernetes-persistent-volumes/</link>
      <pubDate>Mon, 04 Mar 2019 15:00:51 +0000</pubDate>
      <guid>https://theithollow.com/2019/03/04/kubernetes-persistent-volumes/</guid>
      <description>&lt;p&gt;Containers are oftentimes short-lived. They might scale based on need, and will redeploy when issues occur. This functionality is welcome, but sometimes we have state to worry about, and state is not meant to be short-lived. Kubernetes persistent volumes can help to resolve this discrepancy.&lt;/p&gt;
&lt;h2 id=&#34;volumes---the-theory&#34;&gt;Volumes - The Theory&lt;/h2&gt;
&lt;p&gt;In the Kubernetes world, persistent storage is broken down into two kinds of objects: a Persistent Volume (PV) and a Persistent Volume Claim (PVC). First, let&amp;rsquo;s tackle a Persistent Volume.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Secrets</title>
      <link>https://theithollow.com/2019/02/25/kubernetes-secrets/</link>
      <pubDate>Mon, 25 Feb 2019 15:00:56 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/25/kubernetes-secrets/</guid>
      <description>&lt;p&gt;Secret, Secret, I&amp;rsquo;ve got a secret! OK, enough of the Styx lyrics, this is serious business. In the &lt;a href=&#34;https://theithollow.com/?p=9583&#34;&gt;previous post we used ConfigMaps&lt;/a&gt; to store a database connection string. That is probably not the best idea for something with a sensitive password in it. Luckily, Kubernetes provides a way to store sensitive configuration items, and it&amp;rsquo;s called a &amp;ldquo;secret&amp;rdquo;.&lt;/p&gt;
&lt;h2 id=&#34;secrets---the-theory&#34;&gt;Secrets - The Theory&lt;/h2&gt;
&lt;p&gt;The short answer to understanding secrets would be to think of a ConfigMap, which we have discussed in a &lt;a href=&#34;https://theithollow.com/?p=9583&#34;&gt;previous post&lt;/a&gt; in this &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;series&lt;/a&gt;, but one whose data is not stored in clear text.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - ConfigMaps</title>
      <link>https://theithollow.com/2019/02/20/kubernetes-configmaps/</link>
      <pubDate>Wed, 20 Feb 2019 15:00:40 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/20/kubernetes-configmaps/</guid>
      <description>&lt;p&gt;Sometimes you need to add additional configurations to your running containers. Kubernetes has an object to help with this and this post will cover those ConfigMaps.&lt;/p&gt;
&lt;h2 id=&#34;configmaps---the-theory&#34;&gt;ConfigMaps - The Theory&lt;/h2&gt;
&lt;p&gt;Not all of our applications can be as simple as the basic nginx containers we&amp;rsquo;ve deployed earlier in &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;this series&lt;/a&gt;. In some cases, we need to pass configuration files, variables, or other information to our apps.&lt;/p&gt;
&lt;p&gt;The theory for this post is pretty simple: ConfigMaps store key/value pair information in an object that can be retrieved by your containers. This configuration data can make your applications more portable.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - DNS</title>
      <link>https://theithollow.com/2019/02/18/kubernetes-dns/</link>
      <pubDate>Mon, 18 Feb 2019 15:00:16 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/18/kubernetes-dns/</guid>
      <description>&lt;p&gt;DNS is a critical service in any system. Kubernetes is no different, but Kubernetes implements its own domain name system within your Kubernetes cluster. This post explores the details that you need to know to operate a k8s cluster properly.&lt;/p&gt;
&lt;h2 id=&#34;kubernetes-dns---the-theory&#34;&gt;Kubernetes DNS - The theory&lt;/h2&gt;
&lt;p&gt;I don&amp;rsquo;t want to dive into DNS too much since it&amp;rsquo;s a core service most should be familiar with. But at a really high level, DNS translates an easily remembered name such as &amp;ldquo;theithollow.com&amp;rdquo; into an IP address that might be changing. Every network has a DNS server, but Kubernetes implements its own DNS within the cluster to make connecting to containers a simple task.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Ingress</title>
      <link>https://theithollow.com/2019/02/13/kubernetes-ingress/</link>
      <pubDate>Wed, 13 Feb 2019 15:00:46 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/13/kubernetes-ingress/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s time to look closer at how we access our containers from outside the Kubernetes cluster. We&amp;rsquo;ve talked about Services with NodePorts, LoadBalancers, etc., but a better way to handle ingress might be to use an ingress-controller to proxy our requests to the right backend service. This post will take us through how to integrate an ingress-controller into our Kubernetes cluster.&lt;/p&gt;
&lt;h2 id=&#34;ingress-controllers---the-theory&#34;&gt;Ingress Controllers - The Theory&lt;/h2&gt;
&lt;p&gt;Let&amp;rsquo;s first talk about why we&amp;rsquo;d want to use an ingress controller in the first place. Take an example web application like you might have for a retail store. That web application might have an index page at &amp;ldquo;&lt;a href=&#34;http://store-name.com/&#34;&gt;http://store-name.com/&lt;/a&gt;&amp;rdquo;, a shopping cart page at &amp;ldquo;&lt;a href=&#34;http://store-name.com/cart&#34;&gt;http://store-name.com/cart&lt;/a&gt;&amp;rdquo;, and an API URI at &amp;ldquo;&lt;a href=&#34;http://store-name.com/api&#34;&gt;http://store-name.com/api&lt;/a&gt;&amp;rdquo;. We could build all of these in a single container, but perhaps each of them becomes its own set of pods so that they can all scale out independently. If the API needs more resources, we can just increase the number of pods and nodes for the API service and leave the / and /cart services alone. It also allows multiple groups to work on different parts simultaneously, but we&amp;rsquo;re starting to drift off the point, which hopefully you get by now.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - KUBECONFIG and Context</title>
      <link>https://theithollow.com/2019/02/11/kubernetes-kubeconfig-and-context/</link>
      <pubDate>Mon, 11 Feb 2019 15:00:26 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/11/kubernetes-kubeconfig-and-context/</guid>
      <description>&lt;p&gt;You&amp;rsquo;ve been working with Kubernetes for a while now and no doubt you have lots of clusters and namespaces to deal with now. This might be a good time to introduce Kubernetes KUBECONFIG files and context so you can more easily use all of these different resources.&lt;/p&gt;
&lt;h2 id=&#34;kubeconfig-and-context---the-theory&#34;&gt;KUBECONFIG and Context - The Theory&lt;/h2&gt;
&lt;p&gt;When you first set up your Kubernetes cluster, you created a config file likely stored in your $HOME/.kube directory. This is the KUBECONFIG file, and it is used to store information about your connection to the Kubernetes cluster. When you use kubectl to execute commands, it gets the correct communication information from this KUBECONFIG file. This is why you would&amp;rsquo;ve needed to point the KUBECONFIG environment variable at this file so that it could be found by the kubectl commands.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Namespaces</title>
      <link>https://theithollow.com/2019/02/06/kubernetes-namespaces/</link>
      <pubDate>Wed, 06 Feb 2019 15:00:13 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/06/kubernetes-namespaces/</guid>
      <description>&lt;p&gt;In this post we&amp;rsquo;ll start exploring ways that you might be able to better manage your Kubernetes cluster for security or organizational purposes. Namespaces become a big piece of how your Kubernetes cluster operates and who sees what inside your cluster.&lt;/p&gt;
&lt;h2 id=&#34;namespaces---the-theory&#34;&gt;Namespaces - The Theory&lt;/h2&gt;
&lt;p&gt;The easiest way to think of a namespace is that it&amp;rsquo;s a logical separation of your Kubernetes cluster. Just like you might have segmented a physical server into several virtual servers, we can segment our Kubernetes cluster into namespaces. Namespaces are used to isolate resources within the control plane. For example, if we were to deploy a pod in two different namespaces, an administrator running the &amp;ldquo;get pods&amp;rdquo; command may only see the pods in one of the namespaces. The pods could still communicate with each other across namespaces, however.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Service Publishing</title>
      <link>https://theithollow.com/2019/02/05/kubernetes-service-publishing/</link>
      <pubDate>Tue, 05 Feb 2019 16:30:54 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/05/kubernetes-service-publishing/</guid>
      <description>&lt;p&gt;A critical part of deploying containers within a Kubernetes cluster is understanding how they use the network. In &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;previous posts&lt;/a&gt; we&amp;rsquo;ve deployed pods and services and were able to access them from a client such as a laptop, but how did that work exactly? I mean, we had a bunch of ports configured in our manifest files, so what do they mean? And what do we do if we have more than one pod that wants to use the same port like 443 for https?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Endpoints</title>
      <link>https://theithollow.com/2019/02/04/kubernetes-endpoints/</link>
      <pubDate>Mon, 04 Feb 2019 15:00:02 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/04/kubernetes-endpoints/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s quite possible that you could have a Kubernetes cluster but never have to know what an endpoint is or does, even though you&amp;rsquo;re using them behind the scenes. Just in case you need to use one though, or if you need to do some troubleshooting, we&amp;rsquo;ll cover the basics of Kubernetes endpoints in this post.&lt;/p&gt;
&lt;h2 id=&#34;endpoints---the-theory&#34;&gt;Endpoints - The Theory&lt;/h2&gt;
&lt;p&gt;During the &lt;a href=&#34;https://theithollow.com/?p=9427&#34;&gt;post&lt;/a&gt; where we first learned about Kubernetes Services, we saw that we could use labels to automatically match a frontend service with a backend pod by using a selector. If any new pods had a specific label, the service would know how to send traffic to them. The way the service knows to do this is by adding this mapping to an endpoint. Endpoints track the IP addresses of the objects the service sends traffic to. When a service selector matches a pod label, that IP address is added to your endpoints, and if this is all you&amp;rsquo;re doing, you don&amp;rsquo;t really need to know much about endpoints. However, you can have Services where the endpoint is a server outside of your cluster or in a different namespace (which we haven&amp;rsquo;t covered yet).&lt;/p&gt;</description>
    </item>
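The selector-to-endpoint mapping described above can be sketched in a few lines of Python. This is an illustration of the matching rule, not Kubernetes code; the pod names, labels, and IPs are invented:

```python
# Hypothetical sketch: a service's selector is compared against pod labels,
# and the IPs of matching pods become that service's endpoints.
pods = [
    {"name": "frontend-1", "labels": {"app": "frontend"}, "ip": "10.0.0.11"},
    {"name": "frontend-2", "labels": {"app": "frontend"}, "ip": "10.0.0.12"},
    {"name": "backend-1",  "labels": {"app": "backend"},  "ip": "10.0.0.21"},
]

def endpoints_for(selector, pods):
    """A pod matches when every key/value in the selector appears in its labels."""
    return [p["ip"] for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

frontend_ips = endpoints_for({"app": "frontend"}, pods)
```

If a new pod shows up with the `app: frontend` label, re-running the match picks up its IP automatically, which is exactly why you rarely have to manage endpoints by hand.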
    <item>
      <title>Kubernetes - Services and Labels</title>
      <link>https://theithollow.com/2019/01/31/kubernetes-services-and-labels/</link>
      <pubDate>Thu, 31 Jan 2019 15:00:54 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/31/kubernetes-services-and-labels/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been following &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;the series&lt;/a&gt;, you may be thinking that we&amp;rsquo;ve built ourselves a problem. You&amp;rsquo;ll recall that we&amp;rsquo;ve now learned about Deployments so that we can roll out new pods when we do upgrades, and replica sets can spin up new pods when one dies. Sounds great, but remember that each of those containers has a different IP address. Now, I know we haven&amp;rsquo;t accessed any of those pods yet, but you can imagine that it would be a real pain to have to look up an IP address every time a pod was replaced, wouldn&amp;rsquo;t it? This post covers Kubernetes Services and how they address this problem, and at the end of this post, we&amp;rsquo;ll access one of our pods &amp;hellip; finally.&lt;/p&gt;</description>
    </item>
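The stable-name idea behind a service can be sketched as a tiny Python model: clients only ever ask for the service by name, while the set of backing pod IPs changes underneath. This is an illustration only, not how kube-proxy actually works, and the IPs are made up:

```python
# Hypothetical sketch: a service gives pods a stable name; clients resolve the
# name on each request, so the changing pod IPs behind it don't matter.
import itertools

class Service:
    def __init__(self, name):
        self.name = name
        self._cycle = iter(())

    def update_endpoints(self, ips):
        # e.g. a replica set replaced a dead pod, giving us a new IP list
        self._cycle = itertools.cycle(list(ips))

    def resolve(self):
        """Return the next backend IP, round-robin style."""
        return next(self._cycle)

svc = Service("web")
svc.update_endpoints(["10.0.0.5", "10.0.0.6"])
first = svc.resolve()   # clients only ever ask for "web"
```

When a pod dies and is replaced, only `update_endpoints` changes; every client keeps calling `resolve()` against the same service name.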
    <item>
      <title>Kubernetes - Deployments</title>
      <link>https://theithollow.com/2019/01/30/kubernetes-deployments/</link>
      <pubDate>Wed, 30 Jan 2019 15:01:37 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/30/kubernetes-deployments/</guid>
      <description>&lt;p&gt;After following the previous posts, we should feel pretty good about deploying our &lt;a href=&#34;https://theithollow.com/2019/01/21/kubernetes-pods/&#34;&gt;pods&lt;/a&gt; and ensuring they are highly available. We&amp;rsquo;ve learned about naked pods and then &lt;a href=&#34;https://theithollow.com/2019/01/28/kubernetes-replica-sets/&#34;&gt;replica sets&lt;/a&gt; to make those pods more HA, but what about when we need to roll out a new version of our pods? We don&amp;rsquo;t want to have an outage when our pods are replaced with a new version, do we? This is where &amp;ldquo;Deployments&amp;rdquo; come into play.&lt;/p&gt;</description>
    </item>
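The no-outage rollout a Deployment provides boils down to replacing pods one at a time, so some replicas are always serving. Here is a minimal Python sketch of that rolling pattern, an illustration rather than the real controller logic, with invented pod names and versions:

```python
# Hypothetical sketch of a rolling update: replace one pod per step, so at
# every intermediate step a mix of old and new replicas stays running.
def rolling_update(pods, new_version):
    """pods: list of (name, version) tuples. Yields the pod list after each step."""
    for i, (name, _) in enumerate(pods):
        pods = pods[:i] + [(name, new_version)] + pods[i + 1:]
        yield list(pods)

pods = [("web-1", "v1"), ("web-2", "v1"), ("web-3", "v1")]
steps = list(rolling_update(pods, "v2"))
```

Each yielded step is a cluster state where traffic can still be served; only after the final step is every replica on the new version.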
    <item>
      <title>Kubernetes - Replica Sets</title>
      <link>https://theithollow.com/2019/01/28/kubernetes-replica-sets/</link>
      <pubDate>Mon, 28 Jan 2019 15:00:59 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/28/kubernetes-replica-sets/</guid>
      <description>&lt;p&gt;In a &lt;a href=&#34;https://theithollow.com/2019/01/21/kubernetes-pods/&#34;&gt;previous post&lt;/a&gt; we covered the use of pods and deployed some &amp;ldquo;naked pods&amp;rdquo; in our Kubernetes cluster. In this post we&amp;rsquo;ll expand our use of pods with Replica Sets.&lt;/p&gt;
&lt;h2 id=&#34;replica-sets---the-theory&#34;&gt;Replica Sets - The Theory&lt;/h2&gt;
&lt;p&gt;One of the biggest reasons that we don&amp;rsquo;t deploy naked pods in production is that they are not trustworthy. By this I mean that we can&amp;rsquo;t count on them to always be running. Kubernetes doesn&amp;rsquo;t ensure that a pod will continue running if it crashes. A pod could die for all kinds of reasons: the node it was running on failed, it ran out of resources, it was stopped for some reason, etc. If the pod dies, it stays dead until someone fixes it, which is not ideal. With containers we should expect them to be short lived anyway, so let&amp;rsquo;s plan for it.&lt;/p&gt;</description>
    </item>
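What a replica set adds on top of naked pods is a reconcile loop: compare the number of running pods to the desired count and create or delete pods to converge. The following Python sketch illustrates that idea only, with hypothetical pod names; it is not the actual controller code:

```python
# Hypothetical sketch of replica-set behavior: reconcile the running pod
# count toward the desired replica count.
import itertools

_ids = itertools.count(1)   # generate unique suffixes for new pod names

def reconcile(running, desired):
    """Return the pod list after creating or deleting pods to match 'desired'."""
    running = list(running)
    while len(running) < desired:      # a pod died (or we scaled up): add one
        running.append(f"web-{next(_ids)}")
    while len(running) > desired:      # scaled down: remove extras
        running.pop()
    return running

pods = reconcile([], 3)       # initial creation of three replicas
pods.remove(pods[0])          # simulate one pod crashing
pods = reconcile(pods, 3)     # the replica set spins up a replacement
```

The crashed pod is never "fixed"; it is simply replaced, which is the plan-for-short-lived-containers mindset the post describes.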
    <item>
      <title>Getting Started with Kubernetes</title>
      <link>https://theithollow.com/2019/01/26/getting-started-with-kubernetes/</link>
      <pubDate>Sat, 26 Jan 2019 22:38:39 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/26/getting-started-with-kubernetes/</guid>
      <description>&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2019/01/kubernetesguide-1024x610.png&#34;/&gt; 
&lt;/figure&gt;

&lt;p&gt;The following posts are meant to get a beginner started with understanding Kubernetes. They include basic-level information on the core Kubernetes concepts, with both theory and examples.&lt;/p&gt;
&lt;p&gt;To follow along with the series, a Kubernetes cluster should be deployed, and admin permissions are needed to perform many of the steps. If you wish to follow along with each of the posts, a cluster with cloud provider integration may be needed; in some cases we need a load balancer and elastic storage options.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Pods</title>
      <link>https://theithollow.com/2019/01/21/kubernetes-pods/</link>
      <pubDate>Mon, 21 Jan 2019 16:30:30 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/21/kubernetes-pods/</guid>
      <description>&lt;p&gt;We&amp;rsquo;ve got a Kubernetes cluster set up and we&amp;rsquo;re ready to start deploying some applications. Before we can deploy any of our containers in a Kubernetes environment, we&amp;rsquo;ll need to understand a little bit about pods.&lt;/p&gt;
&lt;h2 id=&#34;pods---the-theory&#34;&gt;Pods - The Theory&lt;/h2&gt;
&lt;p&gt;In a Docker environment, the smallest unit you&amp;rsquo;d deal with is a container. In the Kubernetes world, you&amp;rsquo;ll work with a pod, and a pod consists of one or more containers. You cannot deploy a bare container in Kubernetes without it being deployed within a pod.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Upgrade to vRA 7.5</title>
      <link>https://theithollow.com/2018/10/08/upgrade-to-vra-7-5/</link>
      <pubDate>Mon, 08 Oct 2018 14:03:53 +0000</pubDate>
      <guid>https://theithollow.com/2018/10/08/upgrade-to-vra-7-5/</guid>
      <description>&lt;p&gt;Upgrading your vRealize Automation instance has sometimes been a painful exercise, but that was in the early days after VMware purchased the product from DynamicOps. It&amp;rsquo;s taken a while, but the upgrade process has improved with each version, in my opinion, and 7.5 is no exception. If you&amp;rsquo;re on a previous version, here is a quick rundown of the upgrade process from 7.4 to 7.5.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As always, please read the official upgrade documentation. It includes prerequisites and steps that should always be followed. https://docs.vmware.com/en/vRealize-Automation/7.5/vrealize-automation-7172732to75upgrading.pdf&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Session Manager</title>
      <link>https://theithollow.com/2018/10/01/aws-session-manager/</link>
      <pubDate>Mon, 01 Oct 2018 14:05:01 +0000</pubDate>
      <guid>https://theithollow.com/2018/10/01/aws-session-manager/</guid>
      <description>&lt;p&gt;Amazon has released yet another &lt;a href=&#34;https://theithollow.com/2017/10/02/aws-ec2-simple-systems-manager-reference/&#34;&gt;Simple Systems Manager&lt;/a&gt; service to improve the management of EC2 instances. This time, it&amp;rsquo;s AWS Session Manager. Session Manager is a nifty little service that lets you assign permissions to users to access an instance&amp;rsquo;s shell. Now, you might be thinking, &amp;ldquo;Why would I need this? I can already add SSH keys to my instances at boot time to access them.&amp;rdquo; You&amp;rsquo;d be right of course, but think of how you might use Session Manager. Instead of having to deal with adding SSH keys and managing access/distribution of the private keys, we can manage access through AWS Identity and Access Management permissions.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS IAM Indecision</title>
      <link>https://theithollow.com/2018/05/07/aws-iam-indecision/</link>
      <pubDate>Mon, 07 May 2018 14:55:55 +0000</pubDate>
      <guid>https://theithollow.com/2018/05/07/aws-iam-indecision/</guid>
      <description>&lt;p&gt;Identity and Access Management (IAM) can be a confusing topic for people who are new to Amazon Web Services. There are IAM Users that could be used for authentication, or solutions considered part of the AWS Directory Services such as Microsoft AD, Simple AD, or AD Connector. If none of these sound appealing, there is always the option to use federation with a SAML 2.0 solution like OKTA, PING, or Active Directory Federation Services (ADFS). If all of these options have given you a case of decision fatigue, then hopefully this post and the associated links will help you decide how your environment should be set up.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Manage Multiple AWS Accounts with Role Switching</title>
      <link>https://theithollow.com/2018/04/30/manage-multiple-aws-accounts-with-role-switching/</link>
      <pubDate>Mon, 30 Apr 2018 14:05:52 +0000</pubDate>
      <guid>https://theithollow.com/2018/04/30/manage-multiple-aws-accounts-with-role-switching/</guid>
      <description>&lt;p&gt;A pretty common question that comes up is how to manage multiple accounts within AWS from a user perspective. Multi-account setups are common for providing control plane separation between Production, Development, Billing and Shared Services accounts, but do you need to set up federation with each of these accounts or create an IAM user in each one? That makes those accounts cumbersome to manage, and the more users we have, the greater the chance one of them could get hacked.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Directory Service - AD Connector</title>
      <link>https://theithollow.com/2018/04/23/aws-directory-service-ad-connector/</link>
      <pubDate>Mon, 23 Apr 2018 14:05:05 +0000</pubDate>
      <guid>https://theithollow.com/2018/04/23/aws-directory-service-ad-connector/</guid>
      <description>&lt;p&gt;Just because you&amp;rsquo;ve started moving workloads into the cloud doesn&amp;rsquo;t mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services, but if you&amp;rsquo;re moving to AWS there are also some great services you can take advantage of to provide similar functionality. This post focuses on AD Connector, which makes a connection to your on-premises or EC2-installed domain controllers. AD Connector doesn&amp;rsquo;t run your Active Directory but rather uses your existing Active Directory instances from within AWS. As such, in order to use AD Connector you would need a VPN connection or Direct Connect to provide connectivity back to your data center. Also, you&amp;rsquo;ll need credentials to connect to the domain. Domain Admin credentials will work, but as usual you should use as few privileges as possible, so delegate access to a user with the following permissions:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Manage vSphere Virtual Machines through AWS SSM</title>
      <link>https://theithollow.com/2017/11/06/manage-vsphere-virtual-machines-aws-ssm/</link>
      <pubDate>Mon, 06 Nov 2017 15:15:18 +0000</pubDate>
      <guid>https://theithollow.com/2017/11/06/manage-vsphere-virtual-machines-aws-ssm/</guid>
      <description>&lt;p&gt;Amazon Web Services has some great tools to help you operate your EC2 instances with its Simple Systems Manager services. These services include ensuring &lt;a href=&#34;https://theithollow.com/2017/07/24/patch-compliance-ec2-systems-manager/&#34;&gt;patches are deployed&lt;/a&gt; within maintenance windows specified by you, &lt;a href=&#34;https://theithollow.com/2017/09/26/aws-ec2-systems-manager-state-manager/&#34;&gt;automation routines&lt;/a&gt; that are used to ensure state, and &lt;a href=&#34;https://theithollow.com/2017/07/17/run-commands-ec2-systems-manager/&#34;&gt;run commands&lt;/a&gt; on a fleet of servers through the AWS console. These tools are great, but wouldn&amp;rsquo;t it be even better if I could use them to manage my VMware virtual machines too? Well, you&amp;rsquo;re in luck, because EC2 SSM can do just that, and better yet, the service itself is free! Now, if you&amp;rsquo;ve followed along with the &amp;quot;&lt;a href=&#34;https://theithollow.com/2017/10/02/aws-ec2-simple-systems-manager-reference/&#34;&gt;AWS EC2 Simple Systems Manager Reference&lt;/a&gt;&amp;quot; guide you&amp;rsquo;ve probably already seen the goodies we&amp;rsquo;ve got available, so this post will show you how you can use these same tools on your vSphere, Hyper-V or other on-premises platforms.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS EC2 Simple Systems Manager Reference</title>
      <link>https://theithollow.com/2017/10/02/aws-ec2-simple-systems-manager-reference/</link>
      <pubDate>Mon, 02 Oct 2017 14:07:07 +0000</pubDate>
      <guid>https://theithollow.com/2017/10/02/aws-ec2-simple-systems-manager-reference/</guid>
      <description>&lt;p&gt;Please use this post as a landing page to get you started with the EC2 Simple Systems Manager services from Amazon Web Services. Simple Systems Manager (SSM) is a set of services used to manage EC2 instances as well as on-premises machines (known as managed instances) with the SSM agent installed on them. You can use these services to maintain state, run ad-hoc commands, and configure patch compliance, among other things.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS EC2 Systems Manager - State Manager</title>
      <link>https://theithollow.com/2017/09/26/aws-ec2-systems-manager-state-manager/</link>
      <pubDate>Tue, 26 Sep 2017 14:06:57 +0000</pubDate>
      <guid>https://theithollow.com/2017/09/26/aws-ec2-systems-manager-state-manager/</guid>
      <description>&lt;p&gt;Sometimes you need to ensure that things are always a certain way when you deploy AWS EC2 instances. This could be things like making sure your servers are always joined to a domain when being deployed, or making sure you run an Ansible playbook every hour. The point of the AWS EC2 SSM State Manager service is to define a consistent state for your EC2 instances.&lt;/p&gt;
&lt;p&gt;This post will use a fictional use case where I have an EC2 instance or instances that check every thirty minutes to see if they should use a new image for their Apache website. The instances will check against the EC2 Simple Systems Manager Parameter Store, which we&amp;rsquo;ve discussed in a &lt;a href=&#34;https://theithollow.com/2017/09/11/ec2-systems-manager-parameter-store/&#34;&gt;previous post&lt;/a&gt;, and will download the image from the S3 location retrieved from that parameter.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS EC2 Simple Systems Manager Documents</title>
      <link>https://theithollow.com/2017/09/18/aws-ec2-simple-systems-manager-documents/</link>
      <pubDate>Mon, 18 Sep 2017 14:32:16 +0000</pubDate>
      <guid>https://theithollow.com/2017/09/18/aws-ec2-simple-systems-manager-documents/</guid>
      <description>&lt;p&gt;Amazon Web Services uses Systems Manager Documents to define actions that should be taken on your instances. This could be a wide variety of actions, including updating the operating system, copying files such as logs to another destination, or re-configuring your applications. These documents are written in JavaScript Object Notation (JSON) and are stored within AWS for use with the other Simple Systems Manager (SSM) services such as the Automation service or Run Command.&lt;/p&gt;</description>
    </item>
    <item>
      <title>EC2 Systems Manager Parameter Store</title>
      <link>https://theithollow.com/2017/09/11/ec2-systems-manager-parameter-store/</link>
      <pubDate>Mon, 11 Sep 2017 14:15:52 +0000</pubDate>
      <guid>https://theithollow.com/2017/09/11/ec2-systems-manager-parameter-store/</guid>
      <description>&lt;p&gt;Generally speaking, when you deploy infrastructure through code or run deployment scripts, you&amp;rsquo;ll need a certain amount of configuration data. Much of your code will have install routines, but what about the configuration information that is specific to your environment? Things such as license keys, service accounts, passwords, or connection strings are commonly needed when connecting multiple services together. So how do you code that, exactly? Do you pass the strings in at runtime as parameters and then hope to remember them each time you execute the code? Do you bake those strings into the code and then realize that you&amp;rsquo;ve got sensitive information stored in your deployment scripts?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Adding an Azure Endpoint to vRealize Automation 7</title>
      <link>https://theithollow.com/2017/03/20/adding-azure-endpoint-vrealize-automation-7/</link>
      <pubDate>Mon, 20 Mar 2017 14:03:55 +0000</pubDate>
      <guid>https://theithollow.com/2017/03/20/adding-azure-endpoint-vrealize-automation-7/</guid>
      <description>&lt;p&gt;As of vRealize Automation 7.2, you can deploy workloads to Microsoft Azure through vRA&amp;rsquo;s native capabilities. Don&amp;rsquo;t get too excited here, though, since the process for adding an endpoint is much different from that of other endpoints such as vSphere or AWS. The process for Azure in vRA 7 is to leverage objects in vRealize Orchestrator to do the heavy lifting. If you know things like resource mappings and vRO objects, you can do very similar tasks in the tool.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Getting Started with vRealize Automation Course</title>
      <link>https://theithollow.com/2016/11/28/getting-started-vrealize-automation-course/</link>
      <pubDate>Mon, 28 Nov 2016 15:09:05 +0000</pubDate>
      <guid>https://theithollow.com/2016/11/28/getting-started-vrealize-automation-course/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;re trying to get started with vRealize Automation and don&amp;rsquo;t know where to begin, you&amp;rsquo;re in luck. &lt;a href=&#34;http://pluralsight.com&#34;&gt;Pluralsight&lt;/a&gt; has just released my course, &amp;ldquo;Getting Started with vRealize Automation 7&amp;rdquo;, which will give you a great leg up on your new skills. In this course you&amp;rsquo;ll learn to install the solution, configure the basics, connect it to your vSphere environment and publish your first blueprints. The course will explain why you&amp;rsquo;d want to go down the path of using vRA 7 in the first place and how to use the solution.&lt;/p&gt;</description>
    </item>
    <item>
      <title>UCS Director Infrastructure Setup</title>
      <link>https://theithollow.com/2016/10/12/ucs-director-infrastructure-setup/</link>
      <pubDate>Wed, 12 Oct 2016 14:00:05 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/12/ucs-director-infrastructure-setup/</guid>
      <description>&lt;p&gt;UCS Director is a cloud management platform and thus requires some infrastructure on which to deploy the orchestrated workloads. In many cases UCS Director can also orchestrate the configuration and deployment of bare metal hardware, such as configuring new VLANs on switches, deploying operating systems on blades, and setting hardware profiles. This post focuses on getting those devices to show up in UCS Director so that additional automation can be performed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Scaling in vRealize Automation</title>
      <link>https://theithollow.com/2016/10/06/scaling-vrealize-automation/</link>
      <pubDate>Thu, 06 Oct 2016 14:18:06 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/06/scaling-vrealize-automation/</guid>
      <description>&lt;p&gt;One of the new features of vRealize Automation in version 7.1 is the ability to scale your servers out or in. This is a horizontal scaling of the number of servers. For instance, if you had deployed a single web server, you can scale out to two, three, and so on. When you scale in, you can go from four servers to three, and so on.&lt;/p&gt;
&lt;h1 id=&#34;use-cases&#34;&gt;Use Cases&lt;/h1&gt;
&lt;p&gt;The use cases here vary widely. The easiest to get started with would be some sort of web/database deployment where the web servers have static front-end web pages and can be deployed over and over again with the same configuration. If we place the web servers behind a load balancer (yep, think NSX here for you vSphere junkies), then your web applications can be scaled horizontally whenever you run out of resources.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Add Custom Items to vRealize Automation</title>
      <link>https://theithollow.com/2016/07/05/add-custom-items-vrealize-automation/</link>
      <pubDate>Tue, 05 Jul 2016 14:14:41 +0000</pubDate>
      <guid>https://theithollow.com/2016/07/05/add-custom-items-vrealize-automation/</guid>
      <description>&lt;p&gt;vRealize Automation lets us publish vRealize Orchestrator workflows to the service catalog, but to get more functionality out of these XaaS blueprints, we can add the provisioned resources to the items list. This allows us to manage the lifecycle of these items and even perform secondary &amp;ldquo;Day 2 Operations&amp;rdquo; on these items later.&lt;/p&gt;
&lt;p&gt;For the example in this post, we&amp;rsquo;ll be provisioning an AWS Security group in an existing VPC. For now, just remember that AWS Security groups are not managed by vRA, but with some custom work, this is all about to change.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Code Stream with Artifactory</title>
      <link>https://theithollow.com/2016/05/23/code-stream-artifactory/</link>
      <pubDate>Mon, 23 May 2016 14:04:37 +0000</pubDate>
      <guid>https://theithollow.com/2016/05/23/code-stream-artifactory/</guid>
      <description>&lt;p&gt;vRealize Code Stream now comes pre-packaged with JFrog Artifactory, which allows us to do some cool things while we&amp;rsquo;re testing and deploying new code. To begin this post, let&amp;rsquo;s take a look at what an artifact repository is and how we can use it.&lt;/p&gt;
&lt;p&gt;An artifact repository is a version-control repository, typically used for binary objects like .jar files. You might already be thinking: how is this different from Git? My GitHub account already has repos and does its own version control. True, but what if we don&amp;rsquo;t want to pull down an entire repo to do work? Maybe we only need a single file from a build, or we want to pull down different versions of the same file without creating branches, forks, or additional repos, or committing new code. This is where an artifact repository can really shine.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Software Defined Networking with vRealize Automation and NSX</title>
      <link>https://theithollow.com/2015/10/12/software-defined-networking-with-vrealize-automation-and-nsx/</link>
      <pubDate>Mon, 12 Oct 2015 14:30:53 +0000</pubDate>
      <guid>https://theithollow.com/2015/10/12/software-defined-networking-with-vrealize-automation-and-nsx/</guid>
      <description>&lt;p&gt;This is a series of posts helping you get familiar with how VMware&amp;rsquo;s vRealize Automation 6 can leverage VMware&amp;rsquo;s NSX product to provide software defined networking. The series will show you how to do some basic setup of NSX, as well as how to use Private, Routed and NAT networks, all from within vRA.&lt;/p&gt;
&lt;h2 id=&#34;vrealize-automation-6-with-nsx---nsx-setup&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1lT&#34;&gt;vRealize Automation 6 with NSX - NSX Setup&lt;/a&gt;&lt;/h2&gt;
&lt;h2 id=&#34;vrealize-automation-6-with-nsx---private-networks&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1lR&#34;&gt;vRealize Automation 6 with NSX - Private Networks&lt;/a&gt;&lt;/h2&gt;
&lt;h2 id=&#34;vrealize-automation-6-with-nsx---routed-networks&#34;&gt;&lt;a href=&#34;https://theithollow.com/2015/10/26/vrealize-automation-6-with-nsx-routed-networks/&#34;&gt;vRealize Automation 6 with NSX - Routed Networks&lt;/a&gt;&lt;/h2&gt;
&lt;h2 id=&#34;vrealize-automation-6-with-nsx---nat&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1qS&#34;&gt;vRealize Automation 6 with NSX - NAT&lt;/a&gt;&lt;/h2&gt;
&lt;h2 id=&#34;vrealize-automation-6-with-nsx---load-balancing&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1s2&#34;&gt;vRealize Automation 6 with NSX - Load Balancing&lt;/a&gt;&lt;/h2&gt;
&lt;h2 id=&#34;vrealize-automation-6-with-nsx---firewall&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1tu&#34;&gt;vRealize Automation 6 with NSX - Firewall&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://assets.theithollow.com/wp-content/uploads/2015/10/GuideLogo.jpg&#34;&gt;&lt;img alt=&#34;GuideLogo&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2015/10/GuideLogo-1024x543.jpg&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 6 with NSX - Initial Setup of NSX</title>
      <link>https://theithollow.com/2015/10/12/vrealize-automation-6-with-nsx-initial-setup-of-nsx/</link>
      <pubDate>Mon, 12 Oct 2015 14:00:22 +0000</pubDate>
      <guid>https://theithollow.com/2015/10/12/vrealize-automation-6-with-nsx-initial-setup-of-nsx/</guid>
      <description>&lt;p&gt;Before we can start deploying environments with automated network segments, we need to do some basic setup of the NSX environment.&lt;/p&gt;
&lt;h2 id=&#34;nsx-manager-setup&#34;&gt;NSX Manager Setup&lt;/h2&gt;
&lt;p&gt;It should be obvious that you need to set up NSX Manager, deploy controllers and do some host preparation. These are basic setup procedures just to use NSX, even without vRealize Automation in the middle of things, but here&amp;rsquo;s a quick review:&lt;/p&gt;
&lt;h3 id=&#34;install-nsx-manager-and-deploy-nsx-controller-nodes&#34;&gt;Install NSX Manager and deploy NSX Controller Nodes&lt;/h3&gt;
&lt;p&gt;NSX Manager can be deployed from an OVA, and then you must register the NSX Manager with vCenter. After this is complete, deploy three NSX Controller nodes to configure your logical constructs.
&lt;img alt=&#34;NSXSetupManagementSetup&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2015/09/NSXSetupManagementSetup-1024x452.png&#34;&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Assign a VM to a Rubrik slaDomain</title>
      <link>https://theithollow.com/2015/09/14/assign-a-vm-to-a-rubrik-sladomain/</link>
      <pubDate>Mon, 14 Sep 2015 14:00:16 +0000</pubDate>
      <guid>https://theithollow.com/2015/09/14/assign-a-vm-to-a-rubrik-sladomain/</guid>
      <description>&lt;p&gt;This last post in the series shows you how &lt;a href=&#34;https://twitter.com/vnickC&#34;&gt;Nick Colyer&lt;/a&gt; and I tied everything together. If you want to just download the plugins and get started, please visit Github.com and import the plugins into your own vRealize Orchestrator environment.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/rubrikinc/vRO-Workflow&#34;&gt;Download the Plugin from Github&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;NOTE: The first version of this code has been refactored and migrated to Github in Rubrik&amp;rsquo;s Repository since the time of this initial writing&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;To recap where we&amp;rsquo;ve been, we:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Get Rubrik VM through vRealize Orchestrator</title>
      <link>https://theithollow.com/2015/09/10/get-rubrik-vm-through-vrealize-orchestrator/</link>
      <pubDate>Thu, 10 Sep 2015 14:07:51 +0000</pubDate>
      <guid>https://theithollow.com/2015/09/10/get-rubrik-vm-through-vrealize-orchestrator/</guid>
      <description>&lt;p&gt;Part four of this series will show you how to look up a VM in the &lt;a href=&#34;http://rubrik.com&#34;&gt;Rubrik&lt;/a&gt; Hybrid Cloud appliance through the REST API by using vRealize Orchestrator. If you&amp;rsquo;d rather just download the plugin and start using it, check out the link to &lt;a href=&#34;https://github.com/rubrikinc/vRO-Workflow&#34;&gt;Github&lt;/a&gt; to get the plugin, and don&amp;rsquo;t forget to check out &lt;a href=&#34;http://twitter.com/vnickc&#34;&gt;Nick Colyer&amp;rsquo;s&lt;/a&gt; post over at &lt;a href=&#34;http://systemsgame.com&#34;&gt;systemsgame.com&lt;/a&gt; about how to use it.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/rubrikinc/vRO-Workflow&#34;&gt;Download the Plugin from Github&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;NOTE: The first version of this code has been refactored and migrated to Github in Rubrik&amp;rsquo;s Repository since the time of this initial writing&lt;/p&gt;</description>
    </item>
    <item>
      <title>Rubrik API Logins through vRealize Orchestrator</title>
      <link>https://theithollow.com/2015/09/08/rubrik-api-logins-through-vrealize-orchestrator/</link>
      <pubDate>Tue, 08 Sep 2015 14:00:17 +0000</pubDate>
      <guid>https://theithollow.com/2015/09/08/rubrik-api-logins-through-vrealize-orchestrator/</guid>
      <description>&lt;p&gt;Part three of this series focuses on how &lt;a href=&#34;http://twitter.com/vnickc&#34;&gt;Nick Colyer&lt;/a&gt; and I built the authentication piece of the plugin so that we could then pass commands to the &lt;a href=&#34;http://rubrik.com&#34;&gt;Rubrik&lt;/a&gt; appliance. An API requires a login just like any other portal would. Since this is a REST API, we actually need to do a &amp;ldquo;POST&amp;rdquo; on the login resource to get ourselves an authentication token.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/rubrikinc/vRO-Workflow&#34;&gt;Download the Plugin from Github&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;NOTE: The first version of this code has been refactored and migrated to Github in Rubrik&amp;rsquo;s Repository since the time of this initial writing&lt;/p&gt;</description>
    </item>
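The POST-to-login pattern described above can be illustrated with a small Python sketch that builds the request and parses a token out of a sample response, without sending anything over the network. The host name, credentials, `/login` path, and `token` response field are all hypothetical stand-ins, not the actual Rubrik API contract:

```python
# Hypothetical sketch: a REST login is typically a POST to a login resource
# with Basic auth credentials; the response body carries a session token
# that later calls present instead of the password. Nothing is sent here.
import base64
import json

def build_login_request(host, username, password):
    # Basic auth: base64-encode "username:password" into an Authorization header
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {
        "method": "POST",
        "url": f"https://{host}/login",
        "headers": {"Authorization": f"Basic {creds}"},
    }

def parse_login_response(body):
    """Pull the session token out of a JSON login response body."""
    return json.loads(body)["token"]

req = build_login_request("rubrik.example.com", "admin", "s3cret")
token = parse_login_response('{"token": "abc123"}')   # sample response body
```

Subsequent API calls would then send `token` in a header instead of the raw credentials, which is the point of logging in first.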
    <item>
      <title>vRealize Orchestrator REST Hosts and Operations for Rubrik</title>
      <link>https://theithollow.com/2015/08/27/vrealize-orchestrator-rest-hosts-and-operations-for-rubrik/</link>
      <pubDate>Thu, 27 Aug 2015 14:30:30 +0000</pubDate>
      <guid>https://theithollow.com/2015/08/27/vrealize-orchestrator-rest-hosts-and-operations-for-rubrik/</guid>
      <description>&lt;p&gt;In &lt;a href=&#34;https://theithollow.com/2015/08/getting-started-with-vrealize-orchestrator-and-rubriks-rest-api/&#34;&gt;part one of this series&lt;/a&gt;, we went over some basics about what REST is and the methods involved in it. In this post, we&amp;rsquo;ll add a REST host and show you how to add some REST operations. In plain terms, a REST host is simply the host that will accept our API calls. In this case, that host is the &lt;a href=&#34;http://rubrik.com&#34;&gt;Rubrik&lt;/a&gt; Hybrid Cloud Appliance.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Getting Started with vRealize Orchestrator and Rubrik&#39;s REST API</title>
      <link>https://theithollow.com/2015/08/25/getting-started-with-vrealize-orchestrator-and-rubriks-rest-api-2/</link>
      <pubDate>Tue, 25 Aug 2015 14:00:02 +0000</pubDate>
      <guid>https://theithollow.com/2015/08/25/getting-started-with-vrealize-orchestrator-and-rubriks-rest-api-2/</guid>
      <description>&lt;p&gt;What&amp;rsquo;s this REST thing everyone keeps talking about?&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Oh, don&amp;rsquo;t worry, we have a REST API.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;or&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;It&amp;rsquo;s just a simple REST call.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;At one point I was hearing these phrases and would get very frustrated. If REST is so commonplace and so simple to use, why didn&amp;rsquo;t I know how to do it? If this sounds like you, keep reading. I work as a consultant for a company called &amp;ldquo;Ahead,&amp;rdquo; which recently got a Rubrik Hybrid Cloud Appliance in its lab. My colleague &lt;a href=&#34;http://twitter.com/vnickc&#34;&gt;Nick Colyer&lt;/a&gt; and I noticed that there weren&amp;rsquo;t any vRealize Orchestrator plugins for it, so we decided to build them on our own, with the help of &lt;a href=&#34;http://twitter.com/chriswahl&#34;&gt;Chris Wahl&lt;/a&gt;, and publish them for the community to use.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
