<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Technology on The IT Hollow</title>
    <link>https://theithollow.com/categories/technology/</link>
    <description>Recent content in Technology on The IT Hollow</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 02 Dec 2019 15:05:06 +0000</lastBuildDate>
    <atom:link href="https://theithollow.com/categories/technology/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Jetstack Cert-Manager</title>
      <link>https://theithollow.com/2019/12/02/jetstack-cert-manager/</link>
      <pubDate>Mon, 02 Dec 2019 15:05:06 +0000</pubDate>
      <guid>https://theithollow.com/2019/12/02/jetstack-cert-manager/</guid>
      <description>&lt;p&gt;One of my least favorite parts of computers is dealing with certificate creation. In fact, ya know those tweets about what you&amp;rsquo;d tweet if you were kidnapped and didn&amp;rsquo;t want to tip off the kidnappers?&lt;/p&gt;
&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2019/11/certs-tweet-1024x293.png&#34;/&gt; 
&lt;/figure&gt;

&lt;p&gt;Yeah, I&amp;rsquo;d tweet about how I love working with certificates. They are just not a fun thing for me. So when I found a new project where I needed certificates created, I was not really excited.&lt;/p&gt;</description>
    </item>
    <item>
      <title>ClusterAPI Demystified</title>
      <link>https://theithollow.com/2019/11/04/clusterapi-demystified/</link>
      <pubDate>Mon, 04 Nov 2019 15:05:05 +0000</pubDate>
      <guid>https://theithollow.com/2019/11/04/clusterapi-demystified/</guid>
      <description>&lt;p&gt;Deploying Kubernetes clusters may be the biggest hurdle in learning Kubernetes, and one of the challenges in managing it. ClusterAPI is a project designed to ease this burden and make the management and deployment of Kubernetes clusters simpler.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes.&lt;/p&gt;
&lt;p&gt;kubernetes-sigs/cluster-api&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This post is designed to dive into ClusterAPI to investigate how it works and how you can use it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A Kind Way to Learn Kubernetes</title>
      <link>https://theithollow.com/2019/10/07/a-kind-way-to-learn-kubernetes/</link>
      <pubDate>Mon, 07 Oct 2019 14:10:13 +0000</pubDate>
      <guid>https://theithollow.com/2019/10/07/a-kind-way-to-learn-kubernetes/</guid>
      <description>&lt;p&gt;I&amp;rsquo;m not going to lie to you: as of the time of this writing, maybe the biggest hurdle to learning Kubernetes is getting a cluster stood up. Right now there are a myriad of ways to stand up a cluster, but none of them are really straightforward yet. If you&amp;rsquo;re interested in learning how Kubernetes works and just want to set up a basic cluster to poke around in, this post is for you.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Desired State and Control Loops</title>
      <link>https://theithollow.com/2019/09/16/kubernetes-desired-state-and-control-loops/</link>
      <pubDate>Mon, 16 Sep 2019 14:05:30 +0000</pubDate>
      <guid>https://theithollow.com/2019/09/16/kubernetes-desired-state-and-control-loops/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve just gotten started with Kubernetes, you might be curious to know how the desired state is achieved. Think about it: you pass a YAML file to the API server and magically stuff happens. Not only that, but when disaster strikes (e.g. a pod crashes), Kubernetes also makes it right again so that it matches the desired state.&lt;/p&gt;
&lt;p&gt;The mechanism that allows Kubernetes to enforce this desired state is the control loop. The basics of this are pretty simple. A control loop can be thought of in three stages.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes Visually - With VMware Octant</title>
      <link>https://theithollow.com/2019/08/20/kubernetes-visually-with-vmware-octant/</link>
      <pubDate>Tue, 20 Aug 2019 14:10:35 +0000</pubDate>
      <guid>https://theithollow.com/2019/08/20/kubernetes-visually-with-vmware-octant/</guid>
      <description>&lt;p&gt;I don&amp;rsquo;t know about you, but I learn things best when I have a visual to reference. Many of my posts in this blog are purposefully built with visuals, not only because I think it&amp;rsquo;s helpful for the readers to &amp;ldquo;get the picture&amp;rdquo;, but also because that&amp;rsquo;s how I learn.&lt;/p&gt;
&lt;p&gt;Kubernetes can feel like a daunting technology to start learning, especially since you&amp;rsquo;ll be working with code and the command line for virtually all of it. That can be a scary proposition to an operations person who is trying to break into something brand new. But last week I was introduced to a project from VMware called &lt;a href=&#34;https://github.com/vmware/octant&#34;&gt;Octant&lt;/a&gt; that helps visualize what&amp;rsquo;s actually going on in our Kubernetes cluster.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - DaemonSets</title>
      <link>https://theithollow.com/2019/08/13/kubernetes-daemonsets/</link>
      <pubDate>Tue, 13 Aug 2019 14:10:04 +0000</pubDate>
      <guid>https://theithollow.com/2019/08/13/kubernetes-daemonsets/</guid>
      <description>&lt;p&gt;DaemonSets can be a really useful tool for managing the health and operation of the pods within a Kubernetes cluster. In this post we&amp;rsquo;ll explore a use case for a DaemonSet, why we need them, and an example in the lab.&lt;/p&gt;
&lt;h2 id=&#34;daemonsets---the-theory&#34;&gt;DaemonSets - The Theory&lt;/h2&gt;
&lt;p&gt;DaemonSets are actually pretty easy to explain. A DaemonSet is a Kubernetes construct that ensures a pod is running on every node (where eligible) in a cluster. This means that if we were to create a DaemonSet on our six-node cluster (3 masters, 3 workers), the DaemonSet would schedule the defined pods on each of the nodes for a total of six pods. Now, this assumes there are either no &lt;a href=&#34;https://theithollow.com/?p=9736&#34;&gt;taints on the nodes, or there are tolerations&lt;/a&gt; on the DaemonSets.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Sysdig Secure 2.4 Announced</title>
      <link>https://theithollow.com/2019/08/06/sysdig-secure-2-4-announced/</link>
      <pubDate>Tue, 06 Aug 2019 13:17:20 +0000</pubDate>
      <guid>https://theithollow.com/2019/08/06/sysdig-secure-2-4-announced/</guid>
      <description>&lt;p&gt;Today Sysdig announced a new update to their Cloud Native Visibility and Security Platform, with the release of Sysdig Secure 2.4.&lt;/p&gt;
&lt;p&gt;The new version of the Secure product includes some pretty nifty enhancements.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Runtime profiling with machine learning -&lt;/strong&gt; New containers will be profiled after deployment to give insights into the processes, file system activity, networking, and system calls. Once the profiling is complete, these profiles can be used to create policy sets for the expected behavior. Sysdig also offers a confidence level for each profile: consistent behavior generates a higher confidence level, whereas variable behavior results in a lower one.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Taints and Tolerations</title>
      <link>https://theithollow.com/2019/07/29/kubernetes-taints-and-tolerations/</link>
      <pubDate>Mon, 29 Jul 2019 14:15:22 +0000</pubDate>
      <guid>https://theithollow.com/2019/07/29/kubernetes-taints-and-tolerations/</guid>
      <description>&lt;p&gt;One of the best things about Kubernetes is that I don&amp;rsquo;t have to think about which piece of hardware my container will run on when I deploy it. The Kubernetes scheduler can make that decision for me. This is great until I actually DO care about what node my container runs on. This post will examine one solution to pod placement: taints and tolerations.&lt;/p&gt;
&lt;h2 id=&#34;taints---the-theory&#34;&gt;Taints - The Theory&lt;/h2&gt;
&lt;p&gt;Suppose we had a Kubernetes cluster where we didn&amp;rsquo;t want any pods to run on a specific node. You might need to do this for a variety of reasons, such as:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Test Your Kubernetes Cluster Conformance</title>
      <link>https://theithollow.com/2019/07/16/test-your-kubernetes-cluster-conformance/</link>
      <pubDate>Tue, 16 Jul 2019 13:56:08 +0000</pubDate>
      <guid>https://theithollow.com/2019/07/16/test-your-kubernetes-cluster-conformance/</guid>
      <description>&lt;p&gt;You&amp;rsquo;ve been dabbling in the world of Kubernetes for a while now and have probably noticed there are a whole lot of vendors packaging their own version of Kubernetes.&lt;/p&gt;
&lt;p&gt;You might be having a fun time comparing the upstream Kubernetes version vs. the packaged versions put out by Red Hat, VMware, and others. But how do we know that those packaged versions support the required APIs so that all Kubernetes clusters have the same baseline of features?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Monitoring Kubernetes with Sysdig Monitor</title>
      <link>https://theithollow.com/2019/06/23/monitoring-kubernetes-with-sysdig-monitor/</link>
      <pubDate>Sun, 23 Jun 2019 14:10:02 +0000</pubDate>
      <guid>https://theithollow.com/2019/06/23/monitoring-kubernetes-with-sysdig-monitor/</guid>
      <description>&lt;p&gt;Any system that&amp;rsquo;s going to be deployed for the enterprise needs to have at least a basic level of monitoring in place to manage it. Kubernetes is no exception to this rule. When we, as a community, underwent the shift from physical servers to virtual infrastructure, we didn&amp;rsquo;t ignore the new VMs and just keep monitoring the hardware; we had to come up with new products to monitor our infrastructure. &lt;a href=&#34;https://sysdig.com/&#34;&gt;Sysdig&lt;/a&gt; is building these new solutions for the Kubernetes world.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Account Tagging</title>
      <link>https://theithollow.com/2019/06/17/aws-account-tagging/</link>
      <pubDate>Mon, 17 Jun 2019 14:02:18 +0000</pubDate>
      <guid>https://theithollow.com/2019/06/17/aws-account-tagging/</guid>
      <description>&lt;p&gt;We&amp;rsquo;re getting into the habit of tagging everything these days. It&amp;rsquo;s been drilled into our heads that we don&amp;rsquo;t care about names of our resources anymore because we can add our own metadata to resources to later identify them, or to use for automation. But up until June 6th, AWS wouldn&amp;rsquo;t let us tag one of the most important resources of all, our accounts.&lt;/p&gt;
&lt;p&gt;On June 6th though, our cloud world changed when &lt;a href=&#34;https://aws.amazon.com/about-aws/whats-new/2019/06/aws-organizations-now-supports-tagging-and-untagging-of-aws-acco/&#34;&gt;AWS announced&lt;/a&gt; that we can now add tags to our accounts through organizations.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Helm</title>
      <link>https://theithollow.com/2019/06/10/kubernetes-helm/</link>
      <pubDate>Mon, 10 Jun 2019 14:02:52 +0000</pubDate>
      <guid>https://theithollow.com/2019/06/10/kubernetes-helm/</guid>
      <description>&lt;p&gt;The Kubernetes series has now ventured into some non-native k8s discussions. Helm is a relatively common tool used in the industry and it makes sense to talk about why that is. This post covers the basics of Helm so we can make our own evaluations about its use in our Kubernetes environment.&lt;/p&gt;
&lt;h2 id=&#34;helm---the-theory&#34;&gt;Helm - The Theory&lt;/h2&gt;
&lt;p&gt;So what is Helm? In the simplest terms, it&amp;rsquo;s a package manager for Kubernetes.&lt;br&gt;
Think of it this way: Helm is to Kubernetes as yum/apt is to Linux. Yeah, sounds pretty neat now, doesn&amp;rsquo;t it?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Pod Backups</title>
      <link>https://theithollow.com/2019/06/03/kubernetes-pod-backups/</link>
      <pubDate>Mon, 03 Jun 2019 14:00:58 +0000</pubDate>
      <guid>https://theithollow.com/2019/06/03/kubernetes-pod-backups/</guid>
      <description>&lt;p&gt;The focus of this post is on pod based backups, but this could also go for Deployments, replica sets, etc. This is not a post about how to backup your Kubernetes cluster including things like etcd, but rather the resources that have been deployed on the cluster. Pods have been used as an example to walk through how we can take backups of our applications once deployed in a Kubernetes cluster.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Should I Feel this Stupid?</title>
      <link>https://theithollow.com/2019/04/08/should-i-feel-this-stupid/</link>
      <pubDate>Mon, 08 Apr 2019 14:10:28 +0000</pubDate>
      <guid>https://theithollow.com/2019/04/08/should-i-feel-this-stupid/</guid>
      <description>&lt;p&gt;Learning new things can be pretty exciting, and lucky for IT Professionals, there is no lack of things to learn. But this exciting world of endless configurations, code snippets, routes, and processes can have a demoralizing effect as well when you&amp;rsquo;re constantly bombarded with things you don&amp;rsquo;t know.&lt;/p&gt;
&lt;h2 id=&#34;growth-hurts-a-little&#34;&gt;Growth Hurts a Little&lt;/h2&gt;
&lt;p&gt;I&amp;rsquo;m not immune to the feelings of stupidity. I work with some smart folks in my day job as well as smart customers. I see what people are doing on Twitter and realize that no matter what I already know, there is so much more that I could know.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - StatefulSets</title>
      <link>https://theithollow.com/2019/04/01/kubernetes-statefulsets/</link>
      <pubDate>Mon, 01 Apr 2019 14:20:48 +0000</pubDate>
      <guid>https://theithollow.com/2019/04/01/kubernetes-statefulsets/</guid>
      <description>&lt;p&gt;We love deployments and replica sets because they make sure that our containers are always in our desired state. If a container fails for some reason, a new one is created to replace it. But what do we do when the deployment order of our containers matters? For that, we look for help from Kubernetes StatefulSets.&lt;/p&gt;
&lt;h2 id=&#34;statefulsets---the-theory&#34;&gt;StatefulSets - The Theory&lt;/h2&gt;
&lt;p&gt;StatefulSets work much like a Deployment does. They contain identical container specs but they ensure an order for the deployment. Instead of all the pods being deployed at the same time, StatefulSets deploy the containers in sequential order where the first pod is deployed and ready before the next pod starts. (NOTE: it is possible to deploy pods in parallel if you need them to, but this might confuse your understanding of StatefulSets for now, so ignore that.) Each of these pods has its own identity and is named with a unique ID so that it can be referenced.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Cloud Providers and Storage Classes</title>
      <link>https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/</link>
      <pubDate>Wed, 13 Mar 2019 14:20:23 +0000</pubDate>
      <guid>https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/</guid>
      <description>&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/?p=9598&#34;&gt;previous post&lt;/a&gt; we covered Persistent Volumes (PV) and how we can use those volumes to store data that shouldn&amp;rsquo;t be deleted if a container is removed. The big problem with that post is that we have to manually create the volumes and persistent volume claims. It would sure be nice to have those volumes spun up automatically, wouldn&amp;rsquo;t it? Well, we can do that with a storage class. For a storage class to be really useful, we&amp;rsquo;ll have to tie our Kubernetes cluster in with our infrastructure provider, like AWS, Azure, or vSphere. This coordination is done through a cloud provider.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Persistent Volumes</title>
      <link>https://theithollow.com/2019/03/04/kubernetes-persistent-volumes/</link>
      <pubDate>Mon, 04 Mar 2019 15:00:51 +0000</pubDate>
      <guid>https://theithollow.com/2019/03/04/kubernetes-persistent-volumes/</guid>
      <description>&lt;p&gt;Containers are oftentimes short-lived. They might scale based on need and will redeploy when issues occur. This functionality is welcomed, but sometimes we have state to worry about, and state is not meant to be short-lived. Kubernetes persistent volumes can help to resolve this discrepancy.&lt;/p&gt;
&lt;h2 id=&#34;volumes---the-theory&#34;&gt;Volumes - The Theory&lt;/h2&gt;
&lt;p&gt;In the Kubernetes world, persistent storage is broken down into two kinds of objects: a Persistent Volume (PV) and a Persistent Volume Claim (PVC). First, let&amp;rsquo;s tackle a Persistent Volume.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Secrets</title>
      <link>https://theithollow.com/2019/02/25/kubernetes-secrets/</link>
      <pubDate>Mon, 25 Feb 2019 15:00:56 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/25/kubernetes-secrets/</guid>
      <description>&lt;p&gt;Secret, Secret, I&amp;rsquo;ve got a secret! OK, enough of the Styx lyrics, this is serious business. In the &lt;a href=&#34;https://theithollow.com/?p=9583&#34;&gt;previous post we used ConfigMaps&lt;/a&gt; to store a database connection string. That is probably not the best idea for something with a sensitive password in it. Luckily, Kubernetes provides a way to store sensitive configuration items, and it&amp;rsquo;s called a &amp;ldquo;secret&amp;rdquo;.&lt;/p&gt;
&lt;h2 id=&#34;secrets---the-theory&#34;&gt;Secrets - The Theory&lt;/h2&gt;
&lt;p&gt;The short answer to understanding secrets would be to think of a ConfigMap, which we have discussed in a &lt;a href=&#34;https://theithollow.com/?p=9583&#34;&gt;previous post&lt;/a&gt; in this &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;series&lt;/a&gt;, but with the data not stored in clear text.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - ConfigMaps</title>
      <link>https://theithollow.com/2019/02/20/kubernetes-configmaps/</link>
      <pubDate>Wed, 20 Feb 2019 15:00:40 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/20/kubernetes-configmaps/</guid>
      <description>&lt;p&gt;Sometimes you need to add additional configurations to your running containers. Kubernetes has an object to help with this, and this post will cover that object: ConfigMaps.&lt;/p&gt;
&lt;h2 id=&#34;configmaps---the-theory&#34;&gt;ConfigMaps - The Theory&lt;/h2&gt;
&lt;p&gt;Not all of our applications can be as simple as the basic nginx containers we&amp;rsquo;ve deployed earlier in &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;this series&lt;/a&gt;. In some cases, we need to pass configuration files, variables, or other information to our apps.&lt;/p&gt;
&lt;p&gt;The theory for this post is pretty simple, ConfigMaps store key/value pair information in an object that can be retrieved by your containers. This configuration data can make your applications more portable.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Ingress</title>
      <link>https://theithollow.com/2019/02/13/kubernetes-ingress/</link>
      <pubDate>Wed, 13 Feb 2019 15:00:46 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/13/kubernetes-ingress/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s time to look closer at how we access our containers from outside the Kubernetes cluster. We&amp;rsquo;ve talked about Services with NodePorts, LoadBalancers, etc., but a better way to handle ingress might be to use an ingress-controller to proxy our requests to the right backend service. This post will take us through how to integrate an ingress-controller into our Kubernetes cluster.&lt;/p&gt;
&lt;h2 id=&#34;ingress-controllers---the-theory&#34;&gt;Ingress Controllers - The Theory&lt;/h2&gt;
&lt;p&gt;Let&amp;rsquo;s first talk about why we&amp;rsquo;d want to use an ingress controller in the first place. Take an example web application like you might have for a retail store. That web application might have an index page at &amp;ldquo;&lt;a href=&#34;http://store-name.com/&#34;&gt;http://store-name.com/&lt;/a&gt;&amp;rdquo;, a shopping cart page at &amp;ldquo;&lt;a href=&#34;http://store-name.com/cart&#34;&gt;http://store-name.com/cart&lt;/a&gt;&amp;rdquo;, and an API URI at &amp;ldquo;&lt;a href=&#34;http://store-name.com/api&#34;&gt;http://store-name.com/api&lt;/a&gt;&amp;rdquo;. We could build all of these in a single container, but perhaps each of those becomes its own set of pods so that they can all scale out independently. If the API needs more resources, we can just increase the number of pods and nodes for the API service and leave the / and /cart services alone. It also allows multiple groups to work on different parts simultaneously, but we&amp;rsquo;re starting to drift off the point, which hopefully you get now.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - KUBECONFIG and Context</title>
      <link>https://theithollow.com/2019/02/11/kubernetes-kubeconfig-and-context/</link>
      <pubDate>Mon, 11 Feb 2019 15:00:26 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/11/kubernetes-kubeconfig-and-context/</guid>
      <description>&lt;p&gt;You&amp;rsquo;ve been working with Kubernetes for a while now and no doubt you have lots of clusters and namespaces to deal with. This might be a good time to introduce Kubernetes KUBECONFIG files and context so you can more easily use all of these different resources.&lt;/p&gt;
&lt;h2 id=&#34;kubeconfig-and-context---the-theory&#34;&gt;KUBECONFIG and Context - The Theory&lt;/h2&gt;
&lt;p&gt;When you first set up your Kubernetes cluster, you created a config file, likely stored in your $HOME/.kube directory. This is the KUBECONFIG file, and it is used to store information about your connection to the Kubernetes cluster. When you use kubectl to execute commands, it gets the correct communication information from this KUBECONFIG file. This is why you would&amp;rsquo;ve needed to reference this file in your $KUBECONFIG environment variable so that it could be found and used correctly by the kubectl commands.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Namespaces</title>
      <link>https://theithollow.com/2019/02/06/kubernetes-namespaces/</link>
      <pubDate>Wed, 06 Feb 2019 15:00:13 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/06/kubernetes-namespaces/</guid>
      <description>&lt;p&gt;In this post we&amp;rsquo;ll start exploring ways that you might be able to better manage your Kubernetes cluster for security or organizational purposes. Namespaces become a big piece of how your Kubernetes cluster operates and who sees what inside your cluster.&lt;/p&gt;
&lt;h2 id=&#34;namespaces---the-theory&#34;&gt;Namespaces - The Theory&lt;/h2&gt;
&lt;p&gt;The easiest way to think of a namespace is that it&amp;rsquo;s a logical separation of your Kubernetes cluster. Just like you might have segmented a physical server into several virtual servers, we can segment our Kubernetes cluster into namespaces. Namespaces are used to isolate resources within the control plane. For example, if we were to deploy a pod in two different namespaces, an administrator running the &amp;ldquo;get pods&amp;rdquo; command may only see the pods in one of the namespaces. The pods could still communicate with each other across namespaces, however.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Service Publishing</title>
      <link>https://theithollow.com/2019/02/05/kubernetes-service-publishing/</link>
      <pubDate>Tue, 05 Feb 2019 16:30:54 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/05/kubernetes-service-publishing/</guid>
      <description>&lt;p&gt;A critical part of deploying containers within a Kubernetes cluster is understanding how they use the network. In &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;previous posts&lt;/a&gt; we&amp;rsquo;ve deployed pods and services and were able to access them from a client such as a laptop, but how did that work exactly? I mean, we had a bunch of ports configured in our manifest files, so what do they mean? And what do we do if we have more than one pod that wants to use the same port like 443 for https?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Endpoints</title>
      <link>https://theithollow.com/2019/02/04/kubernetes-endpoints/</link>
      <pubDate>Mon, 04 Feb 2019 15:00:02 +0000</pubDate>
      <guid>https://theithollow.com/2019/02/04/kubernetes-endpoints/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s quite possible that you could have a Kubernetes cluster but never have to know what an endpoint is or does, even though you&amp;rsquo;re using them behind the scenes. Just in case you need to use one though, or if you need to do some troubleshooting, we&amp;rsquo;ll cover the basics of Kubernetes endpoints in this post.&lt;/p&gt;
&lt;h2 id=&#34;endpoints---the-theory&#34;&gt;Endpoints - The Theory&lt;/h2&gt;
&lt;p&gt;During the &lt;a href=&#34;https://theithollow.com/?p=9427&#34;&gt;post&lt;/a&gt; where we first learned about Kubernetes Services, we saw that we could use labels to match a frontend service with a backend pod automatically by using a selector. If any new pods had a specific label, the service would know how to send traffic to them. Well, the way that the service knows to do this is by adding this mapping to an endpoint. Endpoints track the IP addresses of the objects the service sends traffic to. When a service selector matches a pod label, that IP address is added to your endpoints, and if this is all you&amp;rsquo;re doing, you don&amp;rsquo;t really need to know much about endpoints. However, you can have Services where the endpoint is a server outside of your cluster or in a different namespace (which we haven&amp;rsquo;t covered yet).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Services and Labels</title>
      <link>https://theithollow.com/2019/01/31/kubernetes-services-and-labels/</link>
      <pubDate>Thu, 31 Jan 2019 15:00:54 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/31/kubernetes-services-and-labels/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been following &lt;a href=&#34;https://theithollow.com/2019/01/26/getting-started-with-kubernetes/&#34;&gt;the series&lt;/a&gt;, you may be thinking that we&amp;rsquo;ve built ourselves a problem. You&amp;rsquo;ll recall that we&amp;rsquo;ve now learned about Deployments so that we can roll out new pods when we do upgrades, and replica sets can spin up new pods when one dies. Sounds great, but remember that each of those containers has a different IP address. Now, I know we haven&amp;rsquo;t accessed any of those pods yet, but you can imagine that it would be a real pain to have to go look up an IP address every time a pod was replaced, wouldn&amp;rsquo;t it? This post covers Kubernetes Services and how they are used to address this problem, and at the end of this post, we&amp;rsquo;ll access one of our pods &amp;hellip; finally.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Deployments</title>
      <link>https://theithollow.com/2019/01/30/kubernetes-deployments/</link>
      <pubDate>Wed, 30 Jan 2019 15:01:37 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/30/kubernetes-deployments/</guid>
      <description>&lt;p&gt;After following the previous posts, we should feel pretty good about deploying our &lt;a href=&#34;https://theithollow.com/2019/01/21/kubernetes-pods/&#34;&gt;pods&lt;/a&gt; and ensuring they are highly available. We&amp;rsquo;ve learned about naked pods and then &lt;a href=&#34;https://theithollow.com/2019/01/28/kubernetes-replica-sets/&#34;&gt;replica sets&lt;/a&gt; to make those pods more HA, but what about when we need to create a new version of our pods? We don&amp;rsquo;t want to have an outage when our pods are replaced with a new version, do we? This is where &amp;ldquo;Deployments&amp;rdquo; come into play.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Replica Sets</title>
      <link>https://theithollow.com/2019/01/28/kubernetes-replica-sets/</link>
      <pubDate>Mon, 28 Jan 2019 15:00:59 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/28/kubernetes-replica-sets/</guid>
      <description>&lt;p&gt;In a &lt;a href=&#34;https://theithollow.com/2019/01/21/kubernetes-pods/&#34;&gt;previous post&lt;/a&gt; we covered the use of pods and deployed some &amp;ldquo;naked pods&amp;rdquo; in our Kubernetes cluster. In this post we&amp;rsquo;ll expand our use of pods with Replica Sets.&lt;/p&gt;
&lt;h2 id=&#34;replica-sets---the-theory&#34;&gt;Replica Sets - The Theory&lt;/h2&gt;
&lt;p&gt;One of the biggest reasons that we don&amp;rsquo;t deploy naked pods in production is that they are not trustworthy. By this I mean that we can&amp;rsquo;t count on them to always be running. Kubernetes doesn&amp;rsquo;t ensure that a pod will continue running if it crashes. A pod could die for all kinds of reasons: the node it was running on failed, it ran out of resources, it was stopped for some reason, etc. If the pod dies, it stays dead until someone fixes it, which is not ideal, but with containers we should expect them to be short-lived anyway, so let&amp;rsquo;s plan for it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Native Backups</title>
      <link>https://theithollow.com/2019/01/22/aws-native-backups/</link>
      <pubDate>Tue, 22 Jan 2019 16:00:59 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/22/aws-native-backups/</guid>
      <description>&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2019/01/awsbackup1-1024x298.png&#34;/&gt; 
&lt;/figure&gt;

&lt;p&gt;Amazon Web Services has released yet another service designed to improve the lives of people administering an AWS environment. There is a new backup service, cleverly named AWS Backup.&lt;/p&gt;
&lt;p&gt;This new service allows you to create a backup plan for Elastic Block Store (EBS) volumes, Elastic File System (EFS), DynamoDB, Relational Database Services (RDS), and Storage Gateway.&lt;/p&gt;
&lt;p&gt;Now we can build plans that automatically back up, tier, and expire old backups based on our own criteria.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Kubernetes - Pods</title>
      <link>https://theithollow.com/2019/01/21/kubernetes-pods/</link>
      <pubDate>Mon, 21 Jan 2019 16:30:30 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/21/kubernetes-pods/</guid>
      <description>&lt;p&gt;We&amp;rsquo;ve got a Kubernetes cluster set up and we&amp;rsquo;re ready to start deploying some applications. Before we can deploy any of our containers in a Kubernetes environment, we&amp;rsquo;ll need to understand a little bit about pods.&lt;/p&gt;
&lt;h2 id=&#34;pods---the-theory&#34;&gt;Pods - The Theory&lt;/h2&gt;
&lt;p&gt;In a Docker environment, the smallest unit you&amp;rsquo;d deal with is a container. In the Kubernetes world, you&amp;rsquo;ll work with a pod, and a pod consists of one or more containers. You cannot deploy a bare container in Kubernetes without it being deployed within a pod.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploy Kubernetes Using Kubeadm - CentOS7</title>
      <link>https://theithollow.com/2019/01/14/deploy-kubernetes-using-kubeadm-centos7/</link>
      <pubDate>Mon, 14 Jan 2019 15:35:36 +0000</pubDate>
      <guid>https://theithollow.com/2019/01/14/deploy-kubernetes-using-kubeadm-centos7/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve been wanting to have a playground to mess around with Kubernetes (k8s) deployments for a while and didn&amp;rsquo;t want to spend the money on a cloud solution like &lt;a href=&#34;https://aws.amazon.com/eks/?nc2=h_m1&#34;&gt;AWS Elastic Container Service for Kubernetes&lt;/a&gt; or &lt;a href=&#34;https://cloud.google.com/kubernetes-engine/&#34;&gt;Google Kubernetes Engine&lt;/a&gt;. While these hosted solutions provide additional features, such as the ability to spin up a load balancer, they also cost money every hour they&amp;rsquo;re available, and I&amp;rsquo;m planning on leaving my cluster running. Also, from a learning perspective, there is no better way to learn the underpinnings of a solution than deploying and managing it on your own. Therefore, I set out to deploy k8s in my vSphere home lab on some CentOS 7 virtual machines using Kubeadm. I found several articles on how to do this, but I got off track a few times and thought another blog post with step-by-step instructions and screenshots would help others. Hopefully it helps you. Let&amp;rsquo;s begin.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Security Hub</title>
      <link>https://theithollow.com/2018/12/17/aws-security-hub/</link>
      <pubDate>Mon, 17 Dec 2018 15:00:59 +0000</pubDate>
      <guid>https://theithollow.com/2018/12/17/aws-security-hub/</guid>
      <description>&lt;p&gt;A primary concern for companies moving to the cloud is whether or not their workloads will remain secure. While that debate still happens, AWS has made great strides to assuage customers&amp;rsquo; concerns by adding services to ensure workloads are well protected. At re:Invent 2018 another service, named &lt;a href=&#34;https://aws.amazon.com/security-hub/&#34;&gt;AWS Security Hub&lt;/a&gt;, was added. Security Hub allows you to set up some basic security guardrails and get compliance information for multiple accounts within a single service. Amazon seems to have realized that enabling customers to easily see their security recommendations for all environments in a single place has great value to their businesses.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Setup AWS Transit Gateway</title>
      <link>https://theithollow.com/2018/12/12/setup-aws-transit-gateway/</link>
      <pubDate>Wed, 12 Dec 2018 15:00:07 +0000</pubDate>
      <guid>https://theithollow.com/2018/12/12/setup-aws-transit-gateway/</guid>
      <description>&lt;p&gt;Amazon announced a new service at re:Invent 2018 in Las Vegas called the &lt;a href=&#34;https://aws.amazon.com/transit-gateway/&#34;&gt;AWS Transit Gateway&lt;/a&gt;. The Transit Gateway allows you to connect multiple VPCs together, as well as VPN tunnels to on-premises networks, through a single gateway device. As a consultant, I often talk with customers about how they plan to connect their data center with the AWS cloud and how to interconnect all of those VPCs. In the past, a solution like Aviatrix or a Cisco CSR transit gateway was used, which leveraged EC2 instances that lived within a VPC. You&amp;rsquo;d then connect spoke VPCs together via VPN tunnels. With this new solution, there is a native AWS service that lets you do this without VPN tunnels between spoke VPCs, and you can use the AWS CLI, CloudFormation, or the console to deploy everything you need. This post takes you through an example setup of the AWS Transit Gateway in my own lab environment.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Resource Access Manager</title>
      <link>https://theithollow.com/2018/12/10/aws-resource-access-manager/</link>
      <pubDate>Mon, 10 Dec 2018 15:00:44 +0000</pubDate>
      <guid>https://theithollow.com/2018/12/10/aws-resource-access-manager/</guid>
      <description>&lt;p&gt;At AWS re:Invent this year in Las Vegas, Amazon announced a ton of services, but one that caught my eye was the AWS Resource Access Manager. This is a service that facilitates the sharing of some resources between AWS accounts so that they can be used or referenced across account boundaries. Typically, an AWS account is used as a control plane boundary (or billing boundary) between environments, but even then resources will occasionally need to communicate with each other. Now with AWS Resource Access Manager (RAM) we can share hosted DNS zones, Transit Gateways, and other objects. This list will undoubtedly grow over time. This post will show you how you can share another new service, the AWS Transit Gateway, across multiple accounts within your organization.&lt;/p&gt;</description>
    </item>
    <item>
      <title>VMware Cloud on AWS Firewalls Overview</title>
      <link>https://theithollow.com/2018/11/28/vmware-cloud-on-aws-firewalls-overview/</link>
      <pubDate>Wed, 28 Nov 2018 16:03:46 +0000</pubDate>
      <guid>https://theithollow.com/2018/11/28/vmware-cloud-on-aws-firewalls-overview/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;re getting started with VMware Cloud on AWS, you should be aware of all the points at which you can block traffic with a firewall. Or, if you look at it another way, the places where you might need to create allow rules for traffic to traverse your cloud. This post shows where those choke points live, both within your VMware Cloud on AWS SDDC and within the Amazon VPC in which your SDDC lives.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Using AWS CloudFormation Drift Detection</title>
      <link>https://theithollow.com/2018/11/14/using-aws-cloudformation-drift-detection/</link>
      <pubDate>Wed, 14 Nov 2018 15:02:55 +0000</pubDate>
      <guid>https://theithollow.com/2018/11/14/using-aws-cloudformation-drift-detection/</guid>
      <description>&lt;p&gt;Today, AWS announced the release of the long-anticipated drift detection feature for CloudFormation. Drift detection has been a common request from many of the AWS customers I speak with, who want to ensure their deployments are configured as expected. This post will take you through why this is an important feature and how you can use it.&lt;/p&gt;
&lt;h1 id=&#34;whats-the-big-deal&#34;&gt;What&amp;rsquo;s the Big Deal?&lt;/h1&gt;
&lt;p&gt;If you&amp;rsquo;re not familiar with it already, CloudFormation is a free service from AWS that lets you describe your infrastructure through a YAML or JSON file and deploy the configuration. Simply define your desired state and CloudFormation will deploy the resources and arrange them so that dependent services are (usually) deployed in the right order. If you&amp;rsquo;re familiar with Ansible, Chef, or Puppet, this concept of a desired state shouldn&amp;rsquo;t be new.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Quality Checking Infrastructure-as-Code</title>
      <link>https://theithollow.com/2018/11/05/quality-checking-infrastructure-as-code/</link>
      <pubDate>Mon, 05 Nov 2018 14:55:55 +0000</pubDate>
      <guid>https://theithollow.com/2018/11/05/quality-checking-infrastructure-as-code/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been doing application development for long, having tools in place to check the health of your code is probably not a new concept. However, if you&amp;rsquo;re an infrastructure engineer jumping into something like the cloud, this may be a foreign concept to you. Isn&amp;rsquo;t it bad enough that you&amp;rsquo;ve started learning Git, JSON, YAML, APIs, etc., on top of your existing skill sets? Well, take some lessons from the application teams and you may find that you&amp;rsquo;re improving your processes, reducing technical debt, and shortening the time it takes to provision your infrastructure-as-code resources.&lt;/p&gt;</description>
    </item>
    <item>
      <title>This is Not Fine!</title>
      <link>https://theithollow.com/2018/10/25/this-is-not-fine/</link>
      <pubDate>Thu, 25 Oct 2018 17:55:07 +0000</pubDate>
      <guid>https://theithollow.com/2018/10/25/this-is-not-fine/</guid>
      <description>&lt;p&gt;I recently attended the DevOps Enterprise Summit in Las Vegas so that I could keep up to date on the latest happenings around integrating DevOps for companies. This conference was nothing short of amazing, but what I wasn&amp;rsquo;t anticipating was a theme around IT burnout. The &lt;a href=&#34;http://itrevolution.com&#34;&gt;IT Revolution&lt;/a&gt; team, who puts on the conference, opened one of the keynotes with a talk on burnout from &lt;a href=&#34;https://psychology.berkeley.edu/people/christina-maslach&#34;&gt;Dr. Christina Maslach&lt;/a&gt;, Professor of Psychology Emerita at the University of California, Berkeley. In addition to this powerful session, another panel on Wednesday went further into the discussion, including the ultimate consequence of burnout: suicide.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Restore or Resize an AWS Transit Router</title>
      <link>https://theithollow.com/2018/10/22/restore-or-resize-an-aws-transit-router/</link>
      <pubDate>Mon, 22 Oct 2018 14:03:21 +0000</pubDate>
      <guid>https://theithollow.com/2018/10/22/restore-or-resize-an-aws-transit-router/</guid>
      <description>&lt;p&gt;A transit VPC is a pretty common networking pattern in an AWS environment. &lt;a href=&#34;https://theithollow.com/2018/07/16/should-i-use-a-transit-vpc-in-aws/&#34;&gt;Transit VPCs&lt;/a&gt; can limit the number of peering connections required to connect all your VPCs by switching from a mesh topology of peers to a hub-and-spoke model with transit. While transit VPCs offer some nice features, they also require a bit more management overhead since you need to manage your own routers. Cisco makes the deployment of transit routers very easy, but sometimes you need to make changes to the routers after they&amp;rsquo;re deployed, such as resizing them. Also, sometimes bad things happen and those routers can be destroyed by accident. This post shows how you can resize your Cisco CSRs and/or restore an old configuration from snapshot.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Upgrade to vRA 7.5</title>
      <link>https://theithollow.com/2018/10/08/upgrade-to-vra-7-5/</link>
      <pubDate>Mon, 08 Oct 2018 14:03:53 +0000</pubDate>
      <guid>https://theithollow.com/2018/10/08/upgrade-to-vra-7-5/</guid>
      <description>&lt;p&gt;Upgrading your vRealize Automation instance has sometimes been a painful exercise. But that was in the early days, after VMware purchased the product from DynamicOps. It&amp;rsquo;s taken a while, but the upgrade process has improved with each and every version, in my opinion, and 7.5 is no exception. If you&amp;rsquo;re on a previous version, here is a quick rundown of the upgrade process from 7.4 to 7.5.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As always, please read the official upgrade documentation. It includes prerequisites and steps that should always be followed. https://docs.vmware.com/en/vRealize-Automation/7.5/vrealize-automation-7172732to75upgrading.pdf&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Session Manager</title>
      <link>https://theithollow.com/2018/10/01/aws-session-manager/</link>
      <pubDate>Mon, 01 Oct 2018 14:05:01 +0000</pubDate>
      <guid>https://theithollow.com/2018/10/01/aws-session-manager/</guid>
      <description>&lt;p&gt;Amazon has released yet another &lt;a href=&#34;https://theithollow.com/2017/10/02/aws-ec2-simple-systems-manager-reference/&#34;&gt;Simple Systems Manager&lt;/a&gt; service to improve the management of EC2 instances. This time, it&amp;rsquo;s AWS Session Manager. Session Manager is a nifty little service that lets you assign permissions to users to access an instance&amp;rsquo;s shell. Now, you might be thinking, &amp;ldquo;Why would I need this? I can already add SSH keys to my instances at boot time to access my instances.&amp;rdquo; You&amp;rsquo;d be right of course, but think of how you might use Session Manager. Instead of having to deal with adding SSH keys and managing access/distribution of the private keys, we can manage access through AWS Identity and Access Management permissions.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Close an AWS Account Belonging to an Organization</title>
      <link>https://theithollow.com/2018/09/17/close-an-aws-account-belonging-to-an-organization/</link>
      <pubDate>Mon, 17 Sep 2018 14:05:24 +0000</pubDate>
      <guid>https://theithollow.com/2018/09/17/close-an-aws-account-belonging-to-an-organization/</guid>
      <description>&lt;p&gt;Opening an AWS account is very easy to do. AWS makes it possible to create an account with an email address and a credit card. Even better, if you&amp;rsquo;re setting up a multi-account structure, you can use the API through organizations and you really only need an email address as an input. But closing an account is slightly more difficult. While closing accounts doesn&amp;rsquo;t happen quite as often as opening new ones, it does happen. Especially if you&amp;rsquo;re trying to fail fast and have made some organizational mistakes. When you want to clean those accounts up, you&amp;rsquo;ll need to jump through a couple of small hoops to do so. This post hopes to outline how to remove an account from an AWS Organization and then close it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Create AWS Accounts with CloudFormation</title>
      <link>https://theithollow.com/2018/09/10/create-aws-accounts-with-cloudformation/</link>
      <pubDate>Mon, 10 Sep 2018 14:05:20 +0000</pubDate>
      <guid>https://theithollow.com/2018/09/10/create-aws-accounts-with-cloudformation/</guid>
      <description>&lt;p&gt;In a &lt;a href=&#34;https://theithollow.com/2018/09/04/aws-custom-resources/&#34;&gt;previous post&lt;/a&gt;, we covered how to use an AWS Custom Resource in a CloudFormation template to deploy a very basic Lambda function. To expand upon this ability, let&amp;rsquo;s use this knowledge to deploy something more useful than a basic Lambda function. How about we use it to create an AWS account? To my knowledge, the only way to create a new AWS account is through the CLI or manually through the console. How about we use a custom resource to deploy a new account for us in our AWS Organization? Once this ability is available in a CloudFormation template, we could even publish it in the AWS Service Catalog and give our users an account vending machine capability.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Custom Resources</title>
      <link>https://theithollow.com/2018/09/04/aws-custom-resources/</link>
      <pubDate>Tue, 04 Sep 2018 14:00:04 +0000</pubDate>
      <guid>https://theithollow.com/2018/09/04/aws-custom-resources/</guid>
      <description>&lt;p&gt;We love to use AWS CloudFormation to deploy our environments. It&amp;rsquo;s like configuration management for our AWS infrastructure, in the sense that we write a desired state as code and apply it to our environment. But sometimes there are tasks that we want to complete that aren&amp;rsquo;t part of CloudFormation. For instance, what if we wanted to use CloudFormation to deploy a new account, which needs to be done through the CLI, or if we need to return some information to our CloudFormation template before deploying it? Luckily for us, we can use a Custom Resource to achieve our goals. This post shows how you can use CloudFormation with a Custom Resource to execute a very basic Lambda function as part of a deployment.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Add AWS Web Application Firewall to Protect your Apps</title>
      <link>https://theithollow.com/2018/08/20/add-aws-web-application-firewall-to-protect-your-apps/</link>
      <pubDate>Mon, 20 Aug 2018 14:02:31 +0000</pubDate>
      <guid>https://theithollow.com/2018/08/20/add-aws-web-application-firewall-to-protect-your-apps/</guid>
      <description>&lt;p&gt;Some things change when you move to the cloud, but other things stay very much the same, like protecting your resources from outside threats. There are always bad actors out there trying to steal data or cause mayhem like in those Allstate commercials. Our first defense should be well-written applications, requiring authentication, etc., and with AWS we make sure we&amp;rsquo;re setting up security groups to limit access to those resources. But how about an extra level of protection from a Web Application Firewall? AWS WAF allows us to leverage some extra protections at the edge to protect us from the bad guys/girls.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Using AWS CodeDeploy to Push New Versions of your Application</title>
      <link>https://theithollow.com/2018/08/06/using-aws-codedeploy-to-push-new-versions-of-your-application/</link>
      <pubDate>Mon, 06 Aug 2018 14:04:33 +0000</pubDate>
      <guid>https://theithollow.com/2018/08/06/using-aws-codedeploy-to-push-new-versions-of-your-application/</guid>
      <description>&lt;p&gt;Getting new code onto our servers can be done in a myriad of ways these days. Configuration management tools can pull down new code, pipelines can run scripts across our fleets, or we could run around with a USB stick for the rest of our lives. With container based apps, serverless functions, and immutable infrastructure, we&amp;rsquo;ve changed this conversation quite a bit as well. But what about a plain old server that needs a new version of code deployed on it? AWS CodeDeploy can help us to manage our software versions and rollbacks so that we have a consistent method to update our apps across multiple instances. This post will demonstrate how to get started with AWS CodeDeploy so that you can manage the deployment of new versions of your apps.&lt;/p&gt;</description>
    </item>
    <item>
      <title>How to Setup Amazon EKS with Mac Client</title>
      <link>https://theithollow.com/2018/07/31/how-to-setup-amazon-eks-with-mac-client/</link>
      <pubDate>Tue, 31 Jul 2018 14:06:02 +0000</pubDate>
      <guid>https://theithollow.com/2018/07/31/how-to-setup-amazon-eks-with-mac-client/</guid>
      <description>&lt;p&gt;We love Kubernetes. It&amp;rsquo;s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.&lt;/p&gt;</description>
    </item>
    <item>
      <title>How to Setup Amazon EKS with Windows Client</title>
      <link>https://theithollow.com/2018/07/30/how-to-setup-amazon-eks-with-windows-client/</link>
      <pubDate>Mon, 30 Jul 2018 16:05:09 +0000</pubDate>
      <guid>https://theithollow.com/2018/07/30/how-to-setup-amazon-eks-with-windows-client/</guid>
      <description>&lt;p&gt;We love Kubernetes. It&amp;rsquo;s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Easy Snapshot Automation with Amazon Data Lifecycle Manager</title>
      <link>https://theithollow.com/2018/07/23/easy-snapshot-automation-with-amazon-data-lifecycle-manager/</link>
      <pubDate>Mon, 23 Jul 2018 14:05:53 +0000</pubDate>
      <guid>https://theithollow.com/2018/07/23/easy-snapshot-automation-with-amazon-data-lifecycle-manager/</guid>
      <description>&lt;p&gt;Amazon has announced a new service that will help customers manage their EBS volume snapshots in a very simple manner. The Data Lifecycle Manager service lets you setup a schedule to snapshot any of your EBS volumes during a specified time window.&lt;/p&gt;
&lt;p&gt;In the past, AWS customers might need to come up with their own solution for snapshots or backups. Some apps moving to the cloud might not even need backups based on their deployment method and architectures. For everything else, we assume we&amp;rsquo;ll need to at least snapshot the EBS volumes that the EC2 instances are running on. Prior to the Data Lifecycle Manager, this could be accomplished through some fairly simple Lambda functions to snapshot volumes on a schedule. Now with the new service, there is a solution right in the EC2 console.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Should I use a Transit VPC in AWS?</title>
      <link>https://theithollow.com/2018/07/16/should-i-use-a-transit-vpc-in-aws/</link>
      <pubDate>Mon, 16 Jul 2018 14:05:46 +0000</pubDate>
      <guid>https://theithollow.com/2018/07/16/should-i-use-a-transit-vpc-in-aws/</guid>
      <description>&lt;p&gt;A common question that comes up during AWS designs is, &amp;ldquo;Should I use a transit VPC?&amp;rdquo; The answer, like all good IT riddles is, &amp;ldquo;it depends.&amp;rdquo; There are a series of questions that you must ask yourself before deciding whether to use a Transit VPC or not. In this post, I&amp;rsquo;ll try to help formulate those questions so you can answer this question yourself.&lt;/p&gt;
&lt;h1 id=&#34;the-basics&#34;&gt;The Basics&lt;/h1&gt;
&lt;p&gt;Before we can ask those tough questions, we first should answer the question, &amp;ldquo;What is a Transit VPC?&amp;rdquo; Well, a transit VPC acts as an intermediary for routing between two places. Just like a transit network bridges traffic between two networks, a transit VPC ferries traffic between two VPCs or perhaps your data center.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Who is Heptio?</title>
      <link>https://theithollow.com/2018/07/09/who-is-heptio/</link>
      <pubDate>Mon, 09 Jul 2018 14:00:53 +0000</pubDate>
      <guid>https://theithollow.com/2018/07/09/who-is-heptio/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://assets.theithollow.com/wp-content/uploads/2018/07/heptio-logo.jpeg&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2018/07/heptio-logo-300x171.jpeg&#34;&gt;&lt;/a&gt; A dozen new technologies are introduced every day that never amount to anything, while others go on to create completely new methodologies for how we interact with IT. Just like virtualization changed the way data centers operate, containers are changing how we interact with our applications, and Kubernetes (K8s for short) seems to be a front runner in this space. However, with any new technology hitting the market, there is a bit of a lag before it takes off. People have to understand why it&amp;rsquo;s needed, who&amp;rsquo;s got the best solution, and how you can make it work with your own environment. Heptio is a startup focused on helping enterprises embrace Kubernetes through its open source tools and professional services. I&amp;rsquo;ve been hearing great things about Heptio, but when my good friend &lt;a href=&#34;https://twitter.com/timmycarr&#34;&gt;Tim Carr&lt;/a&gt; decided to go work there, I figured I&amp;rsquo;d better look into who they are and find out what he sees in their little startup.&lt;/p&gt;</description>
    </item>
    <item>
      <title>The Dark Side of Stress</title>
      <link>https://theithollow.com/2018/06/18/the-dark-side-of-stress/</link>
      <pubDate>Mon, 18 Jun 2018 14:06:40 +0000</pubDate>
      <guid>https://theithollow.com/2018/06/18/the-dark-side-of-stress/</guid>
      <description>&lt;p&gt;I took last week off from work to spend some time with my family and just relax. I&amp;rsquo;d never been to Disney World and have a six-year-old who is seriously into Star Wars, so this sounded like a great way to take a relaxing week off. During this vacation I found that it took several days before I even started to unwind. I ended the work week on a Friday and still felt the work stress through the weekend and into Monday. Maybe it&amp;rsquo;s normal to still feel stress through the weekend, but I had expected an immediate release of tension when I finished work on Friday and my vacation began. Instead, all weekend I kept noticing that I couldn&amp;rsquo;t forget about work. In fact, I felt pretty sick one day and believe it was stress related. After a few days I started focusing on the day&amp;rsquo;s activities and paying less attention to work, but it made me think about those two-day weekends and how they certainly weren&amp;rsquo;t recharging me for the next week of stress.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Using Hashicorp Consul to Store Terraform State</title>
      <link>https://theithollow.com/2018/05/21/using-hashicorp-consul-to-store-terraform-state/</link>
      <pubDate>Mon, 21 May 2018 14:05:16 +0000</pubDate>
      <guid>https://theithollow.com/2018/05/21/using-hashicorp-consul-to-store-terraform-state/</guid>
      <description>&lt;p&gt;Hashicorp&amp;rsquo;s Terraform product is very popular for describing your infrastructure as code. One thing that you need to consider when using Terraform is where you&amp;rsquo;ll store your state files and how they&amp;rsquo;ll be locked so that two team members or build servers aren&amp;rsquo;t stepping on each other. State can be stored in &lt;a href=&#34;https://www.terraform.io/&#34;&gt;Terraform Enterprise (TFE)&lt;/a&gt; or with some cloud services such as S3. But if you want to store your state within your data center, perhaps you should check out &lt;a href=&#34;https://www.hashicorp.com/&#34;&gt;Hashicorp&amp;rsquo;s&lt;/a&gt; Consul product.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Visualizing the Chicago Cubs via Amazon QuickSight</title>
      <link>https://theithollow.com/2018/05/14/visualizing-the-chicago-cubs-via-amazon-quicksight/</link>
      <pubDate>Mon, 14 May 2018 15:01:07 +0000</pubDate>
      <guid>https://theithollow.com/2018/05/14/visualizing-the-chicago-cubs-via-amazon-quicksight/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;re interested in visualizing your data in easy-to-read graphs, Amazon QuickSight may be your solution. Obviously, Amazon has great capabilities with big data, but sometimes even if you have &amp;ldquo;little&amp;rdquo; data you just need a dashboard or a way of displaying that content. This post shows an example of how you can display data to tell a compelling story. For the purposes of this blog post, we&amp;rsquo;ll try to determine why the Chicago Cubs are the Major League&amp;rsquo;s favorite baseball team.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS IAM Indecision</title>
      <link>https://theithollow.com/2018/05/07/aws-iam-indecision/</link>
      <pubDate>Mon, 07 May 2018 14:55:55 +0000</pubDate>
      <guid>https://theithollow.com/2018/05/07/aws-iam-indecision/</guid>
      <description>&lt;p&gt;Identity and Access Management (IAM) can be a confusing topic for people who are new to Amazon Web Services. There are IAM Users that could be used for authentication, or solutions considered part of AWS Directory Services such as Microsoft AD, Simple AD, or AD Connector. If none of these sound appealing, there is always the option to use Federation with a SAML 2.0 solution like OKTA, PING, or Active Directory Federation Services (ADFS). If all of these options have given you a case of decision fatigue, then hopefully this post and the associated links will help you decide how your environment should be set up.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Manage Multiple AWS Accounts with Role Switching</title>
      <link>https://theithollow.com/2018/04/30/manage-multiple-aws-accounts-with-role-switching/</link>
      <pubDate>Mon, 30 Apr 2018 14:05:52 +0000</pubDate>
      <guid>https://theithollow.com/2018/04/30/manage-multiple-aws-accounts-with-role-switching/</guid>
      <description>&lt;p&gt;A pretty common question that comes up is how to manage multiple accounts within AWS from a user perspective. Multi-account setups are common to provide control plane separation between Production, Development, Billing, and Shared Services accounts, but do you need to set up Federation with each of these accounts or create an IAM user in each one? That makes those accounts cumbersome to manage, and the more users we have, the greater the chance one of them could get hacked.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Directory Service - AD Connector</title>
      <link>https://theithollow.com/2018/04/23/aws-directory-service-ad-connector/</link>
      <pubDate>Mon, 23 Apr 2018 14:05:05 +0000</pubDate>
      <guid>https://theithollow.com/2018/04/23/aws-directory-service-ad-connector/</guid>
      <description>&lt;p&gt;Just because you&amp;rsquo;ve started moving workloads into the cloud doesn&amp;rsquo;t mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services. But if you&amp;rsquo;re moving to AWS, there are also some great services you can take advantage of to provide similar functionality. This post focuses on AD Connector, which makes a connection to your on-premises or EC2-installed domain controllers. AD Connector doesn&amp;rsquo;t run your Active Directory but rather uses your existing Active Directory instances from within AWS. As such, in order to use AD Connector you need a VPN connection or Direct Connect to provide connectivity back to your data center. Also, you&amp;rsquo;ll need credentials to connect to the domain. Domain Admin credentials will work, but as usual you should use as few privileges as possible, so delegate access to a user with the following permissions:&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Directory Service - Simple AD</title>
      <link>https://theithollow.com/2018/04/16/aws-directory-service-simple-ad/</link>
      <pubDate>Mon, 16 Apr 2018 14:12:58 +0000</pubDate>
      <guid>https://theithollow.com/2018/04/16/aws-directory-service-simple-ad/</guid>
      <description>&lt;p&gt;Just because you&amp;rsquo;ve started moving workloads into the cloud doesn&amp;rsquo;t mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services. But if you&amp;rsquo;re moving to AWS, there are also some great services you can take advantage of to provide similar functionality. This post focuses on Simple AD, which is based on Samba4 and handles a subset of the features that the &lt;a href=&#34;https://theithollow.com/2018/04/09/aws-directory-service-microsoft-ad/&#34;&gt;Microsoft AD&lt;/a&gt; type of Directory Service provides. This service still allows you to use Kerberos authentication, manage users and computers, and provide DNS services. One of the major differences between this service and Microsoft AD is that you can&amp;rsquo;t create a trust relationship with your existing domain, so if you need that functionality, look at Microsoft AD instead. Simple AD gives you a great way to quickly stand up new domains and cut down on the things you need to manage, such as OS patches.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Directory Service - Microsoft AD</title>
      <link>https://theithollow.com/2018/04/09/aws-directory-service-microsoft-ad/</link>
      <pubDate>Mon, 09 Apr 2018 14:55:20 +0000</pubDate>
      <guid>https://theithollow.com/2018/04/09/aws-directory-service-microsoft-ad/</guid>
      <description>&lt;p&gt;Just because you&amp;rsquo;ve started moving workloads into the cloud doesn&amp;rsquo;t mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services. But if you&amp;rsquo;re moving to AWS, there are also some great services you can take advantage of to provide similar functionality. This post focuses on Microsoft AD, which is a Windows Server 2012 R2 based domain that provides a pair of domain controllers across Availability Zones and also handles DNS. This service is the closest to a full-blown Active Directory that you&amp;rsquo;d host on premises. You can even create a trust between the Microsoft AD deployed in AWS and your on-prem domain. You cannot extend your on-premises domain into Microsoft AD at the time of this writing, though. If you wish to extend your existing domain, you should consider building your own DCs on EC2 instances, which gives you full control over your options.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Protect Your AWS Accounts with GuardDuty</title>
      <link>https://theithollow.com/2018/04/02/protect-your-aws-accounts-with-guardduty/</link>
      <pubDate>Mon, 02 Apr 2018 14:05:29 +0000</pubDate>
      <guid>https://theithollow.com/2018/04/02/protect-your-aws-accounts-with-guardduty/</guid>
      <description>&lt;p&gt;Locking down an AWS environment isn&amp;rsquo;t really that difficult if you know what threats you&amp;rsquo;re protecting against. You have services such as the Web Application Firewall, Security Groups, Network Access Control Lists, Bucket Policies, and the list goes on. But many times you encounter threats from malicious attackers just trying to probe which vulnerabilities might exist in your cloud. AWS has built a service called Amazon GuardDuty, based on AWS machine learning tools and threat intelligence feeds, to help monitor and protect your environment. GuardDuty currently reads VPC Flow Logs (used for network traffic analysis) and CloudTrail Logs (used for control plane access analysis) along with DNS log data to protect an AWS environment. GuardDuty uses threat intelligence feeds to alert you when your workloads may be communicating with known malicious IP addresses, and its machine learning on suspicious patterns can alert you when privilege escalation occurs.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Fill Your Skills Tank</title>
      <link>https://theithollow.com/2018/03/26/fill-skills-tank/</link>
      <pubDate>Mon, 26 Mar 2018 14:10:55 +0000</pubDate>
      <guid>https://theithollow.com/2018/03/26/fill-skills-tank/</guid>
      <description>&lt;p&gt;Information Technology is a very difficult field to keep up with. Not only does computing power increase year after year, making the number of things we can do with computers increase, but drastic transformations continually reshape this industry. Complete paradigm shifts are a major part of our recent past: from mainframes, to client/server, to virtualization, to cloud computing. In addition to these changes, there are also silos of technology we might want to focus on, such as database design, programming, infrastructure, or cloud computing. Inside each of these categories there are different platforms to learn; for example, if you are a programmer, do you know C++, Java, Python, or COBOL?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Woke to IT Age Discrimination</title>
      <link>https://theithollow.com/2018/03/12/woke-age-discrimination/</link>
      <pubDate>Mon, 12 Mar 2018 14:06:04 +0000</pubDate>
      <guid>https://theithollow.com/2018/03/12/woke-age-discrimination/</guid>
      <description>&lt;p&gt;Age discrimination can be an issue in any industry, but it is something members of the information technology (IT) industry can specifically identify with. My goal for this post is just to shine some light on the topic and discuss whether there is an injustice happening in IT when you reach a certain age, or whether there is some less heinous reason why we see so many younger people in tech. I want to make it crystal clear that this is just an off-the-cuff discussion and not based on any discrimination that I&amp;rsquo;ve witnessed from my employer or anywhere else. Ageism has been a bit of an elephant in the room: I don&amp;rsquo;t see many people discussing it publicly, but it&amp;rsquo;s in the back of people&amp;rsquo;s minds. It does seem that there are many more young people in the technology industry than older people, but this may just be perception and not reality.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Migration to the Cloud with CloudEndure</title>
      <link>https://theithollow.com/2018/03/05/migration-cloud-cloudendure/</link>
      <pubDate>Mon, 05 Mar 2018 15:07:45 +0000</pubDate>
      <guid>https://theithollow.com/2018/03/05/migration-cloud-cloudendure/</guid>
      <description>&lt;p&gt;I&amp;rsquo;m a big advocate for building your cloud apps to take advantage of cloud features. This usually means re-architecting them so that things like AWS Availability Zones can be used seamlessly. But I also know that to get the benefits of the cloud quickly, this can&amp;rsquo;t always happen. If you&amp;rsquo;re trying to reduce your data center footprint rapidly due to a building lease or hardware refresh cycle quickly approaching, then you probably need a migration tool to accomplish this task.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Reserved Instance Considerations</title>
      <link>https://theithollow.com/2018/02/19/aws-reserved-instance-considerations/</link>
      <pubDate>Mon, 19 Feb 2018 15:10:10 +0000</pubDate>
      <guid>https://theithollow.com/2018/02/19/aws-reserved-instance-considerations/</guid>
      <description>&lt;p&gt;Reserved Instances are often used to reduce the cost of Amazon EC2 on-demand pricing. If you&amp;rsquo;re not familiar with Reserved Instances, then you&amp;rsquo;re missing out. Reserved Instances, or RIs, are a billing construct used in conjunction with Amazon EC2 instances (virtual machines). The default on the AWS platform is on-demand pricing, in which you get billed by the hour or second with no commitments. Basically, when you decide to terminate an instance you stop paying for it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Setup MFA for AWS Root Accounts</title>
      <link>https://theithollow.com/2018/02/12/setup-mfa-aws-root-accounts/</link>
      <pubDate>Mon, 12 Feb 2018 15:07:56 +0000</pubDate>
      <guid>https://theithollow.com/2018/02/12/setup-mfa-aws-root-accounts/</guid>
      <description>&lt;p&gt;Multi-Factor Authentication, or MFA, is a common security precaution used to prevent someone from gaining access to an account even if an attacker has your username and password. With MFA you must also have a device that generates a time-based one-time password (TOTP) in addition to the standard username/password combination. The extra time it might take to log in is well worth the advantages that MFA provides. Having your AWS account hijacked could be a real headache.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Rubrik Acquires Datos IO</title>
      <link>https://theithollow.com/2018/02/06/rubrik-acquires-datos-io/</link>
      <pubDate>Tue, 06 Feb 2018 14:02:17 +0000</pubDate>
      <guid>https://theithollow.com/2018/02/06/rubrik-acquires-datos-io/</guid>
      <description>&lt;p&gt;There is news in the backup world today. Rubrik has acquired startup company Datos IO.&lt;/p&gt;
&lt;h1 id=&#34;who-is-datos-io&#34;&gt;Who is Datos IO?&lt;/h1&gt;
&lt;p&gt;&lt;a href=&#34;https://assets.theithollow.com/wp-content/uploads/2018/02/datosio1.png&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2018/02/datosio1-300x73.png&#34;&gt;&lt;/a&gt; Datos IO was founded in 2014 and focuses on copy data management for distributed, scale-out databases purpose-built for the cloud. The reason Datos IO is different from the common backup solutions we&amp;rsquo;re accustomed to seeing (Commvault, DataDomain, etc.) is that they are building a solution from the ground up that tackles the problems of geo-dispersed, scale-out databases, which are becoming commonplace in the cloud world. Think about databases that span multiple continents, and even multiple clouds.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Add a New AWS Account to an Existing Organization from the CLI</title>
      <link>https://theithollow.com/2018/02/05/add-new-aws-account-existing-organization-cli/</link>
      <pubDate>Mon, 05 Feb 2018 15:12:17 +0000</pubDate>
      <guid>https://theithollow.com/2018/02/05/add-new-aws-account-existing-organization-cli/</guid>
      <description>&lt;p&gt;AWS Organizations is a way for you to organize your accounts and have a hierarchy, not only so bills roll up to a single paying account, but also to set up a way to add new accounts programmatically.&lt;/p&gt;
&lt;p&gt;For the purposes of this discussion, take a look at my AWS lab account structure.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://assets.theithollow.com/wp-content/uploads/2018/02/AWS-AcctSetup0.png&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2018/02/AWS-AcctSetup0.png&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;From the AWS Organizations Console we can see the account structure as well.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://assets.theithollow.com/wp-content/uploads/2018/02/AWS-AcctSetup1-mask.png&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2018/02/AWS-AcctSetup1-mask.png&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I need to create a new account in a new OU under my master billing account. This can be accomplished through the console, but it can also be done through the AWS CLI, which is what I&amp;rsquo;ll do here. NOTE: This can be done through the API as well, which can be really useful for automating the building of new accounts.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Using Change Sets with Nested CloudFormation Stacks</title>
      <link>https://theithollow.com/2018/01/29/using-change-sets-nested-cloudformation-stacks/</link>
      <pubDate>Mon, 29 Jan 2018 15:10:20 +0000</pubDate>
      <guid>https://theithollow.com/2018/01/29/using-change-sets-nested-cloudformation-stacks/</guid>
      <description>&lt;p&gt;In a &lt;a href=&#34;https://theithollow.com/2018/01/22/introduction-aws-cloudformation-change-sets/&#34;&gt;previous post&lt;/a&gt;, we looked at how to use change sets with CloudFormation. This post covers how to use change sets with a nested CloudFormation Stack.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re not familiar with nested CloudFormation stacks, it is just what it sounds like. A root stack, or top-level stack, calls subordinate or child stacks as part of the deployment. These nested stacks could be deployed as standalone stacks, or they can be tied together by using the AWS::CloudFormation::Stack resource type. Nested stacks can be used to deploy entire environments from the individual stacks below them. In fact, a root stack may not deploy any resources at all other than what comes from the nested stacks. An example of a common stacking method might be to have a top-level stack that deploys a VPC, while a nested stack is responsible for deploying subnets within that VPC. You could keep chaining this together to deploy EC2 instances, S3 buckets, or whatever you&amp;rsquo;d like.&lt;/p&gt;</description>
    </item>
    <item>
      <title>An Introduction to AWS CloudFormation Change Sets</title>
      <link>https://theithollow.com/2018/01/22/introduction-aws-cloudformation-change-sets/</link>
      <pubDate>Mon, 22 Jan 2018 15:05:12 +0000</pubDate>
      <guid>https://theithollow.com/2018/01/22/introduction-aws-cloudformation-change-sets/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve done any work in Amazon Web Services you probably know the importance of CloudFormation (CFn) as part of your Infrastructure as Code (IaC) strategy. CloudFormation provides a JSON or YAML formatted document which describes the AWS infrastructure that you want to deploy. If you need to re-deploy the same infrastructure across production and development environments, this is pretty easy since the configuration is in a template stored in your source control.&lt;/p&gt;</description>
    </item>
    <item>
      <title>In the Cloud World, It&#39;s Cheaper to Upgrade</title>
      <link>https://theithollow.com/2018/01/16/cloud-world-cheaper-upgrade/</link>
      <pubDate>Tue, 16 Jan 2018 15:10:26 +0000</pubDate>
      <guid>https://theithollow.com/2018/01/16/cloud-world-cheaper-upgrade/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been in technology for a while, you&amp;rsquo;ve probably had to go through a hardware refresh cycle at some point. These cycles usually meant taking existing hardware, doing some capacity planning exercises, and setting out to buy new hardware that is supported by the vendors. This process was usually lengthy and made CIOs break into a cold sweat just thinking about paying for more hardware that&amp;rsquo;s probably just meant to keep the lights on. When I first learned of a hardware refresh cycle, my first thought was &amp;ldquo;Boy, this sounds expensive!&amp;rdquo;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Commit to Infrastructure As Code</title>
      <link>https://theithollow.com/2018/01/08/commit-infrastructure-code/</link>
      <pubDate>Mon, 08 Jan 2018 15:10:30 +0000</pubDate>
      <guid>https://theithollow.com/2018/01/08/commit-infrastructure-code/</guid>
      <description>&lt;p&gt;Over recent years, Infrastructure as Code (IaC) has become sort of a utopian goal for many organizations looking to modernize their infrastructure. The benefits of IaC have been covered many times, so I won&amp;rsquo;t go into too much detail, but the highlights include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reproducibility of an environment&lt;/li&gt;
&lt;li&gt;Reduction in deployment time&lt;/li&gt;
&lt;li&gt;Linking infrastructure deployments with application deployments&lt;/li&gt;
&lt;li&gt;Source control for infrastructure items&lt;/li&gt;
&lt;li&gt;Reduction of misconfiguration&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The reasoning behind storing all of your infrastructure as code is valid and a worthy goal. The agility, stability, and deployment speeds achieved through IaC can prove to have substantial benefits to the business as a whole.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Use Amazon CloudWatch Logs Metric Filters to Send Alerts</title>
      <link>https://theithollow.com/2017/12/11/use-amazon-cloudwatch-logs-metric-filters-send-alerts/</link>
      <pubDate>Mon, 11 Dec 2017 16:14:47 +0000</pubDate>
      <guid>https://theithollow.com/2017/12/11/use-amazon-cloudwatch-logs-metric-filters-send-alerts/</guid>
      <description>&lt;p&gt;With all of the services that Amazon has to offer, it can sometimes be difficult to manage your cloud environment. Face it, you need to manage multiple regions, users, storage buckets, accounts, instances, and the list just keeps going on. The fact that the environment can be so vast might make it difficult to notice if something nefarious is going on in your cloud. Think of it this way: if a new EC2 instance was deployed in one of your most used regions, you might see it and wonder what it was, but if that instance (or 50 instances) was deployed in a region that you never log in to, would you notice?&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS DeepLens - The Nuclear Weapon of Privacy</title>
      <link>https://theithollow.com/2017/11/29/aws-deeplens-nuclear-weapon-privacy/</link>
      <pubDate>Wed, 29 Nov 2017 20:07:21 +0000</pubDate>
      <guid>https://theithollow.com/2017/11/29/aws-deeplens-nuclear-weapon-privacy/</guid>
      <description>&lt;p&gt;Today at AWS re:Invent, Amazon had several new product announcements, which is not uncommon for the company, but one in particular raised several eyebrows. Amazon has been working very hard to make machine learning much easier for people to use. Typically, understanding machine learning has taken great expertise, and a relatively small number of people even attempted to learn these concepts because of the complexity. That is all changing thanks to some of Amazon&amp;rsquo;s more recently announced services such as &lt;a href=&#34;https://aws.amazon.com/sagemaker/&#34;&gt;Amazon SageMaker&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Use AWS Config Managed Rules to Protect Your Accounts</title>
      <link>https://theithollow.com/2017/11/27/use-aws-config-managed-rules-protect-accounts/</link>
      <pubDate>Mon, 27 Nov 2017 15:10:54 +0000</pubDate>
      <guid>https://theithollow.com/2017/11/27/use-aws-config-managed-rules-protect-accounts/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;re an Amazon Web Services customer and you&amp;rsquo;re not using the built-in AWS Config rules, you should be. AWS Config is a service that shows you the configuration changes that have happened on your AWS accounts, whether that&amp;rsquo;s changes to your user accounts, changes to networks, modifications to S3 buckets, or plenty of other configurations. AWS Config will keep this audit log of your changes in a specified S3 bucket, which could be used for all sorts of other solutions such as updating your ServiceNow configuration management database. See &lt;a href=&#34;http://www.servicenow.com/solutions/technology-solutions/lifecycle-management/cloud-lifecycle.html&#34;&gt;this post from ServiceNow&lt;/a&gt; for some details of the solution.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Dedicated Hosts</title>
      <link>https://theithollow.com/2017/11/13/aws-dedicated-hosts/</link>
      <pubDate>Mon, 13 Nov 2017 15:15:46 +0000</pubDate>
      <guid>https://theithollow.com/2017/11/13/aws-dedicated-hosts/</guid>
      <description>&lt;p&gt;Sometimes it&amp;rsquo;s just not desirable to have your Amazon EC2 instances deployed all willy-nilly across the AWS infrastructure. Sure, it&amp;rsquo;s nice not having to manage the underlying infrastructure, but in some cases you actually need to be able to manage the hosts themselves. One example is when you have licensing that is &amp;ldquo;old-fashioned&amp;rdquo; and uses physical core counts. With the default tenancy model, host core counts just don&amp;rsquo;t make sense, so what can we do?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Manage vSphere Virtual Machines through AWS SSM</title>
      <link>https://theithollow.com/2017/11/06/manage-vsphere-virtual-machines-aws-ssm/</link>
      <pubDate>Mon, 06 Nov 2017 15:15:18 +0000</pubDate>
      <guid>https://theithollow.com/2017/11/06/manage-vsphere-virtual-machines-aws-ssm/</guid>
      <description>&lt;p&gt;Amazon Web Services has some great tools to help you operate your EC2 instances with its Simple Systems Manager services. These services include ensuring &lt;a href=&#34;https://theithollow.com/2017/07/24/patch-compliance-ec2-systems-manager/&#34;&gt;patches are deployed&lt;/a&gt; within maintenance windows specified by you, &lt;a href=&#34;https://theithollow.com/2017/09/26/aws-ec2-systems-manager-state-manager/&#34;&gt;automation routines&lt;/a&gt; that are used to ensure state, and &lt;a href=&#34;https://theithollow.com/2017/07/17/run-commands-ec2-systems-manager/&#34;&gt;run commands&lt;/a&gt; on a fleet of servers through the AWS console. These tools are great, but wouldn&amp;rsquo;t it be even better if I could use them to manage my VMware virtual machines too? Well, you&amp;rsquo;re in luck, because EC2 SSM can do just that, and better yet, the service itself is free! Now, if you&amp;rsquo;ve followed along with the &amp;quot;&lt;a href=&#34;https://theithollow.com/2017/10/02/aws-ec2-simple-systems-manager-reference/&#34;&gt;AWS EC2 Simple Systems Manager Reference&lt;/a&gt;&amp;quot; guide, you&amp;rsquo;ve probably already seen the goodies we&amp;rsquo;ve got available, so this post shows you how you can use these same tools on your vSphere, Hyper-V, or other on-premises platforms.&lt;/p&gt;</description>
    </item>
    <item>
      <title>VMware Discovery</title>
      <link>https://theithollow.com/2017/10/30/vmware-discovery/</link>
      <pubDate>Mon, 30 Oct 2017 14:20:38 +0000</pubDate>
      <guid>https://theithollow.com/2017/10/30/vmware-discovery/</guid>
      <description>&lt;p&gt;VMware has been busy over the last year trying to re-invent themselves with more focus on cloud. With that, they&amp;rsquo;ve added some new SaaS products that can be used to help manage your cloud environments and provide some additional governance to IT departments. Cloud makes things very simple to deploy and often eliminates the resource request phases that usually slow down provisioning. But once you start using the cloud, you can pretty quickly lose track of the resources that you&amp;rsquo;ve deployed, and are now paying for on a monthly basis, so it&amp;rsquo;s important to have good visibility and management of those resources.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Move an EC2 Instance to Another Region</title>
      <link>https://theithollow.com/2017/10/23/move-ec2-instance-another-region/</link>
      <pubDate>Mon, 23 Oct 2017 14:12:31 +0000</pubDate>
      <guid>https://theithollow.com/2017/10/23/move-ec2-instance-another-region/</guid>
      <description>&lt;p&gt;Sometimes, you just need to change the data center where you&amp;rsquo;re running your virtual machines. You could be doing this for disaster recovery reasons, network latency reasons, or just because you&amp;rsquo;re shutting down a region. In an on-prem environment, you might move workloads to a different data center by vMotion, VMware Site Recovery Manager, Zerto, Recoverpoint for VMs, Veeam, or one of the other great tools for a virtualized environment. But how about if that VM is running in an AWS region and you want to move it to another region?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Understanding AWS Tenancy</title>
      <link>https://theithollow.com/2017/10/16/understanding-aws-tenancy/</link>
      <pubDate>Mon, 16 Oct 2017 15:00:10 +0000</pubDate>
      <guid>https://theithollow.com/2017/10/16/understanding-aws-tenancy/</guid>
      <description>&lt;p&gt;When it comes to deploying EC2 instances within Amazon Web Services VPCs, you may find yourself confused when presented with those tenancy options. This post aims to describe the different options that you have with AWS tenancy and how they might be used.&lt;/p&gt;
&lt;p&gt;First and foremost, what do we mean by tenancy? Well, tenancy determines who is the owner of a resource. It might be easiest to think of tenancy in terms of housing. For instance, if you have a house, you could consider it dedicated tenancy, since only one family presumably lives there. However, if you have an apartment building, there is a good chance that several families have rooms in a single building, which would be more like a shared tenancy model.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS EC2 Simple Systems Manager Reference</title>
      <link>https://theithollow.com/2017/10/02/aws-ec2-simple-systems-manager-reference/</link>
      <pubDate>Mon, 02 Oct 2017 14:07:07 +0000</pubDate>
      <guid>https://theithollow.com/2017/10/02/aws-ec2-simple-systems-manager-reference/</guid>
      <description>&lt;p&gt;Please use this post as a landing page to get you started with using the EC2 Simple Systems Manager services from Amazon Web Services. Simple Systems Manager (SSM) is a set of services used to manage EC2 instances as well as on-premises machines (known as managed instances) with the SSM agent installed on them. You can use these services to maintain state, run ad-hoc commands, and configure patch compliance, among other things.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS EC2 Systems Manager - State Manager</title>
      <link>https://theithollow.com/2017/09/26/aws-ec2-systems-manager-state-manager/</link>
      <pubDate>Tue, 26 Sep 2017 14:06:57 +0000</pubDate>
      <guid>https://theithollow.com/2017/09/26/aws-ec2-systems-manager-state-manager/</guid>
      <description>&lt;p&gt;Sometimes you need to ensure that things are always a certain way when you deploy AWS EC2 instances. This could be things like making sure your servers are always joined to a domain when being deployed, or making sure you run an Ansible playbook every hour. The point of the AWS EC2 SSM State Manager service is to define a consistent state for your EC2 instances.&lt;/p&gt;
&lt;p&gt;This post will use a fictional use case where I have an EC2 instance (or instances) checking every thirty minutes to see if it should use a new image for its Apache website. The instance will check against the EC2 Simple Systems Manager Parameter Store, which we&amp;rsquo;ve discussed in a &lt;a href=&#34;https://theithollow.com/2017/09/11/ec2-systems-manager-parameter-store/&#34;&gt;previous post&lt;/a&gt;, and will download the image from the S3 location retrieved from that parameter.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS EC2 Simple Systems Manager Documents</title>
      <link>https://theithollow.com/2017/09/18/aws-ec2-simple-systems-manager-documents/</link>
      <pubDate>Mon, 18 Sep 2017 14:32:16 +0000</pubDate>
      <guid>https://theithollow.com/2017/09/18/aws-ec2-simple-systems-manager-documents/</guid>
      <description>&lt;p&gt;Amazon Web Services uses Systems Manager Documents to define actions that should be taken on your instances. This could be a wide variety of actions, including updating the operating system, copying files such as logs to another destination, or re-configuring your applications. These documents are written in JavaScript Object Notation (JSON) and are stored within AWS for use with the other Simple Systems Manager (SSM) services such as the Automation service or Run Command.&lt;/p&gt;</description>
    </item>
    <item>
      <title>EC2 Systems Manager Parameter Store</title>
      <link>https://theithollow.com/2017/09/11/ec2-systems-manager-parameter-store/</link>
      <pubDate>Mon, 11 Sep 2017 14:15:52 +0000</pubDate>
      <guid>https://theithollow.com/2017/09/11/ec2-systems-manager-parameter-store/</guid>
      <description>&lt;p&gt;Generally speaking, when you deploy infrastructure through code, or run deployment scripts you&amp;rsquo;ll need to have a certain amount of configuration data. Much of your code will have install routines but what about the configuration information that is specific to your environment? Things such as license keys, service accounts, passwords, or connection strings are commonly needed when connecting multiple services together. So how do you code that exactly? Do you pass the strings in at runtime as a parameter and then hope to remember those each time you execute code? Do you bake those strings into the code and then realize that you&amp;rsquo;ve got sensitive information stored in your deployment scripts?&lt;/p&gt;</description>
    </item>
    <item>
      <title>ServiceNow Streamlines Operations</title>
      <link>https://theithollow.com/2017/09/05/servicenow-streamlines-operations/</link>
      <pubDate>Tue, 05 Sep 2017 15:00:07 +0000</pubDate>
      <guid>https://theithollow.com/2017/09/05/servicenow-streamlines-operations/</guid>
      <description>&lt;p&gt;We focus a lot of time talking about public cloud and provisioning. Infrastructure as code has changed the way in which we can deploy our workloads and how our teams are structured. We&amp;rsquo;re even allowing other teams to deploy their own workloads through our cloud management portals. But some things haven&amp;rsquo;t changed all that much.&lt;/p&gt;
&lt;p&gt;When I mention ServiceNow, the first things that come to your mind are probably &amp;ldquo;Change Ticketing&amp;rdquo;, &amp;ldquo;CMDB&amp;rdquo;, or &amp;ldquo;Asset Management&amp;rdquo;. While ServiceNow certainly does all of those things, the real purpose of ServiceNow is to streamline operations. Many people who work in the enterprise probably think of ServiceNow as something that just gets in their way. No one wants to stop what they&amp;rsquo;re doing to enter a change ticket, wait for an approval, or update a configuration item after deploying new servers; it&amp;rsquo;s a pain. But ServiceNow really is meant to speed up the operations process.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Are We Really Concerned with Public Cloud Vendor Lock-in?</title>
      <link>https://theithollow.com/2017/08/22/really-concerned-public-cloud-vendor-lock/</link>
      <pubDate>Tue, 22 Aug 2017 14:05:19 +0000</pubDate>
      <guid>https://theithollow.com/2017/08/22/really-concerned-public-cloud-vendor-lock/</guid>
      <description>&lt;p&gt;Recently, I was fortunate enough to attend &lt;a href=&#34;http://techfieldday.com/event/cfd2/&#34;&gt;Cloud Field Day 2&lt;/a&gt; out in Silicon Valley. Cloud Field Day 2 brought a group of industry thought leaders together to speak with companies about their cloud products and stories. I was a little surprised to hear a recurring theme from some of the product vendors: customers are very worried about being trapped by a public cloud vendor.&lt;/p&gt;
&lt;h1 id=&#34;is-it-true&#34;&gt;Is It True?&lt;/h1&gt;
&lt;p&gt;Based on my cloud consulting work, I can say that yes, many times customers are a bit worried about being locked in by a public cloud vendor. But most times this isn&amp;rsquo;t a crippling fear of being locked in, just a concern that they&amp;rsquo;d like to mitigate if possible. It&amp;rsquo;s like most things in the industry: you pick a valued partner and move forward with a strategy that makes sense for the business, based on the information you know right now and a bet on the future. When virtualization was a new thing, I don&amp;rsquo;t recall that many conversations about making sure that both vSphere and Hyper-V were in use in the data center so that lock-in could be prevented. We picked the partner that we saw had the most promise, capabilities, and price, and built our solutions on top of those technologies. It&amp;rsquo;s still like that today, where you&amp;rsquo;ll pick a hardware vendor and attempt to avoid having multiple vendors, because multiple vendors increase the complexity of your services. You wouldn&amp;rsquo;t want to hire more people so that you can support two platforms; you&amp;rsquo;d want to hire the right employees to execute your corporate vision.&lt;/p&gt;</description>
    </item>
    <item>
      <title>NetApp at a Crossroads</title>
      <link>https://theithollow.com/2017/08/15/netapp-at-a-crossroads/</link>
      <pubDate>Tue, 15 Aug 2017 14:14:48 +0000</pubDate>
      <guid>https://theithollow.com/2017/08/15/netapp-at-a-crossroads/</guid>
      <description>&lt;p&gt;It is a pretty fair assumption that the NetApp you&amp;rsquo;re currently familiar with will be a much different company within the next five years. I say this because there isn&amp;rsquo;t much of a choice for anything else.&lt;/p&gt;
&lt;h1 id=&#34;where-is-netapp&#34;&gt;&lt;strong&gt;Where is Netapp?&lt;/strong&gt;&lt;/h1&gt;
&lt;p&gt;&lt;a href=&#34;https://assets.theithollow.com/wp-content/uploads/2017/07/IMG_2583.jpg&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2017/07/IMG_2583-300x300.jpg&#34;&gt;&lt;/a&gt; When I say &lt;a href=&#34;http://netapp.com&#34;&gt;NetApp&lt;/a&gt;, my guess is the first thing you think about is a good ole&amp;rsquo; storage array that’s been sitting in a data center. NetApp has been around for a pretty long time and pre-dates virtualization. The storage array has had a pretty good run in the data center and provides all the capabilities that enterprises have been looking for in a storage array. The Write Anywhere File Layout (WAFL), a very performant file system, and RAID-DP (Dual Parity) are part of the NetApp legacy. Unfortunately, that legacy has started to make them feel like a &amp;ldquo;legacy&amp;rdquo; company over the past few years.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Will Killing Net Neutrality End the Public Cloud?</title>
      <link>https://theithollow.com/2017/08/07/will-killing-net-neutrality-end-public-cloud/</link>
      <pubDate>Mon, 07 Aug 2017 14:05:27 +0000</pubDate>
      <guid>https://theithollow.com/2017/08/07/will-killing-net-neutrality-end-public-cloud/</guid>
      <description>&lt;p&gt;In today&amp;rsquo;s world, if you can get an Internet connection, you can go anywhere and connect to any service that is publicly available. No restrictions are imposed, and you can use the entire amount of bandwidth you purchased from your Internet service provider. This is the world under Net Neutrality. To illustrate this point further, take the following example.&lt;/p&gt;
&lt;p&gt;If you purchase a 25Mbps circuit from Comcast or AT&amp;amp;T, you can use all of that bandwidth, assuming the service on the other end is also providing 25Mbps or better.&lt;/p&gt;</description>
    </item>
    <item>
      <title>HPE Built Another Cloud - Storage This Time</title>
      <link>https://theithollow.com/2017/08/01/hpe-built-another-cloud-storage-time/</link>
      <pubDate>Tue, 01 Aug 2017 14:05:24 +0000</pubDate>
      <guid>https://theithollow.com/2017/08/01/hpe-built-another-cloud-storage-time/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://assets.theithollow.com/wp-content/uploads/2017/07/CloudVolumes1.png&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2017/07/CloudVolumes1.png&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;HPE &lt;a href=&#34;https://www.nimblestorage.com/blog/nimble-cloud-volumes-an-industry-first/&#34;&gt;recently announced&lt;/a&gt; that they were getting deeper into the cloud game by introducing their Nimble Cloud Volumes (NCV) solution. Now while this sounds a lot like a storage array function, it&amp;rsquo;s really its own separate cloud that is focused only on storage. The idea behind it is that storage in both AWS and Azure isn&amp;rsquo;t great for enterprises, and they want a better option to connect to their EC2 instances or Azure VMs.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Orchestrating Containers with Nirmata</title>
      <link>https://theithollow.com/2017/07/27/orchestrating-containers-nirmata/</link>
      <pubDate>Thu, 27 Jul 2017 15:06:19 +0000</pubDate>
      <guid>https://theithollow.com/2017/07/27/orchestrating-containers-nirmata/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://assets.theithollow.com/wp-content/uploads/2017/07/logo-white-200x43.png&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2017/07/logo-white-200x43-150x32.png&#34;&gt;&lt;/a&gt; I had high expectations for the sessions being presented during &lt;a href=&#34;http://techfieldday.com/event/cfd2/&#34;&gt;Cloud Field Day 2&lt;/a&gt;, hosted by GestaltIT in Silicon Valley during the week of June 26th-28th. The first of the sessions presented was from a company that I hadn&amp;rsquo;t heard of before called &lt;a href=&#34;http://nirmata.io&#34;&gt;Nirmata&lt;/a&gt;. I had no idea what the company did, but after the session I found out that the name is an Indo-Aryan word meaning Architect or Director, which makes a lot of sense considering what they do.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Welcome to Cloud Field Day 2</title>
      <link>https://theithollow.com/2017/07/25/welcome-cloud-field-day-2/</link>
      <pubDate>Wed, 26 Jul 2017 04:53:25 +0000</pubDate>
      <guid>https://theithollow.com/2017/07/25/welcome-cloud-field-day-2/</guid>
      <description>&lt;p&gt;Tech Field Day will be presenting Cloud Field Day 2 on July 26th through the 28th in Silicon Valley. If you have the time, please join in on the fun and watch the live stream right here.&lt;/p&gt;
&lt;p&gt;The schedule will consist of nine great companies all explaining the ins and outs of their solutions, and it&amp;rsquo;ll get real geeky. The schedule is found below, and all times are US Pacific, so be sure to do the conversions.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Patch Compliance with EC2 Systems Manager</title>
      <link>https://theithollow.com/2017/07/24/patch-compliance-ec2-systems-manager/</link>
      <pubDate>Mon, 24 Jul 2017 14:05:31 +0000</pubDate>
      <guid>https://theithollow.com/2017/07/24/patch-compliance-ec2-systems-manager/</guid>
      <description>&lt;p&gt;Deploying security patches to servers is almost as much fun as managing backup jobs. But everyone has to do it, including companies that have moved their infrastructure to AWS. As we&amp;rsquo;ve learned with previous posts, Amazon EC2 Systems Manager allows us to use some native AWS tools for management of our EC2 instances, and patch management is no exception.&lt;/p&gt;
&lt;p&gt;EC2 Systems Manager provides patch compliance: you set a baseline, and then, based on a defined maintenance window, a scheduled scan and deployment can be initiated on those EC2 instances. This assumes that you&amp;rsquo;ve already installed the SSM Agent and set up the basic IAM permissions for the instances to communicate with the Systems Manager service. The details can be found in the previous post.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Run Commands through EC2 Systems Manager</title>
      <link>https://theithollow.com/2017/07/17/run-commands-ec2-systems-manager/</link>
      <pubDate>Mon, 17 Jul 2017 14:05:12 +0000</pubDate>
      <guid>https://theithollow.com/2017/07/17/run-commands-ec2-systems-manager/</guid>
      <description>&lt;p&gt;In a previous post we covered the different capabilities and basic setup of EC2 Systems Manager, including the IAM roles that needed to be created and the installation of the SSM Agent. In this post we&amp;rsquo;ll focus on running some commands through the EC2 Systems Manager Console.&lt;/p&gt;
&lt;p&gt;We&amp;rsquo;ve already got an Amazon Linux instance deployed within our VPC. I&amp;rsquo;ve placed this instance in a public-facing subnet, and it is a member of a security group that allows HTTP traffic over port 80.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Amazon EC2 Systems Manager Services</title>
      <link>https://theithollow.com/2017/07/10/amazon-ec2-systems-manager-services/</link>
      <pubDate>Mon, 10 Jul 2017 14:05:29 +0000</pubDate>
      <guid>https://theithollow.com/2017/07/10/amazon-ec2-systems-manager-services/</guid>
      <description>&lt;p&gt;We love Amazon EC2 instances because of how easy they are to deploy, and we have a huge catalog of templates (AMIs) to choose from, which really speeds up our provisioning. But once those instances are up and running, it would be really nice to have some methods of managing them. Luckily, Amazon has developed several capabilities to help manage Amazon EC2 instances after they&amp;rsquo;ve been deployed. These capabilities are used to execute scripts, manage patches, and kick off automation routines within an EC2 instance, directly from the AWS console.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Migrate vSphere VMs to Amazon with AWS Server Migration Service</title>
      <link>https://theithollow.com/2017/06/26/migrate-vsphere-vms-amazon-aws-server-migration-service/</link>
      <pubDate>Mon, 26 Jun 2017 14:05:01 +0000</pubDate>
      <guid>https://theithollow.com/2017/06/26/migrate-vsphere-vms-amazon-aws-server-migration-service/</guid>
      <description>&lt;p&gt;AWS is taking the virtualization world by storm. Workloads that used to get spun up on vSphere are now being deployed in AWS in many cases. But what if you&amp;rsquo;ve got workloads in vSphere that need to be moved? Sure, it probably makes sense to build new servers in AWS and decommission the old ones but sometimes it&amp;rsquo;s OK to lift and shift. Amazon has a service that can help with this process called the AWS Server Migration Service.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Setup Amazon Storage Gateway</title>
      <link>https://theithollow.com/2017/06/13/setup-amazon-storage-gateway/</link>
      <pubDate>Tue, 13 Jun 2017 14:10:17 +0000</pubDate>
      <guid>https://theithollow.com/2017/06/13/setup-amazon-storage-gateway/</guid>
      <description>&lt;p&gt;Amazon&amp;rsquo;s S3 is a cost-effective way to store files, but many organizations are used to mapping NFS shares to machines for file storage purposes. Amazon Storage Gateways are a good way to cache or store files on an NFS mount and then back them up to an S3 bucket. This post goes through the setup of an AWS Storage Gateway in an EC2 instance for caching files and storing them in an S3 bucket. This same solution (with a similar but different process) can be used to mount block devices through iSCSI or to set up a Tape Gateway for backup products.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRA 7.3 Component Profiles</title>
      <link>https://theithollow.com/2017/06/06/vra-7-3-component-profiles/</link>
      <pubDate>Tue, 06 Jun 2017 14:05:22 +0000</pubDate>
      <guid>https://theithollow.com/2017/06/06/vra-7-3-component-profiles/</guid>
      <description>&lt;p&gt;Preventing blueprint sprawl should be a consideration if you&amp;rsquo;re building out a new cloud through vRealize Automation. Too many blueprints and your users will be confused by the offerings, and the more blueprints there are, the more maintenance is needed to manage them. We&amp;rsquo;ve had custom methods for managing sprawl up until vRA 7.3 was released. Now we have some slick new methods right out of the box to cut down on the number of blueprints in use. These new out-of-the-box configurations are called Component Profiles.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRA 7.3 Endpoints Missing</title>
      <link>https://theithollow.com/2017/05/30/vra-7-3-endpoints-missing/</link>
      <pubDate>Tue, 30 May 2017 14:08:58 +0000</pubDate>
      <guid>https://theithollow.com/2017/05/30/vra-7-3-endpoints-missing/</guid>
      <description>&lt;p&gt;vRealize Automation version 7.3 dropped a few weeks ago, and you&amp;rsquo;re really excited about the new improvements that have been made to the platform (&lt;a href=&#34;http://pubs.vmware.com/Release_Notes/en/vra/73/vrealize-automation-73-release-notes.html&#34;&gt;Release Notes for version 7.3&lt;/a&gt;). You&amp;rsquo;ve gone through the upgrade process, which is constantly improving I might add, but once you log in you find that the endpoints you spent so much time building are now missing. Kind of like the ones in my screenshot below.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRA Placement Decisions with a Dynamic Form</title>
      <link>https://theithollow.com/2017/05/22/vra-placement-decisions-dynamic-form/</link>
      <pubDate>Mon, 22 May 2017 14:07:31 +0000</pubDate>
      <guid>https://theithollow.com/2017/05/22/vra-placement-decisions-dynamic-form/</guid>
      <description>&lt;p&gt;vRA is great at deploying servers in an automated fashion, but to really use the built-in functionality, an organization should request some additional information to properly place workloads in the environment. This post covers how to ask users for the correct information to properly determine the placement location of new server workloads.&lt;/p&gt;
&lt;h1 id=&#34;cluster-placement&#34;&gt;Cluster Placement&lt;/h1&gt;
&lt;p&gt;The first placement decision that needs to be made is which cluster the workload should be placed on. This can be done with reservations and reservation policies, but that often comes with some blueprint sprawl. We&amp;rsquo;d like to be able to ask the requester which environment the workload should be placed in. To specify a cluster (which could include a cluster in a different vCenter or datacenter), we&amp;rsquo;ll modify an XML document stored on the IaaS server(s) that describes our datacenters. In my example I&amp;rsquo;ve got two clusters in a single vCenter, named &amp;ldquo;Management&amp;rdquo; and &amp;ldquo;Workload&amp;rdquo;. My clusters are shown below.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Setup ADFS for Amazon Web Services SAML Authentication</title>
      <link>https://theithollow.com/2017/05/15/setup-adfs-amazon-web-services-saml-authentication/</link>
      <pubDate>Mon, 15 May 2017 14:10:59 +0000</pubDate>
      <guid>https://theithollow.com/2017/05/15/setup-adfs-amazon-web-services-saml-authentication/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s a pretty common design request these days to have a single authentication source. I mean, do you really want to manage a bunch of different logins instead of having to remember just one? Also, five different accounts give attackers five different avenues to try to exploit. So many times we use our existing Active Directory infrastructure as our single source of authentication. Amazon Web Services (AWS) needs a way for people to log in and will allow you to use your own Active Directory credentials through Security Assertion Markup Language (SAML). This post will walk you through the setup of Active Directory Federation Services (ADFS) on Windows Server 2016 and configuring it to provide your credentials for AWS.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Code Stream Management Pack for IT DevOps Unit Testing</title>
      <link>https://theithollow.com/2017/04/18/vrealize-code-stream-management-pack-devops-unit-testing/</link>
      <pubDate>Tue, 18 Apr 2017 14:02:02 +0000</pubDate>
      <guid>https://theithollow.com/2017/04/18/vrealize-code-stream-management-pack-devops-unit-testing/</guid>
      <description>&lt;p&gt;vRealize Code Stream Management Pack for IT DevOps (code named Houdini by VMware) allows us to treat our vRealize Automation Blueprints, or other objects, as pieces of code that can be promoted between environments. In &lt;a href=&#34;https://theithollow.com/2017/04/10/using-vrealize-code-stream-management-pack-devops/&#34;&gt;previous posts&lt;/a&gt; we&amp;rsquo;ve done just this, but a glaring piece was missing from those articles. Promoting code between environments is great, but we&amp;rsquo;ve got to test it first, or this process is only good for moving code around. A full release pipeline including unit tests can make your environment much more useful for organizations trying to ensure consistency.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Using vRealize Code Stream Management Pack for IT DevOps</title>
      <link>https://theithollow.com/2017/04/10/using-vrealize-code-stream-management-pack-devops/</link>
      <pubDate>Mon, 10 Apr 2017 14:05:30 +0000</pubDate>
      <guid>https://theithollow.com/2017/04/10/using-vrealize-code-stream-management-pack-devops/</guid>
      <description>&lt;p&gt;In previous posts we covered how to &lt;a href=&#34;https://theithollow.com/2017/03/27/installing-code-stream-management-pack-devops/&#34;&gt;install&lt;/a&gt;, &lt;a href=&#34;https://theithollow.com/2017/04/04/configuring-vrealize-code-stream-management-pack-devops-endpoints/&#34;&gt;configure and setup&lt;/a&gt; vRealize Code Stream Management Pack for IT DevOps (code named Houdini) so that we could get to this point. During this post we&amp;rsquo;ll take one of our vRA blueprints in the development instance and move it to the production instance. Let&amp;rsquo;s get started.&lt;/p&gt;
&lt;p&gt;To set the stage, here is my development instance where I have several blueprints at my disposal. Some of them even work! (That was a joke) For this exercise, I want to move the &amp;ldquo;Server2016&amp;rdquo; catalog from my development instance to my production instance because I have it working perfectly with my vSphere environment.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Configuring vRealize Code Stream Management Pack for IT DevOps Endpoints</title>
      <link>https://theithollow.com/2017/04/04/configuring-vrealize-code-stream-management-pack-devops-endpoints/</link>
      <pubDate>Tue, 04 Apr 2017 14:03:50 +0000</pubDate>
      <guid>https://theithollow.com/2017/04/04/configuring-vrealize-code-stream-management-pack-devops-endpoints/</guid>
      <description>&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/2017/03/27/installing-code-stream-management-pack-devops/&#34;&gt;previous post&lt;/a&gt; we covered the architecture and setup of the vRealize Code Stream Management Pack for IT DevOps (also known as Houdini). In this post we&amp;rsquo;ll cover how we need to setup Houdini&amp;rsquo;s endpoints so that we can use them to release our blueprints or workflows to other instances.&lt;/p&gt;
&lt;h1 id=&#34;remote-content-server-endpoint-setup&#34;&gt;Remote Content Server Endpoint Setup&lt;/h1&gt;
&lt;p&gt;To setup our endpoints we can use nicely packaged blueprints right in vRA. It&amp;rsquo;s pretty nice that our setup deployed some blueprints for us to use, right in the default tenant of our vRA server. Login to the vRA default tenant with your Houdini Administrator that you setup in &lt;a href=&#34;https://theithollow.com/2017/03/27/installing-code-stream-management-pack-devops/&#34;&gt;part 1&lt;/a&gt;. Then go to the catalog and request the &amp;ldquo;Add Remote Content Endpoint&amp;rdquo;  under the &amp;ldquo;Administration&amp;rdquo; service. A remote content server (RCS) is a vRA appliance that will cache your packages. It&amp;rsquo;s a pretty useful thing to have if you&amp;rsquo;ve got vRA appliances in different sites and you need to move vSphere VMs or other large objects over a WAN. Future releases can be copied from the remote content server instead of always copying from the source.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Installing Code Stream Management Pack for IT DevOps</title>
      <link>https://theithollow.com/2017/03/27/installing-code-stream-management-pack-devops/</link>
      <pubDate>Mon, 27 Mar 2017 14:02:58 +0000</pubDate>
      <guid>https://theithollow.com/2017/03/27/installing-code-stream-management-pack-devops/</guid>
      <description>&lt;p&gt;Deploying blueprints in vRealize Automation is one thing, but with all things as code, we need to be able to move this work from our test instances to development and production instances. It&amp;rsquo;s pretty important to be sure that the code being moved to a new instance is identical. We don&amp;rsquo;t want to have a user re-create the blueprints or workflows because it&amp;rsquo;s prone to user error. Luckily for us, we have a solution. VMware has the vRealize Code Stream Management Pack for IT DevOps, which I thought about nicknaming vRCSMPITDO, but that didn&amp;rsquo;t really roll off the tongue. VMware previously nicknamed this product &amp;ldquo;Houdini&amp;rdquo; so for the purposes of this post, we&amp;rsquo;ll use that too! This article will kick off a few more posts on using the product, but for now we&amp;rsquo;ll focus on installing it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Adding an Azure Endpoint to vRealize Automation 7</title>
      <link>https://theithollow.com/2017/03/20/adding-azure-endpoint-vrealize-automation-7/</link>
      <pubDate>Mon, 20 Mar 2017 14:03:55 +0000</pubDate>
      <guid>https://theithollow.com/2017/03/20/adding-azure-endpoint-vrealize-automation-7/</guid>
      <description>&lt;p&gt;As of vRealize Automation 7.2, you can now deploy workloads to Microsoft Azure through vRA&amp;rsquo;s native capabilities. Don&amp;rsquo;t get too excited here though, since the process for adding an endpoint is much different than it is for other endpoints such as vSphere or AWS. The process for Azure in vRA 7 leverages objects in vRealize Orchestrator to do the heavy lifting. If you know things like resource mappings and vRO objects, you can do very similar tasks in the tool.&lt;/p&gt;</description>
    </item>
    <item>
      <title>NSX Issues After Replacing VMware Self-Signed Certs</title>
      <link>https://theithollow.com/2017/03/13/nsx-issues-replacing-vmware-self-signed-certs/</link>
      <pubDate>Mon, 13 Mar 2017 14:05:58 +0000</pubDate>
      <guid>https://theithollow.com/2017/03/13/nsx-issues-replacing-vmware-self-signed-certs/</guid>
      <description>&lt;p&gt;Recently, I&amp;rsquo;ve been going through and updating my lab so that I&amp;rsquo;m all up to date with the latest technology. As part of this process, I&amp;rsquo;ve updated my certificates so that all of my URLs have the nice trusted green logo on them. Oh yeah, and because it&amp;rsquo;s more secure.&lt;/p&gt;
&lt;p&gt;I updated my vSphere lab to version 6.5 and moved to the vCenter Server Appliance (VCSA) as part of my updates. However, after I replaced the default self-signed certificates I had a few new problems. Specifically, after the update, NSX wouldn&amp;rsquo;t connect to the lookup service. This is particularly annoying because, as I found out later, if I&amp;rsquo;d just left my self-signed certificates intact, I would never have had to deal with this. I thought that I was doing the right thing for security, but VMware made it more painful for me to do the right thing. I&amp;rsquo;m hoping this gets more focus soon from VMware.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Cisco UCS Director Catalog Request</title>
      <link>https://theithollow.com/2017/01/23/cisco-ucs-director-catalog-request/</link>
      <pubDate>Mon, 23 Jan 2017 15:00:24 +0000</pubDate>
      <guid>https://theithollow.com/2017/01/23/cisco-ucs-director-catalog-request/</guid>
      <description>&lt;p&gt;Cisco UCS Director Catalog Requests are the entire reason for having a cloud management platform in the first place. The catalog is the end user&amp;rsquo;s store, where they can request machines and services. To request a service, log in to the UCS Director Portal with an account that has the &amp;ldquo;Service End-User&amp;rdquo; role. This role provides a different portal at login that only shows the user&amp;rsquo;s orders and catalogs and removes all of the administration options.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS Step Functions</title>
      <link>https://theithollow.com/2017/01/17/aws-step-functions/</link>
      <pubDate>Tue, 17 Jan 2017 15:01:40 +0000</pubDate>
      <guid>https://theithollow.com/2017/01/17/aws-step-functions/</guid>
      <description>&lt;p&gt;This year at AWS re:Invent Amazon announced a new service called &lt;a href=&#34;https://aws.amazon.com/step-functions/&#34;&gt;Step Functions&lt;/a&gt;. According to AWS, Step Functions is an easy way to coordinate the components of distributed applications and microservices using visual workflows. That pretty much sums it up! When you&amp;rsquo;ve got a series of small microservices that need to be coordinated, it can be tricky to write this code into each lambda function to call the next function. Step Functions gives you a visual editor to manage the calls to multiple Lambda functions to make your life easier. I&amp;rsquo;ve written about this before on the &lt;a href=&#34;https://www.thinkahead.com/blog/visual-orchestration-aws/&#34;&gt;AHEAD blog&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Upgrade vRA from 7.1 to 7.2</title>
      <link>https://theithollow.com/2016/11/24/upgrade-vra-7-1-7-2/</link>
      <pubDate>Thu, 24 Nov 2016 15:00:25 +0000</pubDate>
      <guid>https://theithollow.com/2016/11/24/upgrade-vra-7-1-7-2/</guid>
      <description>&lt;p&gt;vRealize Automation has had a different upgrade process for about every version that I can think of. The upgrade from vRA 7.1 to 7.2 is no exception, but this time you can see that some good things are happening to this process. There are fewer manual steps needed to make sure the upgrade goes smoothly, and a script is now used to upgrade the IaaS components, which is a nice change from the older methods. As with any upgrade, you should read all of the instructions in the &lt;a href=&#34;http://pubs.vmware.com/vrealize-automation-72/topic/com.vmware.ICbase/PDF/vrealize-automation-71to72-upgrading.pdf&#34;&gt;official documentation&lt;/a&gt; before proceeding.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Creating a Cisco UCS Director Catalog</title>
      <link>https://theithollow.com/2016/11/14/creating-cisco-ucs-director-catalog/</link>
      <pubDate>Mon, 14 Nov 2016 15:05:26 +0000</pubDate>
      <guid>https://theithollow.com/2016/11/14/creating-cisco-ucs-director-catalog/</guid>
      <description>&lt;p&gt;Creating a Cisco UCS Director Catalog is a critical step because it&amp;rsquo;s what your end users will request new virtual machines and services from. There are a couple of types of catalogs that will deploy virtual machines: standard and advanced. A standard catalog selects a virtual machine template from vSphere; an advanced catalog selects a pre-defined workflow that has been built in UCSD and then published to the catalog.&lt;/p&gt;
&lt;h1 id=&#34;create-a-standard-catalog&#34;&gt;Create a Standard Catalog&lt;/h1&gt;
&lt;p&gt;To create a “standard” catalog, go to the Policies drop down and select Catalogs. From there click &amp;ldquo;Add&amp;rdquo;. Select a catalog type and then click &amp;ldquo;Submit&amp;rdquo;. In this example, I&amp;rsquo;ve chosen the &amp;ldquo;Standard&amp;rdquo; catalog type.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Terraform with Cisco UCS Director</title>
      <link>https://theithollow.com/2016/11/07/terraform-cisco-ucs-director/</link>
      <pubDate>Mon, 07 Nov 2016 15:15:58 +0000</pubDate>
      <guid>https://theithollow.com/2016/11/07/terraform-cisco-ucs-director/</guid>
      <description>&lt;p&gt;I&amp;rsquo;m a big fan of Terraform from Hashicorp but many organizations are using cloud management platforms like Cisco UCS Director or vRealize Automation in order to deploy infrastructure. If you read my blog often, you&amp;rsquo;ll know that I&amp;rsquo;ve got some experience with both of these products and if you&amp;rsquo;re looking to get up to speed on either of them, check out one of these links: &lt;a href=&#34;https://theithollow.com/2016/10/13/cisco-ucs-director-6-guide/&#34;&gt;UCS Director 6 Guide&lt;/a&gt; or &lt;a href=&#34;https://theithollow.com/2016/01/11/vrealize-automation-7-guide/&#34;&gt;vRealize Automation 7 Guide&lt;/a&gt;. But why not use Terraform with Cisco UCS Director and have the best of both worlds?&lt;/p&gt;</description>
    </item>
    <item>
      <title>Assigning Permissions to UCS Director Catalogs</title>
      <link>https://theithollow.com/2016/11/02/assigning-permissions-ucs-director-catalogs/</link>
      <pubDate>Wed, 02 Nov 2016 14:06:17 +0000</pubDate>
      <guid>https://theithollow.com/2016/11/02/assigning-permissions-ucs-director-catalogs/</guid>
      <description>&lt;p&gt;Creating a Cisco UCS Director Catalog is the first step to publishing services to your end users. The second step is to assign permissions. This post will show you how to assign permissions to UCS Director Catalogs.&lt;/p&gt;
&lt;p&gt;To allow users to access a catalog item they must be granted permissions. To do this, go to the Administration drop down &amp;ndash;&amp;gt; Users and Groups. From there click on the &amp;ldquo;User Groups&amp;rdquo; tab and find the group which should be entitled.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Cisco UCS Director End User Self-Service Policy</title>
      <link>https://theithollow.com/2016/10/31/cisco-ucs-director-end-user-self-service-policy/</link>
      <pubDate>Mon, 31 Oct 2016 14:02:18 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/31/cisco-ucs-director-end-user-self-service-policy/</guid>
      <description>&lt;p&gt;The Cisco UCS Director end user self-service policy determines which out-of-the-box day 2 operations are available on catalogs in a VDC. By &amp;ldquo;day 2&amp;rdquo; I mean the types of operations that can be performed on a virtual machine after it&amp;rsquo;s been deployed, such as reboot, power on, snapshot, etc.&lt;/p&gt;
&lt;p&gt;To configure these, go to the Policies drop down and select Virtual/Hypervisor Policies &amp;ndash;&amp;gt; Service Delivery. Then select the “End User Self-Service Policy” and click the Add button.&lt;/p&gt;</description>
    </item>
    <item>
      <title>UCS Director VMware Management Policy</title>
      <link>https://theithollow.com/2016/10/26/ucs-director-vmware-management-policy/</link>
      <pubDate>Wed, 26 Oct 2016 15:24:20 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/26/ucs-director-vmware-management-policy/</guid>
      <description>&lt;p&gt;The Cisco UCS Director VMware Management Policy is used to determine how virtual machines will behave and, more specifically, how they will be cleaned up. In the cloud world, the removal of inactive and unnecessary virtual machines may be more important than the deployment of them. The VM Management Policy is used to configure leases, notifications about when leases expire, and the criteria for determining when a VM is inactive. This policy is very useful for keeping your cloud clean, removing unneeded virtual machines when they&amp;rsquo;re past their usefulness.&lt;/p&gt;</description>
    </item>
    <item>
      <title>UCS Director Cost Model</title>
      <link>https://theithollow.com/2016/10/24/ucs-director-cost-model/</link>
      <pubDate>Mon, 24 Oct 2016 14:08:29 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/24/ucs-director-cost-model/</guid>
      <description>&lt;p&gt;Chargeback, or at least showback, is an important thing for any cloud environment. Cisco UCS Director can provide cost information back to managers, but you need to create a UCS Director cost model. This cost model defines how all the costs are calculated.&lt;/p&gt;
&lt;h1 id=&#34;add-a-cost-model&#34;&gt;Add a Cost Model&lt;/h1&gt;
&lt;p&gt;To create a cost model, go to the Policies drop down and select Virtual/Hypervisor Policies &amp;ndash;&amp;gt; Service Delivery. Then select the Cost Model tab.&lt;/p&gt;</description>
    </item>
    <item>
      <title>UCS Director System Policies</title>
      <link>https://theithollow.com/2016/10/19/ucs-director-system-policies/</link>
      <pubDate>Wed, 19 Oct 2016 14:15:23 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/19/ucs-director-system-policies/</guid>
      <description>&lt;p&gt;UCS Director System Policies are kind of a catch-all for any settings that need to be defined prior to a virtual machine being deployed and that don&amp;rsquo;t fit into a neat little category like Network, Storage, or Compute. This post reviews two types of system policies: VMware and AWS.&lt;/p&gt;
&lt;h1 id=&#34;vmware-system-policy&#34;&gt;VMware System Policy&lt;/h1&gt;
&lt;p&gt;This policy is used to configure things like the Time Zones, DNS Settings, virtual machine naming conventions and guest licensing information. The policy can be found under the Policies drop down &amp;ndash;&amp;gt; Virtual/Hypervisor Policies &amp;ndash;&amp;gt; Service Delivery screen and from there you&amp;rsquo;ll be looking for the VMware System Policy tab.&lt;/p&gt;</description>
    </item>
    <item>
      <title>UCS Director Network Policies</title>
      <link>https://theithollow.com/2016/10/17/ucs-director-network-policies/</link>
      <pubDate>Mon, 17 Oct 2016 14:30:55 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/17/ucs-director-network-policies/</guid>
      <description>&lt;p&gt;The UCS Director Virtual Data Center construct requires several underlying policies in order to become an item that virtual machines can be deployed on. One of these is the networking policy, which includes IP Pools, VLANs, vNIC rules and port group selection.&lt;/p&gt;
&lt;h1 id=&#34;ip-pool-policy&#34;&gt;IP Pool Policy&lt;/h1&gt;
&lt;p&gt;Before creating any Network Policies it may be necessary to create an IP Pool Policy. The IP Pool is used to distribute IP addresses from UCS Director instead of from an IPAM solution or DHCP. If either of those methods is to be used, this section can be skipped.&lt;/p&gt;</description>
    </item>
    <item>
      <title>UCS Director VMware Storage Policy</title>
      <link>https://theithollow.com/2016/10/17/ucs-director-vmware-storage-policy/</link>
      <pubDate>Mon, 17 Oct 2016 14:08:10 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/17/ucs-director-vmware-storage-policy/</guid>
      <description>&lt;p&gt;The storage policy defines how virtual disks will be deployed on vSphere datastores. This policy will be added to the Cisco UCS Director Virtual Data Center construct to provide a comprehensive policy on how to deploy new virtual machines on VMware vSphere.&lt;/p&gt;
&lt;h1 id=&#34;vmware-storage-policies&#34;&gt;VMware Storage Policies&lt;/h1&gt;
&lt;p&gt;To configure a VMware Storage Policy, go to the Policies drop down and select “Virtual/Hypervisor Policies” &amp;ndash;&amp;gt; Storage. Then click the “VMware Storage Policy” tab.&lt;/p&gt;
&lt;p&gt;You’ll notice that there may be some default storage policies listed here; VMware storage policies are created by default when you add the cloud. These can be deleted, and you can create your own policies from scratch. Click &amp;ldquo;Add&amp;rdquo;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Cisco UCS Director 6 Guide</title>
      <link>https://theithollow.com/2016/10/13/cisco-ucs-director-6-guide/</link>
      <pubDate>Thu, 13 Oct 2016 14:00:44 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/13/cisco-ucs-director-6-guide/</guid>
      <description>&lt;p&gt;Cisco UCS Director 6 is a cloud management platform that can deploy virtual machines and services across vSphere, KVM, Hyper-V and AWS endpoints. UCS Director will manage the orchestration, lifecycle and governance of virtual machines deployed through it and can also help in the automatic provisioning of hardware resources. Cisco has plenty of documentation on how to click the buttons to create constructs used for deployment, but I was not able to find any great resources on what order they should be performed in or why certain choices are made in the GUI. If you follow this guide in the order of the posts listed, it should help you get a Cisco UCS Director 6 environment set up and be able to use it to deploy virtual resources. This guide does not cover many of the additional benefits that UCSD can provide when dealing with a physical environment. I hope that this guide can give you a good starting point on how the solution works and what you can do with it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>UCS Director Computing Policy</title>
      <link>https://theithollow.com/2016/10/13/ucs-director-computing-policy/</link>
      <pubDate>Thu, 13 Oct 2016 13:55:53 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/13/ucs-director-computing-policy/</guid>
      <description>&lt;p&gt;The Computing Policies determine how vCPUs and vMEM will be assigned to a virtual machine deployed through UCS Director, as well as which clusters and hosts can have virtual machines placed on them.&lt;/p&gt;
&lt;h1 id=&#34;add-a-vmware-computing-policy&#34;&gt;Add a VMware Computing Policy&lt;/h1&gt;
&lt;p&gt;To add a computing policy, go to the Policies drop down and select “Virtual/Hypervisor Policies” &amp;ndash;&amp;gt; Computing. Then select the VMware Computing Policy tab.&lt;/p&gt;
&lt;p&gt;You’ll notice that there may be some default VMware computing policies listed here; they are created by default when you add the cloud. These can be deleted, and you can create your own policies from scratch.&lt;/p&gt;</description>
    </item>
    <item>
      <title>UCS Director Infrastructure Setup</title>
      <link>https://theithollow.com/2016/10/12/ucs-director-infrastructure-setup/</link>
      <pubDate>Wed, 12 Oct 2016 14:00:05 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/12/ucs-director-infrastructure-setup/</guid>
      <description>&lt;p&gt;UCS Director is a cloud management platform and thus requires some infrastructure on which to deploy the orchestrated workloads. In many cases UCS Director can also orchestrate the configuration and deployment of bare metal hardware, such as configuring new VLANs on switches, deploying operating systems on blades and setting hardware profiles. This post focuses on getting those devices to show up in UCS Director so that additional automation can be performed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>UCS Director Basic Setup Configurations</title>
      <link>https://theithollow.com/2016/10/11/ucs-director-basic-setup-configurations/</link>
      <pubDate>Tue, 11 Oct 2016 14:05:15 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/11/ucs-director-basic-setup-configurations/</guid>
      <description>&lt;p&gt;The basic deployment of UCS Director consists of deploying an OVF file that is available from the Cisco downloads site. This post won&amp;rsquo;t go through the deployment of the OVF, but it should be a pretty simple setup. The deployment will ask for IP addressing information and some passwords. Complete the deployment of the OVF in your virtual environment and then continue with this post.&lt;/p&gt;
&lt;p&gt;Once the OVF has been deployed, open a web browser and place the IP Address of the appliance in the address bar.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Cisco UCS Director VDCs</title>
      <link>https://theithollow.com/2016/10/10/cisco-ucs-director-vdcs/</link>
      <pubDate>Mon, 10 Oct 2016 14:16:44 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/10/cisco-ucs-director-vdcs/</guid>
      <description>&lt;p&gt;Cisco UCS Director utilizes the idea of a Virtual Data Center (VDC) to determine how and where virtual machines should be placed. This includes which clusters to deploy to, networks to use, datastores to live on, as well as the guest customization and cost models that will be used for those virtual machines. According to the UCS Director Administration Guide, a Virtual Data Center is &amp;ldquo;a logical grouping that combines virtual resources, operational details, rules, and policies to manage specific group requirements&amp;rdquo;. Cisco UCS Director VDCs are the focal point of a virtual machine deployment.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Scaling in vRealize Automation</title>
      <link>https://theithollow.com/2016/10/06/scaling-vrealize-automation/</link>
      <pubDate>Thu, 06 Oct 2016 14:18:06 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/06/scaling-vrealize-automation/</guid>
      <description>&lt;p&gt;One of the new features of vRealize Automation in version 7.1 is the ability to scale out or scale in your servers. This is a horizontal scaling of the number of servers. For instance, if you had deployed a single web server, you can scale out to two, three, or more. When you scale in, you can go from four servers down to three, and so on.&lt;/p&gt;
&lt;h1 id=&#34;use-cases&#34;&gt;Use Cases&lt;/h1&gt;
&lt;p&gt;The use cases here could vary widely. The easiest to get started with would be some sort of a web / database deployment where the web servers have some static front end web pages and can be deployed over and over again with the same configurations. If we were to place the web servers behind a load balancer (yep, think NSX here for you vSphere junkies) then your web applications can be scaled horizontally whenever you start to run out of resources.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Azure Scale Sets</title>
      <link>https://theithollow.com/2016/10/03/azure-scale-sets/</link>
      <pubDate>Mon, 03 Oct 2016 14:10:32 +0000</pubDate>
      <guid>https://theithollow.com/2016/10/03/azure-scale-sets/</guid>
      <description>&lt;p&gt;Azure scale sets are a way to horizontally increase or decrease resources for your applications. Wouldn&amp;rsquo;t it be nice to provision a pair of web servers behind a load balancer, and then add a third or fourth web server once the load hit 75% of capacity? Even better, when the load on those web servers settles down, they could be removed to save you money? This is what an Azure scale set does. Think of the great uses for this; seasonal demand for a shopping site, event promotions that cause a short spike in traffic, or even end of the month data processing tasks could automatically scale out to meet the demand and then scale in to save money when not needed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Get Started with Azure Automation</title>
      <link>https://theithollow.com/2016/09/19/get-started-azure-automation/</link>
      <pubDate>Mon, 19 Sep 2016 14:15:50 +0000</pubDate>
      <guid>https://theithollow.com/2016/09/19/get-started-azure-automation/</guid>
      <description>&lt;p&gt;Microsoft Azure has a neat way to store and run code right from within Azure, called &amp;ldquo;Azure Automation&amp;rdquo;. If you&amp;rsquo;re familiar with Amazon&amp;rsquo;s Lambda service, Azure Automation is similar in many ways. The main difference is that in Azure, we&amp;rsquo;re working with PowerShell code instead of Python or Node.js.&lt;/p&gt;
&lt;h1 id=&#34;create-an-azure-automation-account&#34;&gt;Create An Azure Automation Account&lt;/h1&gt;
&lt;p&gt;To get started, the first thing that we need to do is to set up an Azure Automation Account. In your Azure instance, browse for &amp;ldquo;Automation Accounts&amp;rdquo; and then click Add. Give the account a name and a subscription that the PowerShell commands should run under. As with any Azure object, select a resource group or create your own, and then select a location. The last setting is to decide whether or not the account will be an &amp;ldquo;Azure Run As&amp;rdquo; account. If you select &amp;ldquo;Yes&amp;rdquo; then the account will have access to other Azure Resources within your instance. For our examples, this account should be a &amp;ldquo;run as&amp;rdquo; account.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Microsoft Azure Portals</title>
      <link>https://theithollow.com/2016/09/12/microsoft-azure-portals/</link>
      <pubDate>Mon, 12 Sep 2016 14:05:06 +0000</pubDate>
      <guid>https://theithollow.com/2016/09/12/microsoft-azure-portals/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;re getting started with Microsoft Azure, you may feel confused about where things are located. One of the reasons for this confusion is the current use of multiple portals. It&amp;rsquo;s hard enough to learn how subscriptions work, how to access the resources through PowerShell and all of those new concepts without having to navigate different sites. This post should shed some light on what the portals are and how they&amp;rsquo;re used.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Azure Cloud Services</title>
      <link>https://theithollow.com/2016/09/07/azure-cloud-services/</link>
      <pubDate>Wed, 07 Sep 2016 14:00:55 +0000</pubDate>
      <guid>https://theithollow.com/2016/09/07/azure-cloud-services/</guid>
      <description>&lt;p&gt;Azure provides a Platform-as-a-Service offering called a &amp;ldquo;Cloud Service.&amp;rdquo; Instead of managing every part of a virtual machine (the middleware and the application) it might be desirable to only worry about the application that is being deployed. An Azure cloud service allows you to focus on just the app, but still gives you access to the underlying virtual machine if you need to use it.&lt;/p&gt;
&lt;p&gt;So what makes up an Azure Cloud Service? There are two main types of virtual machines that are deployed through a cloud service: web roles and worker roles. Web roles are Windows servers with IIS installed and ready to use. Worker roles are Windows servers without IIS installed. In addition to the Windows instances that will be deployed, a cloud service also includes a load balancer that will automatically load balance the web roles, and an IP Address will be assigned to the load balancer. One thing to note is that the web roles also have an agent installed on them so that the load balancer can determine whether a server is working correctly and whether it needs to be removed from the pool.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Azure Network Interfaces</title>
      <link>https://theithollow.com/2016/09/06/azure-network-interfaces/</link>
      <pubDate>Tue, 06 Sep 2016 14:05:44 +0000</pubDate>
      <guid>https://theithollow.com/2016/09/06/azure-network-interfaces/</guid>
      <description>&lt;p&gt;Azure allows you to manage network interfaces as objects that can be decoupled from the virtual machine. This is important to note, because when you delete your virtual machine, the network interface will still be in the Azure Portal. This NIC and all of its settings will still exist for reuse if you wish, including the Public IP Address that is associated with it, subnets, and Network Security Groups.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Simple Disaster Recovery Options with Zerto</title>
      <link>https://theithollow.com/2016/08/24/simple-disaster-recovery-options-zerto/</link>
      <pubDate>Wed, 24 Aug 2016 14:05:04 +0000</pubDate>
      <guid>https://theithollow.com/2016/08/24/simple-disaster-recovery-options-zerto/</guid>
      <description>&lt;p&gt;An issue serious enough to require servers in your data center to be failed over to a secondary site will probably keep you busy enough all on its own. You don&amp;rsquo;t want to have to think about how complicated your disaster recovery tool is. I&amp;rsquo;ve been impressed with Zerto since the first time that I worked with it. The tool requires a piece of software called the Zerto Virtual Manager to be installed at each of your sites and connected to your vCenters. This manager will then deploy replication appliances on each of your ESXi hosts to manage the replication. From there on, all the replication settings, orchestration options, and failover tasks are completed through this manager.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploying Virtual Machines in Microsoft Azure</title>
      <link>https://theithollow.com/2016/08/23/deploying-virtual-machines-microsoft-azure/</link>
      <pubDate>Tue, 23 Aug 2016 14:01:28 +0000</pubDate>
      <guid>https://theithollow.com/2016/08/23/deploying-virtual-machines-microsoft-azure/</guid>
      <description>&lt;p&gt;Congratulations! If you&amp;rsquo;ve made it this far in the &lt;a href=&#34;https://theithollow.com/2016/07/18/guide-getting-started-azure/&#34;&gt;Microsoft Azure Series&lt;/a&gt;, you&amp;rsquo;re finally ready to start deploying virtual machines in Microsoft Azure. Let&amp;rsquo;s face it, the whole series has led up to this post because most of you are probably looking at getting started in Azure with the virtual machine. It&amp;rsquo;s familiar and can house applications, databases, data or whatever you&amp;rsquo;ve been housing in your on-premises data center. If you&amp;rsquo;re trying to benchmark Azure with your own data center apps, virtual machines are probably where you&amp;rsquo;ll spend your time. As you learn more about the platform, Azure&amp;rsquo;s PaaS offerings might be more heavily used to prevent you from having to manage those pesky operating systems, but for now we&amp;rsquo;re focusing on the VM.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Install PowerShell on Mac</title>
      <link>https://theithollow.com/2016/08/22/install-powershell-mac/</link>
      <pubDate>Mon, 22 Aug 2016 14:10:58 +0000</pubDate>
      <guid>https://theithollow.com/2016/08/22/install-powershell-mac/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s a weird thing to say, but we can install PowerShell on Mac after the &lt;a href=&#34;https://azure.microsoft.com/en-us/blog/powershell-is-open-sourced-and-is-available-on-linux/&#34;&gt;announcement from Microsoft&lt;/a&gt; that PowerShell will be available for both Macintosh and Linux. It&amp;rsquo;s pretty easy to accomplish but having a great scripting language like PowerShell available for Mac is really cool and deserves a blog post. I mean, now I don&amp;rsquo;t even need to fire up my Windows virtual machine just to run PowerShell!&lt;/p&gt;
&lt;p&gt;To get started, download the OSX .pkg file from the GitHub releases page: &lt;a href=&#34;https://github.com/PowerShell/PowerShell/releases/&#34;&gt;https://github.com/PowerShell/PowerShell/releases/&lt;/a&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Get Started with Azure PowerShell</title>
      <link>https://theithollow.com/2016/08/15/get-started-azure-powershell/</link>
      <pubDate>Mon, 15 Aug 2016 14:35:02 +0000</pubDate>
      <guid>https://theithollow.com/2016/08/15/get-started-azure-powershell/</guid>
      <description>&lt;p&gt;Microsoft Azure has its own command line that can be used to script installs, export and import configurations and query your portal for information. Being a Microsoft solution, this command line is accessed through PowerShell.&lt;/p&gt;
&lt;h1 id=&#34;install-azure-powershell&#34;&gt;Install Azure PowerShell&lt;/h1&gt;
&lt;p&gt;Using PowerShell with Microsoft Azure is pretty simple to get up and going. The first step is to install the Azure PowerShell modules. Open up your PowerShell console and run &amp;ldquo;Install-Module AzureRM&amp;rdquo; followed by &amp;ldquo;Install-Module Azure&amp;rdquo;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Azure Storage Accounts</title>
      <link>https://theithollow.com/2016/08/11/azure-storage-accounts/</link>
      <pubDate>Thu, 11 Aug 2016 14:05:47 +0000</pubDate>
      <guid>https://theithollow.com/2016/08/11/azure-storage-accounts/</guid>
      <description>&lt;p&gt;Azure storage accounts provide a namespace in which to store data objects. These objects could be blobs, files, tables, queues and virtual machine disks. This post focuses on the pieces necessary to create a new storage account for use within the Azure Resource Manager portal.&lt;/p&gt;
&lt;h1 id=&#34;setup&#34;&gt;Setup&lt;/h1&gt;
&lt;p&gt;To set up a storage account, go to the Azure Resource Manager Portal, select storage accounts and then click the &amp;ldquo;Add&amp;rdquo; button. From there you&amp;rsquo;ll have some familiar settings that will need to be filled out, such as a unique name for the account, a subscription to use for billing, a resource group for management, and a location for the region to be used. The rest of this article explains the additional settings shown in the screenshot below.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Create Azure VPN Connection</title>
      <link>https://theithollow.com/2016/08/08/create-azure-vpn-connection/</link>
      <pubDate>Mon, 08 Aug 2016 14:05:37 +0000</pubDate>
      <guid>https://theithollow.com/2016/08/08/create-azure-vpn-connection/</guid>
      <description>&lt;p&gt;Unless you&amp;rsquo;re starting up a company from scratch, you probably won&amp;rsquo;t host all of your workloads in a public cloud like Microsoft Azure. If you&amp;rsquo;re building a hybrid cloud, you probably want to have network connectivity between the two clouds and that means a VPN. Microsoft Azure uses a Virtual Network Gateway to provide this connectivity.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;NOTE: As of the writing of this blog post, Microsoft has two portals that can be used to provide cloud resources. The Classic portal and the Azure Resource Manager portal. This post focuses on setting up a VPN tunnel using the new Azure Resource Manager portal.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Azure Network Security Groups</title>
      <link>https://theithollow.com/2016/08/03/azure-network-security-groups/</link>
      <pubDate>Wed, 03 Aug 2016 14:05:32 +0000</pubDate>
      <guid>https://theithollow.com/2016/08/03/azure-network-security-groups/</guid>
      <description>&lt;p&gt;An Azure network security group is your one-stop shop for access control lists. Azure NSGs are how you will block or allow traffic from entering or exiting your subnets or individual virtual machines. In the new Azure Resource Manager Portal, NSGs are applied to either a subnet or a virtual NIC of a virtual machine, and not the entire machine itself.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;NOTE: At the time of this post, Azure has two portals: the classic portal, where NSGs are applied to a virtual machine, and the Resource Manager portal, where NSGs are applied to a vNIC of a virtual machine.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Setup Azure Networks</title>
      <link>https://theithollow.com/2016/08/01/setup-azure-networks/</link>
      <pubDate>Mon, 01 Aug 2016 14:05:16 +0000</pubDate>
      <guid>https://theithollow.com/2016/08/01/setup-azure-networks/</guid>
      <description>&lt;p&gt;Setting up networks in Microsoft Azure is a pretty simple task, but care should be taken when deciding how the address space will be carved out. To get started, let&amp;rsquo;s cover a couple of concepts about how Azure handles networking. First, we have the idea of a &amp;ldquo;VNet&amp;rdquo;, which is the IP space that will be assigned to smaller subnets. These VNets are isolated from each other and the outside world. If you want your VNet to communicate with another VNet or your on-premises networks, you&amp;rsquo;ll need to set up a VPN tunnel. You might be wondering: how do you do any segmentation between servers without having to set up a VPN? The answer is subnets. Multiple subnets can be created inside of a VNet, and security groups can be added to them so that they only allow certain traffic, sort of like a firewall does.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Execute vRO Workflow from AWS Lambda</title>
      <link>https://theithollow.com/2016/07/26/vro_from_aws_lambda/</link>
      <pubDate>Tue, 26 Jul 2016 14:00:15 +0000</pubDate>
      <guid>https://theithollow.com/2016/07/26/vro_from_aws_lambda/</guid>
      <description>&lt;p&gt;The use cases here are open for debate, but you can setup a serverless call to vRealize Orchestrator to execute your custom orchestration tasks. Maybe you&amp;rsquo;re integrating this with an &lt;a href=&#34;http://amzn.to/2a0VHhe&#34;&gt;Amazon IoT button&lt;/a&gt;, or you want voice deployments with &lt;a href=&#34;http://amzn.to/2a0VFG8&#34;&gt;Amazon Echo&lt;/a&gt;, or maybe you&amp;rsquo;re just trying to provide access to your workflows based on a CloudWatch event in Amazon. In any case, it is possible to setup an Amazon Lambda call to execute a vRO workflow. In this post, we&amp;rsquo;ll actually build a Lambda function that executes a vRO workflow that deploys a CentOS virtual machine in vRealize Automation, but the workflow could really be anything you want.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Azure Resource Groups</title>
      <link>https://theithollow.com/2016/07/18/azure-resource-groups/</link>
      <pubDate>Mon, 18 Jul 2016 14:05:36 +0000</pubDate>
      <guid>https://theithollow.com/2016/07/18/azure-resource-groups/</guid>
      <description>&lt;p&gt;An Azure resource group is a way for you to, you guessed it, group a set of resources together. This is a useful capability in a public cloud so that you can manage permissions, set alerts, build deployment templates and audit logs on a subset of resources. Resource groups can contain virtual machines, gateways, VNets, VPNs and just about any other resource Azure can deploy.&lt;/p&gt;
&lt;p&gt;Most items that you create will need to belong to a resource group, but an item can only belong to a single resource group at a time. Resources can be moved from one resource group to another.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Azure Subscriptions</title>
      <link>https://theithollow.com/2016/07/11/azure-subscriptions/</link>
      <pubDate>Mon, 11 Jul 2016 14:12:30 +0000</pubDate>
      <guid>https://theithollow.com/2016/07/11/azure-subscriptions/</guid>
      <description>&lt;p&gt;Azure is a great reservoir of resources that your organization can use to deploy applications, and the cloud is focused around pooling resources together. However, organizations need to be able to split resources up based on cost centers. The development team will be using resources for building new apps, while perhaps an e-commerce team uses them for production. Subscriptions allow a single Azure instance to separate these costs and bill different teams.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Add Custom Items to vRealize Automation</title>
      <link>https://theithollow.com/2016/07/05/add-custom-items-vrealize-automation/</link>
      <pubDate>Tue, 05 Jul 2016 14:14:41 +0000</pubDate>
      <guid>https://theithollow.com/2016/07/05/add-custom-items-vrealize-automation/</guid>
      <description>&lt;p&gt;vRealize Automation lets us publish vRealize Orchestrator workflows to the service catalog, but to get more functionality out of these XaaS blueprints, we can add the provisioned resources to the items list. This allows us to manage the lifecycle of these items and even perform secondary &amp;ldquo;Day 2 Operations&amp;rdquo; on these items later.&lt;/p&gt;
&lt;p&gt;For the example in this post, we&amp;rsquo;ll be provisioning an AWS Security group in an existing VPC. For now, just remember that AWS Security groups are not managed by vRA, but with some custom work, this is all about to change.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Setup the Azure AD Connector</title>
      <link>https://theithollow.com/2016/06/27/setup-azure-ad-connector/</link>
      <pubDate>Mon, 27 Jun 2016 14:10:04 +0000</pubDate>
      <guid>https://theithollow.com/2016/06/27/setup-azure-ad-connector/</guid>
      <description>&lt;p&gt;The cloud doesn&amp;rsquo;t need to be a total shift to the way you manage your infrastructure. Sure, it has many differences, but you don&amp;rsquo;t have to redo everything just to provision cloud workloads. One thing you&amp;rsquo;ll probably want to do is connect your Active Directory Domain to your cloud provider so that you can continue to administer one group of users. Face it, you&amp;rsquo;re not going to create a user account in AD, then one in Amazon and then another one in Azure. You want to be able to manage one account and have it affect everything. Microsoft Azure allows you to extend your on-prem domain to the Azure portal. This post focuses on the AD Connector and doing a sync.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Ansible with vRealize Automation Quickstart</title>
      <link>https://theithollow.com/2016/06/20/ansible-vrealize-automation/</link>
      <pubDate>Mon, 20 Jun 2016 14:04:51 +0000</pubDate>
      <guid>https://theithollow.com/2016/06/20/ansible-vrealize-automation/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;re brand new to Ansible but have some vRealize Automation and Orchestration experience, this post will get you started with a configuration management tool.&lt;/p&gt;
&lt;p&gt;The goal in this example is to deploy a CentOS server from vRealize Automation and then have Ansible configure Apache and deploy a web page. It assumes that you have no Ansible server setup, but do have a working vRealize Automation instance. If you need help with setting up vRealize Automation 7 take a look at the &lt;a href=&#34;https://theithollow.com/2016/01/11/vrealize-automation-7-guide/&#34;&gt;guide here&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Determine the Number of vSphere Clusters to Use</title>
      <link>https://theithollow.com/2016/06/13/cluster-decision-sizing/</link>
      <pubDate>Mon, 13 Jun 2016 14:15:10 +0000</pubDate>
      <guid>https://theithollow.com/2016/06/13/cluster-decision-sizing/</guid>
      <description>&lt;p&gt;The number of clusters that should be used for a vSphere environment comes up for every vSphere design. The number of clusters that should be used isn’t a standard number and should be evaluated based on several factors.&lt;/p&gt;
&lt;h1 id=&#34;number-of-hosts&#34;&gt;Number of Hosts&lt;/h1&gt;
&lt;p&gt;Let’s start with the basics: if the design calls for more virtual machines than can fit into a single cluster, then it’s obvious that multiple clusters must be used. The same is true for a design that calls for more hosts than can fit into a single cluster, or that exceeds any other cluster maximums.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Add REST to a SQL Database</title>
      <link>https://theithollow.com/2016/06/06/add-rest-sql-database/</link>
      <pubDate>Mon, 06 Jun 2016 14:06:27 +0000</pubDate>
      <guid>https://theithollow.com/2016/06/06/add-rest-sql-database/</guid>
      <description>&lt;p&gt;If you do a lot of work with orchestration, you&amp;rsquo;re almost certain to be familiar with working with a REST API. These REST APIs have become the primary way that different systems interact with each other. But what about database operations? How about the ability to use a generic database to house CMDB data, change tracking or really anything you can think of?&lt;/p&gt;
&lt;p&gt;I came across a nifty program called &lt;a href=&#34;https://www.dreamfactory.com/&#34;&gt;DreamFactory&lt;/a&gt; that allows us to add an API to our databases. The examples in this post are all around MS SQL Server, but it also has support for PostgreSQL, NoSQL databases, SQLite, DB2, Salesforce and even Active Directory or LDAP.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Code Stream with Artifactory</title>
      <link>https://theithollow.com/2016/05/23/code-stream-artifactory/</link>
      <pubDate>Mon, 23 May 2016 14:04:37 +0000</pubDate>
      <guid>https://theithollow.com/2016/05/23/code-stream-artifactory/</guid>
      <description>&lt;p&gt;vRealize Code Stream now comes pre-packaged with JFrog Artifactory, which allows us to do some cool things while we&amp;rsquo;re testing and deploying new code. To begin this post, let&amp;rsquo;s take a look at what an artifactory is and how we can use it.&lt;/p&gt;
&lt;p&gt;An artifactory is a version control repository, typically used for binary objects like .jar files. You might already be thinking: how is this different from Git? My GitHub account already has repos and does its own version control. True, but what if we don&amp;rsquo;t want to pull down an entire repo to do work? Maybe we only need a single file from a build, or we want to be able to pull down different versions of the same file without creating branches, forks, additional repos or committing new code? This is where an artifactory service can really shine.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Using Jenkins with vRealize Code Stream</title>
      <link>https://theithollow.com/2016/05/09/using-jenkins-vrealize-code-stream/</link>
      <pubDate>Mon, 09 May 2016 14:05:51 +0000</pubDate>
      <guid>https://theithollow.com/2016/05/09/using-jenkins-vrealize-code-stream/</guid>
      <description>&lt;p&gt;By now, we&amp;rsquo;re probably Jenkins experts. So let&amp;rsquo;s see how we can use Jenkins with vRealize Code Stream. To give you a little background, vRealize Code Stream is a release automation solution that can be added to VMware&amp;rsquo;s vRealize Automation. It&amp;rsquo;s a nifty little tool that will let us deploy a server from a blueprint, call some Jenkins jobs, and deploy code from an Artifactory repository. One of the best features is that you can build your release in stages with gating rules between them, so you can automate going from Development to UAT to Production or whatever else you can think of.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Use vRealize Automation with Jenkins</title>
      <link>https://theithollow.com/2016/05/02/use-vrealize-automation-jenkins/</link>
      <pubDate>Mon, 02 May 2016 14:05:35 +0000</pubDate>
      <guid>https://theithollow.com/2016/05/02/use-vrealize-automation-jenkins/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been following the rest of this series about using Jenkins, you&amp;rsquo;re starting to see that there are a lot of capabilities that can be used to suit whatever use case you have for deploying and testing code. This post focuses on a great plugin that was recently pushed out by &lt;a href=&#34;http://twitter.com/inkysea&#34;&gt;Kris Thieler&lt;/a&gt; (aka &lt;a href=&#34;http://inkysea.com&#34;&gt;inkysea&lt;/a&gt;) and Paul Gifford. These guys have published a Jenkins Plugin for vRealize Automation.&lt;/p&gt;
&lt;p&gt;Just like we&amp;rsquo;ve done in other posts, the first step is to install the plugin in the Manage Plugins section of Jenkins.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Push Code to GIT and test with Jenkins</title>
      <link>https://theithollow.com/2016/04/25/push-code-git-test-jenkins/</link>
      <pubDate>Mon, 25 Apr 2016 14:26:57 +0000</pubDate>
      <guid>https://theithollow.com/2016/04/25/push-code-git-test-jenkins/</guid>
      <description>&lt;p&gt;In previous posts we discussed how you can use Jenkins to test various pieces of code, including PowerShell. Jenkins is a neat way to test your code and keep a log of the successes and failures, but let&amp;rsquo;s face it: you were probably testing your code as you were writing it anyway, right? Well, what if you could push your code to Git and have that code tested each time a git push was executed? Then you can have several people working on the same code, and whenever the code gets updated in your repositories, it will be tested and logged. This makes it really easy to see when the code stopped working and who published it. Now we&amp;rsquo;re really starting to see the power of this CI/CD stuff.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Test PowerCLI Code with Jenkins</title>
      <link>https://theithollow.com/2016/04/18/test-powercli-code-with-jenkins/</link>
      <pubDate>Mon, 18 Apr 2016 13:58:25 +0000</pubDate>
      <guid>https://theithollow.com/2016/04/18/test-powercli-code-with-jenkins/</guid>
      <description>&lt;p&gt;In the previous post we discussed how to set up a Windows node to test PowerShell code. In this post, we&amp;rsquo;ll configure a new Jenkins project to test some very basic PowerCLI code.&lt;/p&gt;
&lt;p&gt;To start, we need a few basics in place on the Windows node that we previously set up as a slave. In our case, we need to make sure that PowerCLI is installed on the host. Let&amp;rsquo;s think about this logically for a second: Jenkins is going to tell our Windows node to execute some PowerCLI scripts as a test. If the Windows node doesn&amp;rsquo;t understand PowerCLI, then our tests just won&amp;rsquo;t work. I suggest installing PowerCLI on your Windows node and then doing a quick test to make sure you can connect to your vCenter server.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Add a Jenkins Node for Windows Powershell</title>
      <link>https://theithollow.com/2016/04/11/add-a-jenkins-node-for-windows-powershell/</link>
      <pubDate>Mon, 11 Apr 2016 14:17:39 +0000</pubDate>
      <guid>https://theithollow.com/2016/04/11/add-a-jenkins-node-for-windows-powershell/</guid>
      <description>&lt;p&gt;Not all of your Jenkins projects will consist of &amp;ldquo;Hello World&amp;rdquo;-type routines. What if we want to run some PowerShell jobs? Or better yet, PowerCLI? Our Jenkins instance was built on CentOS and doesn&amp;rsquo;t run Windows PowerShell at all. Luckily for us, in situations like this we can add additional Jenkins nodes, and yes, they can be Windows hosts!&lt;/p&gt;
&lt;p&gt;Log in to your Jenkins instance, go to Manage Jenkins, and then click on Manage Nodes.&lt;img alt=&#34;JenkinsWIN1&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2016/02/JenkinsWIN1-1024x649.png&#34;&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Create a Jenkins Project</title>
      <link>https://theithollow.com/2016/04/04/create-a-jenkins-job/</link>
      <pubDate>Mon, 04 Apr 2016 14:15:00 +0000</pubDate>
      <guid>https://theithollow.com/2016/04/04/create-a-jenkins-job/</guid>
      <description>&lt;p&gt;In this post we&amp;rsquo;ll create a Jenkins project on the shiny new server that we just deployed. The project we create will be very simple but should show off the possibilities of using a Jenkins server to test your code.&lt;/p&gt;
&lt;p&gt;To get started, log in to your Jenkins server at http://jenkinsservername:8080 and then click the &amp;ldquo;New Item&amp;rdquo; link. From there, give your new project a name. In this example our project is a Freestyle project, which will let us throw code right into the project and run it on the Jenkins server or on subsequent Jenkins nodes.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Jenkins Installation</title>
      <link>https://theithollow.com/2016/03/28/jenkins-installation/</link>
      <pubDate>Mon, 28 Mar 2016 14:32:04 +0000</pubDate>
      <guid>https://theithollow.com/2016/03/28/jenkins-installation/</guid>
      <description>&lt;p&gt;Installing a Jenkins instance is pretty simple if you&amp;rsquo;re a Linux guy, but even if you&amp;rsquo;re not a Linux admin, this isn&amp;rsquo;t going to make you sweat too much. First, start by deploying yourself a Linux instance. The OS in this post is CentOS 7 if you are interested in following along.&lt;/p&gt;
&lt;p&gt;Once you&amp;rsquo;re up and running, make sure you can ping the box and have SSH access. If you&amp;rsquo;re new to this, you can find instructions on &lt;a href=&#34;https://www.digitalocean.com/community/tutorials/initial-server-setup-with-centos-7&#34;&gt;setting up an SSH daemon here&lt;/a&gt;. Now that it&amp;rsquo;s set up, we can install Jenkins by running the following commands.&lt;/p&gt;</description>
    </item>
    <item>
      <title>AWS CloudFormation Templates in vRealize Automation</title>
      <link>https://theithollow.com/2016/03/14/aws-cloud-formation-templates-in-vrealize-automation/</link>
      <pubDate>Mon, 14 Mar 2016 14:15:46 +0000</pubDate>
      <guid>https://theithollow.com/2016/03/14/aws-cloud-formation-templates-in-vrealize-automation/</guid>
      <description>&lt;p&gt;Amazon has a pretty cool service that allows you to create a template for an entire set of infrastructure. This isn&amp;rsquo;t a template for a virtual machine, or even a series of virtual machines, but for a whole environment. You can create a template with servers, security groups, networks, and even PaaS services like their Relational Database Service (RDS). Hey, in today&amp;rsquo;s world, infrastructure as code is the direction things are going, and AWS already has a pretty good solution for it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 - Deploy NSX Blueprints</title>
      <link>https://theithollow.com/2016/03/09/vrealize-automation-7-deploy-nsx-blueprints/</link>
      <pubDate>Wed, 09 Mar 2016 15:10:20 +0000</pubDate>
      <guid>https://theithollow.com/2016/03/09/vrealize-automation-7-deploy-nsx-blueprints/</guid>
      <description>&lt;p&gt;In the &lt;a href=&#34;http://wp.me/p32uaN-1Cy&#34;&gt;previous post&lt;/a&gt; we went over how to get the basics configured for NSX and vRealize Automation integration. In this post we&amp;rsquo;ll build a blueprint and deploy it! Let&amp;rsquo;s jump right in and get started.&lt;/p&gt;
&lt;h2 id=&#34;blueprint-designer&#34;&gt;Blueprint Designer&lt;/h2&gt;
&lt;p&gt;Log in to your vRA tenant and click on the Design tab. Create a new blueprint just like we have done in the &lt;a href=&#34;https://theithollow.com/2016/01/28/vrealize-automation-7-blueprints/&#34;&gt;past posts&lt;/a&gt;. This time, when you are creating your blueprint, click the NSX Settings tab and select the transport zone. I&amp;rsquo;ve also added a reservation policy that helps define which reservations are available for this blueprint.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 - NSX Initial Setup</title>
      <link>https://theithollow.com/2016/03/07/6234/</link>
      <pubDate>Mon, 07 Mar 2016 15:01:03 +0000</pubDate>
      <guid>https://theithollow.com/2016/03/07/6234/</guid>
      <description>&lt;p&gt;It&amp;rsquo;s time to think about deploying our networks through vRA. Deploying servers is cool, but deploying three-tiered applications in different networks is cooler. So let&amp;rsquo;s add VMware NSX to our cloud portal and get cracking.&lt;/p&gt;
&lt;p&gt;The first step is to have NSX up and running in your vSphere environment. Once this simple task is complete, a Distributed Logical Router should be deployed with an uplink interface configured. The diagram below explains what needs to be set up in vSphere prior to doing any configuration in vRealize Automation. A Distributed Logical Router with a single uplink to an Edge Services Gateway should be configured first; any new networks will then be built through the vRealize Automation integration. While the manual section of the diagram will remain roughly the same throughout, the section handled by vRealize Automation will change often, based on the workloads that are deployed. Note: be sure to set up routing between your provider Edge and the DLR so that you can reach the new networks that vRA creates.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – XaaS Blueprints</title>
      <link>https://theithollow.com/2016/02/29/vrealize-automation-7-xaas-blueprints/</link>
      <pubDate>Mon, 29 Feb 2016 15:03:07 +0000</pubDate>
      <guid>https://theithollow.com/2016/02/29/vrealize-automation-7-xaas-blueprints/</guid>
      <description>&lt;p&gt;XaaS isn&amp;rsquo;t a made-up term; well, maybe it is, but it&amp;rsquo;s supposed to stand for &amp;ldquo;Anything as a Service.&amp;rdquo; vRealize Automation will allow you to publish vRO workflows in the service catalog. This means that you can publish just about anything you can think of, not just server blueprints. If you have a workflow that can order your coffee and have it delivered to you, then you can publish it in your vRA service catalog. &lt;em&gt;Side note: if you have that workflow, please share it with the rest of us.&lt;/em&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 - Load Balancer Rules</title>
      <link>https://theithollow.com/2016/02/24/vrealize-automation-7-load-balancer-rules/</link>
      <pubDate>Wed, 24 Feb 2016 15:15:32 +0000</pubDate>
      <guid>https://theithollow.com/2016/02/24/vrealize-automation-7-load-balancer-rules/</guid>
      <description>&lt;p&gt;In a previous post we went over the &lt;a href=&#34;https://theithollow.com/2016/02/22/vrealize-automation-7-enterprise-install/&#34;&gt;Enterprise Install of vRealize Automation&lt;/a&gt; behind a load balancer. That install required us to set up a load balancer with three VIPs, but also required that each VIP have only one active member. A load balancer with a single member doesn&amp;rsquo;t really balance much load, does it?&lt;/p&gt;
&lt;p&gt;After the installation is done, some modifications need to be made on the load balancer. The instructions can be found in the official &lt;a href=&#34;http://pubs.vmware.com/vra-70/topic/com.vmware.ICbase/PDF/vrealize-automation-70-load-balancing.pdf&#34;&gt;vRealize Automation Load Balancing Configuration Guide&lt;/a&gt; if you want to learn more; it includes several examples of how to set up load balancing on an F5 load balancer and on NSX. This post will focus on a KEMP load balancer, which is free for vExperts, and everything will be shown with GUI examples.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Enterprise Install</title>
      <link>https://theithollow.com/2016/02/22/vrealize-automation-7-enterprise-install/</link>
      <pubDate>Mon, 22 Feb 2016 15:09:48 +0000</pubDate>
      <guid>https://theithollow.com/2016/02/22/vrealize-automation-7-enterprise-install/</guid>
      <description>&lt;p&gt;OK, you&amp;rsquo;ve done a vRealize Automation 7 simple install and have the basics down. Now it&amp;rsquo;s time to put your grown-up pants on and get an enterprise install done. This is a pretty long process, so be ready, but trust me, it&amp;rsquo;s much better in version 7 than in the past.&lt;/p&gt;
&lt;h1 id=&#34;load-balancer&#34;&gt;Load Balancer&lt;/h1&gt;
&lt;p&gt;To start, you will want to configure your load balancer. An enterprise install means you&amp;rsquo;ll want at least two of each type of service so that you can protect yourself from a failure. Three Virtual IPs (VIPs) should be created prior to starting your install. The table below lists example VIPs with their associated members and ports.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Custom Actions</title>
      <link>https://theithollow.com/2016/02/15/vrealize-automation-7-custom-actions/</link>
      <pubDate>Mon, 15 Feb 2016 15:31:16 +0000</pubDate>
      <guid>https://theithollow.com/2016/02/15/vrealize-automation-7-custom-actions/</guid>
      <description>&lt;p&gt;We&amp;rsquo;ve deployed a virtual machine from a vRA blueprint, but we still have to manage that machine. One of the cool things we can do with vRealize Automation 7 is add a custom action. This takes the virtual machine object and runs a vRealize Orchestrator workflow against it as an input. We call these actions &amp;ldquo;Day 2 Operations&amp;rdquo; since they happen post-provisioning.&lt;/p&gt;
&lt;p&gt;To create a new custom resource action go to the Design Tab &amp;ndash;&amp;gt; Design &amp;ndash;&amp;gt; Resource Actions. Click the &amp;ldquo;New&amp;rdquo; button to add a new action.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 - Custom Properties</title>
      <link>https://theithollow.com/2016/02/10/vrealize-automation-7-custom-properties/</link>
      <pubDate>Wed, 10 Feb 2016 15:09:39 +0000</pubDate>
      <guid>https://theithollow.com/2016/02/10/vrealize-automation-7-custom-properties/</guid>
      <description>&lt;p&gt;Custom properties are used to control aspects of the machines that users are able to provision. For example, memory and CPU are required pieces of information for users to deploy a VM from a blueprint. Custom properties can be assigned to a blueprint or reservation to control how memory and CPU should be configured.&lt;/p&gt;
&lt;p&gt;Custom properties are really powerful attributes that can vastly change how a machine behaves. I like to think of custom properties as the &amp;ldquo;Windows Registry&amp;rdquo; of vRealize Automation. Changing one property can have a huge effect on deployments.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Subscriptions</title>
      <link>https://theithollow.com/2016/02/08/vrealize-automation-7-subscription/</link>
      <pubDate>Mon, 08 Feb 2016 15:05:14 +0000</pubDate>
      <guid>https://theithollow.com/2016/02/08/vrealize-automation-7-subscription/</guid>
      <description>&lt;p&gt;vRealize Automation 7 introduced a new concept called a &amp;ldquo;Subscription.&amp;rdquo; A subscription is a way to execute a vRealize Orchestrator workflow based on some sort of event that has taken place in vRA. Simple idea, huh? Well, some of you might be thinking to yourself, &amp;ldquo;Yeah, this is called a stub, duh!&amp;rdquo; The truth is that stubs are still available in vRealize Automation 7, but they are clearly being phased out, and we should stop using them soon since they are unlikely to be around in future versions. In the context of machine provisioning, an event subscription works a lot like a stub, but many more events can be triggered than with the stubs of previous versions. Let&amp;rsquo;s take a look.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Manage Catalog Items</title>
      <link>https://theithollow.com/2016/02/02/vrealize-automation-7-manage-catalog-items/</link>
      <pubDate>Tue, 02 Feb 2016 15:05:37 +0000</pubDate>
      <guid>https://theithollow.com/2016/02/02/vrealize-automation-7-manage-catalog-items/</guid>
      <description>&lt;p&gt;You&amp;rsquo;ve created your blueprints and entitled users to use them. How do we get them to show up in our service catalog? How do we make them look pretty and organized? For that, we need to look at managing catalog items.&lt;/p&gt;
&lt;p&gt;Log in as a tenant administrator and go to the Administration Tab &amp;ndash;&amp;gt; Catalog Management &amp;ndash;&amp;gt; Catalog Items. From here, we&amp;rsquo;ll need to look for the blueprint that we&amp;rsquo;ve previously published. Click on the blueprint.
&lt;img alt=&#34;vra7-catitem1&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2016/01/vra7-catitem1.png&#34;&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Entitlements</title>
      <link>https://theithollow.com/2016/02/01/vrealize-automation-7-entitlements/</link>
      <pubDate>Mon, 01 Feb 2016 15:01:13 +0000</pubDate>
      <guid>https://theithollow.com/2016/02/01/vrealize-automation-7-entitlements/</guid>
      <description>&lt;p&gt;An entitlement is how we assign users a set of catalog items. Each entitlement can be managed by its business group manager, and a tenant administrator can manage entitlements for all business groups in the tenant.&lt;/p&gt;
&lt;p&gt;To create a new entitlement go to Administration tab &amp;ndash;&amp;gt; Catalog Management &amp;ndash;&amp;gt; Entitlements. Click the &amp;ldquo;New&amp;rdquo; button to add a new entitlement.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;vra7-Entitlements1&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2015/12/vra7-Entitlements1-1024x449.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;Under the General tab, enter a name for the entitlement and a description. Change the status to &amp;ldquo;Active&amp;rdquo; and select a Business Group. Note: If only a single business group has been created, this will not be selectable since it will default to the only available group. Then select the users who will be part of this entitlement.&lt;img alt=&#34;vra7-Entitlements2&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2015/12/vra7-Entitlements2-1024x326.png&#34;&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Blueprints</title>
      <link>https://theithollow.com/2016/01/28/vrealize-automation-7-blueprints/</link>
      <pubDate>Thu, 28 Jan 2016 15:10:13 +0000</pubDate>
      <guid>https://theithollow.com/2016/01/28/vrealize-automation-7-blueprints/</guid>
      <description>&lt;p&gt;Blueprints are arguably the thing you&amp;rsquo;ll spend most of your operational time dealing with in vRealize Automation. We&amp;rsquo;ve finally gotten most of the setup done so that we can publish our vSphere templates in vRA.&lt;/p&gt;
&lt;p&gt;To create a blueprint in vRealize Automation 7 go to the &amp;ldquo;Design&amp;rdquo; tab. Note: If you&amp;rsquo;re missing this tab, be sure you added yourself to the custom group with permissions like we did in a previous post, and that you&amp;rsquo;ve logged back into the portal after doing so.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Custom Groups</title>
      <link>https://theithollow.com/2016/01/28/vrealize-automation-7-custom-groups/</link>
      <pubDate>Thu, 28 Jan 2016 15:05:40 +0000</pubDate>
      <guid>https://theithollow.com/2016/01/28/vrealize-automation-7-custom-groups/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve been reading the whole series of posts on vRealize Automation 7, then you know that we&amp;rsquo;ve already been setting up roles in our cloud portal, but we&amp;rsquo;re not done yet. If you need any permissions beyond just requesting a blueprint, you&amp;rsquo;ll need to be added to a custom group first.&lt;/p&gt;
&lt;p&gt;To create a custom group, log in as a tenant administrator and go to the Administration Tab &amp;ndash;&amp;gt; Users and Groups &amp;ndash;&amp;gt; Custom Groups. From there, click the &amp;ldquo;New&amp;rdquo; button to add a new custom group.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Services</title>
      <link>https://theithollow.com/2016/01/26/vrealize-automation-7-services/</link>
      <pubDate>Tue, 26 Jan 2016 15:05:21 +0000</pubDate>
      <guid>https://theithollow.com/2016/01/26/vrealize-automation-7-services/</guid>
      <description>&lt;p&gt;Services might be a poor name for this feature of vRealize Automation 7. When I think of a service, I think of some sort of activity that is being provided but in the case of vRA a service is little more than a category or type. For example, I could have a service called &amp;ldquo;Private Cloud&amp;rdquo; and put all of my vSphere blueprints in it and another one called &amp;ldquo;Public Cloud&amp;rdquo; and put all of my AWS blueprints in it. In the screenshot below you can see the services in a catalog. If you highlight the &amp;ldquo;All Services&amp;rdquo; service, it will show you all blueprints regardless of their service category. Otherwise, selecting a specific service will show you only the blueprints in that category.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Reservations</title>
      <link>https://theithollow.com/2016/01/25/vrealize-automation-7-reservations/</link>
      <pubDate>Mon, 25 Jan 2016 15:16:53 +0000</pubDate>
      <guid>https://theithollow.com/2016/01/25/vrealize-automation-7-reservations/</guid>
      <description>&lt;p&gt;vRealize Automation 7 uses the concept of reservations to grant a percentage of fabric group resources to a business group. To add a reservation, go to Infrastructure &amp;ndash;&amp;gt; Reservations. Click the &amp;ldquo;New&amp;rdquo; button to add a reservation and then select the type of reservation to be added. Since I&amp;rsquo;m using a vSphere cluster, I selected Virtual &amp;ndash;&amp;gt; vCenter. Depending on what kind of reservation you&amp;rsquo;ve selected, the next few screens may differ, but I&amp;rsquo;m assuming many people will use vSphere, so I&amp;rsquo;ve chosen it for my example.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Business Groups</title>
      <link>https://theithollow.com/2016/01/21/vrealize-automation-7-business-groups/</link>
      <pubDate>Thu, 21 Jan 2016 15:15:27 +0000</pubDate>
      <guid>https://theithollow.com/2016/01/21/vrealize-automation-7-business-groups/</guid>
      <description>&lt;p&gt;The job of a business group is to associate a set of resources with a set of users. Think of it this way: your development team and your production managers likely need to deploy machines to different sets of servers. I should mention that a business group doesn&amp;rsquo;t do this by itself; instead, it is combined with a reservation, which we&amp;rsquo;ll discuss in the next post. But before we can build those out, let&amp;rsquo;s set up our business groups as well as machine prefixes.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Fabric Groups</title>
      <link>https://theithollow.com/2016/01/19/vrealize-automation-7-fabric-groups/</link>
      <pubDate>Tue, 19 Jan 2016 15:09:39 +0000</pubDate>
      <guid>https://theithollow.com/2016/01/19/vrealize-automation-7-fabric-groups/</guid>
      <description>&lt;p&gt;In the last post we set up a vCenter endpoint that defines how our vRealize Automation solution will talk to our vSphere environment. Now we must create a fabric group. Fabric groups are a way of segmenting our endpoints into different types of resources or of separating them by intent. These groups are mandatory before you can build anything, so even if you don&amp;rsquo;t need to segment your resources, don&amp;rsquo;t think you can get away with not creating one.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Endpoints</title>
      <link>https://theithollow.com/2016/01/18/vrealize-automation-7-endpoints/</link>
      <pubDate>Mon, 18 Jan 2016 15:06:46 +0000</pubDate>
      <guid>https://theithollow.com/2016/01/18/vrealize-automation-7-endpoints/</guid>
      <description>&lt;p&gt;Now that we&amp;rsquo;ve set up our new tenant, let&amp;rsquo;s log in as an infrastructure admin and start assigning some resources that we can use. To do this, we need to start by adding an endpoint. An endpoint is anything that vRA uses to complete its provisioning processes. This could be a public cloud resource such as Amazon Web Services, an external orchestrator appliance, or a private cloud hosted by Hyper-V or vSphere.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 – Create Tenants</title>
      <link>https://theithollow.com/2016/01/14/vrealize-automation-7-create-tenants/</link>
      <pubDate>Thu, 14 Jan 2016 16:10:08 +0000</pubDate>
      <guid>https://theithollow.com/2016/01/14/vrealize-automation-7-create-tenants/</guid>
      <description>&lt;p&gt;Now it&amp;rsquo;s time to create a new tenant in our vRealize Automation portal. Let&amp;rsquo;s log in to the portal with the system administrator account as we have before. Click the Tenants tab and then click the &amp;ldquo;New&amp;rdquo; button.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;vra7-base_1&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2015/12/vra7-base_1-1.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;Give the new tenant a name and a description. Then enter a URL name. This name will be appended to the string https://[vraappliance.domain.name]/vcac/org/ and will be the URL that users log in to. In my example the URL is &lt;a href=&#34;https://vra7.hollow.local/vcac/org/labtenant&#34;&gt;https://vra7.hollow.local/vcac/org/labtenant&lt;/a&gt;. Click &amp;ldquo;Submit and Next&amp;rdquo;.&lt;img alt=&#34;vra7-NewTenant1&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2015/12/vra7-NewTenant1-1024x457.png&#34;&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 - Authentication</title>
      <link>https://theithollow.com/2016/01/13/vrealize-automation-7/</link>
      <pubDate>Wed, 13 Jan 2016 15:03:15 +0000</pubDate>
      <guid>https://theithollow.com/2016/01/13/vrealize-automation-7/</guid>
      <description>&lt;p&gt;In order to set up Active Directory Integrated Authentication, we must log in to our default tenant again, but this time as the &amp;ldquo;Tenant Administrator&amp;rdquo; (which we set up in &lt;a href=&#34;https://theithollow.com/2016/01/12/vrealize-automation-7-base-setup/&#34;&gt;the previous post&lt;/a&gt;) instead of the system administrator account that is created during the initial setup.&lt;/p&gt;
&lt;p&gt;Once you&amp;rsquo;re logged in, click the Administration tab &amp;ndash;&amp;gt; Directories Management &amp;ndash;&amp;gt; Directories and then click the &amp;ldquo;Add Directory&amp;rdquo; button. Give the directory a descriptive name, such as the name of the AD domain. Then select the type of directory. I&amp;rsquo;ve chosen the &amp;ldquo;Active Directory (Integrated Windows Authentication)&amp;rdquo; option. This will add the vRA appliance to the AD domain and use the computer account for authentication. &lt;strong&gt;Note:&lt;/strong&gt; you must set up Active Directory in the default (vsphere.local) tenant before it can be used in the subtenants.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 - Base Setup</title>
      <link>https://theithollow.com/2016/01/12/vrealize-automation-7-base-setup/</link>
      <pubDate>Tue, 12 Jan 2016 15:07:46 +0000</pubDate>
      <guid>https://theithollow.com/2016/01/12/vrealize-automation-7-base-setup/</guid>
      <description>&lt;p&gt;We&amp;rsquo;ve got vRA installed, and that&amp;rsquo;s a good start. Our next step is to log in to the portal and start doing some configuration. Go to https://vra-appliance-name-orIP and enter the administrator login that you specified during your install. Unlike prior versions of vRealize Automation, no vsphere.local domain suffix is required to log in.&lt;img alt=&#34;vra7-base1&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2015/12/vra7-base1.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;To start, let&amp;rsquo;s add some local users to our vsphere.local tenant. Click on the vsphere.local tenant.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;vra7-base_1&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2015/12/vra7-base_1.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;Click on the &amp;ldquo;Local users&amp;rdquo; tab and then click the &amp;ldquo;New&amp;rdquo; button to add a local account. I&amp;rsquo;ve created a vraadmin account that will be a local account only used to manage the default tenant configurations.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 Guide</title>
      <link>https://theithollow.com/2016/01/11/vrealize-automation-7-guide/</link>
      <pubDate>Mon, 11 Jan 2016 15:45:35 +0000</pubDate>
      <guid>https://theithollow.com/2016/01/11/vrealize-automation-7-guide/</guid>
      <description>&lt;p&gt;If you follow the posts in order, this guide should help you set up vRealize Automation 7 from start to finish. It&amp;rsquo;s a getting-started guide that will hopefully get you on the right path, answer any questions you might have, and give you tips on deploying your own cloud management portal.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Setup vRealize Automation 7&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2016/01/vRA7Guide1-1024x610.png&#34;&gt;&lt;/p&gt;
&lt;h1 id=&#34;part-1---simple-installation&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1uy&#34;&gt;Part 1 - Simple Installation&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-2--base-setup&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1vm&#34;&gt;Part 2 - Base Setup&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-3--authentication&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1vb&#34;&gt;Part 3 - Authentication&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-4---tenants&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1vK&#34;&gt;Part 4 - Tenants&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-5---endpoints&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1w0&#34;&gt;Part 5 - Endpoints&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-6---fabric-groups&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1w8&#34;&gt;Part 6 - Fabric Groups&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-7---business-groups&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1wq&#34;&gt;Part 7 - Business Groups&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-8---reservations&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1wf&#34;&gt;Part 8 - Reservations&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-9---services&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1x1&#34;&gt;Part 9 - Services&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-10---custom-groups&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1wT&#34;&gt;Part 10 - Custom Groups&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-11---blueprints&#34;&gt;&lt;a href=&#34;https://theithollow.com/2016/01/28/vrealize-automation-7-blueprints/&#34;&gt;Part 11 - Blueprints&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-12---entitlements&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1xa&#34;&gt;Part 12 - Entitlements&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-13---manage-catalog-items&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1zN&#34;&gt;Part 13 - Manage Catalog Items&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-14---event-subscriptions&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1xU&#34;&gt;Part 14 - Event Subscriptions&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-15---custom-properties&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1yi&#34;&gt;Part 15 - Custom Properties&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-16---xaas-blueprints&#34;&gt;&lt;a href=&#34;https://theithollow.com/2016/02/29/vrealize-automation-7-xaas-blueprints/&#34;&gt;Part 16 - XaaS Blueprints&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-17---resource-actions&#34;&gt;&lt;a href=&#34;https://theithollow.com/2016/02/15/vrealize-automation-7-custom-actions/&#34;&gt;Part 17 - Resource Actions&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-18---enterprise-install&#34;&gt;&lt;a href=&#34;https://theithollow.com/2016/02/22/vrealize-automation-7-enterprise-install/&#34;&gt;Part 18 - Enterprise Install&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-19---load-balancer-settings&#34;&gt;&lt;a href=&#34;https://theithollow.com/2016/02/24/vrealize-automation-7-load-balancer-rules/&#34;&gt;Part 19 - Load Balancer Settings&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-20--nsx-initial-setup&#34;&gt;&lt;a href=&#34;https://theithollow.com/2016/03/07/6234/&#34;&gt;Part 20 - NSX Initial Setup&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-21---nsx-blueprints&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1Db&#34;&gt;Part 21 - NSX Blueprints&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-22---code-stream-and-jenkins-setup&#34;&gt;&lt;a href=&#34;https://theithollow.com/2016/05/09/using-jenkins-vrealize-code-stream/&#34;&gt;Part 22 - Code Stream and Jenkins Setup&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-23---code-stream-and-artifactory-setup&#34;&gt;&lt;a href=&#34;https://theithollow.com/2016/05/23/code-stream-artifactory/&#34;&gt;Part 23 - Code Stream and Artifactory Setup&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-24---add-custom-items-to-vra7&#34;&gt;&lt;a href=&#34;http://wp.me/p32uaN-1G8&#34;&gt;Part 24 - Add Custom Items to vRA7&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-25---upgrade-vra-from-71-to-72&#34;&gt;&lt;a href=&#34;https://theithollow.com/?p=7311&amp;amp;preview=true&#34;&gt;Part 25 - Upgrade vRA from 7.1 to 7.2&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-26---adding-an-azure-endpoint&#34;&gt;&lt;a href=&#34;https://theithollow.com/2017/03/20/adding-azure-endpoint-vrealize-automation-7/&#34;&gt;Part 26 - Adding an Azure Endpoint&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-27---installing-vrealize-code-stream-for-it-devops&#34;&gt;&lt;a href=&#34;https://theithollow.com/2017/03/27/installing-code-stream-management-pack-devops/&#34;&gt;Part 27 - Installing vRealize Code Stream for IT DevOps&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-28---configuring-endpoints-for-vrealize-code-stream-for-it-devops&#34;&gt;&lt;a href=&#34;https://theithollow.com/2017/04/04/configuring-vrealize-code-stream-management-pack-devops-endpoints/&#34;&gt;Part 28 - Configuring Endpoints for vRealize Code Stream for IT DevOps&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-29---using-vrealize-code-stream-for-it-devops&#34;&gt;&lt;a href=&#34;https://theithollow.com/2017/04/10/using-vrealize-code-stream-management-pack-devops/&#34;&gt;Part 29 - Using vRealize Code Stream for IT DevOps&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-30---unit-testing-with-vrealize-code-stream-for-it-devops&#34;&gt;&lt;a href=&#34;https://theithollow.com/2017/04/18/vrealize-code-stream-management-pack-devops-unit-testing/&#34;&gt;Part 30 - Unit Testing with vRealize Code Stream for IT DevOps&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-31---containers-on-vrealize-automation&#34;&gt;&lt;a href=&#34;https://theithollow.com/2017/05/08/containers-vrealize-automation/&#34;&gt;Part 31 - Containers on vRealize Automation&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-32---vra-73-component-profiles&#34;&gt;&lt;a href=&#34;https://theithollow.com/2017/06/06/vra-7-3-component-profiles/&#34;&gt;Part 32 - vRA 7.3 Component Profiles&lt;/a&gt;&lt;/h1&gt;
&lt;h1 id=&#34;part-33---vra-75-upgrade&#34;&gt;&lt;a href=&#34;https://wp.me/p32uaN-2oA&#34;&gt;Part 33 - vRA 7.5 Upgrade&lt;/a&gt;&lt;/h1&gt;
&lt;p&gt;If you&amp;rsquo;re looking for a getting started video, check out this &lt;a href=&#34;http://pluralsight.com&#34;&gt;Pluralsight&lt;/a&gt; course for a quick leg up on vRA 7.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 7 Simple Installation</title>
      <link>https://theithollow.com/2016/01/11/vrealize-automation-7-simple-installation/</link>
      <pubDate>Mon, 11 Jan 2016 15:00:21 +0000</pubDate>
      <guid>https://theithollow.com/2016/01/11/vrealize-automation-7-simple-installation/</guid>
      <description>&lt;p&gt;This is our first stop in our journey to install vRealize Automation 7 and all of its new features. This post starts with the setup of the environment and assumes that you&amp;rsquo;ve deployed a vRealize Automation appliance from an OVA and that you&amp;rsquo;ve also got a Windows Server deployed so that we can install the IaaS components on it.&lt;/p&gt;
&lt;p&gt;After you&amp;rsquo;ve deployed the vRA7 OVA, login to the appliance with the root login and password supplied during your OVA deployment.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Veeam Package for vRealize Orchestrator</title>
      <link>https://theithollow.com/2015/12/07/veeam-plugin-for-vrealize-orchestrator/</link>
      <pubDate>Mon, 07 Dec 2015 15:00:10 +0000</pubDate>
      <guid>https://theithollow.com/2015/12/07/veeam-plugin-for-vrealize-orchestrator/</guid>
      <description>&lt;p&gt;Veeam is a popular backup product for virtualized environments, but who wants to spend their days adding machines to and removing them from backup jobs?&lt;/p&gt;
&lt;p&gt;Now available on &lt;a href=&#34;https://github.com/theITHollow/Veeam-vRO-Package&#34;&gt;github&lt;/a&gt; is a Veeam package for vRealize Orchestrator. This is my gift to you, just in time for the Hollow-days.&lt;/p&gt;
&lt;h1 id=&#34;available-features&#34;&gt;Available Features&lt;/h1&gt;
&lt;p&gt;&lt;a href=&#34;https://assets.theithollow.com/wp-content/uploads/2015/12/veeamlogo.png&#34;&gt;&lt;img alt=&#34;veeamlogo&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2015/12/veeamlogo.png&#34;&gt;&lt;/a&gt; The following features are available with the plugin for its initial release.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Add a VM to an existing backup job&lt;/li&gt;
&lt;li&gt;Remove a VM from a backup job&lt;/li&gt;
&lt;li&gt;Start a backup job immediately&lt;/li&gt;
&lt;li&gt;Add a Build Profile to vRealize Automation&lt;/li&gt;
&lt;li&gt;Add a VM to a backup job from vRA&lt;/li&gt;
&lt;li&gt;Remove a VM from a backup job from vRA&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Some additional functionality could easily be added to your environment using the existing workflows, such as starting a backup as a Day 2 operation in vRA or changing backup jobs. The world is your oyster.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 6 with NSX – Firewall</title>
      <link>https://theithollow.com/2015/11/30/vrealize-automation-6-with-nsx-firewall/</link>
      <pubDate>Mon, 30 Nov 2015 15:08:27 +0000</pubDate>
      <guid>https://theithollow.com/2015/11/30/vrealize-automation-6-with-nsx-firewall/</guid>
      <description>&lt;p&gt;So far we&amp;rsquo;ve talked a lot about using our automation solution to automate network deployments with NSX. But one of the best features about NSX is how we can firewall everything! Lucky for us, we can automate the deployment of specific firewall rules for each of our blueprints as well as deploying brand new networks for them.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Use Case:&lt;/strong&gt; There are plenty of reasons to firewall your applications. It could be for compliance purposes or just a good practice to limit what traffic can access your apps.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Create a Day 2 Operations Wrapper</title>
      <link>https://theithollow.com/2015/11/16/create-a-day-2-operations-wrapper/</link>
      <pubDate>Mon, 16 Nov 2015 15:08:30 +0000</pubDate>
      <guid>https://theithollow.com/2015/11/16/create-a-day-2-operations-wrapper/</guid>
      <description>&lt;p&gt;Just deploying virtual machines in an automated fashion is probably the most important piece of a cloud management platform, but you still need to be able to manage the machines after they&amp;rsquo;ve been deployed. In order to add more functionality to the portal, we can create post-deployment &amp;ldquo;actions&amp;rdquo; that act on our virtual machine. For instance, an action that snapshots a virtual machine would be a good one. We refer to these actions that take place after the provisioning process as a &amp;ldquo;Day 2 Operation&amp;rdquo;, probably because it&amp;rsquo;s likely to happen on the second day or later. Clever huh?&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 6 with NSX – Load Balancing</title>
      <link>https://theithollow.com/2015/11/09/vrealize-automation-6-with-nsx-load-balancing/</link>
      <pubDate>Mon, 09 Nov 2015 15:19:10 +0000</pubDate>
      <guid>https://theithollow.com/2015/11/09/vrealize-automation-6-with-nsx-load-balancing/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;re building a multi-machine blueprint or multi-tiered app, there is a high likelihood that at least some of those machines will want to be load balanced. Many apps require multiple web servers in order to provide additional availability or to scale out. vRealize Automation 6 coupled with NSX will allow you to put some load balancing right into your server blueprints.&lt;/p&gt;
&lt;p&gt;Just to set the stage here, we&amp;rsquo;re going to deploy an NSX Edge appliance with our multi-machine blueprint, and this will load balance both HTTPS and HTTP traffic between a pair of servers.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 6 with NSX - NAT</title>
      <link>https://theithollow.com/2015/11/02/vrealize-automation-6-with-nsx-nat/</link>
      <pubDate>Mon, 02 Nov 2015 15:10:54 +0000</pubDate>
      <guid>https://theithollow.com/2015/11/02/vrealize-automation-6-with-nsx-nat/</guid>
      <description>&lt;p&gt;Your network isn&amp;rsquo;t fully on IPv6 yet? Ah, well, don&amp;rsquo;t worry, you&amp;rsquo;re certainly not alone; in fact, you&amp;rsquo;re firmly in the majority. Knowing this, you&amp;rsquo;re probably using some sort of network address translation (NAT). Luckily, vRealize Automation can help you deploy translated networks as well as routed and private networks with a little help from NSX.&lt;/p&gt;
&lt;p&gt;A quick refresher here: a translated network is a network that remaps one IP Address space to another. The quickest way to explain this is a public and a private IP Address. Your computer likely sits behind a firewall and has a private address like 192.168.1.50, but when you send traffic to the internet, the firewall translates it into a public IP Address like 143.95.32.129. This translation can be used to do things like keeping two servers, each on its own network, with the exact same IP Address.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 6 with NSX - Routed Networks</title>
      <link>https://theithollow.com/2015/10/26/vrealize-automation-6-with-nsx-routed-networks/</link>
      <pubDate>Mon, 26 Oct 2015 14:00:28 +0000</pubDate>
      <guid>https://theithollow.com/2015/10/26/vrealize-automation-6-with-nsx-routed-networks/</guid>
      <description>&lt;p&gt;Any corporate network that&amp;rsquo;s larger than a very small business is likely going to have a routed network already. Segmenting networks improves performance and, more importantly, is used for security purposes. Many compliance regulations, such as PCI-DSS, state that machines need to be segmented from each other unless there is a specific reason for them to be on the same network. For instance, your corporate file server doesn&amp;rsquo;t need to communicate directly with your CRM database full of credit card numbers. The quickest way to fix this is to put these systems on different networks, but this can be difficult to manage in a highly automated environment. Developers might need to spin up new applications which may need to be on different network segments from the rest of the environment. It&amp;rsquo;s not very feasible to spin up, test, and delete hundreds of machines each day while asking the network team to manually create new network segments and tear them down each day. That wouldn&amp;rsquo;t be a nice thing to do to your network team.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 6 with NSX - Private Networks</title>
      <link>https://theithollow.com/2015/10/19/vrealize-automation-6-with-nsx-private-networks/</link>
      <pubDate>Mon, 19 Oct 2015 14:05:45 +0000</pubDate>
      <guid>https://theithollow.com/2015/10/19/vrealize-automation-6-with-nsx-private-networks/</guid>
      <description>&lt;p&gt;Of the types of networks available through NSX, private networks are the easiest to get going because they don&amp;rsquo;t require any NSX Edge routers to be in place. Think about it: the NSX Edge appliance is used to allow communication with the physical network, which we won&amp;rsquo;t need for a private network.&lt;/p&gt;
&lt;p&gt;A quick refresher here: a private network is a network that is not connected to the rest of the environment. Machines that are on the private network can communicate with each other, but nothing else in the environment. It&amp;rsquo;s simple: think of some machines connected to a switch where the switch isn&amp;rsquo;t connected to any routers. The machines connected to the switch can talk to each other, but that&amp;rsquo;s it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation 6 with NSX - Initial Setup of NSX</title>
      <link>https://theithollow.com/2015/10/12/vrealize-automation-6-with-nsx-initial-setup-of-nsx/</link>
      <pubDate>Mon, 12 Oct 2015 14:00:22 +0000</pubDate>
      <guid>https://theithollow.com/2015/10/12/vrealize-automation-6-with-nsx-initial-setup-of-nsx/</guid>
      <description>&lt;p&gt;Before we can start deploying environments with automated network segments, we need to do some basic setup of the NSX environment.&lt;/p&gt;
&lt;h2 id=&#34;nsx-manager-setup&#34;&gt;NSX Manager Setup&lt;/h2&gt;
&lt;p&gt;It should be obvious that you need to set up NSX Manager, deploy controllers, and do some host preparation. These are basic setup procedures for using NSX even without vRealize Automation in the middle of things, but as a quick review:&lt;/p&gt;
&lt;h3 id=&#34;install-nsx-manager-and-deploy-nsx-controller-nodes&#34;&gt;Install NSX Manager and deploy NSX Controller Nodes&lt;/h3&gt;
&lt;p&gt;NSX Manager can be deployed from an OVA, and then you must register it with vCenter. After this is complete, deploy three NSX Controller nodes to configure your logical constructs.
&lt;img alt=&#34;NSXSetupManagementSetup&#34; loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2015/09/NSXSetupManagementSetup-1024x452.png&#34;&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation Entity Properties</title>
      <link>https://theithollow.com/2015/10/05/vrealize-automation-entity-properties/</link>
      <pubDate>Mon, 05 Oct 2015 14:19:16 +0000</pubDate>
      <guid>https://theithollow.com/2015/10/05/vrealize-automation-entity-properties/</guid>
      <description>&lt;p&gt;A common task that comes up during an automation engagement relates to passing values from vRealize Automation blueprints over to vRealize Orchestrator. There is a workflow that I use quite frequently that will list the properties available for further programming and you can download the plugin at &lt;a href=&#34;https://github.com/theITHollow/vRA6-PropertyEntities&#34;&gt;github.com&lt;/a&gt; if you&amp;rsquo;d like to use it as well.&lt;/p&gt;
&lt;h1 id=&#34;how-it-works&#34;&gt;How it works&lt;/h1&gt;
&lt;p&gt;The workflow takes several inputs that are provided by vRealize Automation during a stub like Building Machine, Machine Provisioned or Machine Disposing. These inputs include the vRA Virtual Machine instance, the vCenter Virtual Machine ID, the vRealize Automation Host, the stubs used and most importantly the vRealize Automation VM properties.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation Load Balancer Settings</title>
      <link>https://theithollow.com/2015/09/28/vrealize-automation-load-balancer-settings/</link>
      <pubDate>Mon, 28 Sep 2015 13:56:30 +0000</pubDate>
      <guid>https://theithollow.com/2015/09/28/vrealize-automation-load-balancer-settings/</guid>
      <description>&lt;p&gt;I found some conflicting information about setting up load balancers for vRealize Automation in a Distributed installation, specifically around Health Checks. The following health checks were found to work for a fully distributed installation of vRA 6.2.2.&lt;/p&gt;
&lt;h2 id=&#34;vrealize-automation-appliances&#34;&gt;&lt;strong&gt;vRealize Automation Appliances&lt;/strong&gt;&lt;/h2&gt;
&lt;p&gt;This is the pair of vRealize Automation Linux appliances that are deployed via OVA file.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Type:&lt;/strong&gt; HTTPS&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Interval:&lt;/strong&gt; 5 seconds&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Timeout:&lt;/strong&gt; 9 seconds&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Send String:&lt;/strong&gt; GET /vcac/services/api/status\r\n&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Load Balancing Method:&lt;/strong&gt; Round Robin&lt;/p&gt;</description>
    </item>
    <item>
      <title>vRealize Automation and vCloud Air Integration</title>
      <link>https://theithollow.com/2015/09/21/vrealize-automation-and-vcloud-air-integration/</link>
      <pubDate>Mon, 21 Sep 2015 14:07:42 +0000</pubDate>
      <guid>https://theithollow.com/2015/09/21/vrealize-automation-and-vcloud-air-integration/</guid>
      <description>&lt;p&gt;vRealize Automation is at its best when it can leverage multiple infrastructures to provide a hybrid cloud infrastructure. One of the things we might want to do is set up VMware vCloud Air integration with our vRA instance.&lt;/p&gt;
&lt;p&gt;To start, we need to have a &lt;a href=&#34;http://vcloud.vmware.com/&#34;&gt;vCloud Air&lt;/a&gt; account, which you can currently sign up for with some initial credits to get you started for free. Once you&amp;rsquo;ve got an account, you&amp;rsquo;ll be able to set up a VDC and will have some catalogs that you can build VMs from. If you&amp;rsquo;re concerned about these steps, don&amp;rsquo;t worry: a default VDC, including some storage and a network, will be there for you.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Assign a VM to a Rubrik slaDomain</title>
      <link>https://theithollow.com/2015/09/14/assign-a-vm-to-a-rubrik-sladomain/</link>
      <pubDate>Mon, 14 Sep 2015 14:00:16 +0000</pubDate>
      <guid>https://theithollow.com/2015/09/14/assign-a-vm-to-a-rubrik-sladomain/</guid>
      <description>&lt;p&gt;This last post in the series shows you how &lt;a href=&#34;https://twitter.com/vnickC&#34;&gt;Nick Colyer&lt;/a&gt; and I tied everything together. If you want to just download the plugins and get started, please visit Github.com and import the plugins into your own vRealize Orchestrator environment.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/rubrikinc/vRO-Workflow&#34;&gt;Download the Plugin from Github&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;NOTE: The first version of this code has been refactored and migrated to Github in Rubrik&amp;rsquo;s Repository since the time of this initial writing&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;To recap where we&amp;rsquo;ve been, we:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Get Rubrik VM through vRealize Orchestrator</title>
      <link>https://theithollow.com/2015/09/10/get-rubrik-vm-through-vrealize-orchestrator/</link>
      <pubDate>Thu, 10 Sep 2015 14:07:51 +0000</pubDate>
      <guid>https://theithollow.com/2015/09/10/get-rubrik-vm-through-vrealize-orchestrator/</guid>
      <description>&lt;p&gt;Part four of this series will show you how to lookup a VM in the &lt;a href=&#34;http://rubrik.com&#34;&gt;Rubrik&lt;/a&gt; Hybrid Cloud appliance through the REST API by using vRealize Orchestrator. If you&amp;rsquo;d rather just download the plugin and get using it, check out the link to &lt;a href=&#34;https://github.com/rubrikinc/vRO-Workflow&#34;&gt;Github&lt;/a&gt; to get the plugin and don&amp;rsquo;t forget to check out &lt;a href=&#34;http://twitter.com/vnickc&#34;&gt;Nick Colyer&amp;rsquo;s&lt;/a&gt; post over at &lt;a href=&#34;http://systemsgame.com&#34;&gt;systemsgame.com&lt;/a&gt; about how to use it.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/rubrikinc/vRO-Workflow&#34;&gt;Download the Plugin from Github&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;NOTE: The first version of this code has been refactored and migrated to Github in Rubrik&amp;rsquo;s Repository since the time of this initial writing&lt;/p&gt;
&lt;/blockquote&gt;</description>
    </item>
    <item>
      <title>Rubrik API Logins through vRealize Orchestrator</title>
      <link>https://theithollow.com/2015/09/08/rubrik-api-logins-through-vrealize-orchestrator/</link>
      <pubDate>Tue, 08 Sep 2015 14:00:17 +0000</pubDate>
      <guid>https://theithollow.com/2015/09/08/rubrik-api-logins-through-vrealize-orchestrator/</guid>
      <description>&lt;p&gt;Part three of this series focuses on how &lt;a href=&#34;http://twitter.com/vnickc&#34;&gt;Nick Colyer&lt;/a&gt; and I built the authentication piece of the plugin so that we could then pass commands to the &lt;a href=&#34;http://rubrik.com&#34;&gt;Rubrik&lt;/a&gt; appliance. An API requires a login just like any other portal would. Since this is a REST API, we actually need to do a &amp;ldquo;POST&amp;rdquo; on the login resource to get ourselves an authentication token.&lt;/p&gt;
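&lt;p&gt;As a rough sketch of that token login flow (the &lt;code&gt;/login&lt;/code&gt; path, the JSON field names, and the host below are illustrative assumptions, not the documented Rubrik API):&lt;/p&gt;

```python
# Hypothetical sketch of a token-based REST login like the one described
# above. The "/login" resource and the "username"/"password"/"token"
# field names are assumptions for illustration, not the Rubrik API.
import json

def build_login_request(base_url, username, password):
    """Build the URL, headers, and JSON body for the login POST."""
    url = base_url.rstrip("/") + "/login"  # assumed login resource
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"username": username, "password": password})
    return url, headers, body

def extract_token(response_text):
    """Pull the session token out of the login response body."""
    return json.loads(response_text).get("token")  # assumed field name
```

&lt;p&gt;The returned token would then be attached to subsequent requests, typically in an authorization header.&lt;/p&gt;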
&lt;p&gt;&lt;a href=&#34;https://github.com/rubrikinc/vRO-Workflow&#34;&gt;Download the Plugin from Github&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;NOTE: The first version of this code has been refactored and migrated to Github in Rubrik&amp;rsquo;s Repository since the time of this initial writing&lt;/p&gt;
&lt;/blockquote&gt;</description>
    </item>
    <item>
      <title>UCS Director Dynamic List of Values</title>
      <link>https://theithollow.com/2015/08/10/ucs-director-dynamic-list-of-values/</link>
      <pubDate>Mon, 10 Aug 2015 13:55:47 +0000</pubDate>
      <guid>https://theithollow.com/2015/08/10/ucs-director-dynamic-list-of-values/</guid>
      <description>&lt;p&gt;When you execute a Cisco UCS Director workflow, you&amp;rsquo;re usually prompted to enter some information. Typically this is something like a virtual machine name, an IP Address, or even some credentials. The values that you enter can be formatted so that they come from a list and the user just has to select the right value. This helps immensely with the amount of troubleshooting you have to do because only specific, verified values can be displayed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>VCDX Vision Quest and Mea Culpa</title>
      <link>https://theithollow.com/2015/07/20/vcdx-vision-quest-and-mea-culpa/</link>
      <pubDate>Mon, 20 Jul 2015 14:07:44 +0000</pubDate>
      <guid>https://theithollow.com/2015/07/20/vcdx-vision-quest-and-mea-culpa/</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Long is the way and hard, that out of hell leads up to light - Milton&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Apparently, Milton has been through the VCDX process. It is a challenge that will test your resolve, and you will probably learn a lot along the way. You&amp;rsquo;ll also be glad when it&amp;rsquo;s over.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been good at many things in my life, but never felt like I was great at anything. I&amp;rsquo;ve succeeded at most things I&amp;rsquo;ve attempted, but the VCDX was a goal I truly didn&amp;rsquo;t think I was capable of achieving. &lt;a href=&#34;https://twitter.com/ccolotti&#34;&gt;Chris Colotti&lt;/a&gt; mentioned in one of his posts that you need to decide why you&amp;rsquo;re going for the VCDX in the first place. In my case, I was doing it to prove to myself that I could do it. The process really taught me something about myself that I didn&amp;rsquo;t know. It was my own personal Vision Quest. (Cue Lunatic Fringe theme song here)&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
