[{"content":"Over the past several years, I worked in marketing and spent a lot of time trying to make technical demos more compelling. The goal was always to demonstrate a capability or product feature, but there’s no reason a demo can’t also be fun, memorable, and grounded in reality.\n“Hello World” gets the job done, but it doesn’t feel real. It doesn’t resemble how applications are actually built, deployed, or operated. And because of that, it’s easy to forget.\nEven though I’m no longer in a marketing role, I still build labs, experiment with infrastructure, and give technical demos. I also now work for a company where open source is foundational, so I figured why not build one of these apps and let everyone use it.\nIntroducing Brix Pizza So let me introduce you to Brix Pizza. This is a Golang web application that uses a MySQL database to store your pizza orders. It\u0026rsquo;s intentionally simple to use but realistic enough to be useful for real demos, labs, and personal experimentation. You can see a demo video below of the application in action, but I must warn you: This app might be controversial especially if you are working with a group of people who don\u0026rsquo;t agree on the best style of pizza! I will not be held responsible for any fights you have with co-workers when they don\u0026rsquo;t agree that Chicago Style Deep Dish is the best pizza to order from Brix Pizza.\nHere is a short demo video of the app in use by a user.\nhttps://youtu.be/MaYOsVk9UhE\nGetting Started If you\u0026rsquo;d like to use this demo application for your own purpose, feel free. Just know that this app was almost entirely created with AI by someone who spends most of their time working on Infrastructure stuff and not a developer. If you\u0026rsquo;re a programmer and want to make improvements to this code, here is the source code repository, feel free to fork it, branch it or submit a pull request. If you submit issues, I might also look into it further in my spare time, so don\u0026rsquo;t be shy.\nThe source code can be found on my github page: https://github.com/eshanks16/brix\nIf you have a Kubernetes cluster and don\u0026rsquo;t care about the source code, the repo also contains a deployment directory with Kubernetes YAML files to deploy both mysql and the brix web app. Just be sure to update your connection strings, and storage classes for your own environment.\nEnjoy the app, and I would love to hear how you\u0026rsquo;ve used this app in your own demos or labs to make things more engaging. Have fun!\n","permalink":"https://theithollow.com/2025/12/23/brix-pizza-a-demo-app/","summary":"\u003cp\u003eOver the past several years, I worked in marketing and spent a lot of time trying to make technical demos more compelling. The goal was always to demonstrate a capability or product feature, but there’s no reason a demo can’t also be fun, memorable, and grounded in reality.\u003c/p\u003e\n\u003cp\u003e“Hello World” gets the job done, but it doesn’t feel real. It doesn’t resemble how applications are actually built, deployed, or operated. And because of that, it’s easy to forget.\u003c/p\u003e","title":"Brix Pizza - A Demo App"},{"content":"Today marks the beginning of the 2024 Formula 1 season, a notable day for my mental well-being as it begins the end of a long, cold winter. The winter months often challenge me, limiting my time outdoors and casting shadows over my daily routines. 
The start of the Formula 1 season provides a needed distraction and spark of excitement as I look forward to warmer weather.\nMy interest in Formula 1 was ignited by the popular Netflix series, \u0026quot; Drive to Survive.\u0026quot; Although new to the sport, I quickly became engrossed, but I initially found myself without a team to support. Over time, the Mercedes Formula 1 team stood out, embodying values I\u0026rsquo;ve always respected.\nThe Mercedes team, led by Team Principal Toto Wolff, has enjoyed a decade of success, capturing several World Championships. However, their success is not what drew me to them. Instead, I was attracted to their organizational culture, particularly their commitment to a blameless environment. This approach focuses on problem-solving and learning from setbacks, rather than assigning fault.\nThis philosophy of a blameless culture is one I\u0026rsquo;ve experienced in my career, notably during my time with the Pivotal and Heptio teams while at VMware, and previously with AHEAD. These experiences taught me the importance of assuming competence in colleagues and focusing on collective problem-solving and improvement.\nTrust is central to this approach. It\u0026rsquo;s about trusting that everyone is doing their best, aligning with team objectives, and handling setbacks constructively. Trusting each other allows us to admit mistakes which is a critical first step to fixing things.\nOne year for company kickoff we had a former Blue Angel leader who talked about how their team trusted each other. When you’re flying a jet inches away from another jet you had to trust that both pilots would do their job. This was also part of their blameless culture. After every practice the team would go around the table and talk about what they personally did wrong and need improvement on before finally giving feedback about the other team members. It’s all dependent upon trust.\nLook at the contrasting scenario of politics. We live in a society where admitting mistakes is often seen as a weakness. Politicians never admit mistakes because it may be used against them later. So either our politicians never make any mistakes, which I find hard to believe, or they dig their heels in and spin it instead of admitting the failures and improving things later. Does it feel like not being able to admit a mistake is making our politics any better? Does it make you trust your politicians more?\nAdopting a blameless culture can lead to personal and professional growth. It encourages a safe environment where mistakes are viewed as opportunities for learning rather than reasons for punishment. A no blame culture fosters a supportive team dynamic, leading to better outcomes and a more enjoyable working environment. It\u0026rsquo;s fun to win as a team.\nBlame is an emotion. Assigning blame makes us feel better, but doesn\u0026rsquo;t solve anything beyond that. Let\u0026rsquo;s challenge ourselves to adopt a blameless mindset, both professionally and personally. Instead of seeking blame when errors occur, focus on trust and continuous improvement.\nAnd with that, I enthusiastically say, \u0026ldquo;Go Mercedes!\u0026rdquo; And if things don\u0026rsquo;t go well this year, then hopefully we learn our lesson and come back stronger next year.\n","permalink":"https://theithollow.com/2024/02/29/the-power-of-a-blameless-culture/","summary":"\u003cp\u003eToday marks the beginning of the 2024 Formula 1 season, a notable day for my mental well-being as it begins the end of a long, cold winter. 
The winter months often challenge me, limiting my time outdoors and casting shadows over my daily routines. The start of the Formula 1 season provides a needed distraction and spark of excitement as I look forward to warmer weather.\u003c/p\u003e\n\u003cp\u003eMy interest in Formula 1 was ignited by the popular Netflix series, \u0026quot; \u003ca href=\"https://www.netflix.com/title/80204890\"\u003eDrive to Survive.\u003c/a\u003e\u0026quot; Although new to the sport, I quickly became engrossed, but I initially found myself without a team to support. Over time, the \u003ca href=\"https://www.mercedesamgf1.com/\"\u003eMercedes Formula 1\u003c/a\u003e team stood out, embodying values I\u0026rsquo;ve always respected.\u003c/p\u003e","title":"The Power of a Blameless Culture"},{"content":"Maybe it\u0026rsquo;s bougie, but one of the reasons I purchased E200-8d Supermicro Servers for my home lab was because they had IPMI built into them. Being able to remote into my lab at night when I\u0026rsquo;m messing around with different configurations was a nice to have. When I was consulting and traveling a lot, being able to remotely start my servers up was really important. I tested out things many times in my lab so that customers wouldn\u0026rsquo;t have to spend time figuring it out themselves. Anyway, the point is I bought servers with IPMI in them because I thought it was important.\nWhen I had my lab setup with vSphere, I would usually configure the Distributed Power Management, mainly so I could power on and off my servers from the vCenter UI and not have to log in to each server individually to power them off or things. I was pleasantly surprised to find out that you can configure IPMI integration in Harvester as well.\nEnable Seeder Seeder is an open source tool used to the the IPMI integration with Harvester. To install Seeder, go to the Advanced settings page in the Harvester UI and go to Addons. In the Addons page look for the harvester-seeder addon, and click the hyperlink, or select the kabob menu and choose \u0026ldquo;Enable\u0026rdquo;.\nAfter a few moments, you\u0026rsquo;ll see the harvester-seeder addon deployed successfully.\nConfigure your Hosts Once the Seeder Addon has been enabled, go to the Hosts tab. You\u0026rsquo;ll need to edit each of your hosts configurations. Click on the Kabob menu next to the host you want to configure and click \u0026ldquo;Edit Config.\u0026rdquo;\nIf you JUST enabled the add-on you might receive a message like the error I got. If you do, wait a little longer for the Seeder Addon to finish reconciling and try again.\nEnable Out-Of-Band Access from the menu for your host.\nA new set of fields will be shown on the screen where you need to fill in connection information for your IPMI interface. Provide the IP Address, Port, and polling interval.\nYou\u0026rsquo;ll also provide a password, but you\u0026rsquo;ll need to create a new secret first. This is kind of handy if you use the same login for each of your hosts because you can just create it once and select it for each host. If this is a production environment though, you should of course have a different login for each host for security purposes. For me, its my home lab, and YOLO. Save your configuration settings and move on to your next host.\nTest Your Hosts Once you\u0026rsquo;ve configured your hosts, you can try them out. Put one of your hosts into Maintenance mode and migrate your VMs to another host so they won\u0026rsquo;t be affected by a server being powered off. 
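(A quick aside before testing anything: if you want to sanity-check the IPMI details you just gave Seeder, plain ipmitool from any machine on the management network exercises the same interface. This is only a sketch with example addresses and credentials from my lab layout, and it assumes your BMC supports the lanplus interface:

ipmitool -I lanplus -H 10.10.10.121 -U admin -P 'ipmi-password' chassis power status   # confirm the BMC responds and report power state
ipmitool -I lanplus -H 10.10.10.121 -U admin -P 'ipmi-password' sel list               # dump the hardware/sensor event log
ipmitool -I lanplus -H 10.10.10.121 -U admin -P 'ipmi-password' chassis power on       # the same kind of power action Seeder will drive

If those respond, the integration has everything it needs.)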
You can put hosts in maintenance mode by clicking the Kabob menu and choosing \u0026ldquo;Maintenance Mode.\u0026rdquo; Once the host is in maintenance mode, your host menu should have a a \u0026ldquo;Shut Down\u0026rdquo; option. When you click this option, the Seeder addon will communicate with your IPMI to perform the shutdown proceedure.\nYou should get a success message in your console if it worked correctly.\nOf course powering off a server is only part of the goal. We (maybe more importantly) want to be able to remotely start our node as well. Once the server is powered off, we then Power it back on from the same menu.\nYou\u0026rsquo;ll also start getting some hardware events in the console which is also a nice way to keep an eye on your cluster\u0026rsquo;s hosts.\nSummary Configuring IPMI integration might not be everyone\u0026rsquo;s first priority, but it is pretty handy to have hardware information and power options in your Harvester UI. If you want to check out any of the other posts about my Harvester lab, check them out here.\nInstall Harvester Create a Harvester virtual network Creating a Harvester virtual machine Add Kubernetes to the Harvester installation with Rancher Backup and Restore Harvester Virtual Machines Deploy Virtual Machine from Template ","permalink":"https://theithollow.com/2024/02/29/harvester-and-ipmi-integration/","summary":"\u003cp\u003eMaybe it\u0026rsquo;s bougie, but one of the reasons I purchased \u003ca href=\"https://amzn.to/3Te2Jrr\"\u003eE200-8d Supermicro Servers\u003c/a\u003e for my \u003ca href=\"/2021/03/08/2021-home-lab/\"\u003ehome lab\u003c/a\u003e was because they had IPMI built into them. Being able to remote into my lab at night when I\u0026rsquo;m messing around with different configurations was a nice to have. When I was consulting and traveling a lot, being able to remotely start my servers up was really important. I tested out things many times in my lab so that customers wouldn\u0026rsquo;t have to spend time figuring it out themselves. Anyway, the point is I bought servers with IPMI in them because I thought it was important.\u003c/p\u003e","title":"Harvester and IPMI Integration"},{"content":"Virtual infrastructure has a lot of advantages, but one of the biggest time savers is being able to provision new virtual machines quickly from a template. As part of this series we\u0026rsquo;ve covered, how to deploy harvester, how to setup a new virtual network , how to create a new virtual machine, and how to backup and restore VMs. In this post we\u0026rsquo;ll take a look at how we can create templates and deploy harvester VMs from templates.\nCreate a Template from an Existing VM Before we can deploy any new virtual machines from a template we need to create a template, so this is where we\u0026rsquo;ll start this post. To create a template from a VM, it couldn\u0026rsquo;t be simpler. Find a VM that you\u0026rsquo;d like to use as the base image for your templates, and select \u0026quot; Generate Template\u0026quot; from the kabob menu for that VM.\nStep two of this process is to give the template a name and a description. There is also a checkbox to select whether or not you want the data as part of this template. This checkbox basically takes your snapshot of the virtual machine disks. 
If you don\u0026rsquo;t select the \u0026ldquo;With Data\u0026rdquo; checkbox, then all the settings for your VM such as the virtual network, size of disks, etc will all be set for you when you use it to deploy a new VM, but none of your configurations inside the virtual machine\u0026rsquo;s OS will be included. Both options might be useful for your situation, but in my lab, I\u0026rsquo;m taking a backup of the data as well.\nNOTE: I found that creating a template with a long name caused it to fail. I found this bug stating that restores might fail if the template name is longer than 40 characters, so if you have problems, try using a short name for the template name.\nCreate a Harvester VM from Template Once our snapshot has completed, we can use that snapshot to deploy a new virtual machine with all our virtual machine settings intact. Go to the \u0026quot; Templates\u0026quot; menu under the Advanced settings in the Harvester UI. Click the \u0026ldquo;Create a Virtual Machine\u0026rdquo; from the options menu.\nThis will take you to the VM creation page and the \u0026ldquo;VM from Template\u0026rdquo; option will already be checked, and the virtual machine configuration options such as the CPU/Memory settings will already be filled in. You can modify these settings as you choose.\nI would focus your attention on the Advanced Options section of the template. Obviously, we don\u0026rsquo;t want to deploy a new virtual machine to our network with the exact settings as our template. We want to customize the VM with a new IP Address, or other settings but start with our template as the defaults. In the Advanced Settings you can set your \u0026ldquo;data template.\u0026rdquo; If you haven\u0026rsquo;t created one yet, you can choose the \u0026quot; create new\u0026quot; option from the drop down to create your data template. The data template is basically a cloud-init config file. After you\u0026rsquo;ve created a User Data template, you an select it from this drop down and again modify it for this specific VM we\u0026rsquo;re creating. I\u0026rsquo;ve modified the static IP Address.\nI\u0026rsquo;ll also point out that you can create other config templates including a network template which allows you to quickly set the network interfaces used with your VM. When you\u0026rsquo;re all done with your tinkering, you can click the \u0026quot; Create\u0026quot; button to start building your VM from template.\nAs your VM is being created, you\u0026rsquo;ll see some information in your \u0026quot; Virtual Machine\u0026quot; list from the Harvester UI.\nEventually your VM will be built and you can access the console to complete any remaining work to your new virtual machine.\nSummary Creating virtual machines from a template is a pretty critical component of virtual infrastructure these days. Harvester, in conjunction with snapshots and cloud-init configs gives us the capability to create new VMs from a copy of an operating system installation and customize it so that we can quickly spin up new VMs in a reasonable amount of time.\n","permalink":"https://theithollow.com/2024/02/05/deploy-harvester-vms-from-a-template/","summary":"\u003cp\u003eVirtual infrastructure has a lot of advantages, but one of the biggest time savers is being able to provision new virtual machines quickly from a template. 
As part of this series we\u0026rsquo;ve covered \u003ca href=\"/?p=10772\"\u003ehow to deploy harvester\u003c/a\u003e, \u003ca href=\"/?p=10810\"\u003ehow to setup a new virtual network\u003c/a\u003e, \u003ca href=\"/?p=10824\"\u003ehow to create a new virtual machine\u003c/a\u003e, and \u003ca href=\"/?p=10900\"\u003ehow to backup and restore VMs\u003c/a\u003e. In this post we\u0026rsquo;ll take a look at how we can create templates and deploy harvester VMs from templates.\u003c/p\u003e","title":"Deploy Harvester VMs from a Template"},{"content":"In our previous posts we setup a Harvester cluster in our lab, configured the network, and deployed our first VM. Here we will explore the backup and restore options available to us from the Harvester console. If you\u0026rsquo;re new to running virtual machines on Kubernetes, you\u0026rsquo;re probably questioning a few things, like will your current VM backup tool work? The answer is maybe, but only if it is able to backup a container since our VMs are now wrapped inside of a Kubernetes Pod thanks to Kubevirt. There are several options out there (I work for one of those companies myself) but this post only focuses on what\u0026rsquo;s built into the Harvester solution.\nBackups are a critical component of a virtual machine strategy. Since virtual machines almost always have state to save, we need some mechanism to back them up. These capabilities are also needed if we want to create new virtual machines from a template later on, as well. In this post we\u0026rsquo;ll walk through the process to backup and restore Harvester VMs.\nConfigure a Backup Target Before you can backup virtual machines with Harvester, you need to configure a backup target where the backup data will live. In the Harvester UI, navigate to the Advanced/Settings and look for the backup-target configuration item. Click the kabob menu to see additional options.\nFrom the configuration settings screen, specify the details about your backup target. This target can be configured to use NFS if you have an NFS server hanging around like I do, or you can use an S3 bucket. I chose NFS because I didn\u0026rsquo;t have an S3 option available to me locally. Now you could use S3 as an option and use Amazon S3 buckets, which would be useful for an offsite backup, but I didn\u0026rsquo;t want to download those images over the Internet if I needed them. Your own configurations will vary, and you should make this decision based on your own lab equipment and requirements.\nBackup a Virtual Machine Now that we have a backup target configured, we can try a backup. From within the Harvester console navigate to the VM tab where we already have a machine running. Select the Take Backup option from the VM menu.\nGive the backup a name and click Create.\nWhen you\u0026rsquo;re done, you can find the status of the backup in the Backup \u0026amp; Snapshot/VM Backups menu.\nRestore from Backup As we all know, backups are really worthless if they can\u0026rsquo;t be restored. So let\u0026rsquo;s try that out quickly as well. To perform a restore, you\u0026rsquo;ll go to the VM Backups menu and click the Restore Backup button.\nYou can choose to create a new virtual machine from the backup, or to restore it over the current state of an existing machine.
In my case I\u0026rsquo;m creating a new VM so I give it a name and I select the backup to restore.\nFrom the Virtual Machines tab, we can see the state of the machine being restored.\nEventually the virtual machine should be in a running status.\nSnapshots Snapshots aren\u0026rsquo;t backups! There, now I\u0026rsquo;ve said it. Now if you want to use snapshots for something that has to pass less data around, you can do this with a very similar process as you did for your backups.\nGive the snapshot a name and click Create.\nNow you can watch the status of your snapshot from the VM Snapshots page. When it\u0026rsquo;s completed, you can click the Restore Snapshot button to test restoring it.\nThis time I\u0026rsquo;ll replace an existing VM with my snapshot to show the alternate way of restoring. Here the virtual machine is already selected, and I choose which snapshot to restore. This also asks if you want to Retain or Delete the current volumes.\nWhen done with the restore process, you can see the status of your VM in the Virtual Machines menu.\nSummary My home lab doesn\u0026rsquo;t have anything really important in it, but being able to backup and restore my virtual machines is pretty handy. The functionality to create virtual machine backups and snapshots is pretty critical for us to do things like cloning or deploying VMs from templates, which we\u0026rsquo;ll review in a future post.\n","permalink":"https://theithollow.com/2024/01/29/backup-and-restore-harvester-vms/","summary":"\u003cp\u003eIn our previous posts we \u003ca href=\"/2024/01/18/the-harvester-home-lab-vms-and-containers/#:~:text=the%20deployment%20here%3A-,Install%20Harvester,-Create%20a%20Harvester\"\u003esetup a Harvester cluster in our lab\u003c/a\u003e, \u003ca href=\"/?p=10810\"\u003econfigured the network\u003c/a\u003e, and \u003ca href=\"/?p=10824\"\u003edeployed our first VM\u003c/a\u003e. Here we will explore the backup and restore options available to us from the Harvester console. If you\u0026rsquo;re new to running virtual machines on Kubernetes, you\u0026rsquo;re probably questioning a few things, like will your current VM backup tool work? The answer is maybe, but only if it is able to backup a container since our VMs are now wrapped inside of a Kubernetes Pod thanks to \u003ca href=\"https://kubevirt.io/\"\u003eKubevirt\u003c/a\u003e. There are several options out there (I work for one of those companies myself) but this post only focuses on what\u0026rsquo;s built into the \u003ca href=\"https://harvesterhci.io/\"\u003eHarvester\u003c/a\u003e solution.\u003c/p\u003e","title":"Backup and Restore Harvester VMs"},{"content":"I have a very soft spot in my heart for VMware. Pretty early on in my career when I was a System Administrator I got the opportunity to convert my company\u0026rsquo;s physical infrastructure to a virtual infrastructure on VMware vSphere. The version of VMware we moved to is not relevant to this discussion other than to show how truly old I am, and in order to save the parties involved some embarrassment we\u0026rsquo;ll ignore this implementation detail.\nAfter this I went on to do consulting where I focused a good portion of my time on VMware vSphere, and later vRealize Automation related engagements. Eventually, I went on to work for VMware where I was focusing more on Kubernetes and Tanzu.
VMware has been part of my career almost from the beginning and I\u0026rsquo;ve invested a lot of time into understanding the VMware technology stack.\nIn recent times though, I\u0026rsquo;m hearing of an increasing demand for alternatives. I\u0026rsquo;m not here to discuss why this is, or what I feel about it, just that I\u0026rsquo;m aware of it occurring around me. So this year I\u0026rsquo;ve updated my home lab to use Harvester, an open source virtualization management platform that allows me to run not only virtual machines but also containers. Time will tell whether or not this lab will be sufficient for my needs or if I\u0026rsquo;ll feel compelled to rebuild a VMware lab, but for now I\u0026rsquo;m getting out of my comfort zone and trying something new.\nFollow Along I\u0026rsquo;m setting up a Harvester lab for virtual machines and Rancher for Kubernetes all on the same hardware. If you\u0026rsquo;d like to learn more about this process, you can follow along with the deployment here:\nInstall Harvester Create a Harvester virtual network Creating a Harvester virtual machine Add Kubernetes to the Harvester installation with Rancher Backup and Restore Harvester Virtual Machines Deploy Virtual Machine from Template Configure IPMI Integration for Power Management The Hardware My home lab hardware hasn\u0026rsquo;t really changed in recent years. I\u0026rsquo;m still running E200-8d Supermicro servers and I have five of them. Each of those servers has a 500 GB NVMe disk in them that I will later use for Virtual Machine disks. If you\u0026rsquo;re looking for these servers, they are the 5 small form factor servers on the right side of the second shelf. The giant servers on the bottom shelf are sadly unused and just taking up space these days.\nI\u0026rsquo;ve also got two switches that I\u0026rsquo;m using, but this isn\u0026rsquo;t necessary if you\u0026rsquo;re building a similar lab for Harvester. The HP switch is a layer 3 switch that I use for my router up to my Ubiquiti wireless network. The 10Gbps switch is used for my storage network. This used to be my vMotion / vSAN Storage network switch in the vSphere lab.\nCore Switch: HP v1910-24G Ethernet Switch Storage/vMotion Switch: Netgear XS708E 10 Gigabit That switch was a gift from fellow vExpert Jason Langer (/2016/12/19/unbelievable-gift-home-lab/). The cables are colored according to purpose.\nYellow – Management Networks and Out of Band access. Green – Storage and vMotion Networks (10GbE) Blue – Trunk ports for virtual machines Red – Uplinks Architecture One of the main things that piqued my curiosity about Harvester was that it ran on the Kubevirt project. Kubevirt is a way to run virtual machines in a container where we can manage them with Kubernetes. It\u0026rsquo;s pretty handy to be able to deploy virtual machines in my home lab, and have Kubernetes all running on the bare metal.\nHere is my architecture. It\u0026rsquo;s not a typo: node02 does not have an NVMe disk in it. There was an issue and I never got it replaced, so it\u0026rsquo;s missing for now. I have four nodes in my Harvester cluster, and I have a 5th server I\u0026rsquo;m using as a management machine. This is where I\u0026rsquo;ll deploy docker (not a production recommendation).
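If you're building something similar, Docker's convenience script is the quick, lab-grade way to get the engine onto that management box. A minimal sketch, assuming a recent Ubuntu-ish host (fine for a lab, not how I'd handle production):

curl -fsSL https://get.docker.com -o get-docker.sh   # Docker's official convenience script
sudo sh get-docker.sh                                # installs the engine and CLI
sudo usermod -aG docker $USER                        # optional: run docker without sudo (log out and back in to apply)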
I have a mgmt vlan where I deploy the Harvester hosts, and I\u0026rsquo;ve setup a VM network connection as well and plumbed it down to my hosts.\nSummary It\u0026rsquo;ll be weird not having a VMware lab in my basement this year, but if the rest of the industry is starting to look around at alternatives, I suppose I should adapt as well and see what else is out there. Perhaps there will be more posts in this series if this lab works out for me.\n","permalink":"https://theithollow.com/2024/01/18/the-harvester-home-lab-vms-and-containers/","summary":"\u003cp\u003eI have a very soft spot in my heart for \u003ca href=\"https://www.vmware.com/\"\u003eVMware\u003c/a\u003e. Pretty early on in my career when I was a System Administrator I got the opportunity to convert my company\u0026rsquo;s physical infrastructure to a virtual infrastructure on VMware vSphere. The version of VMware we moved to is not relevant to this discussion other than to show how truly old I am, and in order to save the parties involved some embarrassment we\u0026rsquo;ll ignore this implementation detail.\u003c/p\u003e","title":"The Harvester Home Lab - VMs and Containers"},{"content":"During the previous posts in this series, we deployed a Harvester cluster, setup virtual networks to segment traffic, and deployed our first virtual machine. So far this has been a pretty good experience, but most of my day job requires me to do a lot of work on containers so I\u0026rsquo;d like to have Kubernetes clusters at home. Of course I could deploy a bunch of VMs for a new Kubernetes cluster, but Harvester is built on top of Kubernetes already. So this post will show us how we can connect Rancher to our Harvester cluster so we can use the underlying Kubernetes cluster that Harvester runs on to run our own containers.\nInstall Rancher Rancher is a Kubernetes cluster management tool. You can use the Rancher server to deploy new Kubernetes clusters to different types of infrastructure including the cloud or on vSphere. In our case though, we\u0026rsquo;ve already deployed a Kubernetes cluster, so we can connect that cluster to our Rancher server.\nThe Rancher server is an important piece of the puzzle that we need to add to our lab. The Rancher server allows you to configure your Kubernetes Authentication, including getting Kubeconfig files, performance monitoring and some CI/CD services. It can of course also deploy additional clusters if you\u0026rsquo;re looking for this functionality.\nInstalling the Rancher server is pretty simple; however, the trick is that you\u0026rsquo;ll need a Kubernetes cluster to deploy it to if you want high availability. I know, it\u0026rsquo;s a bit of a pain to deploy a Kubernetes cluster so that you can use Rancher to deploy more Kubernetes clusters, but them\u0026rsquo;s the breaks. The other installation method is not meant for production (lucky for us, this is my home lab, and it\u0026rsquo;s worth the risk to me) and is based on docker. If you have a docker host available, you can install the rancher server simply by running:\ndocker run -d --restart=unless-stopped \\\n-p 80:80 -p 443:443 \\\n--privileged \\\nrancher/rancher:latest\nThis is precisely what I did. I had a linux host and deployed docker to it before starting up the Rancher server. Once the command completes and the container starts you should be able to access it at the IP Address or server name of the docker server on port 80 or 443.
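The first login will ask for a bootstrap password that Rancher generates when the container starts. For a docker-based install like this one, the login page points you at a command that pulls it out of the container logs, something like this at the time of writing (substitute your real container ID for the placeholder):

docker ps                                                    # find the rancher/rancher container ID
docker logs <container-id> 2>&1 | grep 'Bootstrap Password:'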
I also made sure that my linux server had a DNS entry so that I can use it in the steps below.\nConfigure Rancher with Harvester Log in to your Rancher server. If you installed Rancher like I did, the login page will give you instructions on retrieving the admin password. This comes down to running a docker command (the log grep shown above) or a Kubernetes command depending on how you installed it. Log in and go to the Virtualization Management tab and then click the \u0026quot;Import Existing\u0026quot; button.\nName your cluster and give it a description. Then click Create.\nWhen you\u0026rsquo;ve created the Virtualization cluster, you\u0026rsquo;ll be taken to a screen with instructions on connecting your Harvester cluster with Rancher. Follow those instructions.\nThe instructions have you log in to your Harvester cluster and go to the Advanced Settings screen, and update the cluster-registration-url value to the string specified. I\u0026rsquo;ve copied the string from my Rancher server settings and pasted it in my Harvester cluster-registration-url.\nWhen you\u0026rsquo;ve completed that step, you can go back to your Rancher server. You\u0026rsquo;ll notice that the Harvester cluster shows up now and you can manage the Harvester Kubernetes cluster from the Rancher server.\nI\u0026rsquo;ll go to my cluster and download the Kubeconfig file to my workstation. Then I set my KUBECONFIG environment variable on my laptop so that the kubectl client knows how to find it.\nexport KUBECONFIG=~/.kube/filename\nOnce I\u0026rsquo;ve completed these tasks I can run kubectl commands on my laptop against my Kubernetes cluster. To show that it\u0026rsquo;s working, I\u0026rsquo;ve listed the Kubernetes Nodes.\nSummary We\u0026rsquo;ve now got a set of physical servers running not only Harvester to manage virtual machines, but also Rancher for Kubernetes. I can now use the same set of bare metal hosts to deploy virtual machines and containers side by side, and this is all free software.\n","permalink":"https://theithollow.com/2024/01/18/add-kubernetes-to-harvester-installation/","summary":"\u003cp\u003eDuring the previous posts in this series, we \u003ca href=\"/?p=10772\"\u003edeployed a Harvester cluster\u003c/a\u003e, \u003ca href=\"/?p=10810\"\u003esetup virtual networks\u003c/a\u003e to segment traffic, and \u003ca href=\"/?p=10824\"\u003edeployed our first virtual machine\u003c/a\u003e. So far this has been a pretty good experience, but most of my day job requires me to do a lot of work on containers so I\u0026rsquo;d like to have Kubernetes clusters at home. Of course I could deploy a bunch of VMs for a new Kubernetes cluster, but \u003ca href=\"https://harvesterhci.io/\"\u003eHarvester\u003c/a\u003e is built on top of Kubernetes already. So this post will show us how we can connect \u003ca href=\"http://rancher.com\"\u003eRancher\u003c/a\u003e to our Harvester cluster so we can use the underlying Kubernetes cluster that Harvester runs on to run our own containers.\u003c/p\u003e","title":"Add Kubernetes to Harvester Installation"},{"content":"In the previous posts, we setup the Harvester cluster and optionally added a virtual machine network and some cluster configs. In this post we\u0026rsquo;ll deploy our first Harvester virtual machine on our cluster. Before we do this, I wanted to point out that we\u0026rsquo;ve gotten to this point really without ever needing Kubernetes.
If you\u0026rsquo;re paying close attention to the screens that have been showing up in the GUI, you might notice things like a namespace or labels or annotations that are clearly Kubernetes references. Even still, there\u0026rsquo;s really nothing in the steps up to this point where it would\u0026rsquo;ve required any Kubernetes experience at all. We\u0026rsquo;ll continue that trend as we build our first virtual machine.\nAdd an Image Before we deploy our VM, we\u0026rsquo;ll want to upload an image that we\u0026rsquo;ll use for our operating system. Navigate to the Images menu of the Harvester UI and create a new image. Leave the namespace as default (one of those Kubernetes references right there) and give the image a name and description as is customary. Then in the Basics tab, you\u0026rsquo;ll find an option to upload your iso or qcow2 image here. I\u0026rsquo;m using an Ubuntu 22.04 server iso for my installation media. Then click create.\nIn the Summary screen you\u0026rsquo;ll see the status of your image upload. After its complete, we can use it while we boot up our virtual machine.\nCreate a Virtual Machine Finally we\u0026rsquo;re ready to deploy a virtual machine, the real test of our configuration comes down to this. Now we can go to the Virtual Machines menu in the Harvester UI. Click Create VM and begin the process of giving it a name and description. Then within the Basics tab, specify the number of CPUs and the amount of Memory for your VM. You can also specify the SSH key to use for this VM in the drop down.\nNote: if you don\u0026rsquo;t have any SSH keys yet, you can add them in the Advanced Menu and come back to this screen when you\u0026rsquo;ve setup your SSH key.\nOK, back to our virtual machine creation. In the Volumes tab, you can specify our boot device. This is where we\u0026rsquo;ll specify a type of cd-rom and specify the image we uploaded earlier in this post as our image. We will also want to add a second volume where we will install the operating system. So, there are two disks for the virtual machine\u0026mdash;the boot image, and the virtual disk where we\u0026rsquo;ll install the operating system.\nNow on to the Networks tab. On this screen we can select the network that was created in the previous post. This should set the Type to bridge network. You an also use the mgmt network if you wish to skip the network creation post.\nIn the Node Scheduling tab, you can select which node in your cluster that the machine will be deployed on. In my case, any of the nodes will work. Click Create.\nIn a moment, your VM should start up and go into a running state. At this point, you\u0026rsquo;ll want to use the Console button on the VM to access a console to configure your virtual machine.\nInstall your OS.\nIf you get done with your installation, you can click the Kabob menu next to your VM to eject the CD-rom. When you\u0026rsquo;re done with the operating system installation, you can play with the virtual machine features. If you\u0026rsquo;re like me, the first thing you\u0026rsquo;re going to try is to migrate the machine to another node. Well I did just this and the migration worked well, but I lost about 6 pings to the machine while it was being moved to a new node. This might be too much time for some applications, but other apps might do just fine with this. It seems a very acceptable amount of time for a home lab.\nSummary In this post we finally deployed our Harvester virtual machine and got our operating system installed. 
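For the curious, the VM we just built does exist as ordinary Kubernetes objects underneath. Once you have a kubeconfig for this cluster (the next post covers one way to get it through Rancher), a quick peek might look like the sketch below; the resource names come from KubeVirt, which Harvester builds on:

kubectl get virtualmachines -A             # the VirtualMachine definitions
kubectl get virtualmachineinstances -A     # the running instances
kubectl get pods -A | grep virt-launcher   # the pods that wrap each running guest

None of that is required to use Harvester, which is rather the point.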
We were able to accomplish this without really having to know anything about Kubernetes. In our next post, we\u0026rsquo;re going to get started with Kubernetes. The next post shows how to connect a Rancher server to the cluster we\u0026rsquo;ve deployed to give us Kubernetes capabilities on the same set of nodes. If you\u0026rsquo;re not a Kubernetes expert, don\u0026rsquo;t worry, these steps are very simple to keep going in this series.\n","permalink":"https://theithollow.com/2024/01/18/create-a-harvester-virtual-machine/","summary":"\u003cp\u003eIn the previous posts, \u003ca href=\"/?p=10772\"\u003ewe setup the Harvester cluster\u003c/a\u003e and optionally \u003ca href=\"/?p=10810\"\u003eadded a virtual machine network\u003c/a\u003e and some cluster configs. In this post we\u0026rsquo;ll deploy our first Harvester virtual machine on our cluster. Before we do this, I wanted to point out that we\u0026rsquo;ve gotten to this point really without ever needing \u003ca href=\"http://kubernetes.io\"\u003eKubernetes\u003c/a\u003e. If you\u0026rsquo;re paying close attention to the screens that have been showing up in the GUI, you might notice things like a \u003cem\u003enamespace\u003c/em\u003e or \u003cem\u003elabels\u003c/em\u003e or \u003cem\u003eannotations\u003c/em\u003e that are clearly Kubernetes references. Even still, there\u0026rsquo;s really nothing in the steps up to this point where it would\u0026rsquo;ve required any \u003ca href=\"http://kubernetes.io\"\u003eKubernetes\u003c/a\u003e experience at all. We\u0026rsquo;ll continue that trend as we build our first virtual machine.\u003c/p\u003e","title":"Create a Harvester Virtual Machine"},{"content":"In the first post in this series I deployed a Harvester cluster to my home lab servers. Before I get into deploying virtual machines though, I want to make some common configuration updates to my cluster. For instance, I want my virtual machine network traffic to be placed on a different network. Placing the small amount of network traffic that my virtual machines might be using on their own NIC is probably not necessary in my lab. But I wanted to build the lab in a way that somewhat mimics the way virtual machines are deployed in production environments so I can see how it works. When deploying VMware ESXi hosts, I would commonly have several virtual machine networks on their own NIC, each on their own VLAN. In this post we\u0026rsquo;ll deploy a Harvester virtual machine network.\nNetwork Configs From the Harvester console, click on the Cluster Networks/Config menu. From here click the Create Cluster Network button.\nIn the Cluster Network page there isn\u0026rsquo;t too much to do. Give the Cluster Network a name and a description. You an also add some additional metadata in the form of Labels and Annotations. When you\u0026rsquo;ve completed this task, click Create.\nAfter you click the create button you\u0026rsquo;ll be taken back to the \u0026quot; Cluster Networks/Configs\u0026quot; page. You\u0026rsquo;ll see that I\u0026rsquo;ve got a mgmt network listed. This comes from the management network that was configured when standing up the Harvester nodes in our first post. There is also a new network that we just created a moment ago. Click the Create Network Config button on the new network named \u0026ldquo;vms.\u0026rdquo;\nIn the network config settings, you\u0026rsquo;ll need to give the config a name and a description as well. 
Then you\u0026rsquo;ll begin the task of assigning the config to the nodes and telling them which NIC to use. In the Node Selector section, you have a few choices to define what Harvester nodes to assign this network config to. In my case I want every node in my cluster to be assigned the same config so I\u0026rsquo;ll choose \u0026quot; Select all nodes\u0026quot;. You can also pick specific nodes or match nodes based on a label selector. Click the Uplinks menu of the config next.\nOn the uplinks screen, choose the physical NIC to assign this config to. On all of my nodes, I want the interface \u0026quot; eno3\u0026quot; configured. I\u0026rsquo;m only using one NIC (not so production worthy now is it?) so the bond can be left as active-backup. Then click the Create button.\nThe config is done but now we can create a set of VM Networks. Navigate to the VM Network page and click the Create button.\nHere we\u0026rsquo;ll leave the namespace default and give it a name and description as is becoming a common task. In the Basics tab, I want to set my Type to \u0026quot; L2VlanNetwork\u0026quot; because the physical network I\u0026rsquo;m connecting to is an 802.11q trunk port, also called a Tagged VLAN in some parlance. I then want to set the VLAN to one of the VLANs I tagged on the physical network, in my case VLAN 150 is my virtual machine VLAN. Then in the last drop down, select the Cluster Network we configured earlier in this post. Then click the Route tab.\nIn the Route tab, you can select how your virtual machines will get their IP Addresses assigned. If you choose the DHCP option, then you can specify the IP address of the DHCP server. In my situation, I don\u0026rsquo;t have a DHCP server setup on my network, so I\u0026rsquo;ll choose the Manual option and specify the network CIDR and gateway. Then click Create.\nWhen you\u0026rsquo;ve completed your task, you\u0026rsquo;ll see the VM Network available in your summary screen.\nAt this point, if you deploy a virtual machine, you can specify this VM Network to define how the virtual machine\u0026rsquo;s network traffic is passed to the physical network over your specified VLAN across a set of defined physical NICs. This is precisely what we\u0026rsquo;re going to do in our next post within this series.\n","permalink":"https://theithollow.com/2024/01/18/creating-a-harvester-virtual-machine-network/","summary":"\u003cp\u003eIn the \u003ca href=\"/?p=10772\"\u003efirst post in this series\u003c/a\u003e I deployed a \u003ca href=\"https://harvesterhci.io/\"\u003eHarvester\u003c/a\u003e cluster to my home lab servers. Before I get into \u003ca href=\"/?p=10824\"\u003edeploying virtual machines\u003c/a\u003e though, I want to make some common configuration updates to my cluster. For instance, I want my virtual machine network traffic to be placed on a different network. Placing the small amount of network traffic that my virtual machines might be using on their own NIC is probably not necessary in my lab. But I wanted to build the lab in a way that somewhat mimics the way virtual machines are deployed in production environments so I can see how it works. When deploying VMware ESXi hosts, I would commonly have several virtual machine networks on their own NIC, each on their own VLAN. In this post we\u0026rsquo;ll deploy a Harvester virtual machine network.\u003c/p\u003e","title":"Creating a Harvester Virtual Machine Network"},{"content":"The installation of Harvester starts with deploying your nodes. 
This process would be similar to deploying ESXi hosts in a VMware deployment. To start the process, download the latest stable harvester release from their github repository. You can then mount this ISO file from your bare metal server.\nCreate the First Harvester Node in a Cluster After booting to the iso choose the version of Harvester to deploy from their GRUB list. In this example we\u0026rsquo;ll be using v1.2.1. Since this is the first node in my cluster, I will choose to create a new cluster, and then hit enter.\nTo deploy the harvester cluster we need to select a disk to install it on. Select your disk and hit enter.\nGive your node a name and click Enter.\nNext, you configure your node\u0026rsquo;s management network configuration. For the management NIC, you can select any of the NICs that you want to bond together for high availability. For a production cluster this should include at least two NICs, but since this is a home lab, I have a single NIC configured for my management network. I\u0026rsquo;m also using an untagged VLAN (or access port) so I\u0026rsquo;m not specifying any VLANs during this deployment. Your network setup may be different. Since I\u0026rsquo;m only using a single NIC, my \u0026ldquo;Bond Mode\u0026rdquo; setting should be left as \u0026ldquo;active-backup\u0026rdquo;. I\u0026rsquo;ll specify a static IP Address for my node as well, since in many home lab situations, the DHCP server doesn\u0026rsquo;t exist until after the virtualization layer has been created. And you can specify your MTU if you\u0026rsquo;d like to use jumbo frames.\nSpecify the DNS servers in a comma separated list.\nAnd since this is the first node in our cluster, we\u0026rsquo;ll also need to specify a VIP (Virtual IP). This VIP will serve as a load balanced communication endpoint for accessing our harvester cluster once it\u0026rsquo;s deployed. When you join other nodes to this cluster (later in this post) you\u0026rsquo;ll specify this VIP again for accessing the control plane of the cluster.\nNext, you\u0026rsquo;ll specify a super secure token. Whatever you make this cluster token, be sure to remember it or have it stored securely someplace, so that you can add more nodes to the cluster using this token.\nAfter you configure your token, you\u0026rsquo;ll also need to specify a password for access to this Harvester node directly through ssh. Note: the user to access the node over ssh is \u0026ldquo;rancher.\u0026rdquo;\nSpecify your NTP server. I\u0026rsquo;ve left it as the default.\nIf you\u0026rsquo;re running in a network that requires access through a proxy, you can add the proxy address along with usernames/passwords in the configure proxy screen.\nIf you\u0026rsquo;d like to add ssh keys to your login, you can import them through an http url such as github.com.\nAnd if you want to load a configuration, you can specify an http url with the config file for your cluster. This config file can help you setup a base configuration right from the get go, instead of having to configure everything by hand like we\u0026rsquo;ll be doing in this blog series.\nOn the Summary screen, confirm your settings and continue the installation. At this point your disk will be formatted. Once the deployment has completed, your harvester node will restart and eventually you should see the node and the cluster are both in a \u0026ldquo;Ready\u0026rdquo; status. 
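If you're impatient and want to watch that convergence happen, you can ssh to the node as the rancher user and poke at the embedded cluster directly. This is a hedged sketch: it assumes Harvester still exposes the embedded RKE2 admin kubeconfig at its usual path, which may vary by release, and the node address is just an example from my lab:

ssh rancher@10.10.10.11                           # example node address
sudo -i
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml     # assumed location of the embedded RKE2 kubeconfig
kubectl get pods -A                               # system pods should settle into Running/Completed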
Don\u0026rsquo;t panic if they come up with a not-ready status initially, you probably just need to wait a touch longer for all of the pods to be running to host your cluster.\nAdd Additional Nodes Once you have your first Harvester node up and running, you can deploy additional nodes. If you add three nodes, the cluster will use a three node control plane for high availability. If you\u0026rsquo;re deploying across fault domains, you can also label these nodes so they are distributed across zones to keep your cluster running in the event of a zone failure.\nTo add the additional nodes, you can boot them from the same iso file. This time when you\u0026rsquo;re presented with the choice, choose \u0026ldquo;join an existing Harvester cluster\u0026rdquo;. The rest of the screens will be very similar to what you did when setting up the first host.\nChoose your installation disk.\nGive your nodes a name.\nAgain, setup your management network for your nodes. Don\u0026rsquo;t forget to choose multiple NICs to provide HA if you need it.\nSpecify your DNS server.\nFor the management address, specify the VIP that was created during the installation of your first Harvester node.\nSpecify the token you used when setting up the initial node. This token needs to match that token.\nJust as before, specify a node password for ssh access.\nSpecify the NTP server. The NTP servers should match across each node in the cluster ideally.\nIf you have a network proxy, you can configure that for these additional nodes.\nImport your SSH keys if you\u0026rsquo;d like.\nAnd again specify an http URL for a harvester config file if you want to automate the configuration of the harvester node.\nAfter verifying the settings, begin the installation. Just like before when the installation is done the Harvester screen should appear and show the management cluster and node in a ready status.\nAccess the Harvester Dashboard Your cluster should be up and running with 1 or more nodes in it. To access the UI navigate to the management URL. In the case of my lab this was http://10.10.10.9 as you can see from the Harvester status page.\nWhen you access the Dashboard, you\u0026rsquo;ll see information pertaining to Hosts, VMs, storage, and other resources. The below screenshot shows the four hosts I\u0026rsquo;ve added to my Harvester cluster.\nSummary Harvester is a very simple installation that requires you to boot one or more servers from an ISO file. I found the process of deploying this solution very simple and even though this is built on top of Kubernetes, none of the steps we performed as part of this post involved running any Kubernetes commands. We\u0026rsquo;ll investigate how to deploy VMs in future posts, but for now we have a working Harvester cluster and we can take a look around at some of our capabilities. In the next post we\u0026rsquo;ll walk through setting up a virtual machine network for segmenting traffic on different network interface cards. After that, we\u0026rsquo;ll deploy a virtual machine.\n","permalink":"https://theithollow.com/2024/01/18/harvester-installation/","summary":"\u003cp\u003eThe installation of Harvester starts with deploying your nodes. This process would be similar to deploying ESXi hosts in a VMware deployment. To start the process, download the latest stable harvester release from their \u003ca href=\"https://github.com/harvester/harvester/releases\"\u003egithub repository\u003c/a\u003e. 
You can then mount this ISO file from your bare metal server.\u003c/p\u003e\n\u003ch2 id=\"create-the-first-harvester-node-in-a-cluster\"\u003eCreate the First Harvester Node in a Cluster\u003c/h2\u003e\n\u003cp\u003eAfter booting to the iso choose the version of Harvester to deploy from their \u003ca href=\"https://en.wikipedia.org/wiki/GNU_GRUB\"\u003eGRUB\u003c/a\u003e list. In this example we\u0026rsquo;ll be using v1.2.1. Since this is the first node in my cluster, I will choose to create a new cluster, and then hit enter.\u003c/p\u003e","title":"Harvester Installation"},{"content":"Yes, that\u0026rsquo;s right, this website is now 10 years old, and while there is no cake or ice cream for this celebration, I thought I\u0026rsquo;d take a second to reflect on what this website, and the 550 blog posts in it, have meant to me personally.\nThis site has in many ways chronicled my technology career, starting when I was a System Administrator, trying to learn Virtualization, up until now. I had no idea what I wanted this site to be, but I knew that I\u0026rsquo;d gained so much help from other bloggers in my day to day work that I wanted to help out somebody myself. It seemed like the leave-a-penny / take-a-penny tray at a store. I didn\u0026rsquo;t want to only be taking pennies out of that thing; sometimes you\u0026rsquo;ve got to be the one putting pennies in. That\u0026rsquo;s how I felt about my blog when I started.\nGiving Back Gave Me Everything After 10 years though, it\u0026rsquo;s plainly obvious to me that the real value was putting those pennies INTO the tray. Every blog post I created (and for about the first seven years this was a post per week) was an opportunity to try to teach something I\u0026rsquo;d learned to someone else. Now instead of learning something to do my job, I needed to learn it well enough to teach it. And you know what? Teaching how to use something requires much more effort than just learning the concept to complete a task.\nThe sheer force of maintaining this blog brought me to other opportunities in my career. I found myself in job roles as a consultant. You know what consultants do? (I\u0026rsquo;ll wait here for one more second to allow you to make your joke about what consultants do \u0026hellip;. and now I\u0026rsquo;m moving on.) They learn how to use technology, and then figure out the best way to teach customers how to use it without making some of the introductory mistakes. Consulting and blogging actually have a lot in common, and I took A LOT of pride in being able to clearly communicate concepts to customers so that they could make the best informed decisions. Only my customers could say how effective I was at this, but I hope I was, because this was a priority of mine over the past decade.\nThis blog kept me looking into new technologies and building guides. Anytime I was learning something new, I started figuring out how to explain it. This led me to many \u0026ldquo;guides\u0026rdquo; where I would look at quite a few aspects of a certain product set so that others learning it would have a guided path to learn it. When you tackle a big new subject, figuring out where to start is sometimes the hard part.
Here are just a few of the more popular guides I\u0026rsquo;ve written.\nGetting Started Guide - vSphere with Tanzu Getting Started with Kubernetes vRealize Automation 7 Guide Tanzu Mission Control Guide Another unintended consequence of writing this blog was that I got much better at being OK with not understanding things. Let me be clear: I\u0026rsquo;m still bad at this, but I will ask the \u0026ldquo;stupid\u0026rdquo; questions when I don\u0026rsquo;t understand something. Having lots of work published gave me the confidence to ask questions when I didn\u0026rsquo;t understand things. I myself was not stupid or broken; I just hadn\u0026rsquo;t learned it yet. Now, I ask questions all the time when I don\u0026rsquo;t understand things. I mean, I\u0026rsquo;ve written all these help documents, so it\u0026rsquo;s OK for me to ask for help when I need it, right? In the end, I\u0026rsquo;ll probably teach others how it\u0026rsquo;s used as well.\nNow, I work as a Technical Marketing Architect where my job role is to learn products and teach them to customers, partners, and internal employees through blogs, speaking sessions, and training videos. I wonder where I might\u0026rsquo;ve learned those skills?\nPeople I\u0026rsquo;ve met so many great people since starting this blog. Some of them are a direct result of this blog such as Stephen Foskett, who graciously invited me to several techfieldday events. Through these events, I got to meet other delegates and presenters from a variety of companies and disciplines. Again, the real benefit of these techfieldday events is the conversations between delegates. Questioning ideas (respectfully, always respectfully), and discussing where things might go in the future.\nIndirectly my blog has affected my career path, which has led me to meet not only some of the best Architects, Engineers, and managers of my life, but also some of my best friends. I have a small group of technology professionals who I am very close friends with. A group of people who pick each other up when life hands them a Casino Royale-style torture session. My friendships with these gentlemen (who know who they are) mean an awful lot to me.\nParting Thoughts and Thank You This blog would\u0026rsquo;ve only been a technical diary without people reading it. I don\u0026rsquo;t publish as much as I once did, but this blog still gets good traffic each and every day, and that fills me with pride. I hope people are finding the answers they seek, and I\u0026rsquo;m not wasting their time (like I am with this blog post). I hope that this site has encouraged people to learn new things and pursue their career goals. It has certainly benefited me, as you can see.\nAs this is vExpert season, my twitter feed included many vExperts touting their accomplishments. Good for you, and congratulations! However, there was at least one tweet (which has since been deleted) suggesting that you get nothing good out of the vExpert program. I think their point was that you don\u0026rsquo;t really get some great swag or gift for being a vExpert. This is totally the wrong take, in my opinion. vExperts were awarded a designation for helping their peers with VMware-specific technology. If my blog is any indication, the gifts were given to you by what you put back into the world, not what you got from being a vExpert.
I\u0026rsquo;ve gained plenty.\nTo my readers, thank you so much for reading this little blog all these years.\n","permalink":"https://theithollow.com/2022/02/19/theithollow-turns-10-years-old/","summary":"\u003cp\u003eYes, that\u0026rsquo;s right, this website is now 10 years old, and while there is no cake or ice cream for this celebration, I thought I\u0026rsquo;d take a second to reflect on what this website, and the 550 blog posts in it, have meant to me personally.\u003c/p\u003e\n\u003cp\u003eThis site has in many ways chronicled my technology career, starting when I was a System Administrator, trying to learn Virtualization, up until now. I had no idea what I wanted this site to be, but I knew that I\u0026rsquo;d gained so much help from other bloggers in my day to day work, that I wanted to help out somebody myself. It seemed like the leave-a-penny / take-a-penny tray at a store. I didn\u0026rsquo;t want to only be taking penny\u0026rsquo;s out of that thing, sometimes you\u0026rsquo;ve got to be the one putting pennies in. That\u0026rsquo;s how I felt about my blog when I started.\u003c/p\u003e","title":"theITHollow Turns 10 Years Old"},{"content":"Early this year I saw a challenge drop in my inbox for VMware vExperts to deploy a Plex server with Tanzu Community Edition (TCE). I hadn\u0026rsquo;t gotten to try out TCE yet, and had never messed with Plex so this sounded like a fun adventure. The goal was to architect an entire solution as though this Plex server was going to be a production app that a company might run their business off of. Which means, that I needed to not only get the thing working, but start making some steps to operationalize it. You know of course that production systems come with security, observability, auto-scaling, an incident response routine, etc. Well, this is my home lab so we\u0026rsquo;ll have to assume that this is my minimum viable product, because as you\u0026rsquo;ll see, I could still put a lot of work in here.\nIf you\u0026rsquo;d like to see some of the YAML manifests that I used to deploy packages, clusters, and plex, you can check out this github repository.\nConceptual Design If this is a real app, it probably could use a read conceptual design. Here were my abbreviated requirements, constraints, risks, and assumptions.\nRequirements\nMust be able to access the Plex server from a local address on my home network. Backups must be kept offsite Constraints\nSolution must fit within existing hardware specs. No additional home lab purchases allowed. Must run on Tanzu Community Edition Assumptions\nThis lab will be torn down after building it. In a real world, there would be much more to deal with including risks to the project but this is a good enough start for a home lab build.\nArchitecture The general layout of the solution can be found below.\nTanzu Community Edition - Kubernetes Cluster Deployment The real goal of this challenge was to get familiar with Tanzu Community Edition (TCE). TCE lets me deploy a Kubernetes cluster in a couple of different ways including a management cluster / workload cluster model or a single cluster. Since I was trying to mimic a more production type deployment, I opted for deploying a management cluster with a development workload cluster consisting of a single control plane node to reduce my resource utilization, and a production cluster where my app will run full time. The production cluster will run a highly available control plane load balanced by kube-vip. 
This load balancer is configured automatically through TCE.\nObviously I don\u0026rsquo;t need this in the lab, but in a production world, I\u0026rsquo;d want a development cluster where I can test things out without affecting my active users. I\u0026rsquo;ll assume I can get away with a small dev cluster for this scenario.\nThe process of deploying a TCE cluster is documented here, but you might also take a look at my installation notes post. There are a couple of workarounds needed as of the time of this writing, to get the management cluster deployed correctly on vSphere.\nLoad Balancing So my control plane nodes are load balanced with kube-vip. There wasn\u0026rsquo;t anything special I needed to do to make this work other than completing the cluster deployment. However, in my environment I\u0026rsquo;ll need a load balancer periodically in order to present my Kubernetes applications to my users.\nIn my situation, I opted for deploying Metal-lb. This is a load balancer solution that is deployed within the Kuberenetes cluster.\nPackage Deployments Before I deploy the Plex server, I should deploy some tools I\u0026rsquo;ll need to operate the application. Tanzu Community Edition provides \u0026ldquo;packages\u0026rdquo; that can be installed with some simple commands. I\u0026rsquo;ve deployed a few of these packages to meet my operational requirements.\nContour - I deployed the Contour Ingress Controller so that I can present my applications through a reverse proxy. I could probably get away with not using this, but this is a good pattern to get into. If I was deploying my solution on a hyperscaler like AWS I\u0026rsquo;d get billed for each of my load balancers, and an ingress controller will make sure that I only need a single load balancer for all of the apps in my cluster. Also I can get additional layer 7 capabilities if I really needed them. Cert-Manager - I might as well deploy cert-manager as well so that I can get certificates created for my applications including plex. Using Cert-manager with contour means that I can add certificates to my applications without re-configuring my applications. The best certificates are the ones you don\u0026rsquo;t think about. Prometheus - I deployed the prometheus package so that I can see the performance of my application. If I have an outage because my apps ran out of resources, its still on me so I need a way to at least see if not alert on those rules. Grafana - Since I\u0026rsquo;m deploying Prometheus, I\u0026rsquo;m also deploying the Grafana plugin so that I can get visualizations around the prometheus data. Fluentbit - Logs are critical to production applications and this includes audit logs. While deploying the TCE clusters, I enabled audit logging so I can be sure that I\u0026rsquo;m the only administrator playing with configuration settings on my cluster. Fluentbit will forward the logs from my clusters to a log aggregation solution like vRealize Log Insight. Storage Before I deploy my plex server, I also need to consider how storage will be laid out. The plex server is looking for some persistent storage to store two specific things. The first is the media that you\u0026rsquo;re sharing through plex. All those movies, TV shows, music albums have to be read by Plex so that they can be watched or listened to. For this, I created an NFS mount on my Synology Disk Station. After building the NFS mount on the Synology, I can drop my media files on my array and when I deploy the plex server, plex will mount this media directory. 
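For reference, the media share can be wired into Kubernetes as a statically defined NFS PersistentVolume with a matching claim, roughly like the sketch below. This is only a sketch: the server address, export path, object names, namespace, and size are placeholders standing in for my lab values.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: plex-media-pv
spec:
  capacity:
    storage: 1Ti                  # placeholder size
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.50          # placeholder for the Synology address
    path: /volume1/media          # placeholder for the NFS export
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plex-media-pvc
  namespace: plex                 # placeholder namespace for the Plex workload
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ''            # empty string keeps dynamic provisioners out of the way
  resources:
    requests:
      storage: 1Ti
  volumeName: plex-media-pv

Setting storageClassName to an empty string is deliberate; it stops the claim from being handed to a dynamic provisioner and forces it to bind to the static NFS volume above.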
So even if I delete, upgrade, re-configure, etc. my plex server, the media will still be safe and sound on my storage array. I can even move my plex server to different clusters/hardware as long as it can reach the NFS mount.\nThe second thing needed for persistent storage was the plex database . Since this is running a database I chose to store this on a vSAN persistent volume instead of NFS. My clusters were deployed into a vSphere environment so I can leverage Cloud Native Storage to request storage. In my case, I have a Kubernetes default storage class that allows me to request a Persistent Volume Claim, where in turn a persistent volume is created on vSAN and attached to my Kubernetes nodes, to be used by containers.\nDisaster Recovery I\u0026rsquo;m storing my Plex Server configurations in git so I\u0026rsquo;m actually not worried about my plex server at all. If I was concerned about it, I could use the Velero package provided by TCE so that I could run backups of my Kubernetes object and store them in an S3 bucket. However, in my case I\u0026rsquo;m ONLY worried about the media on my NAS. Everything else I can re-create if I had a failure but the media is critical.\nMy backup solution was outside of Kubernetes. I have a weekly backup that stores my media files on a second Synology array in my lab (yeah I have two, stop judging me) and that array has a monthly job where it will backup files to Amazon S3 for off site, long term storage.\nPlex Now it\u0026rsquo;s time to deploy Plex. I found several blog posts helpful, but the post the MOST helpful was this one from debontonline.com because it matched my lab environment really well. In this post, Mr. de Bont has laid out the kubernetes manifests pre-built with the ports required to operate Plex from other devices. I needed to modify my storage because I chose to use a persistent volume on vSAN for my configuration directory instead of NFS, but in other respects the YAML here was all that was needed to deploy my resources. Thanks to deploying Metal-lb and Contour first, I can even re-use the services and Ingress rules used in his post.\nNote: The plex media server cam up and I could stream content. However, I did have an issue that I was unable to resolve, where the plex media server wasn\u0026rsquo;t registering with the service. My understanding was that if a plex claim [token] was added to the container environment variables that registration would happen after startup. I have yet to resolve this issue.\nBeyond the Minimum Viable Product It\u0026rsquo;s obvious that I wasn\u0026rsquo;t actually putting in the effort you would for a production application that a business depends on, but I tried to touch on a lot of it. I haven\u0026rsquo;t created customized dashboards in Grafana, or configured the appropriate alerts specific to my application (plex) in my log aggregation tool, but it\u0026rsquo;s a lab and I\u0026rsquo;m not getting paid to make improvements to this system all day long. :)\nIf this was really a production system I\u0026rsquo;d need to spend some more time on health checks, and network policies, and a secure deployment pipeline for updates, and some webhooks for my git repo. That list goes on and on. 
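To make one of those items concrete, a namespace-wide NetworkPolicy that only admits Plex streaming traffic might look something like this sketch. The namespace name is a placeholder and I have not tested this policy here, but 32400 is Plex's primary streaming port.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: plex-allow-streaming-only
  namespace: plex                 # placeholder namespace for the Plex workload
spec:
  podSelector: {}                 # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 32400             # Plex's primary streaming port

Because the policy selects every pod in the namespace and allows only that one ingress port, all other inbound traffic is dropped while egress is left alone.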
There are a million things to do for a real production application, but hopefully I covered the most common ones for getting an up up and running with a minimum amount of tools ready to observe and handle failures.\n","permalink":"https://theithollow.com/2022/01/28/vmware-tanzu-challenge-plex-server/","summary":"\u003cp\u003eEarly this year I saw a challenge drop in my inbox for \u003ca href=\"https://vexpert.vmware.com/\"\u003eVMware vExperts\u003c/a\u003e to deploy a Plex server with \u003ca href=\"https://tanzucommunityedition.io/\"\u003eTanzu Community Edition (TCE)\u003c/a\u003e. I hadn\u0026rsquo;t gotten to try out TCE yet, and had never messed with \u003ca href=\"https://plex.tv\"\u003ePlex\u003c/a\u003e so this sounded like a fun adventure. The goal was to architect an entire solution as though this Plex server was going to be a production app that a company might run their business off of. Which means, that I needed to not only get the thing working, but start making some steps to operationalize it. You know of course that production systems come with security, observability, auto-scaling, an incident response routine, etc. Well, this is my home lab so we\u0026rsquo;ll have to assume that this is my minimum viable product, because as you\u0026rsquo;ll see, I could still put a lot of work in here.\u003c/p\u003e","title":"VMware Tanzu Challenge - Plex Server"},{"content":"I was messing around in my vSphere home lab and wanted to try out the new Tanzu Community Edition that was recently announced. After getting up to speed on some of the documentation, I tried the installation out in my vSphere 7 lab. There were a couple of notes that I think will help some other people get started with their installation.\nTCE logo\nBootstrapping on MacBook The installation on my MacBook kept failing when trying to deploy the management cluster in my vSphere environment. It turned out that the version of docker I was using was leveraging cgroupsv2 which caused an issue with the bootstrap process. You can read more info about the fixes being worked on in this GitHub issue. To workaround this issue, I pre-deployed a kind cluster using the command:\nkind create cluster Once the kind cluster was up and running, I re-ran the deployment with the\n--use-existing-bootstrap-cluster kind-kind switch, which tells the cli to use the pre-built kind cluster as the bootstrap cluster. This worked around the cgroupsv2 interoperability issue for me. Under normal circumstances you could just run a deployment with the \u0026ndash;ui switch and fill out some information in a web page. But, that process currently has an issue with the cgroupsv2 so we use an existing bootstrap cluster instead. Just copy the command at the bottom of the ui installer as seen here.\nThen use it in a command prompt with the additional \u0026ndash;use-existing-bootstrap-cluster kind-kind switch.\nPersistent Volumes Won\u0026rsquo;t Deploy After my clusters were deployed I started to deploy an app that required the use of persistent volumes. Luckily, my cluster was deployed with a default storage class from the beginning, so I built a persistent volume claim to provision a volume for my app. Unfortunately I wasn\u0026rsquo;t able to get a persistent volume to be created. 
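If you want to reproduce the problem quickly, a throwaway claim against the default storage class is all it takes; something like this sketch (the name and size are arbitrary):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-test-pvc              # arbitrary name, safe to delete afterwards
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                # small request against the default storage class

On a healthy cluster, kubectl get pvc csi-test-pvc should show the claim move to Bound within a minute or so. In my case the claim never bound.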
After looking at the logs in the vsphere-csi-controller container located in the kibe-system namespace, I started seeing errors related to:\ncreateSpecs.datastores is empty After reading through this issue vsphere-csi driver issue in GitHub, I decided to re-deploy my Tanzu Kubernetes clusters with my username changed from \u0026quot; eshanks\u0026quot; to \u0026quot; eshanks@domain.com\u0026quot;. After which, I tried to use persistent volumes again and there were no more issues. So when you\u0026rsquo;re deploying your TCE clusters use a User Principal Name format and save yourself the headaches.\nIf you\u0026rsquo;d like to see if you have similar errors run the command below on your workload cluster. If you\u0026rsquo;re having a storage issue on Tanzu Kubernetes clusters, this is a great container to check for hints on where the issue might lie.\nkubectl logs vsphere-csi-controller-[replace-with-your-container-name] -n kube-system vsphere-csi-controller Summary These two issues took me some time to troubleshoot so I hope that giving you a heads up on my experiences can help speed up your own deployments in your endeavors. Be sure to use a full User Principal Name during your installation, and pre-build your bootstrap cluster when you start your deploy.\n","permalink":"https://theithollow.com/2022/01/17/tanzu-community-edition-on-vsphere-installation-notes/","summary":"\u003cp\u003eI was messing around in my vSphere home lab and wanted to try out the new \u003ca href=\"https://tanzucommunityedition.io\"\u003eTanzu Community Edition\u003c/a\u003e that was recently \u003ca href=\"https://tanzu.vmware.com/content/blog/vmware-tanzu-community-edition-announcement\"\u003eannounced\u003c/a\u003e. After getting up to speed on some of the \u003ca href=\"https://tanzucommunityedition.io/docs/latest/\"\u003edocumentation\u003c/a\u003e, I tried the installation out in my vSphere 7 lab. There were a couple of notes that I think will help some other people get started with their installation.\u003c/p\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://tanzucommunityedition.io/docs/img/tce-logo.png\"\n         alt=\"TCE logo\"/\u003e \u003cfigcaption\u003e\n            \u003cp\u003eTCE logo\u003c/p\u003e\n        \u003c/figcaption\u003e\n\u003c/figure\u003e\n\n\u003ch2 id=\"bootstrapping-on-macbook\"\u003eBootstrapping on MacBook\u003c/h2\u003e\n\u003cp\u003eThe installation on my MacBook kept failing when trying to deploy the management cluster in my vSphere environment. It turned out that the version of docker I was using was leveraging cgroupsv2 which caused an issue with the bootstrap process. You can read more info about the fixes being worked on in \u003ca href=\"https://github.com/vmware-tanzu/community-edition/issues/2798\"\u003ethis GitHub issue\u003c/a\u003e. To workaround this issue, I pre-deployed a kind cluster using the command:\u003c/p\u003e","title":"Tanzu Community Edition on vSphere Installation Notes"},{"content":"A really common task after deploying a Kubernetes cluster is to configure it to use a container registry where the container images are stored. A Tanzu Kubernetes Cluster (TKC) is no exception to this rule. 
vSphere 7 with Tanzu comes with an embedded harbor registry that can be used, but in many cases you all ready have your own container registry and so you\u0026rsquo;d like to continue using that instead.\nTrust the Registry Certificate A container registry should be configured to use a TLS certificate to prevent logins from being sent over clear text. If your container registry uses a publicly trusted certificate then your work is done. However, if you\u0026rsquo;re using an internal certificate authority to mint your certificates, then your Kubernetes nodes will need to be configured to trust this certificate chain.\nTo configure the trust, we\u0026rsquo;ll apply a TkgServiceConfiguration to the Supervisor cluster. Doing so will trigger a rolling update to the cluster until all of the nodes have the new configuration. This TkgServiceConfiguration will need a base64 encoded string of the PEM encoded certificate.\nTo create a base64 encoded string you can run:\nbase64 -i [certificatename.crt] A sample configuration is shown below. You will notice that you can include more than one certificate. Remember to include your internal RootCA in the list.\napiVersion: run.tanzu.vmware.com/v1alpha1 kind: TkgServiceConfiguration metadata: name: tkg-service-configuration spec: defaultCNI: antrea trust: additionalTrustedCAs: - name: HollowCA data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNyR...... - name: second-cert-name data: base64-encoded string of a PEM encoded public cert 2 Once you\u0026rsquo;ve created a yaml file from the template above and placed your base64 encoded certificate string into it, you\u0026rsquo;ll need to switch to the Supervisor cluster\u0026rsquo;s context and apply it. Login to your Supervisor cluster context.\nThen apply the configuration.\nkubectl apply -f [tkgconfig_file.yaml] Deploy Containers from the Private Registry After the TKGServiceConfiguration has been applied to the supervisor cluster, the Tanzu Kubernetes Clusters should start to update. The update process consists of replacing all the existing nodes with new nodes that have the appropriate TKGServiceConfigurations that we just applied.\nAfter the clusters have updated you can start to deploy your containers using the private registry.\nLogin to the Tanzu Kubernetes Cluster (TKC) if you haven\u0026rsquo;t already, and then switch your Kubernetes context to this TKC. Deploy a pod from an image located in the registry and you should be able to successfully download the image.\n","permalink":"https://theithollow.com/2021/09/22/configure-a-private-registry-for-tanzu-kubernetes-clusters/","summary":"\u003cp\u003eA really common task after deploying a Kubernetes cluster is to configure it to use a container registry where the container images are stored. A Tanzu Kubernetes Cluster (TKC) is no exception to this rule. vSphere 7 with Tanzu comes with an embedded harbor registry that can be used, but in many cases you all ready have your own container registry and so you\u0026rsquo;d like to continue using that instead.\u003c/p\u003e","title":"Configure a Private Registry for Tanzu Kubernetes Clusters"},{"content":"My day job requires me to do a lot of work with VMware Cloud on AWS. If I plan on doing any real work with the virtual machines, kubernetes clusters, or applications I really need a VPN tunnel to securely access those resources. My problem has been setting up my aging Ubiquiti USG firewall with BGP. This post will show how I setup a route based VPN tunnel with my Ubiquiti USG. 
Big shoutout to Brian Beach for his work setting up the USG with an AWS Transit Gateway.\nOverview For this setup, I\u0026rsquo;ll be creating a route-based VPN on the VMware Cloud side. The default ASN for VMware Cloud is 65000 so I\u0026rsquo;ll use that and an ASN of 64513 for the home network side. I\u0026rsquo;ve also selected the 169.254.254.0/30 range for my inside tunnel interface network.\nNOTE: These ranges are reserved in VMware Cloud on AWS so avoid these ranges: 169.254.0.0-169.254.31.255, 169.254.101.0-169.254.101.3\nVMware Cloud on AWS Setup First we\u0026rsquo;ll set up the VPN on the VMware Cloud on AWS side. In the VPN settings select the Route Based tab and then click the \u0026ldquo;Add VPN\u0026rdquo; button. Give it a name and then change the Local IP Address field to use the Public address and not the private address. For the remote public IP address specify the public IP address for your USG firewall. (If you\u0026rsquo;re on your home network, go to myipaddress.com to get this info). For the BGP local IP/prefix pick a network to use for your internal network that doesn\u0026rsquo;t include the reserved ranges mentioned earlier. Be sure to add the prefix /30 to this field. Then specify the internal tunnel IP Address for the home network side and finally the BGP neighbor ASN, which in our case is 64513.\nI also needed to open up the advanced tunnel properties and change the IKE version to IKE v1 and the digest algorithms to use SHA-1 because the USG can\u0026rsquo;t do SHA-2.\nUbiquiti USG Setup The USG setup is interesting. The USG web interface can\u0026rsquo;t do a BGP VPN tunnel so I had to resort to the advanced configuration described here. This means creating a config.gateway.json file with our configuration and putting it in our Ubiquiti controller. In my case that\u0026rsquo;s a cloud key.\nStart by creating the JSON file described above. If you\u0026rsquo;d like, you can use the template below and fill in your own values. My suggestion is to draw a diagram similar to the first one in this post and write down what your configurations will be; the important values are summarized below. 
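To keep the example values in this post straight, here is the short list I worked from. Every address, ASN, and secret here is specific to my lab, so substitute your own:

Public VPN endpoint on the VMware Cloud side: 54.148.170.2
USG public address (used as the BGP router-id and local-address): 73.9.249.240
Inside tunnel network: 169.254.254.0/30 (VMware Cloud side 169.254.254.1, USG side 169.254.254.2)
ASNs: 65000 for VMware Cloud, 64513 for the home network
Crypto settings: IKEv1, AES-256, SHA-1, DH group 14, plus a pre-shared secret of your choosing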
Then you can go and fill in the IP Addresses, ASNs, passphrase, etc.\n{ \u0026#34;interfaces\u0026#34;: { \u0026#34;vti\u0026#34;: { \u0026#34;vti0\u0026#34;: { \u0026#34;address\u0026#34;: [ \u0026#34;169.254.254.2/30\u0026#34; ], \u0026#34;firewall\u0026#34;: { \u0026#34;in\u0026#34;: { \u0026#34;ipv6-name\u0026#34;: \u0026#34;LANv6_IN\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;LAN_IN\u0026#34; }, \u0026#34;local\u0026#34;: { \u0026#34;ipv6-name\u0026#34;: \u0026#34;LANv6_LOCAL\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;LAN_LOCAL\u0026#34; }, \u0026#34;out\u0026#34;: { \u0026#34;ipv6-name\u0026#34;: \u0026#34;LANv6_OUT\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;LAN_OUT\u0026#34; } }, \u0026#34;mtu\u0026#34;: \u0026#34;1436\u0026#34; } } }, \u0026#34;protocols\u0026#34;: { \u0026#34;bgp\u0026#34;: { \u0026#34;64513\u0026#34;: { \u0026#34;neighbor\u0026#34;: { \u0026#34;169.254.254.1\u0026#34;: { \u0026#34;remote-as\u0026#34;: \u0026#34;65000\u0026#34;, \u0026#34;soft-reconfiguration\u0026#34;: { \u0026#34;inbound\u0026#34;: \u0026#34;\u0026#39;\u0026#39;\u0026#34; }, \u0026#34;timers\u0026#34;: { \u0026#34;holdtime\u0026#34;: \u0026#34;30\u0026#34;, \u0026#34;keepalive\u0026#34;: \u0026#34;10\u0026#34; } } }, \u0026#34;parameters\u0026#34;: { \u0026#34;router-id\u0026#34;: \u0026#34;73.9.249.240\u0026#34; }, \u0026#34;redistribute\u0026#34;: { \u0026#34;connected\u0026#34;: \u0026#34;\u0026#39;\u0026#39;\u0026#34;, \u0026#34;static\u0026#34;: \u0026#34;\u0026#39;\u0026#39;\u0026#34; } } } }, \u0026#34;vpn\u0026#34;: { \u0026#34;ipsec\u0026#34;: { \u0026#34;auto-firewall-nat-exclude\u0026#34;: \u0026#34;enable\u0026#34;, \u0026#34;nat-traversal\u0026#34;: \u0026#34;enable\u0026#34;, \u0026#34;ipsec-interfaces\u0026#34;: { \u0026#34;interface\u0026#34;: [ \u0026#34;eth0\u0026#34; ] }, \u0026#34;nat-networks\u0026#34;: { \u0026#34;allowed-network\u0026#34;: { \u0026#34;0.0.0.0/0\u0026#34;: \u0026#34;\u0026#39;\u0026#39;\u0026#34; } }, \u0026#34;esp-group\u0026#34;: { \u0026#34;ESP_54.148.170.2\u0026#34;: { \u0026#34;compression\u0026#34;: \u0026#34;disable\u0026#34;, \u0026#34;lifetime\u0026#34;: \u0026#34;3600\u0026#34;, \u0026#34;mode\u0026#34;: \u0026#34;tunnel\u0026#34;, \u0026#34;pfs\u0026#34;: \u0026#34;enable\u0026#34;, \u0026#34;proposal\u0026#34;: { \u0026#34;1\u0026#34;: { \u0026#34;encryption\u0026#34;: \u0026#34;aes256\u0026#34;, \u0026#34;hash\u0026#34;: \u0026#34;sha1\u0026#34; } } } }, \u0026#34;ike-group\u0026#34;: { \u0026#34;IKE_54.148.170.2\u0026#34;: { \u0026#34;key-exchange\u0026#34;: \u0026#34;ikev1\u0026#34;, \u0026#34;lifetime\u0026#34;: \u0026#34;86400\u0026#34;, \u0026#34;mode\u0026#34;: \u0026#34;main\u0026#34;, \u0026#34;proposal\u0026#34;: { \u0026#34;1\u0026#34;: { \u0026#34;dh-group\u0026#34;: 14, \u0026#34;encryption\u0026#34;: \u0026#34;aes256\u0026#34;, \u0026#34;hash\u0026#34;: \u0026#34;sha1\u0026#34; } }, \u0026#34;dead-peer-detection\u0026#34;: { \u0026#34;action\u0026#34;: \u0026#34;restart\u0026#34;, \u0026#34;interval\u0026#34;: \u0026#34;15\u0026#34;, \u0026#34;timeout\u0026#34;: \u0026#34;60\u0026#34; } } }, \u0026#34;site-to-site\u0026#34;: { \u0026#34;peer\u0026#34;: { \u0026#34;54.148.170.2\u0026#34;: { \u0026#34;authentication\u0026#34;: { \u0026#34;mode\u0026#34;: \u0026#34;pre-shared-secret\u0026#34;, \u0026#34;pre-shared-secret\u0026#34;: \u0026#34;MYSUPERSECRETPASS\u0026#34; }, \u0026#34;connection-type\u0026#34;: \u0026#34;initiate\u0026#34;, \u0026#34;ike-group\u0026#34;: \u0026#34;IKE_54.148.170.2\u0026#34;, 
\u0026#34;local-address\u0026#34;: \u0026#34;73.9.249.240\u0026#34;, \u0026#34;vti\u0026#34;: { \u0026#34;bind\u0026#34;: \u0026#34;vti0\u0026#34;, \u0026#34;esp-group\u0026#34;: \u0026#34;ESP_54.148.170.2\u0026#34; } } } } } } } When you have the config file created and updated to your own IP Addresses and ASNs, you\u0026rsquo;ll need to copy that file over to your Ubiquit cloud controller. For the cloud key, the path is: /srv/unifi/data/sites/default\nNote: if you are not using a cloud key this path might be different. Also, if you\u0026rsquo;re using a site other than default, the directory will change to the name of your site. If not using sites, you\u0026rsquo;ll use default like I did.\nOnce the file has been copied over to the Ubiquiti controller, you\u0026rsquo;ll need to re-provision your USG. Go to the UI and force provision the USG.\nAfter the USG is provisioned you can test your VPN tunnel. If the provisioning fails, you can see info in the server.log file on the controller. Worst case, you can remove the config.gateway.json file and re-provision.\nI want to note that you will NOT see any of these configurations in the UI after deploying. These advanced settings are merged with the UI configs but they are not displayed in the UI anywhere.\nSummary If you\u0026rsquo;re running a Ubiqiti USG firewall and want to setup a VPN tunnel to VMware Cloud on AWS, these instructions should get you setup.\n","permalink":"https://theithollow.com/2021/07/02/ubiquiti-usg-vpn-setup-for-vmware-cloud-on-aws/","summary":"\u003cp\u003eMy day job requires me to do a lot of work with VMware Cloud on AWS. If I plan on doing any real work with the virtual machines, kubernetes clusters, or applications I really need a VPN tunnel to securely access those resources. My problem has been setting up my aging \u003ca href=\"https://amzn.to/3Aw7TE3\"\u003eUbiquiti USG\u003c/a\u003e firewall with BGP. This post will show how I setup a route based VPN tunnel with my Ubiquiti USG. Big shoutout to \u003ca href=\"https://twitter.com/brianjbeach\"\u003eBrian Beach\u003c/a\u003e for his work \u003ca href=\"https://blog.brianbeach.com/posts/2020-09-06-unifi-usg-aws-vpn/\"\u003esetting up the USG with an AWS Transit Gateway\u003c/a\u003e.\u003c/p\u003e","title":"Ubiquiti USG VPN Setup for VMware Cloud on AWS"},{"content":"At some point, you\u0026rsquo;ll be faced with an upgrade request. New Kubernetes features, new security patches, or just to maintain your support. A vSphere 7 with Tanzu deployment has several components that may need to be updated and most of which can be updated independently of one another. In this post we\u0026rsquo;ll walk through an update to vSphere, then update the Supervisor namespace, and then finally the Tanzu Kubernetes cluster.\nvSphere Update To begin we\u0026rsquo;ll start with a vSphere update. According to the vSphere update planner from the vCenter UI, I have some patches available for deployment.\nWhen we go to the Update Planner screen within Center, we\u0026rsquo;ll select the update that we plan to deploy and click the generate report button for a pre-update check. This makes sure the update should be able to complete successfully before we actually run the update.\nAs you can see, the pre-update checks passed, so it should be safe to deploy the update. If you click the Open Appliance Management button, it will send you management UI for vCenter.\nIn the management UI, navigate to the Update tab. Here you should be able to check for updates, and or apply them. 
Here You can see that the 7.0.2 update is available and ready for deployment. I select the update and click the Stage and Install link.\nThe link walks us through a few extra steps in a wizard format. First, you must accept the user agreement.\nNext, you must provide the SSO password.\nThen you need to decide whether you want to join the VMware Customer Experience Improvement Program (CEIP). This is recommended because it can provide interoperability information to you in the vCenter console, as well as helping to improve the product in future versions.\nLastly, you have to verify that you\u0026rsquo;ve backed up your vCenter. You could always lie about your backup status, but I would recommend actually taking a good backup first before these upgrades.\nWhen done you\u0026rsquo;ll see a status message about the progress.\nEventually it should complete.\nUpdate Supervisor Namespaces Once the vCenter has been updated, you can continue updating your supervisor namespaces. If you navigate to the Workload Management screen, you go to the updates tab. Here you\u0026rsquo;ll select your supervisor cluster where you\u0026rsquo;ll see the current version and will be able to select the supported available versions. In my case I\u0026rsquo;m selecting the most recent update. Click the Apply Updates link.\nWhen you being the update process you\u0026rsquo;ll see some activity in the recent tasks window in vCenter. The update performs a rolling update to the Supervisor cluster VMs. The Supervisor cluster should remain online during these updates as one node at a time is upgraded and placed back into the cluster.\nTanzu Kubernetes Cluster Updates After the Supervisor namespaces have been upgraded, you can shift your attention to the Tanzu Kubernetes clusters (TKC). NOTE: These are often referred to as child clusters, or workload clusters.\nFirst we login to the Supervisor namespace that contains our workload clusters. This is done through the kubectl vsphere login process covered in other posts. Once logged in, you can run kubectl get tkc which will list your clusters, and their versions. In the example below my cluster is running version 1.18.15.\nBefore we can upgrade the cluster, we need to know what releases are available to us. We can do this by running kubectl get tkr. NOTE: that tkr stands for TanzuKubernetesReleases which can also be used in the command line instead of the short form of tkr.\nOnce you\u0026rsquo;ve found the version you plan to upgrade to, it\u0026rsquo;s time to edit your cluster config to use your new version. You should be aware that you can only update minor versions and that you must upgrade them sequentially. In our case we\u0026rsquo;ll upgrade from 1.18 to 1.19 before upgrading to 1.20.\nTo update my cluster config we\u0026rsquo;ll run kubectl edit tkc tkg-cluster-1 and we\u0026rsquo;ll update both the \u0026ldquo;fullVersion\u0026rdquo; and the \u0026ldquo;version\u0026rdquo; spec.\nFor the full version, we\u0026rsquo;ll replace it with null. For the version, we\u0026rsquo;ll specify the short form version of our update. You\u0026rsquo;ll notice that after the update, the fullVersion is fully populated again. Save the config.\nOnce you set the version and save the config you can check the cluster status with the kubectl get tkc again and you\u0026rsquo;ll notice that the phase is in an updating state. Just like the Supervisor cluster, the workload cluster will be updated in a rolling update fashion. 
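For reference, the edited portion of the TanzuKubernetesCluster spec ends up looking roughly like the sketch below. The cluster name matches this post's example, and the short version string is only illustrative; check kubectl get tkr for the releases actually available to you.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-1
spec:
  distribution:
    fullVersion: null   # cleared so it can be re-resolved from the short version below
    version: v1.19      # short form of the target release from kubectl get tkr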
If you have a \u0026ldquo;Production\u0026rdquo; cluster deployed with three control plane nodes, you should see no downtime as the nodes are replaced one by one.\nAfter the deployment is fully complete, you can re-run the process to update to any other versions. You can see that after I did my first update, I have a new column letting me know there is another version i\u0026rsquo;m eligible to upgrade to if I chose.\nSummary Kubernetes clusters are not immune to upgrades but with a rolling update methodology, we can limit or prevent downtime to any applications running on them. vSphere with Tanzu allows you to upgrade Tanzu Kubernetes clusters independently from the Supervisor cluster, but you need to be within two minor versions to be supported. Upgrade your vCenters, then update your Supervisor Namespaces, and lastly you can update the Tanzu Kubernetes clusters.\n","permalink":"https://theithollow.com/2021/05/13/vsphere-7-with-tanzu-updates/","summary":"\u003cp\u003eAt some point, you\u0026rsquo;ll be faced with an upgrade request. New Kubernetes features, new security patches, or just to maintain your support. A vSphere 7 with Tanzu deployment has several components that may need to be updated and most of which can be updated independently of one another. In this post we\u0026rsquo;ll walk through an update to vSphere, then update the Supervisor namespace, and then finally the Tanzu Kubernetes cluster.\u003c/p\u003e","title":"vSphere 7 with Tanzu Updates"},{"content":"I was asked for a post detailing my home A/V setup. So this post will outline the equipment in my office that I use for video conferencing and recording videos.\nDesk One of the best things I did for my office was to switch to a standing desk. I was spending way too much time sitting in a chair and a standing desk helped to alleviate muscle pain. It also kept me in a bit more active mood if that\u0026rsquo;s a thing. After doing a bit of research I decided to try the Terra 2 desk. My biggest concern with a standing desk was hiding cables in a desk with no front. So I added the cable chase which did help. It does block the front of the desk top though so if you want to mount anything like a monitor/camera/light mount beware that with the cable chase on, you can\u0026rsquo;t use a clamp.\nCamera I upgraded my Logitech c920 webcam to a Sony Alpha a6400 camera a few months ago. I just stepped into a new role in Technical Marketing, and expected to be doing webinars, and videos so I invested in a good camera. I use the camera for web conferencing as well, but it was a bit tricky to get working in this fashion.\nTo setup the camera with web conferencing software like Zoom, I needed to purchase an Elgato Camlink adapter. I also found that when I had my video on, I couldn\u0026rsquo;t use my Streamdeck when it was plugged into the second Thunderbolt port on the same side of my Macbook Pro. I assume I was pushing too much data through the bus.\nTeleprompter I have been creating videos for KubeAcademy.com and had a problem when creating videos. We wanted to have our instructors visible in the videos as much as possible. Now I don\u0026rsquo;t have a problem talking to people without a script, but for a professional video, I don\u0026rsquo;t want people to have to listen to my word flubs (and scattered thoughts sometimes, I can admit it). So for a professional video I often read from a script. 
I know that doesn\u0026rsquo;t make it seem like I\u0026rsquo;m sharp enough to do it in my head, but in the end I\u0026rsquo;d rather have a concise message the listener can understand. I also didn\u0026rsquo;t want to look like I wasn\u0026rsquo;t looking at the camera.\nThe answer was a teleprompter that I can use with my smartphone. There are several teleprompters on the market but I went with a Glide Gear.\nThe Sony Camera mounts directly to the teleprompter and my smartphone acts as the prompter. I purchased an iOS app called \u0026quot; Prompt Smart\u0026quot; which lets me type or upload a script and then run the program.\nThe software lets me mirror text (for the teleprompter) and flip horizontal/vertical etc. It also uses the microphone, so that when you are reading the script, it will scroll for you. I will admit that sometimes I have to redo stuff because it scrolled too fast for me, or probably got confused by computer jargon. The software can also just use your phone\u0026rsquo;s front facing camera if you would like.\nGreen Screen I had a background that I had been using for KubeAcademy, but it was painful (for me anyway) to setup and put away. Mainly folding the backdrop up each time was like trying to plug a USB-A cable. It would always take 3 or four tries.\nSo I picked up a new one and while I was doing it, I thought it might as well be a green screen so I can use some backgrounds and things. So I purchased the Elgato Green Screen. It packs away fairly well, its portable and its really easy to setup.\nFlip open the cover and pull it up until it locks. When you\u0026rsquo;re ready to put it away, push it back down into the case. It works very well for my purposes.\nLights My office has pretty poor lighting. A single light fixture in the ceiling that doesn\u0026rsquo;t put out much light. I\u0026rsquo;m usually OK working in a dimly lit room, but when recording videos you need a bit more light. And if you plan to use a green screen, I found you you could use a LOT of light. Initially, I only planned to buy a single light but when I started using my green screen, found out that the shadows or light differences would really affect the ability to overlay backgrounds on my video without bleed.\nSo now I have two Elgato Key Light Professional lights mounted to the sides of my desk. (Remember, that my cable chase prevents mounting anything to the front of the desk.)\nThese lights work great, are simple to adjust and the software can adjust both the temperature and brightness. You can even set it so whatever you set for one light, the other lights will match it.\nStream Deck I was gifted an Elgato Stream Deck and think its pretty neat, but at this point it feels more like a toy than a tool. The Stream deck has lots of cool features but I\u0026rsquo;m barely scratching the surface. Here are a list of some of the hotkeys I\u0026rsquo;ve got setup.\nElgato Lights - Brighten, Dim, On/Off Smart Home - Turn on Smart Home Devices connected to my hubitat hub. Mute/Unmute - Sometimes really necessary on zoom. Luxafor - The luxafor light so that I can set a DND on my office door. Camtasia - I\u0026rsquo;m in the process of setting up some Camtasia hotkeys so I can complete repetitive tasks. Microphone I had done some training videos from Pluralsight a few years back and when I did that, I got a good microphone. I purchased a Rode Podcaster which is a dynamic microphone and I\u0026rsquo;ve been really happy with it. 
I put it on a swing arm so that I can move it around.\nSummary Thats my home setup. It\u0026rsquo;s really nice that I can use all of these tools while in my house and can do it sitting or standing at my desk. I\u0026rsquo;ve never really gotten into audio or visual setups before. It just wasn\u0026rsquo;t something that interested me, but as I\u0026rsquo;ve done more educational work, I wanted to have something I could build quality content with. The last thing someone trying to learn a new concept wants to deal with is a poor audio or video feed. Hopefully I\u0026rsquo;m preventing that from happening.\n","permalink":"https://theithollow.com/2021/04/12/home-audio-visual-setup/","summary":"\u003cp\u003eI was asked for a post detailing my home A/V setup. So this post will outline the equipment in my office that I use for video conferencing and recording videos.\u003c/p\u003e\n\u003ch2 id=\"desk\"\u003eDesk\u003c/h2\u003e\n\u003cp\u003eOne of the best things I did for my office was to switch to a standing desk. I was spending way too much time sitting in a chair and a standing desk helped to alleviate muscle pain. It also kept me in a bit more active mood if that\u0026rsquo;s a thing. After doing a bit of research I decided to try the \u003ca href=\"https://www.xdesk.com/terra-2\"\u003eTerra 2 desk\u003c/a\u003e. My biggest concern with a standing desk was hiding cables in a desk with no front. So I added the cable chase which did help. It does block the front of the desk top though so if you want to mount anything like a monitor/camera/light mount beware that with the cable chase on, you can\u0026rsquo;t use a clamp.\u003c/p\u003e","title":"Home Audio/Visual Setup"},{"content":"Time for an update on the home lab. 2020 meant spending a lot of time at home and there were plenty of opportunities to tinker around with the home lab. I did purchase some new hardware, and did plenty of reconfiguring so here\u0026rsquo;s the 2021 version of my home lab in case anyone is interested.\nRack The rack is custom made and been in use for a while now. My lab sits in the basement on a concrete floor. So I built a wooden set of shelves on casters so I could roll it around if it was in the way. I place the UPS on the shelf so that I can unplug the power to move the lab. As long as I have a long enough Internet cable, I can wheel my lab around for as long as the UPS holds on. On one side I put a whiteboard so I could draw something out if I was stuck. I don\u0026rsquo;t use it that often, but I like that it covers the side of the rack.\nOn the back of the shelves, I added some cable management panels.\nPower As mentioned, I have a UPS powering my lab. It\u0026rsquo;s a CyberPower 1500 AVR. I\u0026rsquo;m currently running around 550 Watts for the lab under normal load. I\u0026rsquo;ve mounted a large power strip along the side and a few small strips on each shelf. I also bought some 6 inch IEC cables which really cuts down the cable clutter behind the lab.\nCompute Last year I bought new servers because I needed more capacity and couldn\u0026rsquo;t run the AES instructions with my old processors. I bought new compute so that I could run the vSphere 7 stack, complete with NSX-T, Tanzu Kubernetes Grid, and anything else you can think of. So I bought three new E200-8d Supermicro servers with a six core Intel processor and 128 GB of memory. 
In 2021 I added two more of these and decommissioned one of my old home built servers.\nFor local storage I use a 64 GB USB drive for the ESXi host OS disk and I added a 1 TB SSD and a 500 GB NVMe drive. These drives are added for capacity and caching tiers for VMware vSAN. There isn\u0026rsquo;t a lot of room for disk drives in this model, but they sure are compact enough to fit on a shelf.\nThese servers have two 10GbE NICs, two 1 GbE NICs, and an IPMI port for out of band management of the server. I wanted to be sure to have a way to power on and off the server, load images into a virtual cd-rom, etc.\nI have one other server built out of spare parts I had lying around. It includes another 6 cores and 128 GB of memory.\nvSphere Clusters are built into three clusters. My main \u0026ldquo;HollowCluster\u0026rdquo; is where most of my machines are built and tested. I have an edge cluster with a single node that is used to run my NSX-T Edge nodes. And finally I have an auxiliary cluster where I run workloads not critical to my infrastructure (Its the spare parts machine).\nStorage For Storage, I have a tiered system. I have an eight bay Synology array used for virtual machines and file stores. Then I have a secondary Synology used as a backup device. Important information on the large Synology is backed up to the smaller one, and then pushed to Amazon S3 once a month for an offsite.\nvSphere Storage Array: Synology DS1815+ 8 TB available of spinning disks with dual 256 GB SSD for Caching File Storage and Backup Array: Synology DS1513+ 3.6 TB available of spinning Disk vSAN has become my place for ephemeral data. I found that when running Kubernetes clusters, my Synology arrays with ssd cache wasn\u0026rsquo;t good enough. I was getting etcd timeouts. (Full disclosure I started using a log aggregation tool as well which was chewing up some IOPS and maybe part of the problem)\nSo I started using vSAN for my Kubernetes clusters. My VSAN disks include 1TB Kingston SSDs per host, and a Western Digital 500GB NVMe SSD for caching.\nNetwork No real network updates this year as far as hardware, but I did make a fair amount of configuration changes. The biggest change I made this year was to create a second wireless LAN and put all of my IoT devices on it. Then, I created a firewall rule to prevent them from accessing my home wireless or lab equipment. This took a fair amount of time to move my Smart Home controller, Amazon Echos, Wemo plugs, TVs, Appliances, Smoke Detectors, Cameras, etc over to a new network, but it helps me sleep at night. Now I don\u0026rsquo;t worry as much about zero days since I\u0026rsquo;ve segmented them away from any actual data.\nIf you want to see the network design, you can take a look at the diagram below.\nI mounted my basement access point, USG and PoE switch to a piece of plywood and mounted it with a patch panel.\nCore Switch: HP v1910-24G Ethernet Switch Wireless Switch: Ubiquiti UniFi 8 POE-150W Storage/vMotion Switch: Netgear XS708E 10 Gigabit That switch was a gift from fellow vExpert Jason Langer /2016/12/19/unbelievable-gift-home-lab/ Wireless Firewall: Ubiquiti UniFi Security Gateway Wireless: Ubiquiti AC Pro Controller: UniFi Cloud Key The cables are colored according to purpose.\nYellow - Management Networks and Out of Band access. Green - Storage and vMotion Networks (10GbE) Blue - Trunk ports for virtual machines Red - Uplinks Cloud I’ve decided to use Amazon as my preferred cloud vendor. Mainly because I’ve done much more work here than on Azure. 
My AWS Accounts are configured in a hub spoke model which mimics a production like environment for customers.\nI use the cloud for backup archival, and just about anything you can think of that my home lab either can’t do or doesn’t have capacity for. I like to use solutions like Route53 for DNS so a lot of times my test workloads still end up in the cloud. Most of the accounts below are empty or have resources that don’t cost money, such as VPCs.\nMy overall monthly spend on AWS is around $35, most of which is spent on the VPN tunnel and some DNS records.\n","permalink":"https://theithollow.com/2021/03/08/2021-home-lab/","summary":"\u003cp\u003eTime for an update on the home lab. 2020 meant spending a lot of time at home and there were plenty of opportunities to tinker around with the home lab. I did purchase some new hardware, and did plenty of reconfiguring so here\u0026rsquo;s the 2021 version of my home lab in case anyone is interested.\u003c/p\u003e\n\u003ch2 id=\"rack\"\u003eRack\u003c/h2\u003e\n\u003cp\u003eThe rack is custom made and been in use for a while now. My lab sits in the basement on a concrete floor. So I built a wooden set of shelves on casters so I could roll it around if it was in the way. I place the UPS on the shelf so that I can unplug the power to move the lab. As long as I have a long enough Internet cable, I can wheel my lab around for as long as the UPS holds on. On one side I put a whiteboard so I could draw something out if I was stuck. I don\u0026rsquo;t use it that often, but I like that it covers the side of the rack.\u003c/p\u003e","title":"2021 Home Lab"},{"content":"Kubernetes clusters can come in many shapes and sizes. Over the past 18 months I\u0026rsquo;ve deployed quite a few Kubernetes clusters for customers but these clusters all have different requirements. What image registry am I connecting to? Do we need to configure proxies? Will we need to install new certificates to the nodes? Do we need to tweak some containerd configurations? During many of my customer engagements the answer to the above questions is, \u0026ldquo;yes\u0026rdquo;.\nThe Problem One of the tricker tasks that comes up is determining how we might re-configure a new Tanzu Kubernetes cluster on vSphere 7 with NSX-T. In a vSphere 7 with Tanzu Guest Cluster each namespace that is created in the supervisor cluster has a new NSX-T T1 router deployed and connected to a northbound T0 router. This router does NAT which prevents us from directly accessing the new control plane nodes and worker nodes directly. Remember that when we connect to these clusters through the kubectl command, we\u0026rsquo;re going through a load balancer which has a NAT\u0026rsquo;d address.\nIf you look at the diagram below, you\u0026rsquo;ll see that in many cases the Orchestrator Tool (Denoted by Jenkins in the diagram) or a user on the corporate network, can\u0026rsquo;t directly access the Kubernetes nodes that were spun up by Tanzu. This make automating any non-default configurations like a custom image registry, a bit more challenging.\nThe Solution A solution that we\u0026rsquo;ve started to use is to have the orchestrator (or Admin user) deploy a PodVM into the supervisor namespace and let it do the work for us. Building on some code from the incomparable William Lam a colleague of mine, Mike Tritabaugh and I built a PodVM to do some of this work for us. The PodVM consists of an init container, and a config container. 
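Structurally, the PodVM is just a pod whose two containers share an ephemeral volume. The sketch below shows the general shape; the image names, namespace, and volume name are placeholders, and the actual build instructions live in the GitHub repository linked at the end of this post.

apiVersion: v1
kind: Pod
metadata:
  name: guest-cluster-config
  namespace: demo-namespace        # placeholder for the supervisor namespace holding the guest cluster
spec:
  restartPolicy: Never
  volumes:
    - name: kubeconfig
      emptyDir: {}                 # ephemeral volume shared by both containers
  initContainers:
    - name: login                  # logs in and writes a KUBECONFIG for the guest cluster context
      image: example-registry/tkc-login:latest   # placeholder image
      volumeMounts:
        - name: kubeconfig
          mountPath: /kubeconfig
  containers:
    - name: config                 # runs the customization scripts against the guest cluster
      image: example-registry/tkc-config:latest  # placeholder image
      env:
        - name: KUBECONFIG
          value: /kubeconfig/config
      volumeMounts:
        - name: kubeconfig
          mountPath: /kubeconfig

The only contract between the two containers is that shared volume: the init container leaves a KUBECONFIG behind, and the config container picks it up through the KUBECONFIG environment variable.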
Lets look at what they do.\nThe init container has a very specific task to run before our cluster configuration begins. The init container based on William\u0026rsquo;s code, performs a login function that builds a KUBECONFIG file with the appropriate cluster context. It stores this KUBECONFIG file in an ephemeral volume for future use by a second container.\nOnce the init container has completed its tasks, the config container can run. The config container runs a script to grab the SSH key with access to the guest cluster nodes, but the rest of the code is really up to you on how you want to build it. It depends on what tasks need to be executed on the guest clusters. Update containerd, copy certificate files over, or really anything you want. The important part is that the config container mounts the ephemeral volume used by the init container. Now the config scripts you build can leverage the KUBECONFIG file that has a token length of 10 hours, or SSH into the nodes to make configuration changes. Here\u0026rsquo;s a view of the PodVM to help explain.\nNow that we have a new tool to use, our Orchestrator or Administrator can indirectly update the clusters by deploying our pod to the supervisor namespace which contains our guest clusters. We can login with a temporary (10 hours) login to the guest cluster and perform whatever configuration tasks we desire.\nIf you\u0026rsquo;d like to use this solution to configure your guest clusters, please take a look at the github repository for build instructions. Good luck with your coding.\n","permalink":"https://theithollow.com/2021/02/01/customize-vsphere-7-with-tanzu-guest-clusters/","summary":"\u003cp\u003eKubernetes clusters can come in many shapes and sizes. Over the past 18 months I\u0026rsquo;ve deployed quite a few Kubernetes clusters for customers but these clusters all have different requirements. What image registry am I connecting to? Do we need to configure proxies? Will we need to install new certificates to the nodes? Do we need to tweak some containerd configurations? During many of my customer engagements the answer to the above questions is, \u0026ldquo;yes\u0026rdquo;.\u003c/p\u003e","title":"Customize vSphere 7 with Tanzu Guest Clusters"},{"content":"Your Kubernetes clusters are up and running on vSphere 7 with Tanzu and you can\u0026rsquo;t wait to get started on your first project. But before you get to that, you might want to enable the Harbor registry so that you can privately store your own container images and use them with your clusters. Luckily, in vSphere 7 with Tanzu, the Harbor project has been integrated into the solution. You just have to turn it on and set it up.\nNOTE: This article takes advantage of the updates in vSphere 7 U1c. Prior to the C release, the kubernetes clusters didn\u0026rsquo;t automatically trust the harbor certificate. It\u0026rsquo;s still possible to use it, but takes additional configuration.\nIf you\u0026rsquo;re not familiar with what a container image registry is, in simple terms, its where you store your container images after you\u0026rsquo;ve built them. A very common and public registry is docker hub. Its pretty common to grab some public images like ubuntu, nginx, alpine, redis, etc right off of docker hub. You should, however, take care when using container images from a public repo because you aren\u0026rsquo;t really sure what code has been put into that container. Perhaps it is malicious. This is where a private image registry comes in. 
Place your containers with your corporate logic in these private registries, hopefully with high bandwidth connections to your clusters for faster downloads.\nEnable the Registry Before you enable the harbor registry, you\u0026rsquo;ll need to have a vSphere 7 with Tanzu Supervisor cluster deployed. This series will help if you haven\u0026rsquo;t done this already.\nTo enable the Harbor Registry, select the vSphere Virtual Machine cluster running your Supervisor cluster. Under the configure tab select Image Registry and click Enable Harbor.\nAfter you do this, you\u0026rsquo;ll need to specify a storage policy to decide what datastore the Harbor registry service will live within. Select your storage policy and click OK.\nThat was pretty easy huh? Well your next step is to wait for the registry to be enabled fully. If you\u0026rsquo;ve logged in as the administrator@vsphere.local user, you\u0026rsquo;ll probably see some new virtual machines being deployed in your cluster. Otherwise, these may be hidden from you.\nAlso, when Harbor has been deployed, you\u0026rsquo;ll get some information in the configure tab that shows the URL and the storage space used for the harbor registry.\nYou can click the link to the Harbor UI and login with your vSphere credentials. You\u0026rsquo;ll notice that in my instance, there is already a project named utility. Thats because I have a namespace in my supervisor cluster called utility. You\u0026rsquo;ll have one project for each namespace within the supervisor cluster.\nUse Harbor with Supervisor Cluster At this point we could start using the harbor registry for our supervisor cluster. I\u0026rsquo;ve already got a container image on my local workstation that I\u0026rsquo;ve built and plan to push it to our new harbor registry.\nBefore we can push anything, we need to login to the harbor registry from our workstation with a docker login command. Then login with your vSphere credentials.\nNOTE: If you don\u0026rsquo;t have the root certificate installed, you won\u0026rsquo;t be able to login with this method. To install the root certificiate, you can download the certificate from the harbor configuration page. You\u0026rsquo;ll need to place that certificate into your trusted store on the client you\u0026rsquo;re using. For Mac OS you can run the following commands:\nsecurity add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychain ca.crt When done, you need to restart the docker service.\nOnce you\u0026rsquo;re logged into the registry, you can push your own image. In the screenshot below, I\u0026rsquo;ve tagged my image and pushed it to the repository.\nAfter its been pushed, you\u0026rsquo;ll see the repository listed under your namespace in the harbor UI.\nAnd lastly, login to your supervisor cluster within the namespace configured, and try to pull down the image and run it. You can see the image was pulled to create a new PodVM in the Supervisor cluster.\nUse Harbor with TKG Clusters Our first example was neat, but many customers won\u0026rsquo;t run pods directly on the Supervisor cluster. If you\u0026rsquo;re building Tanzu Kubernetes Grid Clusters (referred to as TKCs, guest clusters or child clusters) you will need to take a couple of extra steps. 
Namely, you\u0026rsquo;ll need to obtain and deploy an image pull secret for the harbor registry on any of the child clusters.\nTo obtain the image pull secret, login to the Supervisor namespace.\nkubectl get secret -n [vsphere-namespace] [namespace]-default-image-pull-secret -o yaml \u0026gt; registrysecret.yaml Edit that YAML file to remove the namespace entry. You\u0026rsquo;ll deploy this secret into the TKC namespace that will be pulling images. In my example that is the default namespace so I\u0026rsquo;ve removed the namespace: utility entry in my registrysecret.yaml file.\nNext, login to your TKC cluster that was deployed within the same supervisor namespace and apply the secret.\nkubectl apply -f registrysecret.yaml Once your secret has been deployed to the Tanzu Kubernetes Cluster [Child/Workload Cluster] you can add that image pull secret to your YAML manifests to start using the harbor registry for your containers.\n... spec: containers: - name: private-reg-container image: imagePullSecrets: - name: [registrysecretname] ... Summary Now that the vSphere 7.0 U1c patch has been released, you can start using the embedded Harbor registry with both your Supervisor Cluster as well as any Tanzu Kubernetes Clusters you may have provisioned. You can securely store your images here which will be very close to your workload clusters so you can expect quick downloads when new images are called for by your Kubernetes pods.\n","permalink":"https://theithollow.com/2021/01/04/enable-the-harbor-registry-on-vsphere-7-with-tanzu/","summary":"\u003cp\u003eYour Kubernetes clusters are up and running on vSphere 7 with Tanzu and you can\u0026rsquo;t wait to get started on your first project. But before you get to that, you might want to enable the Harbor registry so that you can privately store your own container images and use them with your clusters. Luckily, in vSphere 7 with Tanzu, the Harbor project has been integrated into the solution. You just have to turn it on and set it up.\u003c/p\u003e","title":"Enable the Harbor Registry on vSphere 7 with Tanzu"},{"content":"There is new functionality included in VMware Tanzu Mission Control (TMC) that I\u0026rsquo;m pretty excited about. After the recent update, you can now register your vSphere with Tanzu Supervisor cluster with TMC and then begin provisioning workload clusters.\nBefore you can provision clusters, you\u0026rsquo;ll need to register your TKG Supervisor cluster to TMC. Those procedures require you to apply and update some YAML which you can find here.\nDeploy a vSphere TKG Cluster through TMC Login to your TMC account and go to the Clusters tab. Click the CREATE CLUSTER button where you\u0026rsquo;ll now see a drop down. Select Tanzu Kubernetes Grid Service on vSphere 7.\nSelect the Supervisor cluster from the drop down list. This list is populated with the registration instructions referred to earlier in this post. Then select a provisioner. This is a namespace within the Supervisor cluster where your TKG cluster objects will live.\nNotice the Provisioner in TMC matches my Namespace in vCenter\u0026rsquo;s Workload Management Tab.\nNow define your Cluster details. Names, descriptions, and labels. But also select a TMC cluster group. Cluster Groups often have TMC policies for enforcing controls across groups of clusters. 
Using groups and policies, is a simple way to apply security or governance policies to fleets of clusters.\nThe next screen asks some cluster configuration questions such as the pod/service CIDRs, and what storage classes to use. Those storage classes come from your configured supervisor cluster and are presented as a drop down.\nSelect the level of availability:\nSingle node: 1 Control Plane Node (could be many workers) Highly available: 3 Control Plane Nodes (could be many workers) You\u0026rsquo;ll also need to select a storageclass and size for the control plane nodes.\nOn the final screen, you can create a node pool which defines your worker nodes. You can select the storage class, quantity and size of the worker nodes. Click the CREATE CLUSTER button.\nAfter the process beings you\u0026rsquo;ll get status reports about the stages from the TMC console.\nWhen you complete the deployment, the policies attached to your cluster group should begin being enforced, and data will start being collected.\nThe steps above can be achieve through the following command line command.\ntmc cluster create --management-cluster-name hollow-supervisor --provisioner-name utility --cluster-group shanks-group --high-availability --name hollow-tmc1 --version v1.18.5+vmware.1-tkg.1.c40d30d --template tkgs --storage-class tanzu-storage --worker-instance-type best-effort-xsmall --instance-type best-effort-xsmall Summary Tanzu Mission Control might become a major part of managing the lifecycles of Kubernetes clusters, especially if those customers are already vSphere customers. TMC can now provision clusters on both AWS and vSphere 7 with Tanzu.\n","permalink":"https://theithollow.com/2020/12/14/deploy-vsphere-tkg-clusters-through-mission-control/","summary":"\u003cp\u003eThere is new functionality included in VMware Tanzu Mission Control (TMC) that I\u0026rsquo;m pretty excited about. After the recent update, you can now register your vSphere with Tanzu Supervisor cluster with TMC and then begin provisioning workload clusters.\u003c/p\u003e\n\u003cp\u003eBefore you can provision clusters, you\u0026rsquo;ll need to register your TKG Supervisor cluster to TMC. Those procedures require you to apply and update some YAML which you can find \u003ca href=\"https://kb.vmware.com/s/article/80727\"\u003ehere\u003c/a\u003e.\u003c/p\u003e\n\u003ch2 id=\"deploy-a-vsphere-tkg-cluster-through-tmc\"\u003eDeploy a vSphere TKG Cluster through TMC\u003c/h2\u003e\n\u003cp\u003eLogin to your TMC account and go to the \u003ccode\u003eClusters\u003c/code\u003e tab. Click the \u003ccode\u003eCREATE CLUSTER\u003c/code\u003e button where you\u0026rsquo;ll now see a drop down. Select \u003ccode\u003eTanzu Kubernetes Grid Service on vSphere 7\u003c/code\u003e.\u003c/p\u003e","title":"Deploy vSphere TKG Clusters Through Mission Control"},{"content":"Have you ever missed when trying to properly size an Kubernetes environment? Maybe the requirements changed, maybe there were wrong assumptions, or maybe the project took off and it just needs more resources. Under normal circumstances, I might suggest to you to build a new Tanzu Kubernetes Grid (TKG) cluster and re-deploy your apps. Unfortunately, as much as I want to treat Kubernetes clusters as ephemeral, they can\u0026rsquo;t always be treated this way. 
If you need to resize your TKG nodes without re-deploying a new cluster, then keep reading.\nTanzu Kubernetes Grid is built atop of the ClusterAPI project, and as such, we can use the details about how ClusterAPI provisions our clusters to update them.\nMy favorite ClusterAPI reference diagram to visually understand the components of ClusterAPI can be found on Chip Zoller\u0026rsquo;s site at Neon Mirrors. Mr. Zoller visualizes the cluster object dependencies and we will use these to update a running cluster with no down time.\nclusterctl workload cluster manifest reference\nFrom the diagram, we can see that there is a MachineDeployment Object. The machine deployment defines how our nodes are configured. The machine deployments reference a KubeadmConfigTemplate which defines how nodes will join a Kubernetes cluster. It also references a MachineTemplate that defines how the nodes are deployed on a cloud. In the diagram from neonmirrors.net you see a \u0026ldquo;vSphereMachineTemplate\u0026rdquo; object, but for this example, we\u0026rsquo;ll use the AWSMachineTemplate object.\nModify Node Settings Now, that you have some background on the TKG objects, we can try out modifying the configuration. First, lets take a look at the objects we\u0026rsquo;re working with. To get access to these objects, you\u0026rsquo;ll want to set your KUBECONFIG context to the management cluster responsible for your workload clusters. Once you set your context you can run:\nkubectl get machines\nYou can see that in my lab I have a six node Kubernetes cluster. I plan to update one of the workload nodes resources. Notice in the name of the nodes there are some with a md in them. The md stands for \u0026ldquo;machine deployment\u0026rdquo; and these are our workload nodes. Lets pick md-0 as a node to update.\nLets look for the machine deployment object.\nkubectl get machinedeployments\nNotice that I have three different machine deployments. Each machine deployment could consist of multiple machines, but we have three in this case because each AWS Availability zone gets their own machine deployments. Lets check out the md-0 machine deployment in further detail.\nkubectl get machinedeployment tanzuworkloads1-md-0 -o yaml\nThe snippet below shows how the machine deployment reference both a Kubeadmconfig and an AWSMachineTemplate.\nYou can see that our machinedeployment is referencing a tanzuworkloads-md-0 AWSMachineTemplate. Lets go take a look at those.\nkubectl get awsmachinetemplates\nNotice that we have a template for the control plane, as well as a template for each of our availability zones. NOTE: You can create more templates for special use cases such as high memory nodes, high compute nodes or GPUs, etc.\nLets dive deeper and look at the awsmachinetemplate for md-0.\nkubectl get awsmachinetemplates tanzuworkloads-md-0 -o yaml\nAh ha! We now see where the AWS instance size and root volume size are located. Under normal circumstances I\u0026rsquo;d tell you to just edit this manifest and you\u0026rsquo;re all set. However these template are supposed to be immutable, so what we\u0026rsquo;re going to do is copy this template to a file, make our changes and apply it with a new name.\nTo copy the template to a file, you can use the same command from above out to a file.\nkubectl get awsmachinetemplates tanzuworkloads-md-0 -o yaml \u0026gt; myfile.yaml\nAfter your file is written to your workstation, edit the file to make the changes you want to the machines and give it a new name. 
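If you would rather script that cleanup than hand-edit the manifest, a tool like yq can pull the template, strip the status section, and rename it in one pass. Here is a rough sketch (this assumes yq v4 is installed; the new template name matches the example file shown next):

kubectl get awsmachinetemplates tanzuworkloads-md-0 -o yaml | \
  yq eval 'del(.status) | .metadata.name = "tanzuworkloads-md-0-new"' - > myfile.yaml

Hand editing works just as well; the important part is that the manifest you apply has a new name so it is treated as a brand new, immutable template.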
I\u0026rsquo;ve also removed the status fields and have posted my full file below. For my environment i changed the instancetype to t3.xlarge and a root volume of 100 GB.\napiVersion: infrastructure.cluster.x-k8s.io/v1alpha3 kind: AWSMachineTemplate metadata: generation: 1 managedFields: - apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3 manager: manager name: tanzuworkloads-md-0-new namespace: default ownerReferences: - apiVersion: cluster.x-k8s.io/v1alpha3 kind: Cluster name: tanzuworkloads uid: 89f5020a-2cf6-4c0b-be15-1c9d9c9deddb spec: template: spec: ami: id: ami-058759e4c2532dc14 iamInstanceProfile: nodes.tkg.cloud.vmware.com instanceType: t3.xlarge rootVolume: size: 100 sshKeyName: vmc-cna-admin After your modifications are made, you can apply the configuration to the cluster. No updates will happen to your workload cluster at this point.\nkubectl apply -f myfile.yaml\nkubectl get awsmachinetemplates\nThe last step for us to update our nodes, is to modify our existing machinedeployment object to point at our new awsmachinetemplate.\nkubectl edit machinedeployment tanzuworkloads-md-0\nuse vim to edit the configuration and save the config.\nNote: to enter insert mode press i and to exit vim use the command :wq\nOnce you save the configuration, you should take a look at your AWS instances. With any luck, a new instance is being provisioned with your correct settings.\nYou can also verify this by running a get on your machine deployments.\nkubectl get machinedeployments\nIf you watch this operation in full, you\u0026rsquo;ll see that new nodes will be provisioned and joined to your cluster. Once they are healthy, tkg will remove the old node and effectively replaced it. This same process can be used for upgrades.\nSummary Sometimes you need to resize an existing Tanzu Kubernetes Grid cluster without disrupting the containers that are already running. You can do this by making a new MachineTemplate from the existing one, and modifying the machinedeployment to use the new template. Tanzu Kubernetes Grid will then do a rolling update to the nodes in the cluster.\n","permalink":"https://theithollow.com/2020/12/09/resizing-tanzu-kubernetes-grid-cluster-nodes/","summary":"\u003cp\u003eHave you ever missed when trying to properly size an Kubernetes environment? Maybe the requirements changed, maybe there were wrong assumptions, or maybe the project took off and it just needs more resources. Under normal circumstances, I might suggest to you to build a new Tanzu Kubernetes Grid (TKG) cluster and re-deploy your apps. Unfortunately, as much as I want to treat Kubernetes clusters as ephemeral, they can\u0026rsquo;t always be treated this way. If you need to resize your TKG nodes without re-deploying a new cluster, then keep reading.\u003c/p\u003e","title":"Resizing Tanzu Kubernetes Grid Cluster Nodes"},{"content":"You\u0026rsquo;ve deployed your first Tanzu Kubernetes Grid Clusters in vSphere 7 and are beyond the learning phase. You\u0026rsquo;re now ready to start automating your Kubernetes cluster builds, and application deployments.\nTypically you\u0026rsquo;d login to your TKG clusters through the kubectl cli with a command like:\nkubectl vsphere login ...\nNormally, you\u0026rsquo;d be right, but that command requires an interactive login, meaning for you to wait for a second prompt to enter a password. 
The current version of the vSphere plugin doesn\u0026rsquo;t have an option for non-interactive logins so we need to get creative until this feature is added.\nFirst, take a look at an existing KUBECONFIG file that was created for you when you ran the kubectl vsphere login ... command as describe in the previous post. This KUBECONFIG file is most likely stored in a hidden directory named .kube within your user profile. The default file is named config but you can change this by setting your own environment variable named $KUBECONFIG.\nBelow, I\u0026rsquo;ve run my login so you can see it. Note the interactive login where I was prompted for a password.\nAfter which I was able to look at my KUBECONFIG which was the default seen below.\nIf we take a look at that config file, we see that we\u0026rsquo;ve got our supervisor cluster configuration information listed here. The name, IP Addresses, certificates, etc. The only real problem here is the token. The JWT token expires every 10 hours by default. This means, that this KUBECONFIG file is going to be useless in 10 hours.\nLuckily, we can still get a new token without using the kubectl vsphere login ... process. We can call the API directly by using a simple curl command.\nIn the command below, you\u0026rsquo;ll want to enter your own username/password and supervisor cluster URL. For the command to run you\u0026rsquo;ll also need to have curl and jq installed.\ncurl -XPOST -u administrator@vsphere.local:\u0026#39;PASSWORD\u0026#39; https://sup.hollow.local/wcp/login -H \u0026#34;Content-Type: application/json\u0026#34; | jq -r .session_id With your new token, you can either paste it into your existing kubeconfig file and use it for another 10 hours, or you can pass along the --token flag on your kubernetes commands as seen below.\nkubectl get nodes --token=$(curl -XPOST -u administrator@vsphere.local:\u0026#39;PASSWORD\u0026#39; https://sup.hollow.local/wcp/login -k -H \u0026#34;Content-Type: application/json\u0026#34; | jq -r .session_id) One other thing to mention, if you need to do a similar task for your TKG workload clusters, you can also add the following to your curl command.\n-d \u0026#39;{\u0026#34;guest_cluster_name\u0026#34;:\u0026#34;myguestcluster\u0026#34;}\u0026#39; The full code being:\ncurl -XPOST -u administrator@vsphere.local:\u0026#39;PASSWORD\u0026#39; https://sup.hollow.local/wcp/login -k -d \u0026#39;{\u0026#34;guest_cluster_name\u0026#34;:\u0026#34;myguestcluster\u0026#34;}\u0026#39; -H \u0026#34;Content-Type: application/json\u0026#34; Summary I imagine this post won\u0026rsquo;t be needed very long since the vsphere plugin will likely have non-interactive logins in the future, but until then you can query the Kubernetes API directly to obtain a new token for your automation needs. I should also mention that its possible to create a new KUBECONFIG file for a Kubernetes Service Account if you don\u0026rsquo;t care about your JWT token expiring.\n","permalink":"https://theithollow.com/2020/12/01/non-interactive-logins-to-vsphere-7-with-tanzu-clusters/","summary":"\u003cp\u003eYou\u0026rsquo;ve deployed your first Tanzu Kubernetes Grid Clusters in vSphere 7 and are beyond the learning phase. 
You\u0026rsquo;re now ready to start automating your Kubernetes cluster builds, and application deployments.\u003c/p\u003e\n\u003cp\u003eTypically you\u0026rsquo;d login to your TKG clusters through the \u003ccode\u003ekubectl\u003c/code\u003e cli with a command like:\u003c/p\u003e\n\u003cp\u003e\u003ccode\u003ekubectl vsphere login ...\u003c/code\u003e\u003c/p\u003e\n\u003cp\u003eNormally, you\u0026rsquo;d be right, but that command requires an interactive login, meaning for you to wait for a second prompt to enter a password. The current version of the vSphere plugin doesn\u0026rsquo;t have an option for non-interactive logins so we need to get creative until this feature is added.\u003c/p\u003e","title":"Non-Interactive Logins to vSphere 7 with Tanzu Clusters"},{"content":"If you\u0026rsquo;ve worked with Kubernetes for very long, you\u0026rsquo;ve surely run into a need to manage YAML files. There are a bunch of options out there with their own benefits and drawbacks. One of these tools is called ytt and comes as part of the Carvel tools (formerly k14s).\nIf you\u0026rsquo;re working with the Tanzu Kubernetes Grid product from VMware, you\u0026rsquo;re likely to be using ytt to mange your TKG YAML manifests. This post aims to help you get started with using ytt for your own customizations.\nHow Does YTT Work? Before we get too far, you\u0026rsquo;re going to need to know some basics about how ytt works. There are a ton of things ytt can do for you to update yaml manifests so we\u0026rsquo;ll only cover a few that are commonly used by TKG to get started. If you want to learn more, please check out the Carvel tools page and their examples which you can play with on your own.\nOverlays One of the main things that TKG uses ytt for is the use of overlays. An overlay file describes how another YAML manifests should be altered. In this way, we can take a normal YAML manifest such as a Kubernetes Deployment file, and make changes to the \u0026ldquo;base\u0026rdquo; template with whats in the overlay template. TKG uses overlays to alter the default TKG deployment files. This can be really handy to alter your TKG deployments to include your own customizations like adding web proxies, or custom registries which we\u0026rsquo;ll see later.\nHere\u0026rsquo;s our first example of a basic substitutions using non tech files. The file below is the \u0026ldquo;BASE\u0026rdquo; template which we\u0026rsquo;ll alter with our overlay files.\n--- pizzatype: thin --- apiVersion: v1 kind: meal metadata: time: morning spec: meal: breakfast items: drinks: coffee food: pancakes --- apiVersion: v2 kind: meal metadata: time: noon spec: meal: lunch items: drinks: tea food: hotdog condiments: - ketchup We\u0026rsquo;ll also create a values file which will be used to store some custom variables that we might want to reference later on. That values file can be seen below.\n#@data/values --- pizza: deep dish breakfast_side: bacon breakfast_food: waffles breakfast_drinks: coffee condiments: #@ [\u0026#34;mustard\u0026#34;, \u0026#34;onions\u0026#34;, \u0026#34;pickle spear\u0026#34;, \u0026#34;tomatoes\u0026#34;, \u0026#34;celery salt\u0026#34;, \u0026#34;relish\u0026#34;] Our first job will be to create an overlay file that can change the pizzatype entry in the base template. To do this we\u0026rsquo;ll create our first overlay file where the goal will be to replace the pizzatype value of thin to the correct value of Deep Dish.\nHere is our first overlay file. 
Lets take a look at some of the entries here for our first file.\nLine1 - Used to load the overlay modules for ytt Line2 - Used to load the data values from our variables file Line3 - Used to find a section in our base template. In this case looking for a key/value pair of pizzatype:thin Line5 - Command to replace the values found in the line below Line6 - What we want the pizzatype value to be and it retrieves that value from the values file. The value in that file is deep dish #@ load(\u0026#34;@ytt:overlay\u0026#34;, \u0026#34;overlay\u0026#34;) #@ load(\u0026#34;@ytt:data\u0026#34;, \u0026#34;data\u0026#34;) #@overlay/match by=overlay.subset({\u0026#34;pizzatype\u0026#34;: \u0026#34;thin\u0026#34;}) --- #@overlay/replace pizzatype: #@ data.values.pizza Lets look at the results of running these files through ytt. I\u0026rsquo;ve installed the ytt binary and ran ytt -f values.yaml -f overlay.yaml -f meals.yaml\nAs you can see from the results below the key pizzatype has a new value from the base template. Now its properly set to deep dish\npizzatype: deep dish --- apiVersion: v1 kind: meal metadata: time: morning spec: meal: breakfast items: drinks: coffee food: pancakes --- apiVersion: v2 kind: meal metadata: time: noon spec: meal: lunch items: drinks: tea food: hotdog condiments: - ketchup Lets keep going. Next, we\u0026rsquo;ll take a look at the breakfast section which looks more like a Kubernetes YAML structure just to get familiar with it. In this case, we need to do a couple of things. First, we\u0026rsquo;ll replace pancakes with waffles and we\u0026rsquo;ll add a side of bacon.\nLet\u0026rsquo;s update the existing overlay file so it looks like the code below.\n#@ load(\u0026#34;@ytt:overlay\u0026#34;, \u0026#34;overlay\u0026#34;) #@ load(\u0026#34;@ytt:data\u0026#34;, \u0026#34;data\u0026#34;) #@overlay/match by=overlay.subset({\u0026#34;pizzatype\u0026#34;: \u0026#34;thin\u0026#34;}) --- #@overlay/replace pizzatype: #@ data.values.pizza #@overlay/match by=overlay.subset({\u0026#34;spec\u0026#34;:{\u0026#34;meal\u0026#34;:\u0026#34;breakfast\u0026#34;}}) --- spec: items: #@overlay/match missing_ok=True side: #@ data.values.breakfast_side food: waffles drinks: coffee Notice that in the overlay file above, we have a new match condition where we\u0026rsquo;re looking for {\u0026quot;spec\u0026quot;:{\u0026quot;meal\u0026quot;:\u0026quot;breakfast\u0026quot;}} in our base template. If ytt finds this section we want to update the base template with our new values. You can see that food should be waffles and a new item named side is added and the values come from the values file. Also notice that food was hard coded and didn\u0026rsquo;t come from any other value files.\nThe result of running the ytt commands ( ytt -f values.yaml -f overlay.yaml -f meals.yaml) are:\npizzatype: deep dish --- apiVersion: v1 kind: meal metadata: time: morning spec: meal: breakfast items: drinks: coffee food: waffles side: bacon --- apiVersion: v2 kind: meal metadata: time: noon spec: meal: lunch items: drinks: tea food: hotdog condiments: - ketchup And one last example, where we\u0026rsquo;ll update lunch. The lunch items from our base template have food: hotdog which is fine Chicago food, but the condiments are clearly the wrong values. 
In this case we\u0026rsquo;ll update our overlay file to pass a list of values for condiments.\nHere is our updated overlay file.\n#@ load(\u0026#34;@ytt:overlay\u0026#34;, \u0026#34;overlay\u0026#34;) #@ load(\u0026#34;@ytt:data\u0026#34;, \u0026#34;data\u0026#34;) #@overlay/match by=overlay.subset({\u0026#34;pizzatype\u0026#34;: \u0026#34;thin\u0026#34;}) --- #@overlay/replace pizzatype: #@ data.values.pizza #@overlay/match by=overlay.subset({\u0026#34;spec\u0026#34;:{\u0026#34;meal\u0026#34;:\u0026#34;breakfast\u0026#34;}}) --- spec: items: #@overlay/match missing_ok=True side: #@ data.values.breakfast_side food: waffles drinks: coffee #@overlay/match by=overlay.subset({\u0026#34;spec\u0026#34;:{\u0026#34;items\u0026#34;:{\u0026#34;food\u0026#34;:\u0026#34;hotdog\u0026#34;}}}), expects=\u0026#34;0+\u0026#34; --- spec: items: food: hotdog #@overlay/replace condiments: #@ data.values.condiments In this case the overlay has a new matching object {\u0026quot;spec\u0026quot;:{\u0026quot;items\u0026quot;:{\u0026quot;food\u0026quot;:\u0026quot;hotdog\u0026quot;}}} which says that our overlay commands will effect items where food = hotdog, but nothing else. You\u0026rsquo;ll notice here that there is a , expects=\u0026quot;0+\u0026quot; command which states that it won\u0026rsquo;t error out if it can\u0026rsquo;t find hotdog. Think about it, this routine isn\u0026rsquo;t needed if the food type is a hamburger or something. We only want to update if the food item is a hotdog. Then we can apply the appropriate condiments.\npizzatype: deep dish --- apiVersion: v1 kind: meal metadata: time: morning spec: meal: breakfast items: drinks: coffee food: waffles side: bacon --- apiVersion: v2 kind: meal metadata: time: noon spec: meal: lunch items: drinks: tea food: hotdog condiments: - mustard - onions - pickle spear - tomatoes - celery salt - relish Tanzu Kubernetes Grid Customizations Now that you\u0026rsquo;ve had a crash course in ytt you can use some of your new skills, and some additional help from the Carvel tools site, to customize a TKG deployment. This has been written about before such as this post from Chip Zoller when he used ytt to add a custom image registry. Below are some additional customizations that I\u0026rsquo;ve commonly built for customers, and you may find useful when deploying your Tanzu Kubernetes Grid clusters within your own environment.\nCustom Image Registry Settings The file below was created in the providers/infrastucture-[cloud]/ytt folder to specify a custom image registry for the containerd daemon. 
You would need to replace your own image registry settings within the file.\n#@ load(\u0026#34;@ytt:overlay\u0026#34;, \u0026#34;overlay\u0026#34;) #@overlay/match by=overlay.subset({\u0026#34;kind\u0026#34;:\u0026#34;KubeadmControlPlane\u0026#34;}), expects=\u0026#34;1+\u0026#34; --- apiVersion: controlplane.cluster.x-k8s.io/v1alpha3 kind: KubeadmControlPlane spec: kubeadmConfigSpec: #@overlay/match missing_ok=True files: - path: /etc/containerd/config.toml content: | version = 2 [plugins] [plugins.\u0026#34;io.containerd.grpc.v1.cri\u0026#34;] sandbox_image = \u0026#34;registry.tkg.vmware.run/pause:3.2\u0026#34; [plugins.\u0026#34;io.containerd.grpc.v1.cri\u0026#34;.registry] [plugins.\u0026#34;io.containerd.grpc.v1.cri\u0026#34;.registry.mirrors] [plugins.\u0026#34;io.containerd.grpc.v1.cri\u0026#34;.registry.mirrors.\u0026#34;harbor.hollow.local\u0026#34;] endpoint = [\u0026#34;http://harbor.hollow.local\u0026#34;] #@ load(\u0026#34;@ytt:overlay\u0026#34;, \u0026#34;overlay\u0026#34;) #@overlay/match by=overlay.subset({\u0026#34;kind\u0026#34;:\u0026#34;KubeadmConfigTemplate\u0026#34;}), expects=\u0026#34;1+\u0026#34; --- apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3 kind: KubeadmConfigTemplate spec: template: spec: #@overlay/match missing_ok=True files: - path: /etc/containerd/config.toml content: | version = 2 [plugins] [plugins.\u0026#34;io.containerd.grpc.v1.cri\u0026#34;] sandbox_image = \u0026#34;registry.tkg.vmware.run/pause:3.2\u0026#34; [plugins.\u0026#34;io.containerd.grpc.v1.cri\u0026#34;.registry] [plugins.\u0026#34;io.containerd.grpc.v1.cri\u0026#34;.registry.mirrors] [plugins.\u0026#34;io.containerd.grpc.v1.cri\u0026#34;.registry.mirrors.\u0026#34;harbor.hollow.local\u0026#34;] endpoint = [\u0026#34;http://harbor.hollow.local\u0026#34;] Set Web Proxies for TKG Control Plane Nodes The file below was created in the providers/infrastructure-[cloud]/ytt folder. You would need to update the proxy and no_proxy rules for your own environment.\n#@ load(\u0026#34;@ytt:overlay\u0026#34;, \u0026#34;overlay\u0026#34;) #@overlay/match by=overlay.subset({\u0026#34;kind\u0026#34;:\u0026#34;KubeadmControlPlane\u0026#34;}), expects=\u0026#34;1+\u0026#34; --- apiVersion: controlplane.cluster.x-k8s.io/v1alpha3 kind: KubeadmControlPlane spec: kubeadmConfigSpec: #@overlay/match missing_ok=True preKubeadmCommands: #! 
Add HTTP_PROXY to containerd configuration file - echo \u0026#39;[Service]\u0026#39; \u0026gt; /etc/systemd/system/containerd.service.d/http-proxy.conf - echo \u0026#39;Environment=\u0026#34;HTTP_PROXY=http://10.0.4.168:3128\u0026#34;\u0026#39; \u0026gt;\u0026gt; /etc/systemd/system/containerd.service.d/http-proxy.conf - echo \u0026#39;Environment=\u0026#34;NO_PROXY=.hollow.local,169.254.169.254,localhost,127.0.0.1,kubernetes.default.svc,.svc,.amazonaws.com,10.0.0.0/8,10.96.0.0/12,100.96.0.0/11\u0026#34;\u0026#39; \u0026gt;\u0026gt; /etc/systemd/system/containerd.service.d/http-proxy.conf - echo \u0026#39;PROXY_ENABLED=\u0026#34;yes\u0026#34;\u0026#39; \u0026gt; /etc/sysconfig/proxy - echo \u0026#39;HTTP_PROXY=\u0026#34;http://10.0.4.168:3128\u0026#34;\u0026#39; \u0026gt;\u0026gt; /etc/sysconfig/proxy - echo \u0026#39;NO_PROXY=\u0026#34;.hollow.local,169.254.169.254,localhost,127.0.0.1,kubernetes.default.svc,.svc,.amazonaws.com,10.0.0.0/8,10.96.0.0/12,100.96.0.0/11\u0026#34;\u0026#39; \u0026gt;\u0026gt; /etc/sysconfig/proxy - systemctl daemon-reload - systemctl restart containerd Setting Web Proxies for TKG Worker Nodes The file below was created in the providers/infrastructure-[cloud]/ytt folder. You would need to update the proxy and no_proxy rules for your own environment.\n#@ load(\u0026#34;@ytt:overlay\u0026#34;, \u0026#34;overlay\u0026#34;) #@overlay/match by=overlay.subset({\u0026#34;kind\u0026#34;:\u0026#34;KubeadmConfigTemplate\u0026#34;}), expects=\u0026#34;1+\u0026#34; --- apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3 kind: KubeadmConfigTemplate spec: template: spec: #@overlay/match missing_ok=True preKubeadmCommands: #! Add HTTP_PROXY to containerd configuration file - echo \u0026#39;[Service]\u0026#39; \u0026gt; /etc/systemd/system/containerd.service.d/http-proxy.conf - echo \u0026#39;Environment=\u0026#34;HTTP_PROXY=http://10.0.4.168:3128\u0026#34;\u0026#39; \u0026gt;\u0026gt; /etc/systemd/system/containerd.service.d/http-proxy.conf - echo \u0026#39;Environment=\u0026#34;NO_PROXY=.hollow.local,169.254.169.254,localhost,127.0.0.1,kubernetes.default.svc,.svc,.amazonaws.com,10.0.0.0/8,10.96.0.0/12,100.96.0.0/11\u0026#34;\u0026#39; \u0026gt;\u0026gt; /etc/systemd/system/containerd.service.d/http-proxy.conf - echo \u0026#39;PROXY_ENABLED=\u0026#34;yes\u0026#34;\u0026#39; \u0026gt; /etc/sysconfig/proxy - echo \u0026#39;HTTP_PROXY=\u0026#34;http://10.0.4.168:3128\u0026#34;\u0026#39; \u0026gt;\u0026gt; /etc/sysconfig/proxy - echo \u0026#39;NO_PROXY=\u0026#34;.hollow.local,169.254.169.254,localhost,127.0.0.1,kubernetes.default.svc,.svc,.amazonaws.com,10.0.0.0/8,10.96.0.0/12,100.96.0.0/11\u0026#34;\u0026#39; \u0026gt;\u0026gt; /etc/sysconfig/proxy - systemctl daemon-reload - systemctl restart containerd Using Internal Load Balancers within AWS Sometimes you don\u0026rsquo;t want to expose your TKG cluster with a Public IP Address. To deploy internal load balancers, this config can be added to your providers/infrastructure-aws/ytt folder.\n#@ load(\u0026#34;@ytt:overlay\u0026#34;, \u0026#34;overlay\u0026#34;) #@overlay/match by=overlay.subset({\u0026#34;kind\u0026#34;:\u0026#34;AWSCluster\u0026#34;}) --- #! Use a Private Load Balancer instead of the default Load Balancers apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3 kind: AWSCluster spec: #@overlay/match missing_ok=True controlPlaneLoadBalancer: scheme: internal Summary This was a long post and it required you to learn a little ytt but now you should have a good start on customizing your Tanzu Kubernetes Grid deployments. 
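One habit worth picking up before you drop a new overlay into the providers directory: render it locally against a saved copy of a generated cluster manifest, exactly like we did with the meal examples, so you can see the final output before TKG ever uses it. A minimal sketch, where the file names are just placeholders for your own overlay and manifest copy:

ytt -f proxy-overlay.yaml -f my-cluster-manifest.yaml > rendered.yaml

If the rendered output shows your preKubeadmCommands or containerd settings on the KubeadmControlPlane and KubeadmConfigTemplate objects where you expect them, the overlay is probably safe to put in place.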
Cluster-API does a great job of deploying Kubernetes clusters, but there are always customizations that you will need to make to integrate it into your environments. I hope the primer and examples from this post will be enough to get you on your way to building your own customizations.\n","permalink":"https://theithollow.com/2020/11/09/using-ytt-to-customize-tkg-deployments/","summary":"\u003cp\u003eIf you\u0026rsquo;ve worked with Kubernetes for very long, you\u0026rsquo;ve surely run into a need to manage YAML files. There are a bunch of options out there with their own benefits and drawbacks. One of these tools is called \u003ccode\u003eytt\u003c/code\u003e and comes as part of the \u003ca href=\"https://carvel.dev/\"\u003eCarvel\u003c/a\u003e tools (formerly k14s).\u003c/p\u003e\n\u003cp\u003eIf you\u0026rsquo;re working with the Tanzu Kubernetes Grid product from VMware, you\u0026rsquo;re likely to be using \u003ccode\u003eytt\u003c/code\u003e to mange your TKG YAML manifests. This post aims to help you get started with using \u003ccode\u003eytt\u003c/code\u003e for your own customizations.\u003c/p\u003e","title":"Using YTT to Customize TKG Deployments"},{"content":"If you have been following the series so far, you should have a TKG guest cluster in your lab now. The next step is to show how to deploy a simple application and access it through a web browser. This is a pretty trivial task for most Kubernetes operators, but its a good idea to know whats happening in NSX to make these applications available. We\u0026rsquo;ll walk through that in this post.\nConnect To TKG Cluster and Deploy an Application Before we can deploy our applications, let make sure we can connect to the cluster as we did in the previous post. Lets run the following command to make sure we\u0026rsquo;ve been authenticated with our Kubernetes cluster.\nkubectl vsphere login --server=[SupervisorControlPlane] --tanzu-kubernetes-cluster-name [tkg cluster name] --tanzu-kubernetes-cluster-namespace [Supervisor Namespace] Once authenticated, set the Kubernetes context to the TKG Cluster.\nkubectl config use-context tkg-cluster-1\nNow you\u0026rsquo;re ready to run Kubernetes commands to build apps, secrets, expose services, and so on. Let\u0026rsquo;s start by deploying a test application to the guest cluster.\nkubectl run hollowapp --image=theithollow/hugoapp:v1 It might take a moment for the container image to download from dockerhub, but once done you should be able to see running containers by running the command:\nkubectl get pods\nExpose Applications Now that there is an application running in our TKG cluster, we need to expose it to our users so that they might be able to access it through a web browser for example.\nThe simplest way to to do this is through an imperative command against the cluster. The command below will create a Kubernetes service of type `LoadBalancer` on port 80.\nkubectl expose pod hollowapp --port=80 --target-port=80 --name hollowapp --type=LoadBalancer After running the command, lets take a look at what happened. First, let\u0026rsquo;s check our services in the Kubernetes cluster by running:\nkubectl get services\nHey! Thats neat! There is a new service listed called \u0026ldquo;hollowapp\u0026rdquo; and it\u0026rsquo;s of type LoadBalancer and you can see it has an External-IP listed. Try navigating to that IP Address in a web browser. Here\u0026rsquo;s what I saw when I tested in my browser.\nAwesome! 
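If you prefer the declarative route over the imperative kubectl expose command, the same Service can be written as a manifest and applied with kubectl apply -f. Here is a minimal sketch that should be equivalent to what we just created (the selector assumes the run=hollowapp label that kubectl run puts on the pod):

apiVersion: v1
kind: Service
metadata:
  name: hollowapp
spec:
  type: LoadBalancer
  selector:
    run: hollowapp
  ports:
  - port: 80
    targetPort: 80

Applying this manifest gives you the same LoadBalancer Service as the expose command above.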
We have our first application deployed in a Tanzu Kubernetes Grid cluster and it\u0026rsquo;s doing the magic of setting up ingress routing to our apps through NSX-T.\nLet\u0026rsquo;s take a look at what NSX-T object got created to support this new application for ingress routing.\nNSX-T Resources So, somehow by creating a Kubernetes service of type LoadBalancer NSX was able to route traffic directly to the pod within the TKG Guest Cluster we provisioned. How does it do this?\nWell, TKG clusters come equipped with the NSX Container Plugin (NCP) which listens for certain API calls such as a LoadBalancer being requested. When the plugin sees a call for a load balancer, it informs the NSX-T Control plane how to build a load balancer for this specific service.\nThe result is that NSX-T creates a virtual server for the application. If you look in the NSX-T Manager console, under Networking \u0026ndash;\u0026gt; Load Balancing, you\u0026rsquo;ll see several load balancers already created. Specifically, you\u0026rsquo;ll notice a Load Balancer is configured for each cluster to route traffic to the Kubernetes API for each cluster. Here, I\u0026rsquo;ve expanded the TKG Guest Cluster Load Balancer, and you can see it has 2 virtual servers.\nAfter clicking the Virtual Server hyperlink in the load balancer object, I can now see there are 2 virtual servers. One of these is on port 6443 which is the Kubernetes Control Plane API access. The other was provisioned for the application that we exposed as a type LoadBalancer.\nYou might be wondering about other types of access methods such as an Ingress resource. These are supported in Tanzu clusters, but at this time require an ingress controller, such as Contour, to be deployed in your clusters first. You can of course expose the ingress controller itself, though a load balancer as we\u0026rsquo;ve done in this post.\nSummary Tanzu Kubernetes Grid Guest Clusters with NSX-T integration can really save you some time when setting up Load Balancing. Simply create your apps and deploy a service of type LoadBalancer and let the NSX Controller Plugin and NSX-T do the rest of the work for you.\n","permalink":"https://theithollow.com/2020/09/15/ingress-routing-tkg-clusters/","summary":"\u003cp\u003eIf you have been following \u003ca href=\"/2020/07/14/vsphere-7-with-kubernetes-getting-started-guide/\"\u003ethe series\u003c/a\u003e so far, you should have a TKG guest cluster in your lab now. The next step is to show how to deploy a simple application and access it through a web browser. This is a pretty trivial task for most Kubernetes operators, but its a good idea to know whats happening in NSX to make these applications available. We\u0026rsquo;ll walk through that in this post.\u003c/p\u003e","title":"Ingress Routing - TKG Clusters"},{"content":"This post will focus on deploying Tanzu Kubernetes Grid (TKG) clusters in your vSphere 7 with Tanzu environment. These TKG clusters are the individual Kubernetes clusters that can be shared with teams for their development purposes.\nI know what you\u0026rsquo;re thinking. Didn\u0026rsquo;t we already create a Kubernetes cluster when we setup our Supervisor cluster? The short answer is yes. However the Supervisor cluster is a unique Kubernetes cluster that probably shouldn\u0026rsquo;t be used for normal workloads. We\u0026rsquo;ll discuss this in more detail in a follow-up post. 
For now, let\u0026rsquo;s focus on how to create them, and later we\u0026rsquo;ll discuss when to use them vs the Supervisor cluster.\nGather Deployment Information These steps assume that you\u0026rsquo;ve followed the series so far and have configured the prerequisites such as a Supervisor Cluster, a namespace, a content library, and a user with edit permissions.\nThe steps to deploy a new TKG cluster consists of running a single command from the CLI.\nkubectl apply -f tkgcluster.yaml\nYeah, that\u0026rsquo;s it. If you\u0026rsquo;re a Kubernetes operator, this command is going to seem very familiar! We\u0026rsquo;re deploying entire Kubernetes clusters based off of a desired state expressed by a YAML file. This means that building a TKG cluster really consists of us gathering the information we need to layout the desired state.\nLet\u0026rsquo;s look at the contents of a TKG YAML file and then begin to fill in some desired state information.\napiVersion: run.tanzu.vmware.com/v1alpha1 kind: TanzuKubernetesCluster metadata: name: [clustername] namespace: [namespace-name] spec: distribution: version: [v1.16] topology: controlPlane: count: [3] class: [guaranteed-large] storageClass: [tkc-storage-policy-yellow] workers: count: [5] class: [guaranteed-xlarge] storageClass: [tkc-storage-policy-green] The items in brackets [] are items we need to fill in to create a new cluster. Most of these items seem pretty self explanatory. For example, how many control plane nodes and worker nodes the Kubernetes clusters should have. The clustername is completely up to you, the namespace must match the namespace deployed in the Supervisor Cluster that you have edit permissions on.\nNow, lets discuss a few fields that might need a bit more explanation.\nNOTE: For full descriptions of ALL fields, consult the official VMware documentation here.\nVersion Version is the Kubernetes version that will be deployed. This is a nice feature since you can have multiple versions of Kubernetes clusters all managed from the Supervisor Cluster. A short version such as 1.16 can be used, or you can specify the image name of the kubernetes version. These image names come from the Content Library templates.\nClass Class refers to the pre-set sizes of the nodes. You can specify different size VMs for the control plane nodes and worker nodes. So how do you know what classes are options for your cluster? Well, you could look at the official documentation here. Or you could look at the resources in the supervisor cluster.\nkubectl get virtualmachineclasses\nIf you want details about those objects, you can run a describe to find more information, just like you would in a normal Kubernetes environment.\nkubectl describe virtualmachineclasses best-effort-small\nStorageClass Storage Classes are used by Kubernetes to know how to create persistent volumes. In the TKG clusters, we specify the vSphere Storage Policy as a storageclass so that the clusters will understand how to provision persistent volumes on vSphere datastores.\nThe names of the storage classes can be obtained by running the following command from the Supervisor cluster namespace:\nkubectl describe ns\nCreate a TKG Cluster OK, I\u0026rsquo;ve filled out my entire TKG YAML manifest and I\u0026rsquo;m ready to create my cluster. 
Here is the YAML I\u0026rsquo;m deploying for my cluster.\napiVersion: run.tanzu.vmware.com/v1alpha1 kind: TanzuKubernetesCluster metadata: name: tkg-cluster-1 namespace: devteam1 spec: distribution: version: v1.16 topology: controlPlane: count: 3 class: best-effort-small storageClass: hollow-storage-profile workers: count: 5 class: best-effort-small storageClass: hollow-storage-profile After logging into the Supervisor cluster, I can simply apply the manifest to create the cluster.\nkubectl apply -f [filename].yaml\nOnce applied, you can check the status of your cluster by running\nkubectl get tanzukubernetesclusters\nThere are also a few other resources that might be checked for troubleshooting purposes. You can also list the machines and virtual machine objects.\nkubectl get machines\nkubectl get virtualmachines\nThese two objects can help identify if the IaaS Provider is creating the resources correctly.\nOnce provisioning is over, you should see your cluster provisioned in vCenter, the get cluster command should show a provisioned cluster, and you can begin the fun work of building apps for your cluster!\nConnect to the TKG Cluster OK, you want to know how to access your new cluster, right? Well, here you go. To login to the guest cluster you just created, run:\nkubectl vsphere login --server=[SupervisorControlPlane] --tanzu-kubernetes-cluster-name [tkg cluster name] --tanzu-kubernetes-cluster-namespace [Supervisor Namespace] Now you only need to change your context to start deploying resources on your cluster!\nSummary In this post we gathered the appropriate information to build a desired state configuration file for our TKG guest clusters. We deployed the cluster and connected to it through the Kubectl cli and can now provision workloads.\nStay tuned for future posts where we discuss how this cluster can be updated, upgrade, modified, and destroyed.\n","permalink":"https://theithollow.com/2020/09/09/deploying-tanzu-kubernetes-clusters-on-vsphere-7/","summary":"\u003cp\u003eThis post will focus on deploying Tanzu Kubernetes Grid (TKG) clusters in your vSphere 7 with Tanzu environment. These TKG clusters are the individual Kubernetes clusters that can be shared with teams for their development purposes.\u003c/p\u003e\n\u003cp\u003eI know what you\u0026rsquo;re thinking. Didn\u0026rsquo;t we already create a Kubernetes cluster when we setup our Supervisor cluster? The short answer is yes. However the Supervisor cluster is a unique Kubernetes cluster that probably shouldn\u0026rsquo;t be used for normal workloads. We\u0026rsquo;ll discuss this in more detail in a follow-up post. For now, let\u0026rsquo;s focus on how to create them, and later we\u0026rsquo;ll discuss when to use them vs the Supervisor cluster.\u003c/p\u003e","title":"Deploying Tanzu Kubernetes Clusters on vSphere 7"},{"content":"In this post we\u0026rsquo;ll setup a vSphere Content Library so that we can use it with our Tanzu Kubernetes Grid guest clusters. If you\u0026rsquo;re not familiar with Content libraries, you can think of them as a container registry, only for virtual machines.\nWhy do we need a content library? Well, the content library be used to store the virtual machine templates that will become Kubernetes nodes when you deploy a TKG guest cluster.\nCreate a Content Library You can create content libraries by navigating to Menu \u0026ndash;\u0026gt; Content Libraries or you can select your Supervisor Cluster and in the Configure menu, click General. 
You\u0026rsquo;ll see there, whether a content library has already been assigned to your cluster or not. Click the EDIT hyperlink to take you to the content libraries.\nThe screen that opens allows you to pick a content library if one has already been created. In our case, we don\u0026rsquo;t have any content libraries available so we need to create one. Luckily we can click the CREATE LIBRARY button.\nHere you\u0026rsquo;ll begin the process of creating a new Content Library through the wizard. Give the Content Library a name and any notes that you may have. Select the vCenter and then click the NEXT button.\nThe critical screen is next. Select Subscribed content library and then enter the following subscription URL: https://wp-content.vmware.com/v2/latest/lib.json\nThis URL is a publicly accessible repository which has the virtual machines templates which are configured for Tanzu Kubernetes Grid. You can use other subscription URLs with content libraries, but for Tanzu you should use this URL to get the appropriate templates.\nUnder the Download content section, pick whether you want to immediately download the templates, or download them when they are requested to save space at the cost of the first deployment taking longer.\nAfter you click next, you\u0026rsquo;ll need to accept the SSL thumbprint of the certificate by clicking the YES button.\nThen you\u0026rsquo;ll select the vSphere datastore which will store the virtual machine templates. Select your favorite datastore and click Next.\nVerify the settings and click the finish button.\nLastly, go back to the supervisor cluster and click the edit button next to content library again.\nNow you can select the content library created earlier. Select your content library and click OK.\nSummary The vSphere content library isn\u0026rsquo;t new, but gives us some powerful functionality to enable building Tanzu Kubernetes Grid guest clusters. Create a library and assign it to your Supervisor cluster before moving on to the next post about building TKG guest clusters.\n","permalink":"https://theithollow.com/2020/09/08/create-a-content-library-for-vsphere-7-with-tanzu/","summary":"\u003cp\u003eIn this post we\u0026rsquo;ll setup a vSphere Content Library so that we can use it with our Tanzu Kubernetes Grid guest clusters. If you\u0026rsquo;re not familiar with Content libraries, you can think of them as a container registry, only for virtual machines.\u003c/p\u003e\n\u003cp\u003eWhy do we need a content library? Well, the content library be used to store the virtual machine templates that will become Kubernetes nodes when you deploy a TKG guest cluster.\u003c/p\u003e","title":"Create a Content Library for vSphere 7 with Tanzu"},{"content":"When setting up your vSphere 7 with Tanzu environment, its a good idea to update the default certificate shipped from VMware with your own certificate. This is a good security practice to ensure that your credentials are protected during logins, and nobody likes to see those pesky certificate warnings in their browsers anyway, am I right?\nCreate and Trust Certificate Authority This section of the blog post is to create a root certificate. In many situations, you won\u0026rsquo;t need to do this since your organization probably already has a certificate authority that can be used to sign certificates as needed. Since I\u0026rsquo;m doing this in a lab, I\u0026rsquo;m going to create a root certificate and make sure my workstation trusts this cert first. 
After this, we can use the root certificate to sign our vSphere 7 certificates.\nTo create the CA certificate and Private key, download and install openssl and then run the command below, replacing the CN with your own Root Certificate Name.\nopenssl req -nodes -new -x509 -days 3650 -keyout ca.key -out ca.crt -subj \u0026quot;/CN= HollowLabRoot\u0026quot;\nAfter running the command you should have a ca.crt and ca.key file created in your working directory.\nAfter the root certificate has been created, you need to distribute this certificate to any machine that will access your vSphere 7 with Tanzu API endpoints. On my mac, I\u0026rsquo;ve open the CA.crt file which opened the Keychain Access program.\nDouble click the certificate to open the settings and change the When using this certificate setting to Always Trust.\nWhen you finish, there should be a little blue checkmark next to the certificate demonstrating that it\u0026rsquo;s trusted. Now, any certificates signed by this root certificate, will be automatically trusted by this workstation.\nGenerate Certificate Signing Request In the vCenter UI, navigate to the Supervisor Cluster that you\u0026rsquo;ve created and click the Configure tab and then the Certificates menu item. You should see a Workload platform MGT tile. Under Actions click Generate CSR.\nFill out the information relating to your own organization and then click Next.\nOn the last screen, click the DOWNLOAD button to download the certificate signing request.\nI downloaded my CSR and named it v7wt.csr and placed it in my directory with my CA cert and key.\nLastly, it is probably a good idea to check the certificate signing request to make sure it looks ok. We can also check the Subject Alternate Name (SAN) properties on the cert at this time. You can view the csr by running:\nopenssl req -text -noout -verify -in v7wt.csr\nSign the Certificate Request Create an openssl config file. I\u0026rsquo;ve named mine ext.cnf. An important part of this config file is to have the [alt_names] section updated so that it matches the SAN properties from the signing request. If you leave these off, you may strip off this SAN information during the signing and the cert will not be trusted by today\u0026rsquo;s browsers.\nReplace the DNS.1 and IP.1 values to match your own CSR.\nkeyUsage = critical, digitalSignature, keyEncipherment extendedKeyUsage = serverAuth basicConstraints = CA:FALSE nsCertType = server subjectKeyIdentifier = hash authorityKeyIdentifier = keyid,issuer:always subjectAltName = @alt_names [alt_names] DNS.1 = sup.hollow.local IP.1 = 10.10.201.65 Once your config file is created, you can use the CA certificate and CA key to sign the CSR.\nopenssl x509 -req -in v7wt.csr -CA ca.crt -CAkey ca.key -out v7wt.crt -CAcreateserial -days 365 -sha256 -extfile ext.cnf\nAfter running the above command you should now have a v7wt.crt and you can inspect it to make sure it still includes your SAN properties.\nopenssl x509 -in v7wt.crt -text -noout\nReplace the Certificate You now have a valid certificate and are ready to replace the existing VMware certificate with your own. Go to the Supervisor cluster in vCenter again, and this time select Replace Certificate from the same location where you generated the CSR.\nThis image has an empty alt attribute; its file name is image-36-1024x958.png\nCopy the certificate, or upload from your workstation\u0026rsquo;s file system. 
Then click the Replace button.\nWhen you\u0026rsquo;re done, you should see that the certificate was replaced.\nTo verify that everything is working the way you want it to navigate to the Kubernetes API Endpoint webpage for one of your namespaces. When opening the page, you should not get a warning about the untrusted certificate. Inspecting the certificate should show the correct chain and that it\u0026rsquo;s valid.\nIf you\u0026rsquo;d like to check your cli access now as well, try running a kubectl vsphere login command to see if you still need the --insecure-skip-tls-verify flag set. You shouldn\u0026rsquo;t need to use this anymore since your endpoint has a trusted certificate now.\nSummary In this post you created your own root certificate, generated a certificate signing request from vCenter, and then signed that certificate request with your root CA. Lastly you replaced the VMware certificate with your own and tested to validate you can access the Kubernetes API Endpoint without using the insecure-skip-tls-verify option. It feels good to get rid of those certificate warnings doesn\u0026rsquo;t it?\n","permalink":"https://theithollow.com/2020/08/31/replace-vsphere-7-with-tanzu-certificates/","summary":"\u003cp\u003eWhen setting up your vSphere 7 with Tanzu environment, its a good idea to update the default certificate shipped from VMware with your own certificate. This is a good security practice to ensure that your credentials are protected during logins, and nobody likes to see those pesky certificate warnings in their browsers anyway, am I right?\u003c/p\u003e\n\u003ch2 id=\"create-and-trust-certificate-authority\"\u003eCreate and Trust Certificate Authority\u003c/h2\u003e\n\u003cp\u003eThis section of the blog post is to create a root certificate. In many situations, you won\u0026rsquo;t need to do this since your organization probably already has a certificate authority that can be used to sign certificates as needed. Since I\u0026rsquo;m doing this in a lab, I\u0026rsquo;m going to create a root certificate and make sure my workstation trusts this cert first. After this, we can use the root certificate to sign our vSphere 7 certificates.\u003c/p\u003e","title":"Replace vSphere 7 with Tanzu Certificates"},{"content":"In this post we\u0026rsquo;ll finally connect to our Supervisor Cluster Namespace through the Kubernetes cli and run some commands for the first time.\nIn the last post we created a namespace within the Supervisor Cluster and assigned some resource allocations and permissions for our example development user. Now it\u0026rsquo;s time to access that namespace so that real work can be done using the platform.\nFirst, login to vCenter again with the administrator@vsphere.local account and navigate to the namespace that was previously created. You should see a similar screen where we configured our permissions. In the Status tile, click one of the links to either open in a browser or copy the URL to then open in a browser.\nWhen you open the link, you should see a webpage like this one. This is the Kubernetes API endpoint.\nNotice the IP Address in the URL. It should seem pretty familiar to you because you entered the CIDR range during the workload control plane setup. Remember setting up the Ingress CIDRs? This is where the IP Address came from.\nNow, that you\u0026rsquo;ve got the Kubernetes API endpoint page displayed, you can follow the directions listed on the page to setup your client. 
This is nice since once you setup the namespace, you can give this web page to anyone using the Kubernetes clusters, to then use the namespace, get the clients, and describe the process. I\u0026rsquo;ll walk through those steps below.\nDownload CLI Plugin There will be a blue link on the Kubernetes API endpoint page that is a download link for the two CLIs you\u0026rsquo;ll need. These will come zipped and you\u0026rsquo;ll need to uncompress them after downloading.\nOnce you\u0026rsquo;ve downloaded the CLI tools, you need to put them into your Operating System Path. On my Mac, I\u0026rsquo;ve moved these to /usr/local/bin and set permissions so that I can execute these executables.\nTo verify that the CLIs are working, open a shell session and run the command: kubectl version just to see if the CLI will respond with the version. In my case, running it gave me a security error that I needed to fix because Apple couldn\u0026rsquo;t check it for malicious software.\nTo fix this on a Mac, go into Security \u0026amp; Privacy \u0026ndash;\u0026gt; General Tab. Here you\u0026rsquo;ll see the CLI tool that was just executed and a button to Allow the software to be executed.\nRunning the kubectl version command again, gives me the version info as expected.\nNext, we need to repeat the process with the kubectl vsphere CLI.\nAgain, ensure the software can be executed.\nOnce you can see the versions for both CLI components we can move on with accessing our cluster.\nConnect to the Kubernetes Namespace Now that the CLI tools are working, we need to first authenticate with the vSphere API. You can do this by running:\nkubectl vsphere login --server=[ip_or_fqdn_of_Kubernetes_API_Endpoint]\nIf you are not using trusted certificates, you\u0026rsquo;ll need to append the --insecure-skip-tls-verify switch as seen in the screenshot below.\nNOTE: I found the instructions a tad confusing. The Kubernetes API Endpoint webpage shows --server which I initially mistook for the vCenter server. It\u0026rsquo;s really asking for the Kubernetes API endpoint. In the screenshot below, you\u0026rsquo;ll see what happens if you use the wrong address, which was a helpful error message.\nOnce you\u0026rsquo;ve authenticated the CLI will display your Kubernetes Contexts which will now include the context for this Namespace we\u0026rsquo;ve created. You can change your context using:\nkubectl config use-context [context name]\nAnd after that you can start running Kubernetes commands. In the below screenshot, I\u0026rsquo;ve deployed the \u0026ldquo;Kubernetes Up and Running\u0026rdquo; container which you can see is running.\nIf you login to vCenter as the adminsitrator@vsphere.local user, you will even see the pod running in vCenter, under our namespace in the Hosts and Clusters view.\nSummary You\u0026rsquo;ve now got a namespace where you can deploy PodVMs within your Supervisor cluster. We downloaded the CLI tools from the Kubernetes API Endpoint, authenticated to vCenter, and executed some kubectl commands against our cluster.\nOne last note. At this point you\u0026rsquo;re authenticating to the Kubernetes API Endpoint using a non-trusted certificate because we specified the --insecure-skip-tls-verify switch at login. We can use an existing CA to sign new certificates and use them. 
We may cover this in another post, but the location to generate a cert request is found in Supervisor Cluster \u0026ndash;\u0026gt; Configure \u0026ndash;\u0026gt; Namespaces \u0026ndash;\u0026gt; Certificates \u0026ndash;\u0026gt; Workload platform MGT.\n","permalink":"https://theithollow.com/2020/08/24/connecting-to-a-supervisor-namespace/","summary":"\u003cp\u003eIn this post we\u0026rsquo;ll finally connect to our Supervisor Cluster Namespace through the Kubernetes cli and run some commands for the first time.\u003c/p\u003e\n\u003cp\u003eIn the \u003ca href=\"/2020/08/17/creating-supervisor-namespaces/\"\u003elast post\u003c/a\u003e we created a namespace within the Supervisor Cluster and assigned some resource allocations and permissions for our example development user. Now it\u0026rsquo;s time to access that namespace so that real work can be done using the platform.\u003c/p\u003e\n\u003cp\u003eFirst, login to vCenter again with the \u003ca href=\"mailto:administrator@vsphere.local\"\u003eadministrator@vsphere.local\u003c/a\u003e account and navigate to the namespace that was previously created. You should see a similar screen where we configured our permissions. In the \u003ccode\u003eStatus\u003c/code\u003e tile, click one of the links to either open in a browser or copy the URL to then open in a browser.\u003c/p\u003e","title":"Connecting to a Supervisor Namespace"},{"content":"Congratulations, you\u0026rsquo;ve deployed the Workload Management components for your vSphere 7 cluster. If you\u0026rsquo;ve been following along with the series so far, you\u0026rsquo;ll have left off with a workload management cluster created and ready to being configuring your cluster for use with Kubernetes.\nThe next step in the process is to create a namespace. Before we do that, it\u0026rsquo;s probably useful to recap what a namespace is used for.\nNamespaces the Theory Depending on your past experiences, a namespace will likely seem familiar to you in some fashion. If you have a kubernetes background, you\u0026rsquo;ll be familiar with namespaces as a way to set permissions for a group of users (or a project, etc) and for assigning resources. Alternatively, if you have a vSphere background, you\u0026rsquo;re used to using things like Resource Pools to set resource allocation.\nA Supervisor Cluster namespace is a combination of resource allocations and permissions set within the Supervisor Cluster. When you create a Supervisor Namespace, you\u0026rsquo;ll assign who has access to use it, and how many of the ESXi cluster\u0026rsquo;s resources you can use (much like a resource pool).\nWhen you enabled the Workload Management components, you created a special Kubernetes cluster called the \u0026ldquo;Supervisor Cluster\u0026rdquo;. You can continue to deploy virtual machines in this cluster, and you can also deploy kubernetes pods as a \u0026ldquo;pod vm\u0026rdquo; which is basically a container with some special wrapping so they are better isolated, like a virtual machine is.\nTo better illustrate things, the diagram below demonstrates that you can carve up the Supervisor Cluster to suit your needs. The diagram below has two namespaces for two different Development teams (you could carve these up by project, app, or whatever you\u0026rsquo;d like, dev teams is just an example). Those two namespaces would have different permissions so one development team couldn\u0026rsquo;t see the pods/resources in the other namespaces. 
They are also sized differently.\nThe third namespace on the far right side, is a namespace similar to the first two, but instead of it running PodVMs (think of them as containers for now) it\u0026rsquo;s running \u0026hellip; another Kubernetes cluster within that namespace called a Tanzu Kubernetes Grid (TKG) cluster. That cluster will have resources allocated by the Supervisor Namespace, but then pods can run on those VMs. NOTE: I can almost hear you asking why I\u0026rsquo;d build another cluster within a cluster, but that will need to wait for another post for now.\nHopefully you\u0026rsquo;ve gotten the purpose behind Supervisor Namespaces now, and are ready to configure your cluster.\nCreate a Namespace Within the Workload Management menu, select the Namespaces tab. If this is your first namespace, you\u0026rsquo;ll be greeted with a fancy splash page with the robot thingy on the right side. Sorry, I don\u0026rsquo;t know it\u0026rsquo;s name, but it really should have one..)\nClick Create Namespace.\nOn the first screen select which Supervisor Cluster to create the namespace, then give it a name and a description. Then click Create.\nYour Namespace is now created. You should probably do some additional configurations though now. Your screen should look something like this now.\nYou can see a couple of buttons on that overview screen. Let\u0026rsquo;s set some permissions so that some of our users can use this namespace soon. Click the App Permissions button within the \u0026ldquo;Permissions\u0026rdquo; tab. Select your identity source, a user or a group from that source, and either view or edit permissions. I\u0026rsquo;ve used edit permissions so we can use this user in later posts. Click OK.\nNOTE: View permissions = Get, list, watch permissions while Edit permissions also include Update, Delete, Patch, and Update.\nNext, in the Capacity and Usage tab, click the Edit Limits link to set some resource limits on the namespace. Enter limits for CPU, Memory and Storage for this namespace and then click OK.\nThe last setting we\u0026rsquo;ll do in this post is the Add Storage button in the Storage tab. Here you\u0026rsquo;ll select a storage policy that can be used with this namespace. These are standard storage policies that can be used on vsphere datastores to select the correct ones.\nIn the end, your dashboard will look something like the one below.\nAnd you\u0026rsquo;ll notice the namespaces are also listed under the \u0026ldquo;Hosts and Clusters\u0026rdquo; view if you\u0026rsquo;re logged in as administrator@vsphere.local, very similar to a resource pool would look.\nSummary Supervisor Namespaces are a way to isolate resources, assign role based access controls, and allocate physical resources for your users. Stay tuned for the next post when we login to the Supervisor Cluster within this namespace.\n","permalink":"https://theithollow.com/2020/08/17/creating-supervisor-namespaces/","summary":"\u003cp\u003eCongratulations, you\u0026rsquo;ve deployed the Workload Management components for your vSphere 7 cluster. If you\u0026rsquo;ve been following along with the series so far, you\u0026rsquo;ll have left off with a workload management cluster created and ready to being configuring your cluster for use with Kubernetes.\u003c/p\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2020/08/namespaces0-3.png\"/\u003e \n\u003c/figure\u003e\n\n\u003cp\u003eThe next step in the process is to create a namespace. 
Before we do that, it\u0026rsquo;s probably useful to recap what a namespace is used for.\u003c/p\u003e\n\u003ch2 id=\"namespaces-the-theory\"\u003eNamespaces the Theory\u003c/h2\u003e\n\u003cp\u003eDepending on your past experiences, a namespace will likely seem familiar to you in some fashion. If you have a kubernetes background, you\u0026rsquo;ll be familiar with namespaces as a way to set permissions for a group of users (or a project, etc) and for assigning resources. Alternatively, if you have a vSphere background, you\u0026rsquo;re used to using things like Resource Pools to set resource allocation.\u003c/p\u003e","title":"Creating Supervisor Namespaces"},{"content":"VMware released the new version of vSphere with functionality to build and manage Kubernetes clusters. This series details how to deploy, configure, and use a lab running vSphere 7 with Kubernetes enabled.\nThe instructions within this post are broken out into sections. vSphere 7 requires pre-requisites at the vSphere level as well as a full NSX-T deployment. Follow these steps in order to build your own vSphere 7 with Kubernetes lab and start using Kubernetes built right into vSphere.\nGeneral Prerequisites 1 - vSphere 7 with Tanzu Environment and Prerequisites NSX-T Setup and Configuration 2 - NSX-T Installation 3 - NSX-T Pools, Nodes, and Zones Setup 4 - Deploy NSX-T Edge Nodes 5 - Tier-1 Gateway and NSX segments 6 - Tier-0 Gateway Deploy Workload Management 7 - Enable Workload Management Using the Workload Control Plane 8 - Creating Supervisor Cluster Namespaces 9 - Connecting to a Supervisor Namespace 10 - Replacing vSphere 7 with Tanzu Certificates 11 - Creating a Content Library for Tanzu Kubernetes Clusters 12 - Deploying Tanzu Kubernetes Clusters on vSphere 7 [13 - Enable the Harbor Registry (Optional)](http://Enable the Harbor Registry on vSphere 7 with Tanzu) 14 - Update Tanzu Kubernetes Clusters COMING SOON\n","permalink":"https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-getting-started-guide/","summary":"\u003cp\u003eVMware released the new version of vSphere with functionality to build and manage Kubernetes clusters. This series details how to deploy, configure, and use a lab running vSphere 7 with Kubernetes enabled.\u003c/p\u003e\n\u003cp\u003eThe instructions within this post are broken out into sections. vSphere 7 requires pre-requisites at the vSphere level as well as a full NSX-T deployment. Follow these steps in order to build your own vSphere 7 with Kubernetes lab and start using Kubernetes built right into vSphere.\u003c/p\u003e","title":"vSphere 7 with Tanzu - Getting Started Guide"},{"content":"This post focuses on enabling the workload management components for vSphere 7 with Kubernetes. It is assumed that the vSphere environment is already in place and the NSX-T configuration has been deployed.\nTo enable workload management, login to your vCenter as the administrator@vsphere.local account. Then in the Menu, select Work\nWithin the Workload Management screen, click the ENABLE button.\nThe first screen in the wizard, will list your compatible vSphere clusters. These clusters must have HA and DRS enabled in fully automated mode. If you are missing clusters, make sure you have ESXi hosts on version 7 with HA and DRS enabled. 
You\u0026rsquo;ll also need a Distributed switch on version 7 for these clusters.\nIf you\u0026rsquo;re having trouble finding information about why your cluster isn\u0026rsquo;t listed as compatible, you can run the command below to list why your cluster is incompatible.\ndcli com vmware vcenter namespacemanagement clustercompatibility list You can see why two of my vSphere clusters are incompatible from running the command above. If you have more trouble with enabling \u0026ldquo;workload management\u0026rdquo; I recommend reading this post from William Lam.\nNext, you must select a control plane size. This defines the VM size of the control plane nodes for your Kubernetes clusters. Since I have limited resources in my lab, I\u0026rsquo;ve chosen the Tiny size.\nThe next screen requires you to fill out networking information. First, we\u0026rsquo;ll discuss the management network. Each of the control plane nodes that will be deployed will have a network connection on the management network. (VLAN 150 if you\u0026rsquo;ve been following the series). Select the management portgroup for your network, and then the starting IP Address to be used for new nodes. They will increment from this IP Address so be sure to have at least five IP Addresses available. Next, set the subnet mask and the gateway, DNS info and NTP configs.\nOnce you\u0026rsquo;re through with the management network, its time to configure the workload network. Select the Distributed switch that will be used, and the Edge cluster. Next enter an API Server endpoint DNS name. This will be associated with the first \u0026ldquo;Starting IP Address\u0026rdquo; IP created in the management network (So 10.10.50.120 in this example). You will want to add a DNS entry for this FQDN. The Pod CIDRs and Service CIDRs should be fine, but you can change this if you\u0026rsquo;d like. Lastly, you need to enter Ingress and Egress CIDRs. This IP Address range should come from your external network. In my case this is VLAN 201. I\u0026rsquo;ve carved two /26s aside for ingress/egress access.\nNext up, its time to setup the storage. You\u0026rsquo;ll need to store three different types of objects on a datastore.\nControl Plane Node - virtual disks for control plane nodes Ephemeral Disks - vSphere pod disks Image Cache - container image cache For each of these objects you\u0026rsquo;ll need to select a storage policy that defines what datastores are compatible. I created a Hollow-Storage-Profile policy as a pre-requisite that selects my vsanDatastore. Select the storage profile configured for each of these components.\nOnce you\u0026rsquo;re done, click Finish and go get some coffee. No, I mean it, go drive to Starbucks or start a fresh pot of coffee and wait for it to be ready. Then drink it, and then come back. This process took about an hour in my cluster to complete.\nAs the configuration is running, you can view some minimal status information in the clusters screen. You can see here it\u0026rsquo;s configuring.\nAs I set this up in my lab, I had a couple of challenges and needed to find details about what was happening. If you need to find log details, login to the vCenter appliance shell and cat or tail the following two log files to give you information about whats happening.\ntail -f /var/log/vmware/wcp/wcpsvc.log tail -f /var/log/vmware/wcp/nsxd.log NOTE: there are some items which might fail, or give you a 404 error. These seem to be normal operations that will be retried via a control loop. 
So getting an error here and there might not be anything to worry about.\nWhen complete, you should see your cluster has a \u0026ldquo;Config Status\u0026rdquo; of \u0026ldquo;Running\u0026rdquo;. You\u0026rsquo;ll also see the control plane node IP Address which comes form the Ingress CIDR created previously.\nSummary Enabling the Workload Management components aren\u0026rsquo;t too labor intensive once you have the prerequisites done, but it does take a while to enable. You should have a supervisor cluster created and ready to be used at this point. Stay tuned and we\u0026rsquo;ll cover what to do with that cluster now that you\u0026rsquo;ve setup vSphere 7 with Kubernetes!\n","permalink":"https://theithollow.com/2020/07/14/enable-workload-management/","summary":"\u003cp\u003eThis post focuses on enabling the workload management components for vSphere 7 with Kubernetes. It is assumed that the vSphere environment is already in place and the NSX-T configuration has been deployed.\u003c/p\u003e\n\u003cp\u003eTo enable workload management, login to your vCenter as the \u003ca href=\"mailto:administrator@vsphere.local\"\u003eadministrator@vsphere.local\u003c/a\u003e account. Then in the Menu, select Work\u003c/p\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2020/07/image-40.png\"/\u003e \n\u003c/figure\u003e\n\n\u003cp\u003eWithin the Workload Management screen, click the \u003ccode\u003eENABLE\u003c/code\u003e button.\u003c/p\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2020/07/image-30-1024x409.png\"/\u003e \n\u003c/figure\u003e\n\n\u003cp\u003eThe first screen in the wizard, will list your compatible vSphere clusters. These clusters must have HA and DRS enabled in fully automated mode. If you are missing clusters, make sure you have ESXi hosts on version 7 with HA and DRS enabled. You\u0026rsquo;ll also need a Distributed switch on version 7 for these clusters.\u003c/p\u003e","title":"Enable Workload Management"},{"content":"This post describes the lab environment we\u0026rsquo;ll be working with to build our vSphere 7 with Kubernetes lab and additional prerequisites that you\u0026rsquo;ll need to be aware of before starting. This is not the only topology that would work for vSphere 7 with Kubernetes, but it is a robust homelab that would mimic many production deployments except for the HA features. For example, we\u0026rsquo;ll only install one (singular) NSX Manager for the lab where in a production environment would have three.\nThis post will describe my vSphere home lab environment so that you can correlate what I\u0026rsquo;ve built, with your own environment.\nvSphere Environment - Cluster Layout My home lab has three clusters in it. Each of which consists of ESXi hosts on version 7 and a single vCenter server running the version 7 GA build.\nThe lab has a single auxilary cluster that I use to run non-essential VMs that I don\u0026rsquo;t want hogging resources from my main cluster. We\u0026rsquo;ll ignore that cluster for this series. The other two cluster will be important.\nI have a three node hollowlab cluster that I\u0026rsquo;ll use to host my workloads and general purpose VMs. This cluster must have vSphere HA turned on, and VMware Distributed Resource Scheduler (DRS) enabled in Fully Automated mode. This means that you\u0026rsquo;ll need to have vMotion working between nodes in your workload cluster. 
HA and DRS are requirements for vSphere 7 with Kubernetes clusters. More details about size/speed/capacities are found here.\nLastly, I have a third cluster, named edge, with a single ESXi host that will run my NSX-T Edge Virtual Machine. This VM is needed to bridge traffic between the physical network and the overlay tunnels created by NSX. You can deploy your edge VM within your workload cluster if you need to, but it\u0026rsquo;s highly recommended to have edge nodes on their own hardware. Edge nodes can become network hotspots since overlay traffic has to flow through this VM for North/South traffic and load balancing.\nHere is a look at my cluster layout for reference.\nPhysical Switches In my lab I\u0026rsquo;m running an HP switch that has some layer three capabilities. To be honest, it doesn\u0026rsquo;t matter what you\u0026rsquo;re running, but you\u0026rsquo;ll need to be able to create VLANs and route between them. However you want to do this is fine, but if you follow along with this series, you\u0026rsquo;ll need VLANs for the purposes in the table below. I\u0026rsquo;ve listed my VLAN numbers and VLAN Interfaces for each of the networks so you can compare with your own.\nNOTE: I see you judging me for my gateway addresses being .254 instead of .1. Just let it go.\nVLAN Purpose | VLAN # | Interface IP\nManagement | 150 | 10.10.50.254/24\nTunnel Endpoint (TEP) | 200 | 10.10.200.254/24\nNSX Edge VLAN | 201 | 10.10.201.254/24\n(v7wk8s VLANs)\nI have trunked (802.1q) the VLANs down to my ESXi hosts so that they can be used with my virtual switches. You will need to make sure to configure your ports so that the ESXi hosts can send tagged packets on these VLANs.\nNow, for any Overlay networks, you will need to ensure that you have Jumbo Frames enabled. This means that your Interfaces, Switches, or Distributed virtual switches must accept frames of 1600 MTU or larger. This is an NSX requirement. Be sure to enable jumbo frames across your infrastructure.\nvSphere Virtual Switches There is more than one way to set up the NSX-T virtual switching environment. For my lab, I\u0026rsquo;ve set up management, vMotion, vSAN, and NFS networks on some virtual switches. For those items, it doesn\u0026rsquo;t matter what kind of switches they\u0026rsquo;re on. These could be on standard switches and these are probably portgroups you\u0026rsquo;ve set up for all vSphere environments.\nFor the NSX components, I\u0026rsquo;ve deployed a single vSphere 7 distributed switch (VDS) across both my workload cluster and the edge cluster. I\u0026rsquo;ve created two portgroups on the VDS which will be used by the edge nodes deployed later in the series.\nFrom the screenshot below, you can see I\u0026rsquo;ve created an NSX_Edge-Uplink portgroup and an NSX_TEP portgroup. Each of these portgroups is VLAN tagged with the VLANs shown in the table from the previous section. You can have more portgroups on this switch if you\u0026rsquo;d like. As you create new NSX-T \u0026ldquo;segments\u0026rdquo; they will appear as portgroups on this switch.\nStorage This lab is using VMware vSAN for storage of the workload virtual machines and thereby the Supervisor Cluster VMs. You should be able to use other storage solutions, but you\u0026rsquo;ll need to have a VMware storage policy that works with your datastores. I\u0026rsquo;ve created a storage policy named Hollow-Storage-Profile that will be used for my build.\nLicenses You will need to have some advanced licenses for some of the components. 
Specifically, NSX-T requires an NSX-T Data Center Advanced or higher license. Also, the ESXi hosts will need a VMware vSphere 7 Enterprise Plus with Add-on for Kubernetes license for proper configuration.\n","permalink":"https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-environment-and-prerequisites/","summary":"\u003cp\u003eThis post describes the lab environment we\u0026rsquo;ll be working with to build our vSphere 7 with Kubernetes lab and additional prerequisites that you\u0026rsquo;ll need to be aware of before starting. This is not the only topology that would work for vSphere 7 with Kubernetes, but it is a robust homelab that would mimic many production deployments except for the HA features. For example, we\u0026rsquo;ll only install one (singular) NSX Manager for the lab where in a production environment would have three.\u003c/p\u003e","title":"vSphere 7 with Kubernetes Environment and Prerequisites"},{"content":"This post will review the deployment and configuration of a Tier-0 gateway to provide north/south routing into the NSX-T overlay networks.\nThe Tier-0 (T0) gateway is where we\u0026rsquo;ll finally connect our new NSX-T backed overlay segments to the physical network through an NSX-T Edge which was previously deployed.\nThe Tier-0 gateway will connect directly to a physical VLAN and on the other side to our T1 router deployed in the previous post. From there, we should have all the plumbing we need to route to our hosts and begin using NSX-T to do some cooler stuff. In the end, the network topology will look something like this:\nDeploy the Tier-0 Gateway Within the NSX-T Manager navigate to Networking \u0026ndash;\u0026gt; Tier-0 Gateways. From there click the ADD GATEWAY button.\nGive the gateway a name and pick an HA Mode. In our case the HA mode doesn\u0026rsquo;t really matter because we only have a single edge deployed. In a production setting, this becomes an important consideration. Next, scroll down until you get to the Interfaces section. Click the link next to interfaces to assign an interface to the router.\nName the interface, and select the type of external. Then enter an IP Address / subnet for the IP Address that will reside on the external interface side of the router. This should be a routable IP Address on your physical network (VLAN 201 from previous posts).\nIn the Connected to(Segment) box select the Uplink-Segment that was created during the segments post. Then finally select the edge node that will house the resources for this T0 gateway. Click Save to save the interface configuration and go back to the T0 router setup.\nUnder Static Routes, I added a default routing rule that sends any traffic to 0.0.0.0/0 through my physical switch.\nMy next hop address is the Physical Switch gateway address on the VLAN 201 network. In my case its 10.10.201.254.\nConnect the Routers Your Tier-0 Router is now ready to go, lets connect a couple of things together to finish this up. Go to your Tier-1 router created in the previous post and update the Linked Tier-0 Gateway drop down to reflect your new Tier-0 router. Save the configuration and you\u0026rsquo;ve now connected the T0 down to the Tier-1 and subsequent NSX segments.\nConfigure Routing Rules The last step I can\u0026rsquo;t help with too much. We need to send traffic from our physical network down to the NSX-T overlay segments through routing rules. 
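What that looks like depends entirely on your equipment. As a rough illustration only, on a Linux-based lab router the static summary route I describe below (covering the three segments in the following table, with my Tier-0 uplink as the next hop) could be added like this:
# Route the overlay summary range to the Tier-0 uplink address (addresses from my lab)
ip route add 192.168.0.0/16 via 10.10.201.10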
When setting up the segments in a previous post, I used the networks below as NSX segments.\nNOTE: These networks are just test networks to demonstrate how NSX-T can be used with VMs. These segments are not necessary for vSphere 7 with Kubernetes, but a good way to validate that NSX is working. I am using these networks alongside the vSphere 7 on Kubernetes deployment.\nSegment Name | Segment CIDR\nWeb | 192.168.0.1/24\nApp | 192.168.1.1/24\nDatabase | 192.168.2.1/24\nThese networks cannot be accessed from outside the overlay networks until you configure routing rules or a dynamic routing protocol. In my case, I updated my Layer 3 switch so that a route for 192.168.0.0/16 points to my Tier-0 Uplink IP Address, which was 10.10.201.10.\nIn your case, you can add static routes, or set up a routing protocol to automatically add these routes. You\u0026rsquo;ll have to decide for yourself the best method in your lab, with your equipment.\nSummary You\u0026rsquo;ve now deployed the Tier-0 router and connected your NSX-T backed Overlay segments to your physical network. You can begin using NSX-T for vSphere 7 on Kubernetes by following the next post, or whatever network segmentation/routing/stretched Layer 2 thing you can come up with. Good luck with your NSX-T labbing!\n","permalink":"https://theithollow.com/2020/07/14/tier-0-gateway/","summary":"\u003cp\u003eThis post will review the deployment and configuration of a Tier-0 gateway to provide north/south routing into the NSX-T overlay networks.\u003c/p\u003e\n\u003cp\u003eThe Tier-0 (T0) gateway is where we\u0026rsquo;ll finally connect our new NSX-T backed overlay segments to the physical network through an NSX-T Edge which was previously deployed.\u003c/p\u003e\n\u003cp\u003eThe Tier-0 gateway will connect directly to a physical VLAN and on the other side to our T1 router deployed in the previous post. From there, we should have all the plumbing we need to route to our hosts and begin using NSX-T to do some cooler stuff. In the end, the network topology will look something like this:\u003c/p\u003e","title":"Tier-0 Gateway"},{"content":"This post will focus on deploying our first NSX Gateway/Router and setting up our overlay segments. Before you can start these steps, the Edge nodes should be up and running so that they can support the Tier-1 gateways.\nNSX uses two types of routers/gateways. We\u0026rsquo;ll start by using a Tier-1 (T1) router. These routers are usually used to pass traffic between NSX overlay segments. We could create NSX segments without any routers, but it would require a router to pass traffic between these segments, so we will create a T1 router first.\nTier-1 Deployment To set up your first Tier-1 router, go to the Networking \u0026ndash;\u0026gt; Tier-1 Gateways page and click the ADD TIER-1 GATEWAY button. Give the router a name and select the edge cluster from the drop-down. Under route advertisement, enable All Static Routes, and All Connected Segments and Service Ports. For now, this is really all that needs to be done. We\u0026rsquo;ll revisit this at a later time.\nCreate NSX Overlay Segments Segments are layer 2 broadcast domains where we can run our virtual machines. When you create an NSX segment, a portgroup will be created on our VDS virtual switch and then be available for use within the vCenter environment for workloads.\nNOTE: creating the segments won\u0026rsquo;t immediately create portgroups in your vCenter. 
If nothing is attached to the segment (like a VM) then the portgroup won\u0026rsquo;t show up.\nWe\u0026rsquo;ll create three segments for our workloads.\nNOTE: You don\u0026rsquo;t need VLANs created on the physical network for these networks. These are on the overlay networks created and managed by NSX.\nWeb - 192.168.0.1/24 App - 192.168.1.1/24 Database - 192.168.2.1/24 Go to Networking \u0026ndash;\u0026gt; Segments and then click the ADD SEGMENT button to create a new segment. Fill out the name, and select the T1 router created earlier. Then select the Overlay dropdown and enter the Subnet CIDR for this segment. Then click Save.\nThats all there is to creating a new segment. Continue filling out the segments for each of the networks you\u0026rsquo;d like to create.\nAfter deploying these segments, I connected a virtual machine to Web and App segments to test connectivity between them. The result was a successful ping between VMs on different segments and in vCenter, you can see the portgroups for those new segments.\nNSX VLAN Segments We want to create one more segment for our future Tier-0 router to use to connect to our physical network. I\u0026rsquo;m naming my segment Uplink-Segment and it belongs to the VLAN-Zone transport zone.\nPay no attention to the connectivity drop down. It shows as required, but it isn\u0026rsquo;t because you haven\u0026rsquo;t deployed a T0 router yet to connect it to.\nFor the subnets, I\u0026rsquo;ve put in an address on my Edge-Uplinks portgroup. This was my 201 VLAN from the previous examples. The other important thing to note is the VLAN ID. You need to set this, but it should be set to 0 since tagging is done at the vSwitch level.\nSummary After setting up our Tier-1 router and the overlay segments, you should be able to deploy some virtual machines to those portgroups and have them communicate with each other. There is no North/South routing configured yet for your physical network to access the overlays. This will be covered more during the deployment of the Tier-0 router in the next post.\n","permalink":"https://theithollow.com/2020/07/14/tier-1-gateway-and-nsx-segments/","summary":"\u003cp\u003eThis post will focus on deploying our first NSX Gateway/Router and setting up our overlay segments. Before you can start these steps, the Edge nodes should be up and running so that they can support the Tier-1 gateways.\u003c/p\u003e\n\u003cp\u003eNSX uses two types of routers/gateways. We\u0026rsquo;ll start by using a Tier-1 (T1) router. These routers are usually used to pass traffic between NSX overlay segments. We could create NSX segments without any routers, but it would require a router to pass traffic between these segments so we will create a T1 router first.\u003c/p\u003e","title":"Tier-1 Gateway and NSX Segments"},{"content":"NSX-T Edge nodes are used for security and gateway services that can\u0026rsquo;t be run on the distributed routers in use by NSX-T. These edge nodes do things like North/South routing, load balancing, DHCP, VPN, NAT, etc. If you want to use Tier0 or Tier1 routers, you will need to have at least 1 edge node deployed. These edge nodes provide a place to run services like the Tier0 routes. When you first deploy an edge, its like an empty shell of a VM until these services are needed.\nIn my lab, I\u0026rsquo;m deploying the edge nodes to their own cluster. 
This is not a requirement for a lab, but a good recommendation for a production setup since traffic is usually funneled through these instances and they can become a network hot spot.\nBefore we do the deploy, let\u0026rsquo;s revisit this logical diagram for the edge node we\u0026rsquo;ll be deploying. I\u0026rsquo;ll be honest, this edge networking caused me fits until I realized that we need to extend the overlay networks from the ESXi hosts in our workload cluster, to the Edge Virtual Machine. You must add the Edge VM to the TEP network to participate in the overlay. Then, you will have a second VM interface that connects to a VLAN transport zone which will be the portgroup created on my Edge ESXi host virtual switch.\nTo wrap your head around the Edge VM networking, take a look at this page, and specifically the diagram found below. The Edge VM has a virtual switch inside it, and we\u0026rsquo;ll connect the edge vm uplinks to the Distributed virtual switch uplinks. The Edge VM will have three or more interfaces. 1 for Management, 1 for Overlay, and 1 for VLAN traffic to the physical network.\nIn the end, our overlay network will extend from the ESXi hosts to the Edge virtual machine. The edge virtual machine will have a path to the physical network.\nTo deploy the first edge node, go to the NSX Manager under System \u0026ndash;\u0026gt; Fabric \u0026ndash;\u0026gt; Nodes -\u0026gt; Edge Transport Nodes. Click the + ADD EDGE VM button.\nGive the edge vm a name, FQDN, and description before selecting a size. Sizing is critically important for a production environment. It has impacts on the number of load balancers that can be provisioned among other things.\nNOTE: The documentation states that you need the Large form factor for the Tanzu components. I was able to get it to come up with a Medium form factor, but could not deploy TKG clusters until I upgraded to Large.\nNext, select the compute manager, cluster, resource pool, and datastore for the edge node to be deployed on.\nIn the configure node settings box, give the VM an IP address on a management network. This is NOT in the data plane, but rather a way to communicate with the edge node. I\u0026rsquo;ve placed this VM on one of my existing management portgroups on my edge cluster. (VLAN 50 - Management)\nThe last screen is where the configuration really happens. This is where we create virtual switches in the Edge VM and connect them (through uplinks) to a physical Nic. We need to create two switches in this screen, these switches will not be visible in the ESXi hosts view, because they exist within the Edge virtual machine.\nWe need two switches created. One for the Overlay network which belongs on the TEP network along with our ESXi hosts in our workload cluster. And another for the VLAN backed network which is how the VM communicates on the physical network.\nOn the Configure NSX screen, click the +ADD SWITCH link twice so we can setup each switch. The first I\u0026rsquo;ve named nsx-vlan to represent the northbound physical network. I selected the VLAN-Zone transport zone (VLAN 201 - Edge Network) and the single nic profile which is an out-of-the-box profile. Under the teaming policy, I\u0026rsquo;ve selected my Edge Uplink portgroup that was already created on my ESXi DVS. This link IS within the data path.\nOn the second switch in the configuration I\u0026rsquo;ve added the Overlay-Zone transport zone, again with the single nic profile. 
Under Address assignment, select Use IP Pool and select the TEP pool that we also used on our workload ESXi hosts to add them to the overlay. Then select the uplink for the NSX-TEP network (VLAN 200 - NSX TEP).\nI know this piece was confusing to me, so if you\u0026rsquo;re stuck, take a look at the diagram below. The edge VM will be created with a pair of switches. The uplinks on those switches will be portgroups on the DVS. My lab layout is shown below for the edge VM.\nAfter finishing the configuration, an edge VM should be listed under your Edge Transport Nodes.\nCreate Edge Cluster Edge Nodes can (and in production environments should) be deployed in pairs. Groups of Edge nodes can then be pooled together into an Edge cluster. Thus far, we haven\u0026rsquo;t focused enough on High Availability because its a lab short of resources, but these routines should be modified slightly to provide this HA capability. This might include adding a second VLAN transport zone for a second physical switch etc. Edge clusters, although we won\u0026rsquo;t be using more than one in our example, are required.\nAdd your new Edge node to an edge cluster by navigating to System \u0026ndash;\u0026gt; Fabric \u0026ndash;\u0026gt; Nodes \u0026ndash;\u0026gt; Edge Transport Zones. Click the + ADD button to create a new cluster.\nGive the cluster a name and description. Then make sure your edge VM has been selected and moved to the right column.\nSummary I found that understanding the edge node routing was the most difficult piece to setting up NSX in the lab. Remember that we\u0026rsquo;re extending the Overlay transport zone from the workload ESXi host cluster to the Edge node VM. The Edge VM then has a second VLAN transport zone where traffic can be routed to the physical network. Stay tuned for the next post were we create some actual overlay networks that our VMs can use.\n","permalink":"https://theithollow.com/2020/07/14/deploy-nsx-t-edge-nodes/","summary":"\u003cp\u003eNSX-T Edge nodes are used for security and gateway services that can\u0026rsquo;t be run on the distributed routers in use by NSX-T. These edge nodes do things like North/South routing, load balancing, DHCP, VPN, NAT, etc. If you want to use \u003ccode\u003eTier0\u003c/code\u003e or \u003ccode\u003eTier1\u003c/code\u003e routers, you will need to have at least 1 edge node deployed. These edge nodes provide a place to run services like the Tier0 routes. When you first deploy an edge, its like an empty shell of a VM until these services are needed.\u003c/p\u003e","title":"Deploy NSX-T Edge Nodes"},{"content":"In the previous post we deployed an NSX Manager. Now it\u0026rsquo;s time to start configuring NSX so that we can build cool routes, firewall zones, segments, and all the other NSX goodies. And even if we don\u0026rsquo;t want to build some of these things, we\u0026rsquo;ll need this setup for vSphere 7 with Kubernetes.\nAdd an IP Pool The first thing we\u0026rsquo;ll setup is an IP Pool. As you might guess, an IP Pool is just a group of IP Addresses that we can use for things. Specifically, we\u0026rsquo;ll use these IP Addresses to assign Tunnel Endpoints (Called TEPs previously called VTEPs in NSX-V parlance) to each of our ESXi hosts that are participating in the NSX Overlay networks. The TEP becomes the point in which encapsulation and decapsulation takes place on each of the ESXi hosts. 
Think of it this way, when encapsulated traffic needs to be routed to a VM on a host, what IP Address do we need to send the traffic to, so that it can reach that VM. This is the TEP. We need to setup a TEP on each host, and the IP Addresses for these TEPs come from an IP Pool. Since I have three hosts, and expect to deploy 1 edge nodes, I\u0026rsquo;ll need a TEP Pool with at least 4 IP Addresses. Size your environment appropriately.\nThese IP Pools can be setup as DHCP or Static. Since this is a small lab we\u0026rsquo;ll walk through using static IP Addresses. To begin, navigate to the Networking tab and click IP Address Pools in the NSX Manager portal.\nClick the ADD IP ADDRESS POOL button and then give the new pool a name and a description. Then click the Set hyperlink under subnets.\nOn the subnets setup screen you can add your pool IPs. For this I\u0026rsquo;ll use a range of IP Addresses in my 10.10.200.0/24 network which is my VLAN 200 NSX_TEP network. Add your range and click Apply.\nCreate a Transport Zone Now it\u0026rsquo;s time to setup a Transport Zone. A Transport zone is a network that will, you guessed it, transport packets between nodes. And guess where those packets will land? Yep, on the Tunnel Endpoints. The way I think about a Transport zone is that it\u0026rsquo;s a grouping of hosts that are participating in NSX networks.\nWe have two types of Transport Zones. VLAN and Overlay.\nOverlay - These are the networks created by NSX and carry the encapsulated geneve tunnels. When you create a new NSX-T segment, the encapsulated packets are passed via this overlay transport zone.\nVLAN - These are networks backed by a VLAN used for communicating North/South with the physical network. These are commonly deployed on edge nodes so that the Overlay networks can route out to the physical network via an edge.\nTo give a graphical example of what we\u0026rsquo;re doing, see below.\nTo create our Transport Zones go to System \u0026ndash;\u0026gt; Fabric \u0026ndash;\u0026gt; Transport Zones in the NSX Manager UI. Click the + button to add a new transport zone. NOTE: there may be default zones created already. I\u0026rsquo;m ignoring those in my setup and creating my own.\nFirst I\u0026rsquo;ll create the Overlay Zone. Give it a name, description and a switch name. Then select the type of zone, in my case Overlay. Click Add.\nCreate another zone, but this time make it a VLAN zone.\nAt this point we have enough transport zones to continue. Lets build some Transport Node profiles.\nUplink Profiles Uplink Profiles give you a way to set your teaming policies, and uplinks for any of the transport nodes you\u0026rsquo;ll be creating. Since this is a lab, the default uplink profiles might not be the best fit. I\u0026rsquo;m using a single NIC which you should not do for a production environment so I\u0026rsquo;ll create a custom uplink profile.\nIn the NSX Manager go to System \u0026ndash;\u0026gt; Fabric \u0026ndash;\u0026gt; Profiles and then click the + under the Uplink Profiles page.\nYou can see from my screenshot below, that I\u0026rsquo;ve given it a name and added a single nic to the active uplinks. That nic is named vmnic1 which is the vmnic on my distributed switch. My overlay network is on VLAN 200 so the Transport VLAN field needs to be set to 200. Save your configuration.\nTransport Node Profile The transport node profile is used to provide configuration for each of the ESXi nodes. The profile specifies which NICs on the nodes need to be configured for the VDS switch. 
It also specifies the IP Addresses assigned for the TEP on this switch.\nNavigate to the System \u0026ndash;\u0026gt; Fabric \u0026ndash;\u0026gt; Profiles page. Then click +ADD.\nGive the profile a name, and description. Then click VDS as the switch type. You\u0026rsquo;ll then select your computer manager and the name of the VDS. Then select an uplink profile from the list of defaults. I\u0026rsquo;ve chosen the Overlay profile from above. Under IP Assignment select Use IP Pool and then under IP Pool, select the TEP Pool we created earlier.\nLastly, in the uplinks, you must specify an ESXi Physical NIC that the VDS switch will use as a physical uplink. In my lab I have Uplink1 for my uplink NIC on the VDS.\nNOTE: Each ESXi host could be configured differently if they are non-uniform. You would need to configure each node individually instead of as a full cluster. Transport Node Profiles make this a snap as long as you\u0026rsquo;ve got uniformed infrastructure (meaning the same vmnic is used on each ESXi host in your Transport Zone).\nWe\u0026rsquo;re prepared to configure our nodes now. Lets push the configurations down to the nodes to prep them for use.\nConfigure Transport Nodes Now that we\u0026rsquo;ve got a profile, we\u0026rsquo;ll go to System \u0026ndash;\u0026gt; Fabric \u0026ndash;\u0026gt; Nodes. Under the Managed by drop down, select your Compute Resource (vCenter). Click +ADD.\nYou should see your list of clusters. I\u0026rsquo;ve expanded HollowCluster cluster and you can see the nodes are not configured. Select the cluster that will be configured with your transport node profile created above.\nNOTE: You do not need to configure the edge ESXi host, just the workload nodes.\nClick the CONFIGURE NSX button.\nSelect the Transport Node Profile we created earlier and then click APPLY.\nNSX will soon be configuring things on your ESXi hosts.\nWhen complete you should see success listed next to each of the nodes in the cluster.\nSummary In this post we configured our nodes with our Transport zone and configured our profiles to configure the virtual switches and NICs on the ESXi nodes. In the next post we\u0026rsquo;ll move to our Edge Nodes.\n","permalink":"https://theithollow.com/2020/07/14/nsx-pools-zones-and-nodes-setup/","summary":"\u003cp\u003eIn the \u003ca href=\"/2020/07/14/nsx-pools-zones-and-nodes-setup/\"\u003eprevious post\u003c/a\u003e we deployed an NSX Manager. Now it\u0026rsquo;s time to start configuring NSX so that we can build cool routes, firewall zones, segments, and all the other NSX goodies. And even if we don\u0026rsquo;t want to build some of these things, we\u0026rsquo;ll need this setup for vSphere 7 with Kubernetes.\u003c/p\u003e\n\u003ch2 id=\"add-an-ip-pool\"\u003eAdd an IP Pool\u003c/h2\u003e\n\u003cp\u003eThe first thing we\u0026rsquo;ll setup is an IP Pool. As you might guess, an IP Pool is just a group of IP Addresses that we can use for things. Specifically, we\u0026rsquo;ll use these IP Addresses to assign Tunnel Endpoints (Called TEPs previously called VTEPs in NSX-V parlance) to each of our ESXi hosts that are participating in the NSX Overlay networks. The TEP becomes the point in which encapsulation and decapsulation takes place on each of the ESXi hosts. Think of it this way, when encapsulated traffic needs to be routed to a VM on a host, what IP Address do we need to send the traffic to, so that it can reach that VM. This is the TEP. We need to setup a TEP on each host, and the IP Addresses for these TEPs come from an IP Pool. 
Since I have three hosts, and expect to deploy 1 edge nodes, I\u0026rsquo;ll need a TEP Pool with at least 4 IP Addresses. Size your environment appropriately.\u003c/p\u003e","title":"NSX Pools, Zones, and Nodes Setup"},{"content":"This post will focus on getting the NSX-T Manager deployed and minimally configured in the lab. NSX-T is a pre-requisite for configuring vSphere 7 with Kubernetes as of the time of this writing.\nDeploy the NSX Manager The first step in our build is to deploy the NSX Manager from an OVA template into our lab. The NSX Manager is the brains of the solution and what you\u0026rsquo;ll be interacting with as a user. Each time you configure a route, segment, firewall rule, etc., you\u0026rsquo;ll be communicating with the NSX Manager. Download and deploy the OVA into your vSphere lab.\nAs you deploy the template you\u0026rsquo;ll need to specify the size of the deployment. This is important, but for a lab environment less so. I\u0026rsquo;ve found that the Small size works well for my lab and doesn\u0026rsquo;t take up too many resources.\nFill out the rest of the deployment information. The configurations that I used are listed below, customized for my lab environment. I deployed the NSX-T Manager in the management VLAN outlined in the previous post.\nHostname: nsx Rolename: NSX Manager NSX Site Name: HollowLab Default IPv4 Gateway: 10.10.50.254 Management Network IPv4 Address: 10.10.50.19 Management Network Netmask: 255.255.255.0 DNS Server list: 10.10.50.12, 10.10.50.9 Domain Search List: hollow.local NTP Server List: pool.ntp.org Enable SSH: no Allow root logins: no Finish the installation and when complete, power on the NSX vm that was just deployed.\nInitialize NSX Manager Once your NSX Manager appliance has been deployed and powered on, its time to do some basic initialization. The first thing you\u0026rsquo;ll do is open a web browser and navigate to the FQDN of your NSX Manager appliance you just deployed. Once you authenticate to the appliance using the credentials specified in your OVA deployment from above, you\u0026rsquo;ll probably see some pop-up screens asking you to accept a EULA, join the CEIP program, etc. Check the boxes and close any getting started windows. We don\u0026rsquo;t need that stuff. :)\nYou will also need to apply a license to your NSX Manager. Navigate to the System tab and click +ADD to add a license and fill out the details. vSphere 7 with Kubernetes requires a NSX-T Data Center Advanced or higher license to be applied.\nThe next step in setting up our lab involves connecting a compute manager. This is a fancy name for vCenter in our case. NSX-T will use this compute manager connection to query objects and create objects as necessary. To setup the computer manager, you\u0026rsquo;ll need a service account for NSX Manager to talk to vCenter. 
In my case, I\u0026rsquo;m using an administrative role (Remember this is a lab), but if you want to be specific about your permissions, and of course you should, you can apply the following privileges to a service account.\nExtension.Register extension\nExtension.Unregister extension\nExtension.Update extension\nSessions.Message\nSessions.Validate session\nSessions.View and stop sessions\nHost.Configuration.Maintenance\nHost.Configuration.NetworkConfiguration\nHost.Local Operations.Create virtual machine\nHost.Local Operations.Delete virtual machine\nHost.Local Operations.Reconfigure virtual machine\nTasks\nScheduled task\nGlobal.Cancel task\nPermissions.Reassign role permissions\nResource.Assign vApp to resource pool\nResource.Assign virtual machine to resource pool\nVirtual Machine.Configuration\nVirtual Machine.Guest Operations\nVirtual Machine.Provisioning\nVirtual Machine.Inventory\nNetwork.Assign network\nvApp\nFrom within the NSX Manager console, go to System \u0026ndash;\u0026gt; Fabric \u0026ndash;\u0026gt; Compute Managers and click +ADD.\nIn the next screen, enter your vCenter information and login credentials. Then click the Add button. When you do this for the first time you\u0026rsquo;ll be presented with a SHA-256 thumbprint and you\u0026rsquo;ll need to accept that it\u0026rsquo;s valid before continuing. Lastly, click the \u0026ldquo;Enable Trust\u0026rdquo; button so that it\u0026rsquo;s in the Yes position. This last step is important as it allows NSX to trust vCenter for authentication.\nYou will be asked to add a thumbprint. Click Add. When you\u0026rsquo;re done you\u0026rsquo;ll have a vCenter configured and registered as a compute manager.\nSummary In this post we deployed the NSX Manager, which is the brains of the NSX-T product, and have configured licenses and connected it to our vCenter server. In the next post we\u0026rsquo;ll start configuring NSX-T so that we can start routing some traffic to some virtual machines.\n","permalink":"https://theithollow.com/2020/07/14/nsx-installation/","summary":"\u003cp\u003eThis post will focus on getting the NSX-T Manager deployed and minimally configured in the lab. NSX-T is a pre-requisite for configuring vSphere 7 with Kubernetes as of the time of this writing.\u003c/p\u003e\n\u003ch2 id=\"deploy-the-nsx-manager\"\u003eDeploy the NSX Manager\u003c/h2\u003e\n\u003cp\u003eThe first step in our build is to deploy the NSX Manager from an OVA template into our lab. The NSX Manager is the brains of the solution and what you\u0026rsquo;ll be interacting with as a user. Each time you configure a route, segment, firewall rule, etc., you\u0026rsquo;ll be communicating with the NSX Manager. Download and deploy the OVA into your vSphere lab.\u003c/p\u003e","title":"NSX Installation"},{"content":"Hey! Who deployed this container in our shared Kubernetes cluster without putting resource limits on it? Why don\u0026rsquo;t we have any labels on these containers so we can report for chargeback purposes? Who allowed this image to be used in our production cluster?\nIf any of the questions above sound familiar, it\u0026rsquo;s probably time to learn about Validating Admission Controllers.\nValidating Admission Controllers - The Theory Admission Controllers are used as roadblocks before objects are deployed to a Kubernetes cluster. The examples from the section above are common rules that companies might want to enforce before objects get pushed into a production Kubernetes cluster. 
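As a quick aside, if you want to see which admission webhooks are already registered in a cluster you have access to, two standard kubectl commands list them (nothing here is specific to this post's example):
# List validating and mutating webhook configurations registered with the API server
kubectl get validatingwebhookconfigurations
kubectl get mutatingwebhookconfigurations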
These admission controllers can be from custom code that you\u0026rsquo;ve written yourself, or a third party admission controller. A common open-source project that manages admission control rules is Open Policy Agent (OPA).\nA generalized request flow for new objects starts with Authenticating with the API, then being authorized, and then optionally hitting an admission controller, before finally being committed to the etcd store.\nThe great part about an admission controller is that you\u0026rsquo;ve got an opportunity to put our own custom logic in as a gate for API calls. This can be done in two forms.\nValidatingAdmissionController - This type of admission controller returns a binary result. Essentially, this means either yes the object is permitted or no the object is not permittetd.\nMutatingAdmissionController - This type of admission controller has the option of modifying the API call and replacing it with a different version.\nAs an example, our example requirement that a label needs to be added to all pods, could be handled by either of these methods. A validating admission controller would allow or deny a pod from being deployed without the specified tag. Meanwhile, in a mutating admission controller, we could add a tag if one was missing, and then approve it. This post focuses on building a validating admission controller.\nOnce the API request hits our admission controller webhook, it will make a REST call to the admission controller. Next, the admission controller will apply some logic on the Request, and then make a response call back to the API server with the results.\nThe api request to the admission controller should look similar to the request below. This was lifted directly from the v1.18 Kubernetes documentation site.\n{ \u0026#34;apiVersion\u0026#34;: \u0026#34;admission.k8s.io/v1\u0026#34;, \u0026#34;kind\u0026#34;: \u0026#34;AdmissionReview\u0026#34;, \u0026#34;request\u0026#34;: { # Random uid uniquely identifying this admission call \u0026#34;uid\u0026#34;: \u0026#34;705ab4f5-6393-11e8-b7cc-42010a800002\u0026#34;, # Fully-qualified group/version/kind of the incoming object \u0026#34;kind\u0026#34;: {\u0026#34;group\u0026#34;:\u0026#34;autoscaling\u0026#34;,\u0026#34;version\u0026#34;:\u0026#34;v1\u0026#34;,\u0026#34;kind\u0026#34;:\u0026#34;Scale\u0026#34;}, # Fully-qualified group/version/kind of the resource being modified \u0026#34;resource\u0026#34;: {\u0026#34;group\u0026#34;:\u0026#34;apps\u0026#34;,\u0026#34;version\u0026#34;:\u0026#34;v1\u0026#34;,\u0026#34;resource\u0026#34;:\u0026#34;deployments\u0026#34;}, # subresource, if the request is to a subresource \u0026#34;subResource\u0026#34;: \u0026#34;scale\u0026#34;, # Fully-qualified group/version/kind of the incoming object in the original request to the API server. # This only differs from `kind` if the webhook specified `matchPolicy: Equivalent` and the # original request to the API server was converted to a version the webhook registered for. \u0026#34;requestKind\u0026#34;: {\u0026#34;group\u0026#34;:\u0026#34;autoscaling\u0026#34;,\u0026#34;version\u0026#34;:\u0026#34;v1\u0026#34;,\u0026#34;kind\u0026#34;:\u0026#34;Scale\u0026#34;}, # Fully-qualified group/version/kind of the resource being modified in the original request to the API server. # This only differs from `resource` if the webhook specified `matchPolicy: Equivalent` and the # original request to the API server was converted to a version the webhook registered for. 
\u0026#34;requestResource\u0026#34;: {\u0026#34;group\u0026#34;:\u0026#34;apps\u0026#34;,\u0026#34;version\u0026#34;:\u0026#34;v1\u0026#34;,\u0026#34;resource\u0026#34;:\u0026#34;deployments\u0026#34;}, # subresource, if the request is to a subresource # This only differs from `subResource` if the webhook specified `matchPolicy: Equivalent` and the # original request to the API server was converted to a version the webhook registered for. \u0026#34;requestSubResource\u0026#34;: \u0026#34;scale\u0026#34;, # Name of the resource being modified \u0026#34;name\u0026#34;: \u0026#34;my-deployment\u0026#34;, # Namespace of the resource being modified, if the resource is namespaced (or is a Namespace object) \u0026#34;namespace\u0026#34;: \u0026#34;my-namespace\u0026#34;, # operation can be CREATE, UPDATE, DELETE, or CONNECT \u0026#34;operation\u0026#34;: \u0026#34;UPDATE\u0026#34;, \u0026#34;userInfo\u0026#34;: { # Username of the authenticated user making the request to the API server \u0026#34;username\u0026#34;: \u0026#34;admin\u0026#34;, # UID of the authenticated user making the request to the API server \u0026#34;uid\u0026#34;: \u0026#34;014fbff9a07c\u0026#34;, # Group memberships of the authenticated user making the request to the API server \u0026#34;groups\u0026#34;: [\u0026#34;system:authenticated\u0026#34;,\u0026#34;my-admin-group\u0026#34;], # Arbitrary extra info associated with the user making the request to the API server. # This is populated by the API server authentication layer and should be included # if any SubjectAccessReview checks are performed by the webhook. \u0026#34;extra\u0026#34;: { \u0026#34;some-key\u0026#34;:[\u0026#34;some-value1\u0026#34;, \u0026#34;some-value2\u0026#34;] } }, # object is the new object being admitted. # It is null for DELETE operations. \u0026#34;object\u0026#34;: {\u0026#34;apiVersion\u0026#34;:\u0026#34;autoscaling/v1\u0026#34;,\u0026#34;kind\u0026#34;:\u0026#34;Scale\u0026#34;,...}, # oldObject is the existing object. # It is null for CREATE and CONNECT operations. \u0026#34;oldObject\u0026#34;: {\u0026#34;apiVersion\u0026#34;:\u0026#34;autoscaling/v1\u0026#34;,\u0026#34;kind\u0026#34;:\u0026#34;Scale\u0026#34;,...}, # options contains the options for the operation being admitted, like meta.k8s.io/v1 CreateOptions, UpdateOptions, or DeleteOptions. # It is null for CONNECT operations. \u0026#34;options\u0026#34;: {\u0026#34;apiVersion\u0026#34;:\u0026#34;meta.k8s.io/v1\u0026#34;,\u0026#34;kind\u0026#34;:\u0026#34;UpdateOptions\u0026#34;,...}, # dryRun indicates the API request is running in dry run mode and will not be persisted. # Webhooks with side effects should avoid actuating those side effects when dryRun is true. # See http://k8s.io/docs/reference/using-api/api-concepts/#make-a-dry-run-request for more details. \u0026#34;dryRun\u0026#34;: false } } The response REST call back to the API server should look similar to the response below. The uid must match the uid from the request in v1 of the admission.k8s.io api. 
The allowed field is true for permitting the request to go through and false if it should be denied.\n{ \u0026#34;apiVersion\u0026#34;: \u0026#34;admission.k8s.io/v1\u0026#34;, \u0026#34;kind\u0026#34;: \u0026#34;AdmissionReview\u0026#34;, \u0026#34;response\u0026#34;: { \u0026#34;uid\u0026#34;: \u0026#34;\u0026lt;value from request.uid\u0026gt;\u0026#34;, \u0026#34;allowed\u0026#34;: true/false } } Validating Admission Controllers - In Action Now, let\u0026rsquo;s build our own custom validating admission controller in python using a Flask API. The goal of this controller is to ensure that all deployments and pods have a \u0026ldquo;Billing\u0026rdquo; label added. If you\u0026rsquo;d like to get a headstart with the code presented in this post, you may want to pull down the github repo: https://github.com/theITHollow/warden\nWe\u0026rsquo;ll assume that we\u0026rsquo;re doing some chargeback/showback to our customers and we need this label on everything or we can\u0026rsquo;t identify what customer it belongs to, and we can\u0026rsquo;t bill them.\nCreate the Admission Controller API The first step we\u0026rsquo;ll go through is to build our flask API and put in our custom logic. If you look in the flask API example below, I\u0026rsquo;m handling an incoming request to the /validate URI and checking the metadata of the object for a \u0026ldquo;billing\u0026rdquo; label. I\u0026rsquo;m also capturing the uid of the request so I can pass that back in the response. Also, be sure to provide a message so that the person requesting the object gets feedback about why the operation was not permitted. Then depending on the label being found, a response is created with an allowed value of True or False.\nThis is a very simple example, but once you\u0026rsquo;ve set this up, use your own custom logic.\nfrom flask import Flask, request, jsonify warden = Flask(__name__) #POST route for Admission Controller @warden.route(\u0026#39;/validate\u0026#39;, methods=[\u0026#39;POST\u0026#39;]) #Admission Control Logic def deployment_webhook(): request_info = request.get_json() uid = request_info[\u0026#34;request\u0026#34;].get(\u0026#34;uid\u0026#34;) try: if request_info[\u0026#34;request\u0026#34;][\u0026#34;object\u0026#34;][\u0026#34;metadata\u0026#34;][\u0026#34;labels\u0026#34;].get(\u0026#34;billing\u0026#34;): #Send response back to controller if validations succeeds return k8s_response(True, uid, \u0026#34;Billing label exists\u0026#34;) except: return k8s_response(False, uid, \u0026#34;No labels exist. A Billing label is required\u0026#34;) #Send response back to controller if failed return k8s_response(False, uid, \u0026#34;Not allowed without a billing label\u0026#34;) #Function to respond back to the Admission Controller def k8s_response(allowed, uid, message): return jsonify({\u0026#34;apiVersion\u0026#34;: \u0026#34;admission.k8s.io/v1\u0026#34;, \u0026#34;kind\u0026#34;: \u0026#34;AdmissionReview\u0026#34;, \u0026#34;response\u0026#34;: {\u0026#34;allowed\u0026#34;: allowed, \u0026#34;uid\u0026#34;: uid, \u0026#34;status\u0026#34;: {\u0026#34;message\u0026#34;: message}}}) if __name__ == \u0026#39;__main__\u0026#39;: warden.run(ssl_context=(\u0026#39;certs/wardencrt.pem\u0026#39;, \u0026#39;certs/wardenkey.pem\u0026#39;),debug=True, host=\u0026#39;0.0.0.0\u0026#39;) One of the requirements for an admission controller is that it is protected by certificates. So, let\u0026rsquo;s go create some certificates. 
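Once those certificates exist and the app is running, you can sanity-check the /validate logic locally before pointing a cluster at it. This is just a rough sketch using curl; the -k flag skips verification of the self-signed CA, and the uid and label values are made-up test data:
curl -k -X POST https://localhost:5000/validate \
  -H 'Content-Type: application/json' \
  -d '{"request": {"uid": "test-123", "object": {"metadata": {"labels": {"billing": "customer-a"}}}}}'
A response with "allowed": true means the label check passed; drop the billing label from the payload and it should flip to "allowed": false with the message from the code above. With that in mind, on to the certificates.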
I\u0026rsquo;ve created a script to generate these certificates for us and it\u0026rsquo;s stored in the git repository. The CN is important here, so it should match the DNS name of your admission controller.\nNOTE: As of Kubernetes version 1.19, SAN certificates are required. If this will be deployed on 1.19 or higher, you must create a SAN certificate. The github repository has been updated with an ext.cnf file, and the script will generate a SAN certificate now.\nSince I\u0026rsquo;ll be deploying this as a container within k8s, the service name exposing it is warden and the namespace it lives in will be validation. You should modify the script and the ext.cnf file before using them yourself.\nkeydir=\u0026#34;certs\u0026#34; cd \u0026#34;$keydir\u0026#34; # CA root key openssl genrsa -out ca.key 4096 #Create and sign the Root CA openssl req -x509 -new -nodes -key ca.key -sha256 -days 1024 -out ca.crt -subj \u0026#34;/CN=Warden Controller Webhook\u0026#34; #Create certificate key openssl genrsa -out warden.key 2048 #Create CSR openssl req -new -sha256 \\ -key warden.key \\ -config ../ext.cnf \\ -out warden.csr #Generate the certificate openssl x509 -req -in warden.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out warden.crt -days 500 -sha256 -extfile ../ext.cnf -extensions req_ext # Create .pem versions cp warden.crt wardencrt.pem \\ | cp warden.key wardenkey.pem The associated ext.cnf file is shown below. Notice the alt_names section, which must match the name of your admission controller service. Also, feel free to update the country, locality, and organization to match your environment.\n[ req ] default_bits = 2048 distinguished_name = req_distinguished_name req_extensions = req_ext prompt = no [ req_distinguished_name ] countryName = US stateOrProvinceName = Illinois localityName = Chicago organizationName = HollowLabs commonName = Warden Controller Webhook [ req_ext ] subjectAltName = @alt_names [alt_names] DNS.1 = warden.validation.svc You\u0026rsquo;ll notice that the Flask code is using these certificates, and if you don\u0026rsquo;t change the names or locations in the script, it should just work. If you made modifications, you will need to update the last line of the Python code.\nwarden.run(ssl_context=('certs/wardencrt.pem', 'certs/wardenkey.pem'),debug=True, host='0.0.0.0')\nBuild and Deploy the Admission Controller It\u0026rsquo;s time to build a container for this pod to run in. My Dockerfile is listed below as well as in the git repo.\nFROM ubuntu:16.04 RUN apt-get update -y \u0026amp;\u0026amp; \\ apt-get install -y python-pip python-dev # We copy just the requirements.txt first to leverage Docker cache COPY ./requirements.txt /app/requirements.txt WORKDIR /app RUN pip install -r requirements.txt COPY . 
/app ENTRYPOINT [ \u0026#34;python\u0026#34; ] CMD [ \u0026#34;app/warden.py\u0026#34; ] Push your image to your image registry in preparation for deployment to your Kubernetes cluster.\nDeploy the admission controller with the following Kubernetes manifest, after changing the name of your image.\n--- apiVersion: v1 kind: Namespace metadata: name: validation --- apiVersion: v1 kind: Pod metadata: name: warden labels: app: warden namespace: validation spec: restartPolicy: OnFailure containers: - name: warden image: theithollow/warden:v1 #EXAMPLE- USE YOUR REPO imagePullPolicy: Always --- apiVersion: v1 kind: Service metadata: name: warden namespace: validation spec: selector: app: warden ports: - port: 443 targetPort: 5000 Deploy the Webhook Configuration The admission controller has been deployed and is waiting for some requests to come in. Now, we need to deploy the webhook configuration that tells the Kubernetes API to check with the admission controller that we just deployed.\nThe webhook configuration needs to know some information about what types of objects it\u0026rsquo;s going to make these REST calls for, as well as the URI to send them to. The clientConfig section contains info about where to make the API call. Within the rules section, you\u0026rsquo;ll define what apiGroups, resources, versions and operations will trigger the requests. This will seem similar to RBAC policies. Also note the failurePolicy, which defines what happens to object requests if the admission controller is unreachable.\n--- apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: name: validating-webhook namespace: validation webhooks: - name: warden.validation.svc failurePolicy: Fail sideEffects: None admissionReviewVersions: [\u0026#34;v1\u0026#34;,\u0026#34;v1beta1\u0026#34;] rules: - apiGroups: [\u0026#34;apps\u0026#34;, \u0026#34;\u0026#34;] resources: - \u0026#34;deployments\u0026#34; - \u0026#34;pods\u0026#34; apiVersions: - \u0026#34;*\u0026#34; operations: - CREATE clientConfig: service: name: warden namespace: validation path: /validate/ caBundle: #See command below The last piece of the config is the base64-encoded version of the CA certificate. If you used the script in the git repo, the command below will print the caBundle information for the manifest.\ncat certs/ca.crt | base64\nDeploy the webhook. kubectl apply -f [manifest].yaml\nTest the Results There are three manifests in the /test-pods folder that can be used to test with.\ntest1.yaml - Should work with the admission controller because it has a proper billing label.\ntest2.yaml - Should fail, because there are no labels assigned.\ntest3.yaml - Should fail, because while it does have a label, it does not have a billing label.\nNotice that each of these tests provided a different response. These responses can be customized so that you can give good feedback on why an operation was not permitted.\n","permalink":"https://theithollow.com/2020/05/26/kubernetes-validating-admission-controllers/","summary":"\u003cp\u003eHey! Who deployed this container in our shared Kubernetes cluster without putting resource limits on it? Why don\u0026rsquo;t we have any labels on these containers so we can report for charge back purposes? 
Who allowed this image to be used in our production cluster?\u003c/p\u003e\n\u003cp\u003eIf any of the questions above sound familiar, its probably time to learn about Validating Admission Controllers.\u003c/p\u003e\n\u003ch2 id=\"validating-admission-controllers---the-theory\"\u003eValidating Admission Controllers - The Theory\u003c/h2\u003e\n\u003cp\u003eAdmission Controllers are used as a roadblocks before objects are deployed to a Kubernetes cluster. The examples from the section above are common rules that companies might want to enforce before objects get pushed into a production Kubernetes cluster. These admission controllers can be from custom code that you\u0026rsquo;ve written yourself, or a third party admission controller. A common open-source project that manages admission control rules is \u003ca href=\"https://www.openpolicyagent.org/\"\u003eOpen Policy Agent (OPA)\u003c/a\u003e.\u003c/p\u003e","title":"Kubernetes Validating Admission Controllers"},{"content":"Just because a container is in a running state, does not mean that the process running within that container is functional. We can use Kubernetes Readiness and Liveness probes to determine whether an application is ready to receive traffic or not.\nLiveness and Readiness Probes - The Theory On each node of a Kubernetes cluster there is a Kubelet running which manages the pods on that particular node. Its responsible for getting images pulled down to the node, reporting the node\u0026rsquo;s health, and restarting failed containers. But how does the Kubelet know if there is a failed container?\nWell, it can use the notion of probes to check on the status of a container. Specifically a liveness probe.\nLiveness probes indicate if a container is running. Meaning, has the application within the container started running and is it still running? If you\u0026rsquo;ve configured liveness probes for your containers, you\u0026rsquo;ve probably still seen them in action. When a container gets restarted, it\u0026rsquo;s generally because of a liveness probe failing. This can happen if your container couldn\u0026rsquo;t startup, or if the application within the container crashed. The Kubelet will restart the container because the liveness probe is failing in those circumstances. In some circumstances though, the application within the container is not working, but hasn\u0026rsquo;t crashed. In that case, the container won\u0026rsquo;t restart unless you provide additional information as a liveness probe.\nA readiness probe indicates if the application running inside the container is \u0026ldquo;ready\u0026rdquo; to serve requests. As an example, assume you have an application that starts but needs to check on other services like a backend database before finishing its configuration. Or an application that needs to download some data before it\u0026rsquo;s ready to handle requests. A readiness probe tells the Kubelet that the application can now perform its function and that the Kubelet can start sending it traffic.\nThere are three different ways these probes can be checked.\nExecAction: Execute a command within the container TCPSocketAction: TCP check against the container\u0026rsquo;s IP/port HTTPGetAction: An HTTP Get request against the container\u0026rsquo;s IP/Port Let\u0026rsquo;s look at the two probes in the context of a container starting up. The diagram below shows several states of the same container over time. 
We have a view into the containers to see whats going on with the application with relationship to the probes.\nOn the left side, the pod has just been deployed. A liveness probe performed at TCPSocketAction and found that the pod is \u0026ldquo;alive\u0026rdquo; even though the application is still doing work (loading data, etc) and isn\u0026rsquo;t ready yet. As time moves on, the application finishes its startup routine and is now \u0026ldquo;ready\u0026rdquo; to serve incoming traffic.\nLet\u0026rsquo;s take a look at this from a different perspective. Assume we have a deployment already in our cluster, and it consists of a single replica which is displayed on the right side, behind our service. Its likely that we\u0026rsquo;ll need to scale the app, or replace it with another version. Now that we know our app isn\u0026rsquo;t ready to handle traffic right away after being started, we can wait to have our service add the new app to the list of endpoints until the application is \u0026ldquo;ready\u0026rdquo;. This is an important thing to consider if your apps aren\u0026rsquo;t ready as soon as the container starts up. A request could be sent to the container before its able to handle the request.\nLiveness and Readiness Probes - In Action First, we\u0026rsquo;ll look to see what happens with a readiness check. For this example, I\u0026rsquo;ve got a very simple apache container that displays pretty elaborate website. I\u0026rsquo;ve created a yaml manifest to deploy the container, service, and ingress rule.\napiVersion: v1 kind: Pod metadata: labels: app: liveness name: liveness-http spec: containers: - name: liveness image: theithollow/hollowapp-blog:liveness livenessProbe: httpGet: path: / port: 80 initialDelaySeconds: 3 periodSeconds: 3 readinessProbe: httpGet: path: /health port: 80 initialDelaySeconds: 3 periodSeconds: 3 --- apiVersion: v1 kind: Service metadata: name: liveness spec: selector: app: liveness ports: - protocol: TCP port: 80 targetPort: 80 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: liveness-ingress namespace: default spec: rules: - host: liveness.theithollowlab.com http: paths: - backend: serviceName: liveness servicePort: 80 This manifest includes two probes:\nLiveness check doing an HTTP request against \u0026ldquo;/\u0026rdquo; Readiness check doing an HTTP request agains /health livenessProbe: httpGet: path: / port: 80 initialDelaySeconds: 3 periodSeconds: 3 readinessProbe: httpGet: path: /health port: 80 initialDelaySeconds: 3 periodSeconds: 3 My container uses a script to start the HTTP daemon right away, and then waits 60 seconds before creating a /health page. This is to simulate some work being done by the application and the app isn\u0026rsquo;t ready for consumption. This is the entire website for reference.\nAnd here is my container script.\n/usr/sbin/httpd \u0026gt; /dev/null 2\u0026gt;\u0026amp;1 \u0026amp;. #Start HTTP Daemon sleep 60. #wait 60 seconds echo HealthStatus \u0026gt; /var/www/html/health #Create Health status page sleep 3600 Deploy the manifest through kubectl apply. Once deployed, I\u0026rsquo;ve run a --watch command to keep an eye on the deployment. Here\u0026rsquo;s what it looked like.\nYou\u0026rsquo;ll notice that the ready status showed 0/1 for about 60 seconds. Meaning that my container was not in a ready status for 60 seconds until the /health page became available through the startup script.\nAs a silly example, what if we modified our liveness probe to look for /health? 
Perhaps we have an application that sometimes stops working, but doesn\u0026rsquo;t crash. Will the application ever startup? Here\u0026rsquo;s my new probe in the yaml manifest.\nlivenessProbe: httpGet: path: /health port: 80 initialDelaySeconds: 3 periodSeconds: 3 After deploying this, let\u0026rsquo;s run another --watch on the pods. Here we see that the pod is restarting, and I am unable to ever access the /health page because it restarts before its ready.\nWe can see that the liveness probe is failing if we run a describe on the pod.\n","permalink":"https://theithollow.com/2020/05/18/kubernetes-liveness-and-readiness-probes/","summary":"\u003cp\u003eJust because a container is in a running state, does not mean that the process running within that container is functional. We can use Kubernetes Readiness and Liveness probes to determine whether an application is ready to receive traffic or not.\u003c/p\u003e\n\u003ch2 id=\"liveness-and-readiness-probes---the-theory\"\u003eLiveness and Readiness Probes - The Theory\u003c/h2\u003e\n\u003cp\u003eOn each node of a Kubernetes cluster there is a Kubelet running which manages the pods on that particular node. Its responsible for getting images pulled down to the node, reporting the node\u0026rsquo;s health, and restarting failed containers. But how does the Kubelet know if there is a failed container?\u003c/p\u003e","title":"Kubernetes Liveness and Readiness Probes"},{"content":"You\u0026rsquo;ve built your Kubernetes cluster(s). You\u0026rsquo;ve built your apps in containers. You\u0026rsquo;ve architected your services so that losing a single instance doesn\u0026rsquo;t cause an outage. And you\u0026rsquo;re ready for cloud scale. You deploy your application and are waiting to sit back and \u0026ldquo;profit.\u0026rdquo;\nWhen your application spins up and starts taking on load, you are able to change the number of replicas to handle the additional load, but what about the promises of cloud and scaling? Wouldn\u0026rsquo;t it be better to deploy the application and let the platform scale the application automatically?\nLuckily Kubernetes can do this for you as long as you\u0026rsquo;ve setup the correct guardrails to protect your cluster\u0026rsquo;s health. Kubernetes uses the Horizontal Pod Autoscaler (HPA) to determine if pods need more or less replicas without a user intervening.\nHorizontal Pod Auto-scaling - The Theory In a conformant Kubernetes cluster you have the option of using the Horizontal Pod Autoscaler to automatically scale your applications out or in based on a Kubernetes metric. Obviously this means that before we can scale an application, the metrics for that application will have to be available. For this, we need to ensure that the Kubernetes metrics server is deployed and configured.\nThe Horizontal Pod Autoscaler is implemented as a control loop, checking on the metrics every fifteen seconds by default and then making decisions about whether to scale a deployment or replica-set.\nThe scaling algorithm determines how many pods should be configured based on the current and desired state values. The actual algorithm is shown below.\ndesiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )] If you haven\u0026rsquo;t heard of Horizontal Pod Autoscaler until now, you\u0026rsquo;re probably thinking that this feature is awesome and you can\u0026rsquo;t wait to get this in your cluster for better load balancing. 
I mean, isn\u0026rsquo;t capacity planning for your app much easier when you say, \u0026ldquo;I\u0026rsquo;ll just start small and scale under load\u0026rdquo;? And you\u0026rsquo;re right in some sense. But consider carefully whether this makes sense in your production clusters. If you have dozens of apps all autoscaling, the total capacity of the cluster can get chewed up if you haven\u0026rsquo;t put the right restrictions on pods. This is a great time to revisit requests and limits on your pods, as well as setting autoscale limits. Don\u0026rsquo;t make your new autoscaling app a giant \u0026ldquo;noisy neighbor\u0026rdquo; for all the other apps in the cluster.\nHorizontal Pod Auto-scaling - In Action Prerequisites Before you can use the Horizontal Pod Autoscaler, you\u0026rsquo;ll have to have a few things in place.\nHealthy/Conformant Kubernetes Cluster Kubernetes Metrics Server in place and serving metrics A stateless replica-set so that it can be scaled Example Application To show how the HPA works, we\u0026rsquo;ll scale out a simple Flask app that I\u0026rsquo;ve used in several other posts. This web server will start with a single container and scale up/down based on load. To generate load, we\u0026rsquo;ll use busybox with a wget loop to generate traffic to the web server.\nFirst, deploy our web server. I\u0026rsquo;m re-using the hollowapp Flask image and setting a 100m CPU limit with the imperative command:\nkubectl run hollowapp --image=theithollow/hollowapp:allin1-v2 --limits=cpu=100m In this case, we\u0026rsquo;ll assume that we want to add another replica to our replica-set/deployment anytime a pod reaches 20% CPU utilization. This can be done by using the command:\nkubectl autoscale deployment [deployment name] --cpu-percent=20 --min=1 --max=10 In my lab, I\u0026rsquo;ve deployed the Flask container deployment, and you can see that there is virtually no load on the pod when I run the kubectl top pods command.\nBefore we start scaling, we can look at the current HPA object by running:\nkubectl get hpa In the above screenshot, you can see that there is an HPA object with a target of 20% and the current value is 1%.\nNow we\u0026rsquo;re ready to scale the app. I\u0026rsquo;ve used busybox to generate load to this pod. As the load increases, the CPU usage should also increase. When it hits 20% CPU utilization, the pods will scale out to try to keep this load under 20%. (Using the algorithm from earlier, a single replica averaging 80% CPU against the 20% target works out to ceil[1 * (80 / 20)] = 4 replicas.)\nIn the gif below there are three screens. The top left consists of a script to generate load to my web containers. There isn\u0026rsquo;t much useful info there, so ignore it. If you want to generate your own load, deploy a container in interactive mode and run:\nwhile true; do wget -q -O- http://[service name] \u0026gt; /dev/null 2\u0026gt;\u0026amp;1; done The upper right shows a --watch on the hpa object. Over time, you\u0026rsquo;ll see that the HPA object updates with the current load. If the CPU load is greater than 20%, you\u0026rsquo;ll then see HPA scale out the number of pods. This is displayed in the bottom shell window, which is running a --watch on the pods.\nAfter the number of pods hits 10, it no longer scales, since that\u0026rsquo;s the maximum number of pods we allowed in the autoscale command. I should also note that after I stopped generating load, it took several minutes before my pods started to scale back down, but they will scale back down automatically.\nSummary The Kubernetes Horizontal Pod Autoscaler is a really nice feature to let you scale your app when under load. 
It\u0026rsquo;s not always easy to know how many resources your app will need for a production environment, and scaling as you need it can be a nice fix for this. It also lets you give resources back when you\u0026rsquo;re not using them. But be careful not to let your scaling get out of control and use up all the resources in your cluster. Good luck!\n","permalink":"https://theithollow.com/2020/05/04/kubernetes-pod-auto-scaling/","summary":"\u003cp\u003eYou\u0026rsquo;ve built your Kubernetes cluster(s). You\u0026rsquo;ve built your apps in containers. You\u0026rsquo;ve architected your services so that losing a single instance doesn\u0026rsquo;t cause an outage. And you\u0026rsquo;re ready for cloud scale. You deploy your application and are waiting to sit back and \u0026ldquo;profit.\u0026rdquo;\u003c/p\u003e\n\u003cp\u003eWhen your application spins up and starts taking on load, you are able to change the number of replicas to handle the additional load, but what about the promises of cloud and scaling? Wouldn\u0026rsquo;t it be better to deploy the application and let the platform scale the application automatically?\u003c/p\u003e","title":"Kubernetes Pod Auto-scaling"},{"content":"Containerizing applications and running them on Kubernetes doesn\u0026rsquo;t mean we can forget all about resource utilization. Our thought process may have changed because we can much more easily scale-out our application as demand increases, but many times we need to consider how our containers might fight with each other for resources. Resource Requests and Limits can be used to help stop the \u0026ldquo;noisy neighbor\u0026rdquo; problem in a Kubernetes Cluster.\nResource Requests and Limits - The Theory Kubernetes uses the concept of a \u0026ldquo;Resource Request\u0026rdquo; and a \u0026ldquo;Resource Limit\u0026rdquo; when defining how many resources a container within a pod should receive. Let\u0026rsquo;s look at each of these topics on their own, starting with resource requests.\nResource Requests To put things simply, a resource request specifies the minimum amount of resources a container needs to successfully run. Thought of in another way, this is a guarantee from Kubernetes that you\u0026rsquo;ll always have this amount of either CPU or memory allocated to the container.\nWhy would you worry about the minimum amount of resources guaranteed to a pod? Well, it\u0026rsquo;s to help prevent one container from using up all the node\u0026rsquo;s resources and starving the other containers of CPU or memory. For instance, if I had two containers on a node, one container could request 100% of that node\u0026rsquo;s processor. Meanwhile, the other container would likely not be working very well because the processor is being monopolized by its \u0026ldquo;noisy neighbor\u0026rdquo;.\nWhat a resource request can do is ensure that at least a small part of that processor\u0026rsquo;s time is reserved for both containers. This way, if there is resource contention, each pod will have a guaranteed minimum amount of resources with which to keep functioning.\nResource Limits As you might guess, a resource limit is the maximum amount of CPU or memory that can be used by a container. 
The limit represents the upper bound of how much CPU or memory a container within a pod can consume in a Kubernetes cluster, regardless of whether or not the cluster is under resource contention.\nLimits prevent containers from taking up more resources on the cluster than you\u0026rsquo;re willing to let them have.\nCommon Practices As a general rule, all containers should have a request for memory and CPU before deploying to a cluster. This will ensure that if resources are running low, your container can still do the minimum amount of work to stay in a healthy state until those resources free up again (hopefully).\nLimits are often used in conjunction with requests to create a \u0026ldquo;guaranteed pod\u0026rdquo;. This is where the request and limit are set to the same value. In that situation, the container will always have the same amount of CPU available to it, no more or less.\nAt this point you may be thinking about adding a high \u0026ldquo;request\u0026rdquo; value to make sure you have plenty of resources available for your container. This might sound like a good idea, but it can have dramatic consequences for scheduling on the Kubernetes cluster. If you set a high CPU request, for example 2 CPUs, then your pod will ONLY be able to be scheduled on Kubernetes nodes that have 2 full CPUs available that aren\u0026rsquo;t reserved by other pods\u0026rsquo; requests. In the example below, the 2 vCPU pod couldn\u0026rsquo;t be scheduled on the cluster. However, if you were to lower the \u0026ldquo;request\u0026rdquo; amount to, say, 1 vCPU, it could.\nResource Requests and Limits - In Action CPU Limit Example Let\u0026rsquo;s try out using a CPU limit on a pod and see what happens when we try to request more CPU than we\u0026rsquo;re allowed to have. Before we set the limit though, let\u0026rsquo;s look at a pod with a single container under normal conditions. I\u0026rsquo;ve deployed a resource consumer container in my cluster, and by default, you can see that I\u0026rsquo;m using 1m CPU (cores) and 6Mi (bytes) of memory.\nNOTE: CPU is measured in millicores, so 1000m = 1 CPU core. Memory is measured in mebibytes (Mi).\nOk, now that we\u0026rsquo;ve seen the \u0026ldquo;no load\u0026rdquo; state, let\u0026rsquo;s add some CPU load by making a request to the pod. Here, I\u0026rsquo;ve increased the CPU usage on the container to 400 millicores.\nAfter the metrics start coming in, you can see that I\u0026rsquo;ve got roughly 400m used on the container as you\u0026rsquo;d expect to see.\nNow I\u0026rsquo;ve deleted the container and we\u0026rsquo;ll edit the deployment manifest so that it has a limit on CPU.\napiVersion: apps/v1 kind: Deployment metadata: labels: run: resource-consumer name: resource-consumer namespace: default spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: run: resource-consumer strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: labels: run: resource-consumer spec: containers: - image: theithollow/resource-consumer:v1 imagePullPolicy: IfNotPresent name: resource-consumer terminationMessagePath: /dev/termination-log terminationMessagePolicy: File resources: requests: memory: \u0026#34;100Mi\u0026#34; cpu: \u0026#34;100m\u0026#34; limits: memory: \u0026#34;300Mi\u0026#34; cpu: \u0026#34;300m\u0026#34; dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 In the container resources section, I\u0026rsquo;ve set a limit on CPU to 300m. 
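Once the updated manifest is applied in the next step, a quick way to confirm the requests and limits actually landed is to query the deployment. This is just a sketch; the names match the example manifest above:
kubectl describe deployment resource-consumer
# or pull just the resources block straight from the running pod
kubectl get pod -l run=resource-consumer -o jsonpath='{.items[0].spec.containers[0].resources}'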
Lets re-deploy this yaml manifest and then again increase our resource usage to 400m.\nAfter redeploying the container and again increasing my CPU load to 400m, we can see that the container is throttled to 300m instead. I\u0026rsquo;ve effectively \u0026ldquo;limited\u0026rdquo; the resources the container could consume from the cluster.\nCPU Requests Example OK, next, I\u0026rsquo;ve deployed two pods into my Kubernetes cluster and those pods are on the same worker node for a simple example about contention. I\u0026rsquo;ve got a guaranteed pod that has 1000m CPU set as a limit but also as a request. The other pod is unbounded, meaning there is no limit on how much CPU it can utilize.\nAfter the deployment, each pod is really not using any resources as you can see here.\nI make a request to increase the load on my non-guaranteed pod.\nAnd if we look at the containers resources you can see that even though my container wants to use 2000m CPU, it\u0026rsquo;s only actually using 1000m CPU. The reason for this is because the guaranteed pod is guaranteed 1000m CPU, whether it is actively using that CPU or not.\nSummary Kubernetes uses Resource Requests to set a minimum amount of resources for a given container so that it can be used if it needs it. You can also set a Resource Limit to set the maximum amount of resources a pod can utilize.\nTaking these two concepts and using them together can ensure that your critical pods always have the resources that they need to stay healthy. They can also be configured to take advantage of shared resources within the cluster.\nBe careful setting resource requests too high so your Kubernetes scheduler can still scheduler these pods. Good luck!\n","permalink":"https://theithollow.com/2020/04/20/kubernetes-resource-requests-and-limits/","summary":"\u003cp\u003eContainerizing applications and running them on Kubernetes doesn\u0026rsquo;t mean we can forget all about resource utilization. Our thought process may have changed because we can much more easily scale-out our application as demand increases, but many times we need to consider how our containers might fight with each other for resources. Resource Requests and Limits can be used to help stop the \u0026ldquo;noisy neighbor\u0026rdquo; problem in a Kubernetes Cluster.\u003c/p\u003e\n\u003ch2 id=\"resource-requests-and-limits---the-theory\"\u003eResource Requests and Limits - The Theory\u003c/h2\u003e\n\u003cp\u003eKubernetes uses the concept of a \u0026ldquo;Resource Request\u0026rdquo; and a \u0026ldquo;Resource Limit\u0026rdquo; when defining how many resources a container within a pod should receive. Lets look at each of these topics on their own, starting with resource requests.\u003c/p\u003e","title":"Kubernetes Resource Requests and Limits"},{"content":"VMware offers a Kubernetes Cloud Provider that allows Kubernetes (k8s) administrators to manage parts of the vSphere infrastructure by interacting with the Kubernetes Control Plane. Why is this needed? Well, being able to spin up some new virtual disks and attaching them to your k8s cluster is especially useful when your pods need access to persistent storage for example.\nThe Cloud providers (AWS, vSphere, Azure, GCE) obviously differ between vendors. Each cloud provider has different functionality that might be exposed in some way to the Kubernetes control plane. 
For example, Amazon Web Services provides a load balancer that can be configured with k8s on demand if you are using the AWS provider, but vSphere does not (unless you\u0026rsquo;re using NSX).\nOriginally, these cloud providers were maintained by the same folks writing the Kubernetes software. Given the number of cloud providers that might want to add functionality for their own platform, this wasn\u0026rsquo;t a very sustainable way to provide access to new cloud features. To fix this issue, and allow the Kubernetes project to move forward without the cruft of managing the providers, the Cloud Provider Interface was used to allow third parties to write their own cloud provider tools to enhance Kubernetes without needing access to the core Kubernetes project.\nThe main thing that I hope users realize, is that when you setup your cluster, you can currently pick between two types of cloud providers:\nIn-tree cloud provider Out-of-tree (External) Cloud Provider If you\u0026rsquo;ve followed my Kubernetes posts, you may notice that I\u0026rsquo;ve been using the in-tree cloud providers up until this point. Going forward, I\u0026rsquo;ll be using the external cloud providers when possible which should have better functionality than the in-tree providers, or will soon.\nWhy am I telling you this? Well even if we ignore the fact that external cloud providers may have a very different architecture including their own controllers, they may have different configuration specs as well. Lets take a look at the vSphere Storage Integration that comes with the In-tree cloud provider vs the External Cloud Provider.\nStorageClass Differences - Example In this post on Cloud Providers and Storage Classes, I used the in-tree cloud provider for vSphere and created a storage class from the yaml manifest below.\nkind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-disk #name of the default storage class annotations: storageclass.kubernetes.io/is-default-class: \u0026#34;true\u0026#34; provisioner: kubernetes.io/vsphere-volume parameters: diskformat: thin Recently, I\u0026rsquo;ve been deploying my clusters in my lab through VMware Tanzu Kubernetes Grid (TKG) like in this post. TKG does not use the in-tree cloud provider for vSphere. To deploy a storage class (with similar functionality to above) on TKG the yaml manifest looks like this:\nkind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-disk annotations: storageclass.kubernetes.io/is-default-class: \u0026#34;true\u0026#34; provisioner: csi.vsphere.vmware.com parameters: storagepolicyname: \u0026#34;k8s\u0026#34; Now at first glance those manifests look very similar, but the \u0026ldquo;provisioner\u0026rdquo; is different. Its a different mechanism providing storage, so the provisioner is also different. Once you understand that, then the parameters section should also make sense about why they are different. In the in-tree storageclass I set a disk format. In the Out-of-tree cloud provider I have a VMware Storage Policy configured in vSphere and I reference that instead. Expect that your parameters and configs will be different for each provider and be sure to check if you\u0026rsquo;re using in-tree vs external providers.\nI would make this additional note. Both of the storage classes can be deployed to my cluster at the same time. 
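An easy way to find out which class actually works in your environment is to request a small volume from it and watch whether the claim binds. Here's a rough sketch using a heredoc; the claim name and size are arbitrary, and vsphere-disk is the class name from the manifests above:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vsphere-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsphere-disk
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc vsphere-test-claim   # STATUS should reach Bound if the provider is healthy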
The in-tree vSphere cloud provider is still a valid Kubernetes manifest and can be applied, but if your Cloud Provider isn\u0026rsquo;t setup correctly it will fail to provision PVCs when you request them. You can see below that I\u0026rsquo;ve deployed a StorageClass and a PVC successfully, but my PVC wouldn\u0026rsquo;t bind to the storage and the message returned is a \u0026ldquo;Failed to provision volume with StorageClass \u0026ldquo;name\u0026rdquo;: Cloud Provider not installed properly\u0026rdquo; message.\nSummary The long and the short of this post is to make sure users understand that there are two different types of cloud providers used in Kubernetes. The in-tree provider, and the external providers such as VMware\u0026rsquo;s Cloud Provider (VCP). They have similar functionality, but they are different, which means your configurations might need to change based on which cloud provider you\u0026rsquo;re using.\nFor reference here are links for additional reading:\nKubernetes In-Tree Cloud Provider Info vSphere External Cloud Provider I wanted to write up a quick article here in case anyone getting started with Kubernetes gets tripped up by this as I did. I\u0026rsquo;ve had this tweet pinned to my timeline for a bit now and I\u0026rsquo;ve got to live up to that. I hope this helps clear up any confusion.\n“Pass on what you have learned. Strength, mastery\u0026hellip; but weakness, folly, failure also. Yes failure most of all. The greatest teacher, failure is.” - Yoda https://t.co/R7RXvmoLGm\n— Eric Shanks (@eric_shanks) March 25, 2020\n","permalink":"https://theithollow.com/2020/04/14/in-tree-vs-out-of-tree-kubernetes-cloud-providers/","summary":"\u003cp\u003eVMware offers a Kubernetes Cloud Provider that allows Kubernetes (k8s) administrators to manage parts of the vSphere infrastructure by interacting with the Kubernetes Control Plane. Why is this needed? Well, being able to spin up some new virtual disks and attaching them to your k8s cluster is especially useful when your pods need access to persistent storage for example.\u003c/p\u003e\n\u003cp\u003eThe Cloud providers (AWS, vSphere, Azure, GCE) obviously differ between vendors. Each cloud provider has different functionality that might be exposed in some way to the Kubernetes control plane. For example, Amazon Web Services provides a load balancer that can be configured with k8s on demand if you are using the AWS provider, but vSphere does not (unless you\u0026rsquo;re using NSX).\u003c/p\u003e","title":"In-tree vs Out-of-tree Kubernetes Cloud Providers"},{"content":"VMware recently released the 1.0 release of Tanzu Kubernetes Grid (TKG) which aims at decreasing the difficulty of deploying conformant Kubernetes clusters across infrastructure. This post demonstrates how to use TKG to deploy a management cluster to vSphere.\nIf you\u0026rsquo;re not familiar with TKG yet, you might be curious about what a Management Cluster is. The management cluster is used to manage one to many workload clusters. The management cluster is used to spin up VMs on different cloud providers, and lay down the Kubernetes bits on those VMs, thus creating new clusters for applications to be build on top of. TKG is built upon the ClusterAPI project so this post pretty accurately describes the architecture that TKG uses.\nvSphere Prerequisites When we stand up new clusters, management clusters or workload clusters, they need to have a virtual machine to spin up and deploy additional configurations on top of. 
Because companies configure their default templates in many different ways, it becomes difficult to ensure stability across all of VMware\u0026rsquo;s customers. To resolve this issue, an Open Virtualization Appliance (OVA) must be downloaded and installed into your vSphere vCenter. In fact, you\u0026rsquo;ll need two of these appliances: one serves as the template for the Kubernetes node VMs, and the other hosts an HA Proxy VM which acts as a load balancer for the k8s cluster.\nI\u0026rsquo;ve imported both of the OVAs into my vCenter and converted them to templates.\nAdditionally, you will need a few other configurations set up, which may already be in place for many environments.\nA vSphere Cluster with DRS enabled. A resource pool in which to deploy the Tanzu Kubernetes Grid Instance A VM folder in which to collect the Tanzu Kubernetes Grid VMs A datastore with sufficient capacity for the control plane and worker node VM files A network with DHCP to connect the VMs The Network Time Protocol (NTP) service is running on all hosts Install Workstation Prerequisites Once your templates are imported into vCenter, you will turn your attention to your local workstation which needs some tools installed on it. The first of these is the TKG binary.\nTKG CLI - This binary is what you\u0026rsquo;ll be interacting with to build, scale, and destroy your Kubernetes clusters. For a vSphere deployment, this tool will suffice. If you are performing an install for AWS, you will need an additional tool not covered in this post.\nDocker - When TKG bootstraps the management cluster, it uses Docker to spin up a small cluster on your workstation, builds the management cluster, and then moves (sometimes called \u0026ldquo;pivoting\u0026rdquo;) resources to the management cluster and destroys the local cluster.\nDeploy the Management Cluster Prerequisites done, it\u0026rsquo;s time to build our cluster. To set up the cluster, run:\ntkg init --ui The tkg command with the --ui switch lets you continue the rest of the installation from a web browser. You can continue the installation by opening a browser and going to http://localhost:8080.\nAs you can see from the screenshot, you can use this method for AWS as well, but the prerequisites section of this post does not cover the AWS prerequisites.\nClick on the \u0026ldquo;DEPLOY ON VSPHERE\u0026rdquo; button to continue.\nOn the following screen, you\u0026rsquo;ll be asked for connection information for your vSphere environment. Fill out this information and click the \u0026ldquo;CONNECT\u0026rdquo; button. Once you do this, you should be able to select some vSphere objects through the rest of this wizard, if the connection is successful.\nSelect the datacenter where the cluster will be deployed, and add an SSH public key. The public key is used so that you can SSH into the k8s nodes after deployment if needed.\nOn the next screen, you\u0026rsquo;ll be faced with a choice for the size and availability of your cluster. Select Development if you\u0026rsquo;d prefer a smaller cluster with a single control plane node (READ: not for production use). You can also select a Production option if you want a three-node control plane for your cluster. After picking the deployment type, select the instance type, which is the size of the VMs. 
Below this, you can optionally set the cluster name, and then you should select the load balancer template, which was one of the OVAs imported during the prerequisites section.\nOn the resources screen, select the Resource Pool, VM Folder and Datastore where your virtual machines will live.\nMoving on to the Network section, fill out information about how your network will be set up.\nOn the last screen, select the OS image used for the virtual machines. This is the other OVA that was imported into vCenter during the prerequisites section.\nWhen done, you can review the configuration before deploying. After reviewing the configuration, click the \u0026ldquo;DEPLOY MANAGEMENT CLUSTER\u0026rdquo; button to begin the deployment.\nAs the deployment runs, you can keep track of where you are by looking at either the web browser, which shows the running logs, or the terminal where you initially ran the tkg init command.\nOnce the installation is complete, it will update your context to point to the management cluster so that you can start issuing kubectl commands.\nCommand Line Options This was a quick run-through of a build on vSphere, but you may want to add some automation to the process. All of the steps in the instructions above can be completed through the command line by omitting the --ui switch. I will caution you, though, that the first time through this process it\u0026rsquo;s easier to use the UI because it sets up a config.yaml file stored (by default) in .tkg/config.yaml.\nIf you wish to use the CLI, the easiest way is to run through this process once in the UI, copy the config.yaml file, and edit it as needed for your deployments. From then on, you can use the tkg binary with this config file for deployment and skip the user interface.\nWhat\u0026rsquo;s next? From here, you can start using the management cluster to deploy additional workload clusters for your teams. We\u0026rsquo;ll take a look at this in a future post, but if you\u0026rsquo;re in a hurry, just run the following to use the defaults:\ntkg create cluster [name] --kubernetes-version=[version] --plan=[dev/prod] Feel free to poke around in the management cluster and look at the custom resource definitions, namespaces, and settings that make it a management cluster.\nIf you\u0026rsquo;re done with your management cluster and want to remove it, first make sure you\u0026rsquo;ve removed any workload clusters if you\u0026rsquo;ve built them, and then run:\ntkg delete management-cluster ","permalink":"https://theithollow.com/2020/04/06/deploying-tanzu-kubernetes-grid-management-clusters-vsphere/","summary":"\u003cp\u003eVMware recently released the 1.0 release of Tanzu Kubernetes Grid (TKG) which aims at decreasing the difficulty of deploying conformant Kubernetes clusters across infrastructure. This post demonstrates how to use TKG to deploy a management cluster to vSphere.\u003c/p\u003e\n\u003cp\u003eIf you\u0026rsquo;re not familiar with TKG yet, you might be curious about what a Management Cluster is. The management cluster is used to manage one to many workload clusters. The management cluster is used to spin up VMs on different cloud providers, and lay down the Kubernetes bits on those VMs, thus creating new clusters for applications to be build on top of. 
TKG is built upon the \u003ca href=\"https://github.com/kubernetes-sigs/cluster-api\"\u003eClusterAPI project\u003c/a\u003e so \u003ca href=\"/2019/11/04/clusterapi-demystified/\"\u003ethis post\u003c/a\u003e pretty accurately describes the architecture that TKG uses.\u003c/p\u003e","title":"Deploying Tanzu Kubernetes Grid Management Clusters - vSphere"},{"content":"There is a worldwide pandemic going on right now and it has disrupted practically everything. Many people are worried not only about their health and families health, but also their job situations. I feel incredibly fortunate that my employer seems intent on continuing to work through this situation and that I am already a remote worker most of the time.\nMy team was asked to of course take care of our families, but also to take this opportunity to learn something new. I took this respite from normal activities to try to learn how to do some basic Golang (Go) programming. I have a hard time focusing on a project sometimes when there are no specific goals in mind, so my \u0026ldquo;Hello World\u0026rdquo; attempt at programming in Golang was to grab the latest COVID-19 statistics and post them to slack once per day.\nThe results of my efforts were slack messages looking something like what you see below.\nSetup Slack Webhook The first thing to do was to get a webhook from slack. Assuming you have permissions you can find the information on setting up your slack app and incoming webhooks from the slack documentation. https://api.slack.com/messaging/webhooks#create_a_webhook\nI created my incoming webhook and pointed it at one of my slack channels. The webhook URL will be needed later.\nWrite Golang! OK, this is really the part I was trying to learn. I\u0026rsquo;ll post my code below, and the link to that code is found on github if you are also interested.\npackage main import ( \u0026#34;fmt\u0026#34; \u0026#34;io/ioutil\u0026#34; \u0026#34;log\u0026#34; \u0026#34;net/http\u0026#34; \u0026#34;time\u0026#34; \u0026#34;encoding/json\u0026#34; \u0026#34;bytes\u0026#34; \u0026#34;errors\u0026#34; \u0026#34;github.com/aws/aws-lambda-go/lambda\u0026#34; ) type slackRequestBody struct { Text string `json:\u0026#34;text\u0026#34;` } func getInfections(endpoint string) string { resp, err := http.Get(endpoint) if err != nil { log.Fatalln(err) } defer resp.Body.Close() body, err := ioutil.ReadAll(resp.Body) if err != nil { log.Fatalln(err) } return string(body) } func postToSlack(webhookURL string, msg string) error { slackBody, _ := json.Marshal(slackRequestBody{Text: msg}) req, err := http.NewRequest(http.MethodPost, webhookURL, bytes.NewBuffer(slackBody)) if err != nil { return err } req.Header.Add(\u0026#34;content-Type\u0026#34;, \u0026#34;application/json\u0026#34;) client := \u0026amp;http.Client{Timeout: 10 * time.Second} resp, err := client.Do(req) if err != nil { return err } buf := new(bytes.Buffer) buf.ReadFrom(resp.Body) if buf.String() != \u0026#34;ok\u0026#34; { return errors.New(\u0026#34;Non-ok response returned from Slack\u0026#34;) } return nil } func HandleRequest() (string, error) { //set slack webhook URL webhookUrl := \u0026#34;NEEDS_TO_BE_FILLED_IN\u0026#34; //Get today\u0026#39;s date date := time.Now() today := date.Format(\u0026#34;2006.01.02 15:04:05\u0026#34;) //Get Global Infections globalEndpoint := \u0026#34;https://corona.lmao.ninja/all\u0026#34; worldInfections := getInfections(globalEndpoint) var data map[string]interface{} err := json.Unmarshal([]byte(worldInfections), \u0026amp;data) if err != nil { 
log.Fatal(err) } //for debugging //fmt.Println(\u0026#34;These are the current COVID-19 numbers as of\u0026#34;, today) //fmt.Println(\u0026#34;World Cases :\u0026#34;, data[\u0026#34;cases\u0026#34;]) //fmt.Println(\u0026#34;World Deaths :\u0026#34;, data[\u0026#34;deaths\u0026#34;]) //fmt.Println(\u0026#34;World Recovered :\u0026#34;, data[\u0026#34;recovered\u0026#34;]) //fmt.Println(\u0026#34;Updated: \u0026#34;, data[\u0026#34;updated\u0026#34;]) usEndpoint := \u0026#34;https://corona.lmao.ninja/countries/usa\u0026#34; usInfections := getInfections(usEndpoint) var usData map[string]interface{} userr := json.Unmarshal([]byte(usInfections), \u0026amp;usData) if userr != nil { log.Fatal(userr) } //For Debugging //fmt.Println(\u0026#34;US cases :\u0026#34;, usData[\u0026#34;cases\u0026#34;]) //fmt.Println(\u0026#34;US cases today:\u0026#34;, usData[\u0026#34;todayCases\u0026#34;]) //fmt.Println(\u0026#34;US deaths :\u0026#34;, usData[\u0026#34;deaths\u0026#34;]) //fmt.Println(\u0026#34;US deaths today:\u0026#34;, usData[\u0026#34;todayDeaths\u0026#34;]) //fmt.Println(\u0026#34;US recovered :\u0026#34;, usData[\u0026#34;recovered\u0026#34;]) //Post to Slack - World err = postToSlack(webhookUrl, \u0026#34;These are the current COVID-19 numbers as of \u0026#34; + today + \u0026#34;\\n\u0026#34; + \u0026#34;World Cases : \u0026#34; + fmt.Sprintf(\u0026#34;%v\u0026#34;, data[\u0026#34;cases\u0026#34;]) + \u0026#34;\\n\u0026#34; + \u0026#34;World Deaths : \u0026#34; + fmt.Sprintf(\u0026#34;%v\u0026#34;, data[\u0026#34;deaths\u0026#34;]) + \u0026#34;\\n\u0026#34; + \u0026#34;World Recovered : \u0026#34; + fmt.Sprintf(\u0026#34;%v\u0026#34;, data[\u0026#34;recovered\u0026#34;]) + \u0026#34;\\n\u0026#34; + \u0026#34;US Cases : \u0026#34; + fmt.Sprintf(\u0026#34;%v\u0026#34;, usData[\u0026#34;cases\u0026#34;]) + \u0026#34;\\n\u0026#34; + \u0026#34;US Cases today : \u0026#34; + fmt.Sprintf(\u0026#34;%v\u0026#34;, usData[\u0026#34;todayCases\u0026#34;]) + \u0026#34;\\n\u0026#34; + \u0026#34;US Deaths : \u0026#34; + fmt.Sprintf(\u0026#34;%v\u0026#34;, usData[\u0026#34;deaths\u0026#34;]) + \u0026#34;\\n\u0026#34; + \u0026#34;US Deaths today : \u0026#34; + fmt.Sprintf(\u0026#34;%v\u0026#34;, usData[\u0026#34;todayDeaths\u0026#34;]) + \u0026#34;\\n\u0026#34; + \u0026#34;US recovered : \u0026#34; + fmt.Sprintf(\u0026#34;%v\u0026#34;, usData[\u0026#34;recovered\u0026#34;]) + \u0026#34;\\n\u0026#34;) return \u0026#34;Infections Printed\u0026#34;, nil } func main() { //start lambda function lambda.Start(HandleRequest) } I don\u0026rsquo;t want to walk through the code line by line and explain what I learned in my introduction to Go, but in general the code grabs data from \u0026ldquo;https://corona.lmao.ninja\u0026rdquo; through a simple GET REST call. Then it formats a message and makes a POST REST call to our incoming webhook URL.\nIf you want to use the code to get updates, here\u0026rsquo;s what you do. Make sure you\u0026rsquo;ve got the Go prerequisites installed and configured. https://golang.org/doc/install#install\nUpdate the webhook URL field in the code above (or from github) and save the file with a .go extension.\nWithin the directory where the go code is stored, run :\nenv GOOS=linux GOARCH=amd64 go build -o covid The result of running the above command is a go executable that is ready to be used with AWS Lambda. Before we upload this binary as a Lambda function, we must zip it.\nzip -j covid.zip covid Create Lambda The code is ready to be used with Lambda. 
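If you'd rather script this step than click through the console, the same function can be created with the AWS CLI. This is only a sketch; it assumes you've already created an execution role, and the role ARN below is a placeholder to replace with your own:
aws lambda create-function \
  --function-name covid-slack \
  --runtime go1.x \
  --handler covid \
  --zip-file fileb://covid.zip \
  --role arn:aws:iam::123456789012:role/your-lambda-role
The console steps follow if you prefer the UI.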
Login to your AWS account and go to the Lambda service. Create a new function. In the function setup be sure to give the function a name and select GO for the language.\nOn the next screen be sure to upload the zip file created earlier and set the handler name to \u0026ldquo;covid\u0026rdquo;.\nConfigure your test event at the top of the same screen and pass an empty set. Save the function and run it as you wish.\nI setup a CloudWatch Event to run the lambda function on a schedule so I got a new message each day with the updated numbers.\nSummary This code may not be the best GO code ever run and I\u0026rsquo;m sure it could be optimized, but was my project for learning the language. The result though, is some useful, albeit depressing, information that is delivered to my slack instance each day.\nOn a related subject, if you are interested in putting your computer cycles to work to help combat and learn about these and other illnesses, please take a look at the post submitted by VMware this week about how to setup a FoldingatHome appliance to aid researchers in finding cures to diseases such as Coronavirus.\n","permalink":"https://theithollow.com/2020/03/22/hello-world-covid-19-and-golang/","summary":"\u003cp\u003eThere is a worldwide pandemic going on right now and it has disrupted practically everything. Many people are worried not only about their health and families health, but also their job situations. I feel incredibly fortunate that my employer seems intent on continuing to work through this situation and that I am already a remote worker most of the time.\u003c/p\u003e\n\u003cp\u003eMy team was asked to of course take care of our families, but also to take this opportunity to learn something new. I took this respite from normal activities to try to learn how to do some basic Golang (Go) programming. I have a hard time focusing on a project sometimes when there are no specific goals in mind, so my \u0026ldquo;Hello World\u0026rdquo; attempt at programming in Golang was to grab the latest COVID-19 statistics and post them to slack once per day.\u003c/p\u003e","title":"Hello World - COVID-19 and Golang"},{"content":" VMware Tanzu is a family of products and services for modernizing your applications and infrastructure with a common goal: deliver better software to production, continuously. The portfolio simplifies multi-cloud operations, while freeing developers to move faster and access the right resources for building the best applications. VMware Tanzu enables development and operations’ teams to work together in new ways that deliver transformative business results.\nOne of these new solutions within the Tanzu brand is Mission Control. If you\u0026rsquo;re looking to get started with Tanzu Mission Control for management and visibility for your Kubernetes Clusters, start with the articles below. You\u0026rsquo;ll learn the basics of Tanzu Mission Control, how to deploy and manage Kubernetes clusters, assigning policies, and managing lifecycles of those clusters.\nDeploying Kubernetes Clusters Attaching Clusters Resizing Clusters Cluster Upgrades Namespace Management Access Policies Conformance Testing This image has an empty alt attribute; its file name is vmware-tanzu-icon_enterprise-1024x284.png\nIn Swahili, ’tanzu’ means the growing branch of a tree. In Japanese, ’tansu’ refers to a modular form of cabinetry. 
At VMware, Tanzu represents our growing portfolio of solutions to help you build, run and manage modern apps.\nhttps://cloud.vmware.com/tanzu\nSimply put, Tanzu consists of the VMware software that will be used to help customers deploy and manage Kubernetes. This post focuses on the Tanzu Mission Control product.\nSo what does Tanzu Mission Control (TMC) do for us? TMC was built to help customers manage their Kubernetes clusters from a centralized location. In the early days of Kubernetes, users thought that giant clusters might be built and shared between teams with some security built in. In practice many clusters are being built and assigned to different teams, projects, or environments. This second pattern allows for each cluster to have its own autonomy, but becomes very difficult for operations teams to secure, manage, and monitor these clusters. Tanzu Mission Control aims to allow this multi-cluster world to thrive while giving operations teams control across all of these clusters.\nFeature Introduction Deploy New Clusters The product itself includes tools to spin up new clusters not only on vSphere, but also within Public Cloud vendors like AWS or Azure. I\u0026rsquo;ve written posts before on setting up Kubernetes clusters, and mentioned that this can be a painful process. TMC aims to simplify that process and let you build clusters where they make sense for your organization. Instead of operations teams setting up VMs and installing software like Kubeadm or Kublets, TMC can do this for us and once done, add it to our management portal.\nThis image has an empty alt attribute; its file name is image-4.png\nManage Existing Clusters If you\u0026rsquo;ve been using Kubernetes already, you might think that this product sounds neat but you\u0026rsquo;re not interested in rebuilding that cluster you spent weeks on setting up. Not to worry, TMC will let you attach existing clusters into the product so you can manage any cluster you want. And when I say it can attach a Kubernetes cluster, I mean any cluster that is conformant with upstream Kubernetes, so this can include PaaS solutions as well.\nThis image has an empty alt attribute; its file name is image-5.png\nApply Policy One of the biggest features of TMC is setting policies on the clusters. A base Kubernetes cluster might not meet corporate standards for security for example. Perhaps containers running between environments shouldn\u0026rsquo;t be able to communicate with one another. This has been done via network policies set on namespaces in the past, but managing these for each cluster is cumbersome. Tanzu Mission Control will allow you to set these types of policies across your clusters and managed centrally, saving tons of time and effort as well as configuration drift.\nCluster Lifecycle Management Deploying clusters is cool, but at some point those clusters need to get upgraded, or expanded, or removed. TMC will allow you to add additional worker nodes, delete clusters, or perform in place upgrades to existing clusters. These are three tasks that often take some considerable time for operations teams. Tanzu Mission Control provides an avenue to do this from the GUI or CLI.\nAuthentication Setting up access to a Kubernetes cluster can be an incredibly tedious task. Setting up users, roles, cluster roles, role bindings etc will need to be created for all of your clusters. In good cases, you\u0026rsquo;re using something like Dex/Gangway to do LDAP authentication for your clusters. 
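To make that concrete, this is roughly the kind of object you end up hand-maintaining on every single cluster when you wire up LDAP-backed groups yourself. It's only a sketch; the group name is a made-up example of whatever your OIDC/LDAP provider presents, and it simply binds that group to the built-in read-only role.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: platform-team-view
subjects:
- kind: Group
  name: platform-team               # example group name from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                        # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io

Multiply that by every group, every role, and every cluster, and the maintenance burden adds up quickly.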
Well with TMC we can centralize our administration of the authentication and access policies for our clusters.\nSummary I\u0026rsquo;ve only listed a few of the features of TMC and there will be new features added over time I\u0026rsquo;m sure, but those features are compelling. Having a centralized tool for operations teams to respond to the demands of Kubernetes users while meeting standards set throughout the company is a real challenge, that TMC can really help with.\nIf you want to know more, there are a series of posts discussing how to use some of these new features found here.\nDeploying Kubernetes Clusters Attaching Clusters Resizing Clusters Cluster Upgrades Namespace Management Access Policies Conformance Testing Disclosure: The author of this post is an employee of VMware within the Modern Applications Platform BU.\n","permalink":"https://theithollow.com/2020/03/10/tanzu-mission-control-getting-started-guide/","summary":"\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2020/02/TMC-Guide-1024x571.png\"/\u003e \n\u003c/figure\u003e\n\n\u003cp\u003eVMware Tanzu is a family of products and services for modernizing your applications and infrastructure with a common goal: deliver better software to production, continuously. The portfolio simplifies multi-cloud operations, while freeing developers to move faster and access the right resources for building the best applications. VMware Tanzu enables development and operations’ teams to work together in new ways that deliver transformative business results.\u003c/p\u003e\n\u003cp\u003eOne of these new solutions within the Tanzu brand is Mission Control. If you\u0026rsquo;re looking to get started with Tanzu Mission Control for management and visibility for your Kubernetes Clusters, start with the articles below. You\u0026rsquo;ll learn the basics of Tanzu Mission Control, how to deploy and manage Kubernetes clusters, assigning policies, and managing lifecycles of those clusters.\u003c/p\u003e","title":"Tanzu Mission Control Getting Started Guide"},{"content":"Controlling access to a Kubernetes cluster is an ongoing activity that must be done in conjunction with developer needs and is often maintained by operations or security teams. Tanzu Mission Control (TMC) can help use setup and manage these access policies across fleets of Kubernetes clusters, making everyone\u0026rsquo;s life a little bit easier.\nSetup Users Before we can assign permissions to a user or group, we need to have a user or group to assign these permissions. By logging into the VMware Cloud Services portal (cloud.VMware.com) and going to the Identity and Access Management Tab we can create and invite new users. You can see I\u0026rsquo;ve created a user.\nFor good practice purposes, I\u0026rsquo;ve added this user to a group named \u0026ldquo;hollowgroup\u0026rdquo; which is where I\u0026rsquo;ll assign my permissions. Future users can be added to this group to obtain the same permissions without changing our policies.\nCreate an Access Policy Now we must login to the Tanzu Mission Control portal and navigate to the \u0026ldquo;Policies\u0026rdquo; menu. From here we can assign permissions at several levels including the Organization (Top level), the cluster group, a specific cluster, or a namespace. We can also assign permissions at a workspace level which would be a group of namespaces. 
These options are handy because they provide good flexibility here to manage many policies across clusters and namespaces.\nThe levels here work in a hierarchical fashion, where the lowest level will take precedence. This lets you assign view permissions at the cluster level and more administrative type writes at a namespace level for example.\nHere I\u0026rsquo;ve added the edit permissions to my hollowgroup group on my test namespace.\nAfter we create the policy, you should be able to login to the Kubernetes cluster and we\u0026rsquo;ll notice some new roles/clusterroles and rolebindings/clusterrolebindings. Below you an see one of my role bindings created that maps my clusterrole with the hollowgroup group.\nAccess the Cluster Now that the policy has been created, our user can login to Tanzu Mission Control and will have a limited view of the resources. Since we added them to this namespace though, they can navigate to the namespace we identified. Here you\u0026rsquo;ll notice a button that says \u0026ldquo;ACCESS THIS NAMESPACE\u0026rdquo; button.\nThat button will give instructions on downloading the appropriate KUBECONFIG file and instructions on using it. It will requires the TMC binary to be in your path to handle the authentication piece.\nSummary Identity management for disperate clusters can be a tricky thing to manage for an administrator. Tanzu Mission Control can provide Identity Policy management across multiple clusters and namespaces to centrally manage these processes.\n","permalink":"https://theithollow.com/2020/03/10/tanzu-mission-control-access-policies/","summary":"\u003cp\u003eControlling access to a Kubernetes cluster is an ongoing activity that must be done in conjunction with developer needs and is often maintained by operations or security teams. Tanzu Mission Control (TMC) can help use setup and manage these access policies across fleets of Kubernetes clusters, making everyone\u0026rsquo;s life a little bit easier.\u003c/p\u003e\n\u003ch2 id=\"setup-users\"\u003eSetup Users\u003c/h2\u003e\n\u003cp\u003eBefore we can assign permissions to a user or group, we need to have a user or group to assign these permissions. By logging into the VMware Cloud Services portal (cloud.VMware.com) and going to the Identity and Access Management Tab we can create and invite new users. You can see I\u0026rsquo;ve created a user.\u003c/p\u003e","title":"Tanzu Mission Control - Access Policies"},{"content":"What do you do if you\u0026rsquo;ve already provisioned some Kubernetes clusters before you got Tanzu Mission Control? Or maybe you\u0026rsquo;re inheriting some new clusters through an acquisition? Or a new team came on board and were using their own installation? Whatever the case, Tanzu Mission Control will let you manage a conformant Kubernetes cluster but you must first attach it.\nAttach An Existing Cluster For this example, I\u0026rsquo;ll be attaching a pre-existing Kubernetes cluster on vSphere infrastructure. This cluster was deployed via kubeadm as documented in this previous article about deploying Kubernetes on vSphere.\nThe first thing we do is find a cluster group where we\u0026rsquo;ll import the cluster to. On the cluster group screen we\u0026rsquo;ll click the \u0026ldquo;ATTACH CLUSTER\u0026rdquo; button.\nAfter this, the steps are really straightforward and the TMC console will guide you through the process. 
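Before kicking off the attach, it's worth a quick sanity check that kubectl is pointed at the cluster you actually intend to bring in, something along these lines:

kubectl config current-context
kubectl get nodes -o wide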
It should be noted that you\u0026rsquo;ll need outbound network connectivity to the Internet for the cluster resources to communicate with the Tanzu Mission Control service.\nSelect the cluster group that this cluster will be added to after the attachment process completes. Then give the cluster a name and description. Lastly, label the cluster according to whatever tagging strategy you\u0026rsquo;re using within TMC. Then click the \u0026ldquo;Register\u0026rdquo; button.\nOnce this is done, TMC will give you a kubectl command to run which installs the components necessary to mange the cluster. Connect to your existing cluster and run the command to apply the configurations. You can also view the YAML manifests instead if you\u0026rsquo;d like to see whats being installed before you do it.\nI ran the command and several clusterroles, services, secrets, and accounts were created within the VMware-system-tmc namespace.\nOnce you\u0026rsquo;ve applied the configuration, click the \u0026ldquo;VERIFY CONNECTION\u0026rdquo; button within TMC to complete the process.\nAfter a few minutes of verification, you should see your cluster attached in the TMC console. You can see from my lab that I now have two clusters listed in my cluster group. One of them listed as type \u0026ldquo;Provisioned\u0026rdquo; which I deployed from TMC. Another listed as type \u0026ldquo;Attached\u0026rdquo; which is the cluster we just attached in this post.\n","permalink":"https://theithollow.com/2020/03/10/tanzu-mission-control-attach-clusters/","summary":"\u003cp\u003eWhat do you do if you\u0026rsquo;ve already provisioned some Kubernetes clusters before you got Tanzu Mission Control? Or maybe you\u0026rsquo;re inheriting some new clusters through an acquisition? Or a new team came on board and were using their own installation? Whatever the case, Tanzu Mission Control will let you manage a conformant Kubernetes cluster but you must first attach it.\u003c/p\u003e\n\u003ch2 id=\"attach-an-existing-cluster\"\u003eAttach An Existing Cluster\u003c/h2\u003e\n\u003cp\u003eFor this example, I\u0026rsquo;ll be attaching a pre-existing Kubernetes cluster on vSphere infrastructure. This cluster was deployed via kubeadm as documented in this previous article about deploying \u003ca href=\"/2020/01/08/deploy-kubernetes-on-vsphere/\"\u003eKubernetes on vSphere\u003c/a\u003e.\u003c/p\u003e","title":"Tanzu Mission Control - Attach Clusters"},{"content":"Kubernetes releases a new minor version every quarter and updating your existing clusters can be a chore. With updates coming at you pretty quickly and new functionality being added all the time, having a way to upgrade your clusters is a must, especially if you are managing multiples of clusters. Tanzu Mission Control can take the pain out of upgrading these clusters.\nIt should be mentioned that the cluster upgrade procedure only works for clusters that were previously deployed through Tanzu Mission Control. If an existing cluster is attached to TMC after deployment, these cluster lifecycle steps won\u0026rsquo;t work.\nPerform a Cluster Upgrade Upgrading a cluster provisioned by TMC is a very straightforward process. Through the use of control loops in Tanzu Mission Control code, the cluster version is updated and the TMC intelligence does the rest.\nFor this example, we\u0026rsquo;ll find the cluster used in previous posts deployed on AWS with version. 
1.16.4 as seen from the command line result of get nodes.\nNow, lets login to the Tanzu Mission Control console and find our cluster in the clusters menu.\nIn the upper right hand corner of this screen you can see a highlighted \u0026ldquo;Upgrade\u0026rdquo; button. Click that button and we\u0026rsquo;re on our way. The next screen that is displayed will warn you that lifecycle operations will be suspended during this process and as all upgrades, you should always have a good backup before continuing. You\u0026rsquo;ll then select the version that you\u0026rsquo;d like to upgrade the cluster to.\nNote: You will only be able to select versions newer than the existing version. There is no version rollback functionality built into this tool if you decide you needed to downgrade the cluster version.\nWhen you\u0026rsquo;re ready, click upgrade and then sit back and relax while TMC does the work.\nWhile the cluster is upgrading, you\u0026rsquo;ll see the upgrading status at the top of the page.\nIf you are closely watching the nodes, you can see that they will be in differing verisions while the cluster is in the upgrade process. For example, you can see I have nodes in two different versions seen below.\nWhen you\u0026rsquo;re all done, you\u0026rsquo;ll have a healthy upgraded cluster. Now you can repeat this on the rest of the clusters under TMC management if you need to.\n","permalink":"https://theithollow.com/2020/03/10/tanzu-mission-control-cluster-upgrade/","summary":"\u003cp\u003eKubernetes releases a new minor version every quarter and updating your existing clusters can be a chore. With updates coming at you pretty quickly and new functionality being added all the time, having a way to upgrade your clusters is a must, especially if you are managing multiples of clusters. Tanzu Mission Control can take the pain out of upgrading these clusters.\u003c/p\u003e\n\u003cp\u003eIt should be mentioned that the cluster upgrade procedure only works for clusters that were previously deployed through Tanzu Mission Control. If an existing cluster is attached to TMC after deployment, these cluster lifecycle steps won\u0026rsquo;t work.\u003c/p\u003e","title":"Tanzu Mission Control - Cluster Upgrade"},{"content":"No matter what flavor of Kubernetes you\u0026rsquo;re using, the cluster should have some high level of common functionality with the upstream version. To ensure this is the case Kubernetes conformance tests can validate your clusters. These tests are run by Sonobuoy which is an open source community standard. Tanzu Mission Control can run these tests on your clusters to ensure this conformance. They are a great way to make sure your cluster was installed, configured and operating properly.\nRun a Conformance Inspection To run an inspection, navigate to the inspections menu item. You will click the \u0026ldquo;NEW INSPECTION\u0026rdquo; button to start a new test. When the next screen opens, select the cluster in which the tests will run. Then select the type of conformance test. There is the normal version and then the lite version. The normal version will run all of the conformance tests but it will take a number of hours to complete. The other version is the Lite version which runs a single test just to validate it can run something. It would be useful to use the Lite right away to see if it will work, and then run the standard version when you\u0026rsquo;ve validated it will run. 
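For reference, these inspections use the same upstream Sonobuoy tooling you could run by hand against any conformant cluster. A hedged sketch with the standalone CLI, assuming you have it installed and a working KUBECONFIG (flags can vary by Sonobuoy version):

sonobuoy run --mode quick       # a single-test smoke run, similar in spirit to the Lite inspection
sonobuoy status                 # poll until the run completes
sonobuoy retrieve               # download the results tarball for review

Back in the TMC console, continue with the inspection you configured above.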
Click the \u0026ldquo;RUN INSPECTION\u0026rdquo; button.\nOnce you click the RUN INSPECTION you\u0026rsquo;ll see the status of your inspection in the summary screen. This may take several hours so come back to it when you\u0026rsquo;re ready.\nIf you\u0026rsquo;d like to check the conformance tests you can run\nkubectl get pods --all-namespaces | grep sonobuoy to watch the status.\nWhen the tests are done, you can review the status in the summary screen. After this click on the result to get the detailed tests.\nThe detailed tests will show all the tests, their descriptions and the status of each. Its also nicely summarized for a quick glance.\nSummary Tanzu Mission Control can test your Kubernetes cluster for conformance with upstream projects and cluster health through its inspection service. Test clusters that were deployed through TMC or attached. I hope this post describes the process of running a conformance inspection through Tanzu Mission Control.\n","permalink":"https://theithollow.com/2020/03/10/tanzu-mission-control-conformance-tests/","summary":"\u003cp\u003eNo matter what flavor of Kubernetes you\u0026rsquo;re using, the cluster should have some high level of common functionality with the upstream version. To ensure this is the case Kubernetes conformance tests can validate your clusters. These tests are run by Sonobuoy which is an open source community standard. Tanzu Mission Control can run these tests on your clusters to ensure this conformance. They are a great way to make sure your cluster was installed, configured and operating properly.\u003c/p\u003e","title":"Tanzu Mission Control - Conformance Tests"},{"content":"I\u0026rsquo;ve written about deploying clusters in the past, but if you are a TMC customer, those steps can be skipped altogether. TMC will let us deploy a Kubernetes cluster and add it to management, all from the GUI or CLI.\nFor this example, I\u0026rsquo;ll create a new Kubernetes cluster within my AWS account. Before we setup the cluster, we need to configure access to our AWS Account so that TMC can manage resources for us.\nConnect AWS Account Within TMC in the settings menu, we click Connect Account. From there, we provide a name for the credentials and once done, we click \u0026ldquo;generate template\u0026rdquo;. This template will be downloaded and is a CloudFormation Template applied to our AWS Account.\nStep 2 of this process explains the steps to deploy the CloudFormation template in your AWS Account.\nStep 3 we enter the ARN of the role created in the Cloud Formation Template.\nCreate a Cluster Group Cluster Groups are just what they sound like. A grouping of Kubernetes clusters. These are used for organizational purposes, but also to configure multiple clusters at one time. The configuration hierarchy will apply configs at the group level first, but can be overridden by lower level configs.\nUnder the Cluster Groups menu, we\u0026rsquo;ll click the \u0026ldquo;New Cloud Group\u0026rdquo; button to setup a new group.\nHere we\u0026rsquo;ll give the group a name and a description. We can also apply tags here which is a really useful feature for organizing your clusters between team members. I\u0026rsquo;ve added an owner tag with my username.\nCreate a Cluster Now that we\u0026rsquo;ve setup the credentials and permissions in our AWS Account, and have a cluster group, we can begin to deploy a cluster. 
Under the clusters menu, we\u0026rsquo;ll click the \u0026ldquo;New Cluster\u0026rdquo; button and then select the account we created in the previous steps.\nThe next screen, we enter details about our desired cluster. For example, I\u0026rsquo;m providing a cluster name and description as well as tags again. But I also specify the cluster group that it should be applied to, the account it will reside in, the region where it will be deployed and SSH keys. We also provide the version of Kubernetes to deploy and the VPC CIDR we want to use.\nOn the following screen, we have to deside information about the Availability zones to deploy within, as well as whether this is a lab environment of production grade. The differences are whether or not you have high availability and load balancers for your cluster. Also, be sure to watch the type of nodes, some can be more expensive than others.\nReview your settings and click the \u0026ldquo;Create\u0026rdquo; button.\nWhen you click create, you can switch over to your AWS console and watch the new resources get spun up for you. You could also watch the TMC console for updates.\nOnce the cluster is built, we\u0026rsquo;ll see some details about the component health, nodes, etc.\nAccess Your Cluster OK, last step should be accessing your cluster. To do this, we can use the \u0026ldquo;Access this cluster\u0026rdquo; button in the top right hand corner of your cluster health screen.\nThis screen will give you the details to download a KUBECONFIG file with your credentials, how to initialize the kubectl config and mentions that you\u0026rsquo;ll need to download the TMC CLI and place it within your system\u0026rsquo;s path.\nI\u0026rsquo;ve added my KUBECONFIG file and placed the TMC CLI in my path. After this I tried to run a simple get command as you can see here.\nOnce I did this the first time, my chrome browser opened up and provided this information, stating that my authentication flow is complete.\nOnce done I can start running my kubectl commands and I\u0026rsquo;ve got a working cluster!\n","permalink":"https://theithollow.com/2020/03/10/tanzu-mission-control-deploying-clusters/","summary":"\u003cp\u003eI\u0026rsquo;ve written about deploying clusters in the past, but if you are a TMC customer, those steps can be skipped altogether. TMC will let us deploy a Kubernetes cluster and add it to management, all from the GUI or CLI.\u003c/p\u003e\n\u003cp\u003eFor this example, I\u0026rsquo;ll create a new Kubernetes cluster within my AWS account. Before we setup the cluster, we need to configure access to our AWS Account so that TMC can manage resources for us.\u003c/p\u003e","title":"Tanzu Mission Control - Deploying Clusters"},{"content":"When we need to segment resources within a Kubernetes cluster, we often use a namespace. Namespaces can be excellent resources to create a boundary for either networking, role based access, or simply for organizational purposes. It may be common to have some standard namespaces across all of your clusters. Maybe you have corporate monitoring standards and the tools live in a specific namespace, or you always have an ingress namespace thats off limits to developers or something. Managing namespaces across cluster could be tedious, but Tanzu Mission Control lets us manage these namespaces centrally from the TMC console.\nBefore we build one of these, its important to know that we can group multiple namespaces together as part of a Workspace. 
A workspace is a Tanzu construct for grouping application resources, much like how a Cluster group is an infrastructure grouping.\nCreate a Workspace To create a workspace, go to the Workspaces menu item in the TMC console and click the \u0026ldquo;NEW WORKSPACE\u0026rdquo; button.\nGive the workspace a name, description, and tags coinciding with your methodology. Then click the \u0026ldquo;CREATE\u0026rdquo; button. Thats it! Pretty simple.\nCreate a Namespace Now that we have a workspace created, we can define our namespaces. Under the Workspaces menu item, click \u0026ldquo;NEW NAMESPACE.\u0026rdquo;\nOn the following screen select the cluster where the namespace will be created, the workspace it belongs with and then the details for the namespace such as name, description, and tags. Then click \u0026ldquo;Create.\u0026rdquo;\nOnce done, you can see your namespace listed in the Namespaces menu items.\nIf you click on the namespace, you can see details about resource utilization of that namespace. Right now we don\u0026rsquo;t have any workloads deployed in here, so lets change that.\nI deployed a simple application into the namespace on the cluster and now we can see some better information about what we might see for a production deployment. You can see how this might be helpful for operations teams to visually see what is going on in the cluster. I see this as a useful step on better dialogue between development teams and operations teams.\nIf you need to provide access to this namespace, there is a button in the corner that will provide instructions on setting up a KUBECONFIG file with context for this namespace.\nThis has been a nice way to create a namespace from TMC. It should be noted that you can create additional namespaces from within the cluster as well if needed. Not all namespaces need to be under TMC management.\nIn a future post, you\u0026rsquo;ll see how to assign policies to these namespaces for better control and governance.\n","permalink":"https://theithollow.com/2020/03/10/tanzu-mission-control-namespace-management/","summary":"\u003cp\u003eWhen we need to segment resources within a Kubernetes cluster, we often use a \u003ca href=\"/2019/02/06/kubernetes-namespaces/\"\u003enamespace\u003c/a\u003e. Namespaces can be excellent resources to create a boundary for either networking, role based access, or simply for organizational purposes. It may be common to have some standard namespaces across all of your clusters. Maybe you have corporate monitoring standards and the tools live in a specific namespace, or you always have an ingress namespace thats off limits to developers or something. Managing namespaces across cluster could be tedious, but Tanzu Mission Control lets us manage these namespaces centrally from the TMC console.\u003c/p\u003e","title":"Tanzu Mission Control - Namespace Management"},{"content":"A pretty common task that a Kubernetes administrator must do is to resize the cluster. We need more nodes to handle more workloads, or we\u0026rsquo;ve overprovisioned a cluster and are trying to save costs. This usually took some custom automation scripts such as node autoscaler, or it was done manually based on request.\nTanzu Mission Control can resize our cluster very simply from the TMC portal.\nScale Out a Cluster Within the TMC Portal, find the cluster that needs to be resized. Within the cluster screen, find the \u0026ldquo;Node pools\u0026rdquo; menu. 
Node pools define the worker nodes that are part of the Kubernetes cluster thats been deployed.\nIn this screen, I have a default node pool which is the worker nodes I originally deployed. I COULD edit that pool and simply modify the number of nodes. This would adjust the worker nodes in my cluster accordingly. For this example though, I\u0026rsquo;ll create a new node pool which will adjust my worker nodes, but will configure basically create a new group of worker nodes. Maybe they are different size, or need different tags or something.\nAs you can see I\u0026rsquo;ve created a new node pool and specified the size and number of nodes. I also added two sets of tags. Note that there are two sets of options here:\nNode Label- Labels seen within the Kubernetes cluster.\nCloud label - Tags applied to the AWS instances.\nWhen I\u0026rsquo;ve got it setup the way I want, click save. As soon as I click save, TMC starts to provision my additional pool and add it to the Kubernetes clusters.\nAfter a few minutes the nodes have been deployed and added to the cluster. As you can see the cluster below shows all my nodes, and a second kubectl command lists only the nodes with the Node Label I specified for the new pool.\nLikewise in the AWS console, I can see a different tag for the Cloud Label listed on those instances.\nScale In a Cluster OK, that was fun, but we\u0026rsquo;re pretty worried about costs. Go in and edit your nodes to reduce the total size of the cluster now. I will do two actions.\nDelete my new hollow-pool1 node pool. reduce the size of the default-node-pool from 3 nodes to 2 First, we\u0026rsquo;ll delete the recently added node pool.\nClick Delete and accept the confirmation.\nNext, go to the default node pool and click edit, to change the number of worker nodes from 3 to 2.\nAfter a few minutes your cluster should match your desired state, configured within TMC.\nFor this example cluster, we started with six total nodes (including the control plane nodes), added two nodes in a new pool, and then deleted that pool and one additional node from the default pool for a total of five nodes left running.\n","permalink":"https://theithollow.com/2020/03/10/tanzu-mission-control-resize-clusters/","summary":"\u003cp\u003eA pretty common task that a Kubernetes administrator must do is to resize the cluster. We need more nodes to handle more workloads, or we\u0026rsquo;ve overprovisioned a cluster and are trying to save costs. This usually took some custom automation scripts such as node autoscaler, or it was done manually based on request.\u003c/p\u003e\n\u003cp\u003eTanzu Mission Control can resize our cluster very simply from the TMC portal.\u003c/p\u003e\n\u003ch2 id=\"scale-out-a-cluster\"\u003eScale Out a Cluster\u003c/h2\u003e\n\u003cp\u003eWithin the TMC Portal, find the cluster that needs to be resized. Within the cluster screen, find the \u0026ldquo;Node pools\u0026rdquo; menu. Node pools define the worker nodes that are part of the Kubernetes cluster thats been deployed.\u003c/p\u003e","title":"Tanzu Mission Control - Resize Clusters"},{"content":"Most of the blog posts I write about Kubernetes have examples using publicly available images from public image registries like DockerHub or Google Container Registry. But in the real world, companies use private registries for storing their container images. There are a list of reasons why you might want to do this including:\nCustom code is inside the container such as business logic or other intellectual property. 
On-premises private repos provide solutions to bandwidth or firewall restrictions.
Custom scanning software is being integrated for vulnerability management.
In this post, we'll set up our Kubernetes cluster to be able to use a private container registry.
Setup Kubernetes
For my lab, I've deployed Harbor to store some images and I've created certificates on the Harbor server. The images in my "hollowlab" project are simple images that I pulled down from a public repo, but they stand in for my super secret private images with sensitive data within them.
Before I can start working on setting up my cluster, I need to make sure that all of my Kubernetes nodes can securely communicate with my Harbor registry. Since I'm using self-signed certificates, I need to make sure my nodes will trust them. To do this, I copy the certificates into the /etc/ssl/certs directory and afterwards reload/restart the docker daemon so the changes take effect.
Once that step is completed, I must log in to the docker registry with my username and password.
docker login registry.domain.name -u username -p password
After the login has completed, the Docker config.json file (~/.docker/config.json) will have a section in it for the registry name and an auth token. Make sure that you've logged in to the docker registry and this auth token is present on every node within your Kubernetes cluster. A configuration management tool might come in handy here to make the changes across a fleet of servers.
Once the k8s nodes are authenticated through the container runtime, we use the docker config file to create a Kubernetes secret. Run the command below, replacing the path to your config file. NOTE: this requires your KUBECONFIG file to be configured so you can run kubectl commands against your k8s cluster.
kubectl create secret generic regcred --from-file=.dockerconfigjson=[pathToDockerConfigJsonHere] --type=kubernetes.io/dockerconfigjson
Once the secret has been created, you are free to use the images located in your private registry within your deployment files. You will need to insert the "imagePullSecrets" configuration option and reference the secret created above. This is so the cluster can authenticate with the registry properly.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-deployment
  labels:
    app: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: ubuntu-container
        image: harbor.hollow.local/hollowlab/wordpress #Your image here
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
Summary
Sometimes it makes sense to have a private registry set up to store code that shouldn't be available to the whole world. This works fine with Kubernetes; you just have to make sure your container runtime can authenticate with the registry by storing a secret and using that secret in your deployment manifests.
","permalink":"https://theithollow.com/2020/03/03/use-a-private-registry-with-kubernetes/","summary":"\u003cp\u003eMost of the blog posts I write about Kubernetes have examples using publicly available images from public image registries like DockerHub or Google Container Registry. But in the real world, companies use private registries for storing their container images. 
There are a list of reasons why you might want to do this including:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eCustom code is inside the container such as business logic or other intellectual property.\u003c/li\u003e\n\u003cli\u003eOn-premises private repos provide solutions to bandwidth or firewall restrictions.\u003c/li\u003e\n\u003cli\u003eCustom scanning software is being integrated for vulnerability management.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eIn this post, we\u0026rsquo;ll setup our Kubernetes cluster to be able to use a private container registry.\u003c/p\u003e","title":"Use a Private Registry with Kubernetes"},{"content":"Recently I was tasked with setting up some virtual machines to be used as a load balancer for a Kubernetes cluster. The environment we were deploying our Kubernetes cluster didn\u0026rsquo;t have a load balancer available, so we thought we\u0026rsquo;d just throw some envoy proxies on some VMs to do the job. This post will show you how the following tasks were completed:\nDeploy Envoy on a pair of CentOS7 virtual machines. Configure Envoy with health checks for the Kubernetes Control Plane Install keepalived on both servers to manage failover. Configure keepalived to failover if a server goes offline, or the envoy service is not started. Deploy Envoy The first step will be to setup a pair of CentOS 7 servers. I\u0026rsquo;ve used virtual servers for this post, but baremetal would work the same. Also, similar steps could be used if you prefer debian as your linux flavor.\nOnce there is a working pair of servers, its time to install envoy.\nsudo yum install -y yum-utils sudo yum-config-manager --add-repo https://getenvoy.io/linux/centos/tetrate-getenvoy.repo sudo yum install -y getenvoy-envoy Once the Envoy bits are installed, we should create a configuration file that tells envoy how to load balance across our Kubernetes control plane nodes and set health checks to make sure it is routed appropriately. 
Be sure to update this file with your own ports and server names/IP addresses before deploying.
cat <<EOF > /root/config.yaml
static_resources:
  listeners:
  - name: main
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 6443        # Kubernetes default port
    filter_chains:
    - filters:
      - name: envoy.tcp_proxy
        config:
          stat_prefix: ingress_tcp
          cluster: k8s
  clusters:
  - name: k8s
    connect_timeout: 0.25s
    type: strict_dns            # static
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: k8s-controller-0.hollow.local   # replace with k8s control plane node name
        port_value: 6443                         # Kubernetes default port
    - socket_address:
        address: k8s-controller-1.hollow.local   # replace with k8s control plane node name
        port_value: 6443                         # Kubernetes default port
    - socket_address:
        address: k8s-controller-2.hollow.local   # replace with k8s control plane node name
        port_value: 6443                         # Kubernetes default port
    health_checks:
    - timeout: 1s
      interval: 5s
      unhealthy_threshold: 1
      healthy_threshold: 1
      http_health_check:
        path: "/healthz"
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
EOF
Next, let's set up a systemd service so that Envoy will start on boot and restart if it crashes.
cat <<EOF > /etc/systemd/system/envoy.service
[Unit]
Description=Envoy Proxy
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/usr/bin/envoy -c /root/config.yaml

[Install]
WantedBy=multi-user.target
EOF
Lastly, we can enable and start the service.
sudo systemctl start envoy
sudo systemctl enable envoy
Make Envoy Highly Available
At this point in the post, you should have two virtual machines with Envoy installed and able to distribute traffic to your Kubernetes control plane nodes. Either one of them should work. But what we'd really like is a single IP address (a Virtual IP Address, or VIP) that can float between these two Envoy nodes depending on which one is healthy. To do this, we'll use the keepalived project.
The first step will be to install keepalived on both Envoy nodes.
sudo yum install keepalived
Keepalived will ensure that whichever node is healthy will own the VIP. But a healthy node now also means the Envoy service we created is in a running state. To ensure that our service is running, we need to create our own script. The script is very simple and just gathers the process ID of our Envoy service. If it can't get a process ID, the script will fail, and keepalived will note this error to manage failover.
cat <<EOF > /usr/local/bin/envoycheck.sh
pidof envoy
EOF
Our service will run that script as root, and for security reasons ONLY the root user should have access to execute or modify this script, so we need to change permissions. NOTE: if anyone other than root has access, the keepalived service will skip this check, so be sure to set the permissions correctly.
sudo chmod 700 /usr/local/bin/envoycheck.sh
Now we need to set the keepalived configuration on each of the nodes. Pick a node and deploy the following keepalived configuration to /etc/keepalived/keepalived.conf, which overwrites the existing configuration.
Node1
! Configuration File for keepalived
global_defs {
  enable_script_security
  script_user root
}
vrrp_script chk_envoy {
  script "/usr/local/bin/envoycheck.sh"   # Our custom health check
  interval 2                              # check every 2 seconds
}
vrrp_instance VI_1 {
  state MASTER
  interface ens192              # REPLACE WITH YOUR NETWORK INTERFACE
  virtual_router_id 51
  priority 101                  # PRIORITY
  unicast_src_ip 10.10.50.2     # IP address of this server
  unicast_peer {
    10.10.50.3                  # IP address of the secondary server
  }
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass MYPASSWORD        # PASSWORD
  }
  virtual_ipaddress {
    10.10.50.4                  # SHARED IP ADDRESS - VIP
  }
  track_script {
    chk_envoy
  }
}
When you're done with the first node, create a similar config file on the second node.
Node2
! Configuration File for keepalived
global_defs {
  enable_script_security
  script_user root
}
vrrp_script chk_envoy {
  script "/usr/local/bin/envoycheck.sh"   # Our custom health check
  interval 2                              # check every 2 seconds
}
vrrp_instance VI_1 {
  state MASTER
  interface ens192              # REPLACE WITH YOUR NETWORK INTERFACE
  virtual_router_id 51
  priority 100                  # PRIORITY - DIFFERENT FROM HOST 1
  unicast_src_ip 10.10.50.3     # IP ADDRESS OF THIS HOST
  unicast_peer {
    10.10.50.2                  # IP address of the first node
  }
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass MYPASSWORD        # PASSWORD - SAME AS HOST 1
  }
  virtual_ipaddress {
    10.10.50.4                  # SHARED IP ADDRESS - VIP - SAME AS HOST 1
  }
  track_script {
    chk_envoy
  }
}
Now we should be ready to go. Start and enable the keepalived service.
sudo service keepalived start
chkconfig keepalived on
Test failover
You may not have a Kubernetes cluster set up yet for a full test, but we can at least see if our Envoy server will fail over to the other node. To do this you can look at the messages log to see which keepalived node is advertising gratuitous ARPs in order to own the VIP.
tailf /var/log/messages
If you're looking at the standby Envoy node, the messages will state that the service is in a BACKUP state.
If you want to test the failover, stop the envoy service and see if the node in a backup state starts sending gratuitous ARPs to take over the VIP.
Summary
A virtual load balancer can be handy in a lot of situations. This case called for a way to distribute load to my Kubernetes control plane nodes, but the same pattern could really be used for anything. First, deploy Envoy and configure it to distribute load to the upstream services with the appropriate health checks. Then use keepalived to ensure that a VIP floats between the healthy Envoy nodes. What will you use this option to do? Post your configs in the comments.
","permalink":"https://theithollow.com/2020/02/24/highly-available-envoy-proxies-for-the-kubernetes-control-plane/","summary":"\u003cp\u003eRecently I was tasked with setting up some virtual machines to be used as a load balancer for a Kubernetes cluster. The environment we were deploying our Kubernetes cluster didn\u0026rsquo;t have a load balancer available, so we thought we\u0026rsquo;d just throw some envoy proxies on some VMs to do the job. 
This post will show you how the following tasks were completed:\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eDeploy Envoy on a pair of CentOS7 virtual machines.\u003c/li\u003e\n\u003cli\u003eConfigure Envoy with health checks for the Kubernetes Control Plane\u003c/li\u003e\n\u003cli\u003eInstall keepalived on both servers to manage failover.\u003c/li\u003e\n\u003cli\u003eConfigure keepalived to failover if a server goes offline, or the envoy service is not started.\u003c/li\u003e\n\u003c/ol\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2020/02/image-61-1024x495.png\"/\u003e \n\u003c/figure\u003e\n\n\u003ch2 id=\"deploy-envoy\"\u003eDeploy Envoy\u003c/h2\u003e\n\u003cp\u003eThe first step will be to setup a pair of CentOS 7 servers. I\u0026rsquo;ve used virtual servers for this post, but baremetal would work the same. Also, similar steps could be used if you prefer debian as your linux flavor.\u003c/p\u003e","title":"Highly Available Envoy Proxies for the Kubernetes Control Plane"},{"content":"Its 2020 and I\u0026rsquo;ve had plenty of time at home due to the social distancing and global pandemic going on. I\u0026rsquo;ve also been putting off purchasing any new home gear, thinking to myself that maybe the cloud only model will be my next lab, but it isn\u0026rsquo;t yet. Due to the work I\u0026rsquo;ve been doing with vSphere 7 and Kubernetes clusters, I couldn\u0026rsquo;t avoid updating my hardware any longer. Here\u0026rsquo;s the updated home lab for any enthusiasts.\nRack The rack is custom made and been in use for a while now. My lab sits in the basement on a concrete floor. So I built a wooden set of shelves on casters so I could roll it around if it was in the way. I place the UPS on the shelf so that I can unplug the power to move the lab. As long as I have a long enough Internet cable, I can wheel my lab around for as long as the UPS holds on. On one side I put a whiteboard so I could draw something out if I was stuck. I don\u0026rsquo;t use it that often, but I like that it covers the side of the rack.\nOn the back of the shelves, I added some cable management panels.\nPower As mentioned, I have a UPS powering my lab. It\u0026rsquo;s a CyberPower 1500 AVR. I\u0026rsquo;m currently running around 500 Watts for the lab under normal load. I\u0026rsquo;ve mounted a large power strip along the side and a few small strips on each shelf. I also bought some 6 inch IEC cables which really cuts down the cable clutter behind the lab.\nCompute I bought new compute so that I could run the vSphere 7 stack, complete with NSX-T, Tanzu Kubernetes Grid, and anything else you can think of. So I bought three new E200-8d Supermicro servers with a six core Intel processor and 128 GB of memory.\nFor local storage I use a 64 GB USB drive for the ESXi host OS disk and I added a 1 TB SSD and a 500 GB NVMe drive. These drives are added for capacity and caching tiers for VMware vSAN. There isn\u0026rsquo;t a lot of room for disk drives in this model, but they sure are compact enough to fit on a shelf.\nThese servers have two 10GbE NICs, two 1 GbE NICs, and an IPMI port for out of band management of the server. I wanted to be sure to have a way to power on and off the server, load images into a virtual cd-rom, etc. I was disappointed to find out that Supermicro now charges a license on top of the motherboard for these features. 
I ended up paying for the licenses, but will be sure to remember this the next time I go server shopping.\nI have two other servers using spare parts I had lying around. One of these is six core, the other a four core. Between them another 192 GB of RAM. These are also running vSphere 7 but in different clusters. The totals in vCenter are shown below.\nShared Storage For Storage, I have a tiered system. I have an eight bay Synology array used for virtual machines and file stores. Then I have a secondary Synology used as a backup device. Important information on the large Synology is backed up to the smaller one, and then pushed to Amazon S3 once a month for an offsite.\nvSphere Storage Array: Synology DS1815+ 8 TB available of spinning disks with dual 256 GB SSD for Caching File Storage and Backup Array: Synology DS1513+ 3.6 TB available of spinning Disk For machines I’m building over and over and want fast performance, I have a VSAN datastore with linked clones. This lets me spin up linux VMs in about 90 seconds from template. vSAN has become my place for ephemeral data. Sometimes I turn off vSAN if I\u0026rsquo;m not using it so I can power down ESXi nodes via DPM to cut down on power usage.\nNetwork My networking gear hasn\u0026rsquo;t been updated much. I love the Ubiquiti network devices for wireless. The Edge gateway serves as a perimeter firewall, and a nice gate between my wireless guests and the homelab. This lets me work on the lab even when the Playstation is running full throttle. :)\nI mounted my basement access point, USG and PoE switch to a piece of plywood and mounted it with a patch panel.\nCore Switch: HP v1910-24G Ethernet Switch Wireless Switch: Ubiquiti UniFi 8 POE-150W Storage/vMotion Switch: Netgear XS708E 10 Gigabit That switch was a gift from fellow vExpert Jason Langer /2016/12/19/unbelievable-gift-home-lab/ Wireless Firewall: Ubiquiti UniFi Security Gateway Wireless: Ubiquiti AC Pro Controller: UniFi Cloud Key The cables are colored according to purpose.\nYellow - Management Networks and Out of Band access. Green - Storage and vMotion Networks (10GbE) Blue - Trunk ports for virtual machines Red - Uplinks Cloud I’ve decided to use Amazon as my preferred cloud vendor. Mainly because I’ve done much more work here than on Azure. My AWS Accounts are configured in a hub spoke model which mimics a production like environment for customers.\nI use the cloud for backup archival, and just about anything you can think of that my homelab either can’t do or doesn’t have capacity for. I like to use solutions like Route53 for DNS so a lot of times my test workloads still end up in the cloud. Most of the accounts below are empty or have resources that don’t cost money, such as VPCs.\nMy overall monthly spend on AWS is around $35, most of which is spent on the VPN tunnel and some DNS records.\n","permalink":"https://theithollow.com/2020/02/15/2020-home-lab/","summary":"\u003cp\u003eIts 2020 and I\u0026rsquo;ve had plenty of time at home due to the social distancing and global pandemic going on. I\u0026rsquo;ve also been putting off purchasing any new home gear, thinking to myself that maybe the cloud only model will be my next lab, but it isn\u0026rsquo;t yet. Due to the work I\u0026rsquo;ve been doing with vSphere 7 and Kubernetes clusters, I couldn\u0026rsquo;t avoid updating my hardware any longer. 
Here\u0026rsquo;s the updated home lab for any enthusiasts.\u003c/p\u003e","title":"2020 Home Lab"},{"content":"Sometimes things don\u0026rsquo;t go quite as we\u0026rsquo;ve planned. When that happens in a computer system, we turn to the logs to tell us what went wrong, and to give us some clues on either how to fix the issue, or where to look for the next clue.This post focuses on where to look for issues in your Kubernetes deployment.\nBefore we dive into the logs, we must acknowledge that there are different ways to install a kubernetes cluster. The pieces and parts can be deployed as system services or containers, and the way to obtain their logs will change. This post uses a previous post about a k8s install as an example of where to find those logs.\nJournal Logs Some of the Kubernetes components will likely be installed as Linux systemd services. Any components deployed as systemd services can be found in the Linux journal on the host in which the service resides. This could include etcd logs, kubelet logs, or really any of the components running as a service. We\u0026rsquo;ll look at one component that should run as systemd below, the Kubelet.\nKubelet Logs The kubelet runs on every node in the cluster. Its used to make sure the containers on that node are healthy and running, it also runs our static pods, such as our API server. The kubelet logs are often reviewed to make sure that the cluster nodes are healthy. The kubelet is often the first place I look for issues when the answer isn\u0026rsquo;t obvious to me. Since the Kubelet runs as a systemd services we can access those logs by using the journalctl commands on the linux host.\njournalctl -xeu kubelet Container Logs The rest of this post assumes that the other components are running as containers. Container logs will be accessed differently and you must know the container you\u0026rsquo;re looking for so we\u0026rsquo;ll explain those below.\nBefore we discuss that, its important to realize that the container logs may be accessed in different ways depending on the situation. For example if the Kubernetes cluster is healthy enough to run kubectl commands, we\u0026rsquo;ll use the commands:\nkubectl logs \u0026lt;pod name goes here\u0026gt; But what if the Kubernetes cluster is so unhealthy that the kubectl commands don\u0026rsquo;t work? You won\u0026rsquo;t be able to run the commands to read the logs even.\nIf this happens, the next step is to get the logs through your container runtime. For example, if our API server container is running, but the K8s cluster isn\u0026rsquo;t, you must run the cri commands (Docker in this example) such as:\ndocker logs \u0026lt;container id goes here\u0026gt; You should now have the commands you need to get logs from the other Kubernetes components that are running in your cluster as containers. The list below will explain what types of issues you might have and which containers you might want to check to solve them.\nAPI Server Logs - The API server is the brains of the whole k8s operation. This is a good place to start if you\u0026rsquo;re having issues with anything in the cluster. These logs can be helpful to ensure that the API server can write to etcd correctly, or if there are authentication issues as a couple of examples. Kube Scheduler Logs - WHY AREN\u0026rsquo;T MY PODS STARTING!? The kube scheduler is responsible for placement of pods on the nodes. If you\u0026rsquo;re troubleshooting zones, or pods not starting up, check this container. 
Controller Manager Logs - The controller manager has a multitude of services its managing, but these include replica controllers and cloud providers. I\u0026rsquo;ve used these logs to troubleshoot issues with my vSPhere cloud provider connections. etcd Logs - The stateful storage for Kubernetes. The API Server is the only thing writing to this key/value store (or it should be!). If the etcd database isn\u0026rsquo;t happy, the cluster isn\u0026rsquo;t going to be very useful. Check this for information about etcd quorum and certificates when the API server is connecting to the store. Summary Hopefully you\u0026rsquo;ll never need to look up any Kubernetes logs because it just runs. My experience has told me that you\u0026rsquo;ll need to look up logs at some point in your k8s journey. Remember that any logs for the systemd services such as the Kubelet are found in the Journal. Other logs are written to stdout and those logs can be accessed by using either the kubectl logs commands or the container runtime logs commands. In an enterprise environment tools such as FluentD are often used to aggregate these logs and centrally store these for easier review. Happy log hunting!\n","permalink":"https://theithollow.com/2020/02/12/kubernetes-logs-for-troubleshooting/","summary":"\u003cp\u003eSometimes things don\u0026rsquo;t go quite as we\u0026rsquo;ve planned. When that happens in a computer system, we turn to the logs to tell us what went wrong, and to give us some clues on either how to fix the issue, or where to look for the next clue.This post focuses on where to look for issues in your Kubernetes deployment.\u003c/p\u003e\n\u003cp\u003eBefore we dive into the logs, we must acknowledge that there are different ways to install a kubernetes cluster. The pieces and parts can be deployed as system services or containers, and the way to obtain their logs will change. This post uses a \u003ca href=\"/2020/01/08/deploy-kubernetes-on-vsphere/\"\u003eprevious post\u003c/a\u003e about a k8s install as an example of where to find those logs.\u003c/p\u003e","title":"Kubernetes Logs for Troubleshooting"},{"content":"If you\u0026rsquo;ve been on the operations side of the IT house, you know that one of your primary job functions is to ensure High Availability (HA) of production workloads. This blog post focuses on making sure applications deployed on a vSphere Kubernetes cluster will be highly available.\nThe Control Plane Ok, before we talk about workloads, we should discuss the Kubernetes Control plane components. When we deploy Kubernetes on virtual machines, we have to make sure that the brains of the Kubernetes cluster will continue working even if there is a hardware failure. The first step is to make sure that your control plane components are deployed on different physical (ESXi) hosts. This can be done with a vSphere Host Affinity Rule to keep k8s VMs pinned to groups of hosts or anti-affinity rules to make sure two control plane nodes aren\u0026rsquo;t placed on the same host. After this is done, your Load Balancer should be configured to point to your k8s control plane VMs and a health check is configured for the /healthz path.\nAt the end of your configuration the control plane should look similar to this diagram where each control plane node is on different hardware, and accessed through a load balancer.\nWorker Node HA Now that our cluster is HA, we need to make sure that the worker nodes are also HA, and this is a bit trickier to work with. 
We\u0026rsquo;ve already discussed that our Kubernetes nodes (Control Plane Nodes/ Worker Nodes) should be spread across different physical hosts. So we can quickly figure out that our cluster will look similar to this diagram.\nThis is great, but now think about what would happen if we were to deploy our mission critical Kubernetes application with a pair of replicas for High Availability Reasons. The deployment could be perfect, but its also possible that the Kubernetes scheduler would place the deployment like this example below.\nYIKES! The Kubernetes scheduler did the right thing and distributed our app across multiple Kubernetes nodes, but it didn\u0026rsquo;t know that those two worker nodes live on the same physical host. If our ESXi Host #1 were to have a hardware failure, our app would experience a short outage while it was redeployed on another node.\nHow do we solve for this issue?\nConfigure Kubernetes Zones Lets re-configure our cluster to be aware of the underlying physical hardware. We\u0026rsquo;ll do this by configuring \u0026ldquo;zones\u0026rdquo; and placing hardware in them. For the purposes of illustrating this process, we\u0026rsquo;re going to configure two zones and ESXi-1 will be in Zone 1, and the other two ESXi hosts will be in Zone 2.\nNOTE: I\u0026rsquo;m not advocating for un-even zones, but to prove that Kubernetes will distribute load across zones evenly, I wanted to put an un-even number of nodes in each zone. For example, if I had three zones and put each host in its own zone, the k8s scheduler would place pods across each node evenly, which is what it would do even if there were no zones. Demonstrating with an un-even number should display this concept more clearly.\nCreate VMware VM/Host Affinity Rules Whenever we have applications that need to be placed on different ESXi hosts for availability, we use vSphere affinity rules. Here I\u0026rsquo;ve created two rules (1 for each zone) and placed our k8s vms and ESXi hosts in those zone names.\nFor example, the screenshot below shows our zone (AZ1) that has our k8s-worker-0 VM pinned to the esxi01 HOST.\nThen we create a second zone and pin my remaining two worker nodes on different hosts, ESXi02/ESXi03.\nMy home lab now looks like this, where I\u0026rsquo;ve pinned my k8s vms to nodes. You can see that the problem mentioned above could still happen, but we\u0026rsquo;ve ensured that our worker nodes won\u0026rsquo;t be moved between zones by vSphere DRS though.\nConfigure Kubernetes Zone Topology UPDATE:\nThe process seen below can also be done automatically with a combination of vSphere Tags and a properly configured vSphere cloud provider. That process can be found here: https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/zones.html\nNow we need to let Kubernetes know about our physical topology. The first thing we need to do is to enable a feature gate on our Kubernetes API server and Scheduler. This requires us to be running Kubernetes 1.16 or later, so if you\u0026rsquo;re running an older version, you\u0026rsquo;ll need to upgrade.\nSet the feature flag EvenPodsSpread on each of your kube-apiserver and kube-scheduler nodes to enable the feature. This is likely a configuration that should be added to the static pod manifests in your cluster. If you\u0026rsquo;re not sure where these are, check /etc/kubernetes/manifests on your control plane nodes. 
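As a rough illustration only, assuming a kubeadm-style layout where these components run as static pods, the change amounts to one extra flag in each manifest's command list, for example in /etc/kubernetes/manifests/kube-apiserver.yaml:

spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=EvenPodsSpread=true
    # ...leave the rest of the existing flags as they are...

Add the equivalent line to kube-scheduler.yaml as well. The kubelet watches that directory, so saving the manifest causes it to restart the static pod with the new flag.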
Thats where they\u0026rsquo;ll be if you followed this post on deploying Kubernetes on vSphere.\nExcellent, now the next thing we want to do is to add a label to our Kubernetes worker nodes that correspond to whatever zone name we want. I\u0026rsquo;m using AZ1 and AZ2. So I\u0026rsquo;ve applied a new label to my worker nodes named \u0026ldquo;zone\u0026rdquo; and their zone ID, as seen below.\nRepeat that process on each of your Kubernetes workers, keeping in mind what ESXi host they belong with. As you can see my labels match my vSphere config.\nUse Zones with our Deployments Now our Kubernetes worker nodes are in the appropriate zones, our VMs are spread across hosts within those zones and our API server and Scheduler components are aware of zone topologies. The next step is to configure our applications to respect our zones. Within the pod spec of our deployment manifests, we need to use the \u0026ldquo;topologySpreadConstraints\u0026rdquo; config and set the topologyKey to our zone label, which was \u0026ldquo;zone\u0026rdquo;.\nYou can see my nginx deployment in full, below. This is the same deployment manifest used in the getting started guid e, when we learned about Deployments. We\u0026rsquo;ve just added the topology section.\napiVersion: apps/v1 #version of the API to use kind: Deployment #What kind of object we\u0026#39;re deploying metadata: #information about our object we\u0026#39;re deploying name: nginx-deployment #Name of the deployment labels: #A tag on the deployments created app: nginx spec: #specifications for our object strategy: type: RollingUpdate rollingUpdate: #Update Pods a certain number at a time maxUnavailable: 1 #Total number of pods that can be unavailable at once maxSurge: 1 #Maximum number of pods that can be deployed above desired state replicas: 6 #The number of pods that should always be running selector: #which pods the replica set should be responsible for matchLabels: app: nginx #any pods with labels matching this I\u0026#39;m responsible for. template: #The pod template that gets deployed metadata: labels: #A tag on the replica sets created app: nginx spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: app: nginx containers: - name: nginx-container #the name of the container within the pod image: nginx:1.7.9 #which container image should be pulled ports: - containerPort: 80 #the port of the container within the pod Did it work? So I\u0026rsquo;ve deployed my mission critical app to my re-configured vSphere cluster. Lets see how the distribution came out in our cluster. We can see that there are six pods deployed across three nodes. The cool part is that we deployed three pods on worker-0 within AZ1, and three pods on the other two workers which are in AZ2. So while the number of ESXi nodes in each zone are un-even, the distribution of pods respected our zone boundaries, making our application highly available.\nOther Considerations This post was just supposed to help explain how the zones can be configured for Kubernetes clusters on vSphere. There are other considerations such as sizing to take into account. The example we used left un-even numbers of worker nodes in each zone. This means that AZ1 is likely to be full before AZ2 is. When this happens, the scheduler won\u0026rsquo;t be able to distribute the workloads across both zones and you may have an availability issue again. 
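A quick way to spot that kind of skew is to compare each node's zone label with where the pods are running; a small sketch using the zone and app=nginx labels from the example above:

kubectl get nodes -L zone                                      # prints the zone label as an extra column
kubectl get pods -l app=nginx -o wide --sort-by=.spec.nodeName

If a zone fills up, the imbalance (or pods stuck in Pending, depending on the whenUnsatisfiable setting) shows up quickly in that output.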
Zones should be similar in size to ensure that you can spread the pods out over those nodes.\nZones are great, but you can also use other topology labels such as regions if you need to break down your cluster further. Maybe you want to spread pods out over multiple regions as well as your zones?\nAlso, your vSphere environment is probably pretty flexible. An administrator might need to vMotion these workers to other nodes for maintenance. If that happens, Kubernetes won\u0026rsquo;t know about any zone topology changes. Be sure to keep your zone topology fairly static or you could encounter issues.\nSummary Kubernetes deployments will try to spread out your pods across as many nodes as possible, but doing so doesn\u0026rsquo;t necessarily mean that they are highly available. Setting your zones up correctly can increase the availability of your applications deployed on Kubernetes. Good luck setting up your environment!\n","permalink":"https://theithollow.com/2020/01/27/kubernetes-ha-on-vsphere/","summary":"\u003cp\u003eIf you\u0026rsquo;ve been on the operations side of the IT house, you know that one of your primary job functions is to ensure High Availability (HA) of production workloads. This blog post focuses on making sure applications deployed on a vSphere Kubernetes cluster will be highly available.\u003c/p\u003e\n\u003ch2 id=\"the-control-plane\"\u003eThe Control Plane\u003c/h2\u003e\n\u003cp\u003eOk, before we talk about workloads, we should discuss the Kubernetes Control plane components. When we deploy Kubernetes on virtual machines, we have to make sure that the brains of the Kubernetes cluster will continue working even if there is a hardware failure. The first step is to make sure that your control plane components are deployed on different physical (ESXi) hosts. This can be done with a vSphere Host Affinity Rule to keep k8s VMs pinned to groups of hosts or anti-affinity rules to make sure two control plane nodes aren\u0026rsquo;t placed on the same host. After this is done, your Load Balancer should be configured to point to your k8s control plane VMs and a health check is configured for the /healthz path.\u003c/p\u003e","title":"Kubernetes HA on vSphere"},{"content":"You\u0026rsquo;ve stood up your Kubernetes (k8s) cluster and are really looking forward to all of your coworkers deploying containers on it. How will you get everyone logged in? Creating local service accounts and distributing KUBECONFIG files (securely), seems like a real chore. This post will show how you can use Active Directory authentication for Kubernetes Clusters.\nThis post will use two projects, dex and gangway, to perform the authentication against ldap and return the Kubernetes login information to the user\u0026rsquo;s browser. The end result will look something like the screen below. The authenticated user will receive instructions on installing the client and setting up certificates for authentication.\nPrerequisites This post has several prerequisites that should be in place before setting up authentication with your Active Directory servers.\nA Working Kubernetes Cluster which connectivity to the AD infrastructure for Auth to take place. Cert-Manager should be installed, or be prepared to handle your own certificates for any new apps deployed. An article for using cert-manager can be found here. Cert-Manager will automatically deploy certificates to Dex/Gangway. An Ingress controller sending traffic to the apps that will be deployed in this post. Ingress Controller information can be found here. 
Permissions to make changes on the cluster. Create a shared secret that will be used in both gangway and dex configurations so that they may authenticate with each other. Use: \u0026ldquo;openssl rand -base64 32\u0026rdquo; and store this secret for use in this post. Infrastructure Setup We\u0026rsquo;ll need a couple of DNS names configured so that traffic will be delivered to dex and gangway from outside the cluster. Cert-Manager will need to be configured so that Dex and Gangway get their certificates installed on the Ingress Controller.\nAdditionally, if you\u0026rsquo;re looking for more information on how dex and gangway will interact with LDAP and the user\u0026rsquo;s browser, the section below will describe the authentication process.\nGroup Permissions Setup For this lab, I want any users that are part of the \u0026ldquo;k8s_access\u0026rdquo; Active Directory group to have admin access to my cluster. First, create your Active Directory Group and place the users you wish to have access into this group. Then ensure you\u0026rsquo;ve got connection information for your AD servers handy, so we can use them in this first step.\nWe\u0026rsquo;ll also need a Kubernetes Role Binding so that when a user that is a member of this group authenticates, it will receive the proper permissions. Here is the role binding in my lab.\napiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: admin-user roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: Group name: k8s_access Apply the role binding above to your cluster, or make your changes and apply.\nDex and Gangway Now, assuming all of our prerequisites are in order, lets get to deploying our authentication tools into our Kubernetes cluster. As mentioned, we\u0026rsquo;ll use two tools, Dex and Gangway, to provide the authentication mechanisms for Active Directory.\nDex will serve as the identity provider that will validate our credentials with the Active Directory (ldap) identity store. Dex uses OpenID Connect to perform this validation.\nGangway will enable the end users to self-configure their kubectl configuration using the OpenID Connect Token provided by Dex after successful authentication. The full process can be seen in the below example.\nUser attempts a login request to the gangway URL Gangway does a browser redirect to Dex through the user’s web browser Dex Responds to the request by responding with a login Form The user submits the login information to Dex Dex uses the login information to authenticate with LDAP to verify the credentials Dex provides a JWT back to gangway through a browser redirect Gangway provides the token needed to access the Kubernetes API via the web portal User takes the authentication information and places it in KUBECONFIG. Then is free to use KUBECONFIG to run commands against the Kubernetes API For the deployment of Dex and Gangway, we\u0026rsquo;ll be building off the work of one of my colleagues, Alex Brand, who has a great tutorial of deploying Dex and Gangway in a Kubernetes cluster. We\u0026rsquo;ll only slightly modify it for use with Active Directory and the Cert-Manager issuers that we\u0026rsquo;ve used in a previous post. If you just want to learn Dex and Gangway, please check out his github project which is an excellent tutorial.\nDeploy Dex First we\u0026rsquo;ll deploy Dex, which is where our Active Directory Configuration will be necessary. Based on Alex\u0026rsquo;s tutorial, we\u0026rsquo;ll be deploying four items. 
A Configmap setup information for Dex as well as the ldap connector, and then the containers that are part of a k8s deployment. Then lastly we will deploy a service and ingress to provide access to this service from outside the cluster. First, let\u0026rsquo;s look at the configmap and then apply it.\nThe configmap below contains important information for Dex to do the authentication piece. There is connection information in here that is currently using non-secure ldap connections. The configuration also includes how dex should search ldap for your users, and then also list any groups those users are members of. The configmap that was used in my lab is shown below.\nkind: ConfigMap apiVersion: v1 metadata: name: dex data: config.yaml: | issuer: https://dex.theithollowlab.com/dex storage: type: sqlite3 config: file: dex.db # Configuration for the HTTP endpoints. web: http: 0.0.0.0:5556 staticClients: - id: gangway redirectURIs: - https://gangway.theithollowlab.com/callback name: \u0026#34;Heptio Gangway\u0026#34; secret: mfgDcwBEgSgFehUFdQh2fhbftrgPOQWy0Q05gZgY8bs= #shared secret from prerequisites connectors: - type: ldap id: ldap name: LDAP config: host: 10.0.4.251:389 #Address of AD Server # Following field is required if the LDAP host is not using TLS (port 389). # Because this option inherently leaks passwords to anyone on the same network # as dex, THIS OPTION MAY BE REMOVED WITHOUT WARNING IN A FUTURE RELEASE. # insecureNoSSL: true # If a custom certificate isn\u0026#39;t provide, this option can be used to turn on # TLS certificate checks. As noted, it is insecure and shouldn\u0026#39;t be used outside # of explorative phases. # insecureSkipVerify: true # When connecting to the server, connect using the ldap:// protocol then issue # a StartTLS command. If unspecified, connections will use the ldaps:// protocol # # startTLS: true # Path to a trusted root certificate file. Default: use the host\u0026#39;s root CA. # rootCA: /etc/dex/ldap.ca bindDN: CN=binduser,cn=users,dc=hollowaws,dc=local #user with access to search AD bindPW: Password123 #password of user with access to search AD # The attribute to display in the provided password prompt. usernamePrompt: AD Username # User search maps a username and password entered by a user to a LDAP entry. userSearch: baseDN: dc=hollowaws,dc=local # BaseDN to start the search from. # Optional filter to apply when searching the directory. filter: \u0026#34;(objectClass=person)\u0026#34; # username attribute used for comparing user entries. This will be translated # and combined with the other filter as \u0026#34;(\u0026lt;attr\u0026gt;=\u0026lt;username\u0026gt;)\u0026#34;. username: sAMAccountName # The following three fields are direct mappings of attributes on the user entry. # String representation of the user. idAttr: sAMAccountName # Required. Attribute to map to Email. emailAttr: userPrincipalName # Maps to display name of users. No default value. nameAttr: displayName # Group search queries for groups given a user entry. groupSearch: # BaseDN to start the search from. It will translate to the query # \u0026#34;(\u0026amp;(objectClass=group)(member=\u0026lt;user uid\u0026gt;))\u0026#34;. baseDN: OU=k8s,DC=hollowaws,DC=local # Optional filter to apply when searching the directory. filter: \u0026#34;(objectClass=group)\u0026#34; # Following two fields are used to match a user to a group. It adds an additional # requirement to the filter that an attribute in the group must match the user\u0026#39;s # attribute value. 
userAttr: distinguishedName groupAttr: member # Represents group name. nameAttr: cn Once the configmap has been configured for your environment and applied to your Kubernetes cluster, we can move on to deploying the rest of the dex components. Next, we\u0026rsquo;ll deploy our containers.\napiVersion: apps/v1 kind: Deployment metadata: labels: app: dex name: dex namespace: default spec: replicas: 1 selector: matchLabels: app: dex strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: labels: app: dex spec: containers: - command: - /usr/local/bin/dex - serve - /etc/dex/cfg/config.yaml image: quay.io/coreos/dex:v2.10.0 imagePullPolicy: Always name: dex ports: - containerPort: 5556 name: https protocol: TCP volumeMounts: - mountPath: /etc/dex/cfg name: config volumes: - configMap: items: - key: config.yaml path: config.yaml name: dex name: config When the containers have been deployed through the above deployment manifest, a service and ingress rule should be deployed. For the ingress rule, be sure that you\u0026rsquo;ve updated your configuration to include the appropriate issuer deployed as part of the cert-manager prerequisites, and update your DNS names for the ingress rule for your environment.\n--- kind: Service apiVersion: v1 metadata: name: dex spec: selector: app: dex ports: - port: 5556 targetPort: https name: https --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: dex annotations: kubernetes.io/tls-acme: \u0026#34;true\u0026#34; cert-manager.io/cluster-issuer: \u0026#34;letsencrypt-production\u0026#34; #cert-manager issuer name spec: tls: - hosts: - dex.theithollowlab.com secretName: dex-tls rules: - host: dex.theithollowlab.com #Your DNS Name for Dex http: paths: - backend: serviceName: dex servicePort: https Deploy Gangway Now we\u0026rsquo;re ready to deploy Gangway which will be how the user interacts with the solution to get credentials. Gangway acts as the OIDC client.\nGangway will be deployed in its own namespace, and then a configmap with the gangway configs will be deployed first before our containers.\n--- apiVersion: v1 kind: Namespace metadata: name: gangway --- apiVersion: v1 kind: ConfigMap metadata: name: gangway namespace: gangway data: gangway.yaml: | # The cluster name. Used in UI and kubectl config instructions. # Env var: GANGWAY_CLUSTER_NAME clusterName: \u0026#34;hollowcluster\u0026#34; # OAuth2 URL to start authorization flow. # Env var: GANGWAY_AUTHORIZE_URL authorizeURL: \u0026#34;https://dex.theithollowlab.com/dex/auth\u0026#34; #replace the domain name with your domain # OAuth2 URL to obtain access tokens. # Env var: GANGWAY_TOKEN_URL tokenURL: \u0026#34;https://dex.theithollowlab.com/dex/token\u0026#34; #replace the domain name with your domain # Used to specify the scope of the requested Oauth authorization. scopes: [\u0026#34;openid\u0026#34;, \u0026#34;profile\u0026#34;, \u0026#34;email\u0026#34;, \u0026#34;offline_access\u0026#34;, \u0026#34;groups\u0026#34;] # Where to redirect back to. This should be a URL where gangway is reachable. # Typically this also needs to be registered as part of the oauth application # with the oAuth provider. 
# Env var: GANGWAY_REDIRECT_URL redirectURL: \u0026#34;https://gangway.theithollowlab.com/callback\u0026#34; #replace the domain name with your domain # API client ID as indicated by the identity provider # Env var: GANGWAY_CLIENT_ID clientID: \u0026#34;gangway\u0026#34; # API client secret as indicated by the identity provider # Env var: GANGWAY_CLIENT_SECRET clientSecret: \u0026#34;mfgDcwBEgSgFehUFdQh2fhbftrgPOQWy0Q05gZgY8bs=\u0026#34; #secret key from prerequisites again. This should match the Dex key # The JWT claim to use as the username. This is used in UI. # Default is \u0026#34;nickname\u0026#34;. # Env var: GANGWAY_USERNAME_CLAIM usernameClaim: \u0026#34;sub\u0026#34; # The JWT claim to use as the email claim. This is used to name the # \u0026#34;user\u0026#34; part of the config. Default is \u0026#34;email\u0026#34;. # Env var: GANGWAY_EMAIL_CLAIM emailClaim: \u0026#34;email\u0026#34; # The API server endpoint used to configure kubectl # Env var: GANGWAY_APISERVER_URL apiServerURL: https://cp.theithollowlab.com:6443 #This should be your k8s API URL Just like we did with Dex, we\u0026rsquo;ll next deploy our containers which are part of the deployment manifest below.\napiVersion: apps/v1 kind: Deployment metadata: name: gangway namespace: gangway labels: app: gangway spec: replicas: 1 selector: matchLabels: app: gangway template: metadata: labels: app: gangway revision: \u0026#34;1\u0026#34; spec: containers: - name: gangway image: gcr.io/heptio-images/gangway:v2.0.0 imagePullPolicy: Always command: [\u0026#34;gangway\u0026#34;, \u0026#34;-config\u0026#34;, \u0026#34;/gangway/gangway.yaml\u0026#34;] env: - name: GANGWAY_SESSION_SECURITY_KEY valueFrom: secretKeyRef: name: gangway-key key: sesssionkey ports: - name: http containerPort: 8080 protocol: TCP resources: requests: cpu: \u0026#34;100m\u0026#34; memory: \u0026#34;128Mi\u0026#34; limits: cpu: \u0026#34;200m\u0026#34; memory: \u0026#34;512Mi\u0026#34; volumeMounts: - name: gangway mountPath: /gangway/ livenessProbe: httpGet: path: / port: 8080 initialDelaySeconds: 20 timeoutSeconds: 1 periodSeconds: 60 failureThreshold: 3 readinessProbe: httpGet: path: / port: 8080 timeoutSeconds: 1 periodSeconds: 10 failureThreshold: 3 volumes: - name: gangway configMap: name: gangway And then lastly, we\u0026rsquo;ll deploy a service and ingress rule to allow communication to our gangway containers. Be sure to update the dns rules and issuers\n--- kind: Service apiVersion: v1 metadata: name: gangwaysvc namespace: gangway labels: app: gangway spec: type: ClusterIP ports: - name: \u0026#34;http\u0026#34; protocol: TCP port: 80 targetPort: \u0026#34;http\u0026#34; selector: app: gangway --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: gangway namespace: gangway annotations: kubernetes.io/tls-acme: \u0026#34;true\u0026#34; cert-manager.io/cluster-issuer: \u0026#34;letsencrypt-production\u0026#34; #your cert-manager issuer here spec: tls: - secretName: gangway hosts: - gangway.theithollowlab.com #dns name previously configured for gangway rules: - host: gangway.theithollowlab.com #dns name previously configured for gangway http: paths: - backend: serviceName: gangwaysvc servicePort: http Try it out! Its that moment, you\u0026rsquo;ve been waiting for. Lets try to login with a user in our AD group that should have permissions to the cluster. Navigate to your gangway URL which in my case was https://gangway.theithollowlab.com.\nClick the sign in button to continue. Then enter your AD Username and Password for the user in question. 
Notice that in the screenshot we were redirected to Dex for this step.\nAfter logging in with the my test user, we\u0026rsquo;re presented with the option to grant access. This is a good sign. Click the \u0026ldquo;Grant Access\u0026rdquo; button.\nNow, we\u0026rsquo;ll see that we\u0026rsquo;ve been redirected back to Gangway with the instructions on configuring kubectl for the command line.\nAfter installing kubectl and executing the commands from the screen, you should be able to run a kubectl command against the cluster with no problem.\nHere are those steps in my cli, and the last command is a simple get on the pods running in the default namespace.\nSummary Setting up a way to authenticate with a corporate directory for authentication is almost a must for most organizations. Its hard to have systems running everywhere with their own directory services so AD is pretty common. I hope this post helped show how you can connect your Kubernetes cluster to Active Directory to help ease this burden.\nIf you want more information around these projects, please check out these resources:\nDex - https://github.com/dexidp/dex Gangway - https://github.com/heptiolabs/gangway Dex/Gangway Tutorial - https://github.com/alexbrand/gangway-dex-tutorial TGIK - https://www.youtube.com/watch?v=xYMA-S75_9U The code for this post can be found on this github repository for easier access: https://github.com/eshanks16/k8s-ldap\n","permalink":"https://theithollow.com/2020/01/21/active-directory-authentication-for-kubernetes-clusters/","summary":"\u003cp\u003eYou\u0026rsquo;ve stood up your Kubernetes (k8s) cluster and are really looking forward to all of your coworkers deploying containers on it. How will you get everyone logged in? Creating local service accounts and distributing KUBECONFIG files (securely), seems like a real chore. This post will show how you can use Active Directory authentication for Kubernetes Clusters.\u003c/p\u003e\n\u003cp\u003eThis post will use two projects, \u003ca href=\"https://github.com/dexidp/dex\"\u003edex\u003c/a\u003e and \u003ca href=\"https://github.com/heptiolabs/gangway\"\u003egangway\u003c/a\u003e, to perform the authentication against ldap and return the Kubernetes login information to the user\u0026rsquo;s browser. The end result will look something like the screen below. The authenticated user will receive instructions on installing the client and setting up certificates for authentication.\u003c/p\u003e","title":"Active Directory Authentication for Kubernetes Clusters"},{"content":"The way you deploy Kubernetes (k8s) on AWS will be similar to how it was done in a previous post on vSphere. You still setup nodes, you still deploy kubeadm, and kubectl but there are a few differences when you change your cloud provider. For instance on AWS we can use the LoadBalancer resource against the k8s API and have AWS provision an elastic load balancer for us. These features take a few extra tweaks in AWS.\nAWS Prerequisites Before we start deploying Kubernetes, we need to ensure a few prerequisites are covered in our AWS environment. You will need the following components setup to use the cloud provider and have the kubeadm installation complete successfully.\nAn AWS Account with administrative access. EC2 instances for Control plane nodes (This post uses three ubuntu nodes split across AWS Availability Zones, but this is not necessary.) EC2 instances hostname must match the private dns name assigned to it. 
For example, when I deployed my ec2 instance from template and ran hostname, I got ip-10-0-4-208. However, when I check the private DNS from the ec2 instance metadata by the following command curl http://169.254.169.254/latest/meta-data/local-hostname I received: ip-10-0-4-208.us-east-2.compute.internal. This won\u0026rsquo;t work. The hostname command must match the private dns name exactly. Use this command to set the hostname. hostnamectl set-hostname \u0026lt;hostname.region.compute.internal\u0026gt; To ensure the Control plane instances have access to AWS to spin up Load Balancers, ebs volumes, etc. we must apply an Instance Policy to the Control Plane nodes with the following iam policy: { \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;autoscaling:DescribeAutoScalingGroups\u0026#34;, \u0026#34;autoscaling:DescribeLaunchConfigurations\u0026#34;, \u0026#34;autoscaling:DescribeTags\u0026#34;, \u0026#34;ec2:DescribeInstances\u0026#34;, \u0026#34;ec2:DescribeRegions\u0026#34;, \u0026#34;ec2:DescribeRouteTables\u0026#34;, \u0026#34;ec2:DescribeSecurityGroups\u0026#34;, \u0026#34;ec2:DescribeSubnets\u0026#34;, \u0026#34;ec2:DescribeVolumes\u0026#34;, \u0026#34;ec2:CreateSecurityGroup\u0026#34;, \u0026#34;ec2:CreateTags\u0026#34;, \u0026#34;ec2:CreateVolume\u0026#34;, \u0026#34;ec2:ModifyInstanceAttribute\u0026#34;, \u0026#34;ec2:ModifyVolume\u0026#34;, \u0026#34;ec2:AttachVolume\u0026#34;, \u0026#34;ec2:AuthorizeSecurityGroupIngress\u0026#34;, \u0026#34;ec2:CreateRoute\u0026#34;, \u0026#34;ec2:DeleteRoute\u0026#34;, \u0026#34;ec2:DeleteSecurityGroup\u0026#34;, \u0026#34;ec2:DeleteVolume\u0026#34;, \u0026#34;ec2:DetachVolume\u0026#34;, \u0026#34;ec2:RevokeSecurityGroupIngress\u0026#34;, \u0026#34;ec2:DescribeVpcs\u0026#34;, \u0026#34;elasticloadbalancing:AddTags\u0026#34;, \u0026#34;elasticloadbalancing:AttachLoadBalancerToSubnets\u0026#34;, \u0026#34;elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\u0026#34;, \u0026#34;elasticloadbalancing:CreateLoadBalancer\u0026#34;, \u0026#34;elasticloadbalancing:CreateLoadBalancerPolicy\u0026#34;, \u0026#34;elasticloadbalancing:CreateLoadBalancerListeners\u0026#34;, \u0026#34;elasticloadbalancing:ConfigureHealthCheck\u0026#34;, \u0026#34;elasticloadbalancing:DeleteLoadBalancer\u0026#34;, \u0026#34;elasticloadbalancing:DeleteLoadBalancerListeners\u0026#34;, \u0026#34;elasticloadbalancing:DescribeLoadBalancers\u0026#34;, \u0026#34;elasticloadbalancing:DescribeLoadBalancerAttributes\u0026#34;, \u0026#34;elasticloadbalancing:DetachLoadBalancerFromSubnets\u0026#34;, \u0026#34;elasticloadbalancing:DeregisterInstancesFromLoadBalancer\u0026#34;, \u0026#34;elasticloadbalancing:ModifyLoadBalancerAttributes\u0026#34;, \u0026#34;elasticloadbalancing:RegisterInstancesWithLoadBalancer\u0026#34;, \u0026#34;elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\u0026#34;, \u0026#34;elasticloadbalancing:AddTags\u0026#34;, \u0026#34;elasticloadbalancing:CreateListener\u0026#34;, \u0026#34;elasticloadbalancing:CreateTargetGroup\u0026#34;, \u0026#34;elasticloadbalancing:DeleteListener\u0026#34;, \u0026#34;elasticloadbalancing:DeleteTargetGroup\u0026#34;, \u0026#34;elasticloadbalancing:DescribeListeners\u0026#34;, \u0026#34;elasticloadbalancing:DescribeLoadBalancerPolicies\u0026#34;, \u0026#34;elasticloadbalancing:DescribeTargetGroups\u0026#34;, \u0026#34;elasticloadbalancing:DescribeTargetHealth\u0026#34;, 
\u0026#34;elasticloadbalancing:ModifyListener\u0026#34;, \u0026#34;elasticloadbalancing:ModifyTargetGroup\u0026#34;, \u0026#34;elasticloadbalancing:RegisterTargets\u0026#34;, \u0026#34;elasticloadbalancing:DeregisterTargets\u0026#34;, \u0026#34;elasticloadbalancing:SetLoadBalancerPoliciesOfListener\u0026#34;, \u0026#34;iam:CreateServiceLinkedRole\u0026#34;, \u0026#34;kms:DescribeKey\u0026#34; ], \u0026#34;Resource\u0026#34;: [ \u0026#34;*\u0026#34; ] } ] } EC2 instances for worker nodes (Likewise, this post uses three ubuntu worker nodes across AZs) Worker node EC2 instances also need an AWS Instance Profile assigned to them with permissions to the AWS control plane. { \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;ec2:DescribeInstances\u0026#34;, \u0026#34;ec2:DescribeRegions\u0026#34;, \u0026#34;ecr:GetAuthorizationToken\u0026#34;, \u0026#34;ecr:BatchCheckLayerAvailability\u0026#34;, \u0026#34;ecr:GetDownloadUrlForLayer\u0026#34;, \u0026#34;ecr:GetRepositoryPolicy\u0026#34;, \u0026#34;ecr:DescribeRepositories\u0026#34;, \u0026#34;ecr:ListImages\u0026#34;, \u0026#34;ecr:BatchGetImage\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;*\u0026#34; } ] } All EC2 instances (Worker and Control Plane nodes) must have a tag named kubernetes.io/cluster/\u0026lt;CLUSTERNAME\u0026gt; where CLUSTERNAME is a name you\u0026rsquo;ll give your cluster. Subnets must have a tag named kubernetes.io/cluster/\u0026lt;CLUSTERNAME\u0026gt;. where CLUSTERNAME is the name you\u0026rsquo;ll give your cluster. This is used when new Load Balancers are attached to Availability Zones. An elastic load balancer setup and configured with the three master EC2 instances. There should be a health check on the targets of SSL:6443 as well as listeners. A DNS Entry configured to work with your load balancer. The high level view of my instances are shown here:\nCreate a Kubeadm Config File Now that the AWS infrastructure is ready to go, we\u0026rsquo;re ready to stat working on the Kubernetes pieces. The first section is dedicated to setting up a kubeadm.conf file. This file has instructions on how to setup the control plane components when we use kubeadm to bootstrap them. There are a ton of options that can be configured here, but we\u0026rsquo;ll use a simple example that has AWS cloud provider configs included.\nCreate a kubeadm.conf file based on the example below, using your own environment information.\n--- apiServer: extraArgs: cloud-provider: aws apiServerCertSANs: - cp.theithollowlab.com apiServerExtraArgs: endpoint-reconciler-type: lease apiVersion: kubeadm.k8s.io/v1beta1 clusterName: hollowk8s #your cluster name controlPlaneEndpoint: cp.theithollowlab.com #your VIP DNS name controllerManager: extraArgs: cloud-provider: aws configure-cloud-routes: \u0026#39;false\u0026#39; kind: ClusterConfiguration kubernetesVersion: 1.17.0 #your desired k8s version networking: dnsDomain: cluster.local podSubnet: 172.16.0.0/16 #your pod subnet matching your CNI config nodeRegistration: kubeletExtraArgs: cloud-provider: aws Kubernetes EC2 Instance Setup Now, we can start installing components on the ubuntu instances before we deploy the cluster. 
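Before touching the instances themselves, it can save some debugging later to confirm the cluster tag prerequisite really is in place everywhere (the Common Errors section below shows what the failures look like when it is not). A minimal sketch with the AWS CLI, assuming the hollowk8s cluster name from the kubeadm.conf above and placeholder instance/subnet IDs; the Value of owned is my assumption, since the in-tree provider mainly keys off the tag name:

# tag an instance and a subnet with the cluster tag (repeat for every node and subnet)
aws ec2 create-tags --resources i-0123456789abcdef0 subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/hollowk8s,Value=owned

# list everything that currently carries the tag
aws ec2 describe-tags --filters "Name=key,Values=kubernetes.io/cluster/hollowk8s"

With the tags confirmed, the guest OS setup below is the same on every instance.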
Do this on all virtual machines that will be part of your Kubernetes cluster.\nDisable swap\nswapoff -a sed -i.bak -r \u0026#39;s/(.+ swap .+)/#\\1/\u0026#39; /etc/fstab Install Kubelet, Kubeadm, and Kubectl.\nsudo apt-get update \u0026amp;\u0026amp; sudo apt-get install -y apt-transport-https curl curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - cat \u0026lt;\u0026lt;EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list deb https://apt.kubernetes.io/ kubernetes-xenial main EOF sudo apt-get update sudo apt-get install -y kubelet kubeadm kubectl sudo apt-mark hold kubelet kubeadm kubectl Install Docker and change the cgroup driver to systemd.\nsudo apt install docker.io -y cat \u0026gt; /etc/docker/daemon.json \u0026lt;\u0026lt;EOF { \u0026#34;exec-opts\u0026#34;: [\u0026#34;native.cgroupdriver=systemd\u0026#34;], \u0026#34;log-driver\u0026#34;: \u0026#34;json-file\u0026#34;, \u0026#34;log-opts\u0026#34;: { \u0026#34;max-size\u0026#34;: \u0026#34;100m\u0026#34; }, \u0026#34;storage-driver\u0026#34;: \u0026#34;overlay2\u0026#34; } EOF sudo systemctl restart docker sudo systemctl enable docker Once you\u0026rsquo;ve completed the steps above, copy the kubeadm.conf file to /etc/kubernetes/ on the Control Plane VMs.\nAfter the kubeadm.conf file is placed we need to update the configuration of the kubelet service so that it knows about the AWS environment as well. Edit the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add an additional configuration option.\n--cloud-provider=aws Once the configuration has been made, reset the kubelet daemon.\nsystemctl daemon-reload Bootstrap the First K8s Control Plane Node The time has come to setup the cluster. Login to one of your control plane nodes which will become the first master in the cluster. We’ll run the kubeadm initialization with the kubeadm.conf file that we created earlier and placed in the /etc/kubernetes directory.\nkubeadm init --config /etc/kubernetes/kubeadm.conf --upload-certs It may take a bit for the process to complete. Kubeadm init is ensuring that our api-server, controller-manager, and etcd container images are downloaded as well as creating certificates which you should find in the /etc/kubernetes/pki directory.\nWhen the process is done, you should receive instructions on how to add additional control plane nodes and worker nodes.\nAdd Additional Control Plane Nodes We can now take the information provided from our init command and run the kubeadm join provided in our output, on the other two control plane nodes.\nBefore we add those additional control plan nodes, you’ll need to copy the contents of the pki directory to the other control plane nodes. This is needed because they need those certificates for authentication purposes with the existing control plane node.\nYour instructions are going to be different. The command below was what was provided to me.\nkubeadm join cp.theithollowlab.com:6443 --token c5u9ax.ua2896eckzxms16p \\ --discovery-token-ca-cert-hash sha256:ceaa3e06e246228bbca02cc9b107070bf42383778a66b91c0bc3fecaac1fd26b \\ --control-plane --certificate-key 82b68a17877ab24c705b427d41373068482c7e6bc07b28ae8e63062019371989 When you\u0026rsquo;re done with your additional control plane nodes, you should see a success message with some instrucitons on setting up the KUBECONFIG file, which we\u0026rsquo;ll cover later.\nJoin Worker Nodes to the Cluster At this point we should have three control plane nodes working in our cluster. 
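If you want to sanity check that before joining workers, a quick look from the first control plane node is enough (this uses the admin kubeconfig that the KUBECONFIG section below walks through):

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes                          # all three control plane nodes listed; NotReady is expected until a CNI is installed
kubectl get pods -n kube-system -o wide    # one etcd, kube-apiserver, and kube-controller-manager pod per control plane node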
Let’s add the worker nodes now by using the other kubeadm join command presented to use after setting up our first control plane node.\nAgain, yours will be different, but for example purposes mine was:\nkubeadm join cp.theithollowlab.com:6443 --token c5u9ax.ua2896eckzxms16p \\ --discovery-token-ca-cert-hash sha256:ceaa3e06e246228bbca02cc9b107070bf42383778a66b91c0bc3fecaac1fd26b Setup KUBECONFIG Log back into your first Kubernetes control plane node and we’ll setup KUBECONFIG so we can issue some commands against our cluster and ensure that it’s working properly.\nRun the following to configure your KUBECONFIG file for use:\nexport KUBECONFIG=/etc/kubernetes/admin.conf When you’re done, you can run:\nkubectl get nodes We can see here that we have a cluster created, but the status is not ready. This is because we’re missing a CNI.\nDeploy a CNI There are a variety of Networking interfaces that could be deployed. For this simple example I’ve used calico. Simply apply this manifest from one of your nodes.\nkubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml When you\u0026rsquo;re done, you should have a working cluster.\nDeploy an AWS Storage Class If you wish to use EBS volumes for your Persistent Volumes, you can apply a storage class manifest such as the following:\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/storage-class/aws/default.yaml Common Errors There are some tricky places along the way, below are a few common issues you may run into when standing up your Kubernetes cluster.\nMissing EC2 Cluster Tags If you\u0026rsquo;ve forgotten to add the kubernetes.io/cluster/ tag to your EC2 instances, kubeadm will fail stating issues with the kubelet.\nThe kubelet logs will include something like the following:\nTag \u0026ldquo;KubernetesCluster\u0026rdquo; nor \u0026ldquo;kubernetes.io/cluster/…\u0026rdquo; not found; Kubernetes may behave unexpectedly.\n\u0026hellip; failed to run Kubelet: could not init cloud provider \u0026ldquo;aws\u0026rdquo;: AWS cloud failed to find ClusterID\nMissing Subnet Cluster Tags If you\u0026rsquo;ve forgotten to add the kubernetes.io/cluster/ tag to your subnets, then any LoadBalancer resources will be stuck in a \u0026ldquo;pending\u0026rdquo; state.\nThe controller managers will throw errors about a missing tags on the subnets.\nfailed to ensure load balancer: could not find any suitable subnets for creating the ELB\nNode Names Don\u0026rsquo;t Match Private DNS Names If your hostname and private dns names don\u0026rsquo;t match, you\u0026rsquo;ll see error messages during the kubeadm init phase.\nError writing Crisocket information for the control-plane node: timed out waiting for the condition.\nThe kubelet will show error messages about the node not being found.\nTo fix this, update the nodes hostnames so that they match the private dns names.\nSummary Once you\u0026rsquo;ve completed the above sections, you should have a functional Kubernetes cluster that can take advantage of things like LoadBalancers and EBS volumes through your Kubernetes API calls. Have fun!\n","permalink":"https://theithollow.com/2020/01/13/deploy-kubernetes-on-aws/","summary":"\u003cp\u003eThe way you deploy Kubernetes (k8s) on AWS will be similar to how it was done in a \u003ca href=\"/2020/01/08/deploy-kubernetes-on-vsphere/\"\u003eprevious post on vSphere\u003c/a\u003e. You still setup nodes, you still deploy kubeadm, and kubectl but there are a few differences when you change your cloud provider. 
For instance on AWS we can use the LoadBalancer resource against the k8s API and have AWS provision an elastic load balancer for us. These features take a few extra tweaks in AWS.\u003c/p\u003e","title":"Deploy Kubernetes on AWS"},{"content":"If you\u0026rsquo;re struggling to deploy Kubernetes (k8s) clusters, you\u0026rsquo;re not alone. There are a bunch of different ways to deploy Kubernetes and there are different settings depending on what cloud provider you\u0026rsquo;re using. This post will focus on installing Kubernetes on vSphere with Kubeadm. At the end of this post, you should have what you need to manually deploy k8s in a vSphere environment on ubuntu.\nPrerequisites NOTE: This tutorial uses the \u0026ldquo;in-tree\u0026rdquo; cloud provider for vSphere. This is not the preferred method for deployment going forward. More details can be found here for reference.\nBefore we start configuring and deploying Kubernetes, we need to ensure we have the proper environment setup. My lab will consist of three ubuntu virtual machines used for control plane (masters) nodes, and five additional machines for worker nodes. Your environment can vary with the number of nodes, but I\u0026rsquo;ve chosen to use three control plane nodes to show the high availability configurations. My environment looks roughly like this diagram:\nNote that having multiple control plane nodes requires having a load balancer to distribute traffic to these VMs. The load balancer should have a health check on port 6443 on the control plane nodes and a VIP. I\u0026rsquo;m using a Kemp load balancer in my lab where I\u0026rsquo;ve created a VIP and a corresponding DNS name for this vip.\nVIP - 10.10.50.170:6443 DNS Name - kube.hollow.local These Kubernetes virtual machines should be placed within their own VM folder in vSphere. You can see that I created a folder named \u0026ldquo;kubernetes\u0026rdquo; for this post.\nAlso, the virtual machines that will be used for the Kubernetes cluster need to have an advanced setting changed for each VM. The disk.EnableUUID parameter must be set to true. This can be automated of course, but this post focuses on the manual settings so you can see whats happening.\nRight Click the k8s node in vCenter and choose edit settings. From there, go to the VM Options tab and scroll down to find the \u0026ldquo;Edit Configuration\u0026rdquo; link in the Configuration Parameters settings.\nFind the disk.EnableUUID and set the value to true.\nvSphere Permissions Some components of Kubernetes work better when they are tied with a cloud provider such as vSphere or AWS. For example, when you create a Persistent Volume, it sure would be nice to have the cloud provider provision storage for that volume wouldn\u0026rsquo;t it? Well before we can couple Kubernetes with vSphere, we need to setup some permissions first.\nWe will need to create a few roles, of which I will show the permissions granted to each below in a screenshot.\nmanage-k8s-node-vms\nmanage-k8s-volumes\nk8s-system-read-and-spbm-profile-view\nOnce these roles have been created with the appropriate permissions assigned, they must be added to the proper entity in vCenter and associated with a user. In my case I\u0026rsquo;m using a service account called \u0026quot; k8s-vcp\u0026quot;.\nBe sure to add the appropriate role to the correct entity and user. 
The table below shows how they should be assigned.\nRole to be Assigned | Entity | Propagate?\nmanage-k8s-node-vms | Cluster, Hosts, k8s nodes VM Folder | Yes\nmanage-k8s-volumes | Datastore where new volumes will be created | No\nk8s-system-read-and-spbm-profile-view | vCenter | No\nRead-Only (pre-created) | Datacenter, Datastore, Cluster, Datastore Storage Folder | No\nFor example, in my vSphere Cluster where my k8s vms live, I\u0026rsquo;ve added the \u0026quot; manage-k8s-node-vms\u0026quot; role to my k8s-vcp user and set it to propagate. Do this for each permission in the list above.\nCreate vSphere Config File Now that we\u0026rsquo;ve got our permissions setup, we need to create a file with some login information in it. This config file is used by the k8s control plane to interact with vSphere. Think of it this way, we need to be able to tell the k8s control plane where our VMs live, what datastores to use, and under what user we should be making those calls from.\nCreate a vsphere.conf file and fill it out with the information from the template below, filling in your own information as appropriate. The file below contains the settings used in my environment. For details about how this file can be constructed, such as how to set up multiple vCenters, visit the VMware.github.io page.\n[Global] user = \u0026#34;k8s-vcp@vsphere.local\u0026#34; password = \u0026#34;Password123\u0026#34; port = \u0026#34;443\u0026#34; insecure-flag = \u0026#34;1\u0026#34; [VirtualCenter \u0026#34;vcenter1.hollow.local\u0026#34;] datacenters = \u0026#34;HollowLab\u0026#34; [Workspace] server = \u0026#34;vcenter1.hollow.local\u0026#34; datacenter = \u0026#34;HollowLab\u0026#34; default-datastore = \u0026#34;vsanDatastore\u0026#34; resourcepool-path = \u0026#34;HollowCluster/Resources\u0026#34; folder = \u0026#34;kubernetes\u0026#34; [Disk] scsicontrollertype = pvscsi Create a Kubeadm Config File Now we\u0026rsquo;re pretty much done with the vSphere side. We\u0026rsquo;re ready to start working on the Kubernetes pieces. The first section we\u0026rsquo;ll cover is creating the kubeadm.conf file. This file has instructions on how to setup the control plane components when we use kubeadm to bootstrap them. There are a ton of options here so we won\u0026rsquo;t go into all of them. The important piece to note is that we need to provide the vsphere.conf file as parameters in this kubeadm configuration. The file below is what I used to setup my cluster. Be sure to update the IP addresses and DNS Names for your load balancer here as well.\n--- apiServer: extraArgs: cloud-config: /etc/kubernetes/vsphere.conf cloud-provider: vsphere endpoint-reconciler-type: lease extraVolumes: - hostPath: /etc/kubernetes/vsphere.conf mountPath: /etc/kubernetes/vsphere.conf name: cloud apiServerCertSANs: - 10.10.50.170 - kube.hollow.local apiServerExtraArgs: endpoint-reconciler-type: lease apiVersion: kubeadm.k8s.io/v1beta1 controlPlaneEndpoint: kube.hollow.local controllerManager: extraArgs: cloud-config: /etc/kubernetes/vsphere.conf cloud-provider: vsphere extraVolumes: - hostPath: /etc/kubernetes/vsphere.conf mountPath: /etc/kubernetes/vsphere.conf name: cloud kind: ClusterConfiguration kubernetesVersion: 1.17.0 networking: podSubnet: 192.168.0.0/16 Kubernetes VM Setup Now we can start installing components on the ubuntu VMs before we deploy the cluster. 
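One more vSphere note before the guest OS work: if you have more than a handful of nodes, setting disk.EnableUUID by hand in the vSphere Client gets tedious and it is easy to miss a VM. A hedged sketch using the govc CLI, which is not otherwise covered in this post; the folder path and VM name are placeholders for my lab layout, and the VM generally needs to be powered off (or power cycled afterwards) for the change to take effect:

# set the advanced parameter without opening the UI
govc vm.change -vm /HollowLab/vm/kubernetes/k8s-worker-0 -e disk.EnableUUID=TRUE

# confirm the value stuck
govc vm.info -e /HollowLab/vm/kubernetes/k8s-worker-0 | grep -i enableuuid

However you set it, every node VM needs that parameter before bootstrapping the cluster.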
Do this on all virtual machines that will be part of your Kubernetes Cluster.\nDisable swap\nswapoff -a sed -i.bak -r \u0026#39;s/(.+ swap .+)/#\\1/\u0026#39; /etc/fstab Install Kubelet, Kubeadm, and Kubectl\nsudo apt-get update \u0026amp;\u0026amp; sudo apt-get install -y apt-transport-https curl curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - cat \u0026lt;\u0026lt;EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list deb https://apt.kubernetes.io/ kubernetes-xenial main EOF sudo apt-get update sudo apt-get install -y kubelet kubeadm kubectl sudo apt-mark hold kubelet kubeadm kubectl Install Docker and change the cgroup driver to systemd.\nsudo apt install docker.io -y cat \u0026gt; /etc/docker/daemon.json \u0026lt;\u0026lt;EOF { \u0026#34;exec-opts\u0026#34;: [\u0026#34;native.cgroupdriver=systemd\u0026#34;], \u0026#34;log-driver\u0026#34;: \u0026#34;json-file\u0026#34;, \u0026#34;log-opts\u0026#34;: { \u0026#34;max-size\u0026#34;: \u0026#34;100m\u0026#34; }, \u0026#34;storage-driver\u0026#34;: \u0026#34;overlay2\u0026#34; } EOF sudo systemctl restart docker sudo systemctl enable docker Once you\u0026rsquo;ve completed the steps above, copy the vSphere.conf and kubeadm.conf files to /etc/kubernetes/ on your Control Plane VMs.\nOnce you\u0026rsquo;ve placed the Kubeadm.conf and vsphere.conf files in the /etc/kubernetes directory, we need to update the configuration of the Kubelet service so that it knows about the vsphere environment as well.\nEdit the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add two configuration options.\n--cloud-provider=vsphere --cloud-config=/etc/kubernetes/vsphere.conf When you\u0026rsquo;re done, recycle the kubelet service.\nsystemctl daemon-reload Bootstrap the First K8s Control Plane Node The time has come to setup the cluster. Login to one of your control plane nodes which will become the first master in the cluster. We\u0026rsquo;ll run the kubeadm initialization with the kubeadm.conf file that we created earlier and placed in the /etc/kubernetes directory.\nkubeadm init --config /etc/kubernetes/kubeadm.conf --upload-certs It may take a bit for the process to complete. Kubeadm init is ensuring that our api-server, controller-manager, and etcd container images are downloaded as well as creating certificates which you should find in the /etc/kubernetes/pki directory.\nWhen the process is done, you should receive instructions on how to add additional control plane nodes and worker nodes.\nAdd Additional Control Plane Nodes We can now take the information provided from our init command and run the kubeadm join provided in our output, on the other two control plane nodes.\nBefore we add those additional control plan nodes, you\u0026rsquo;ll need to copy the contents of the pki directory to the other control plane nodes. This is needed because they need those certificates for authentication purposes with the existing control plane node.\nYour instructions are going to be different. 
The command below was what was provided to me.\nkubeadm join kube.hollow.local:6443 --token v6gohm.lzzh9bgjgiwtnh5h \\ --discovery-token-ca-cert-hash sha256:70f35ce8c79d7e4ea189e61cc5459d1071a3ab906fd9cede7a77b070f204c5c8 \\ --control-plane --certificate-key e048d3654ae2fca5409b8255f83ecfa00b08376ab6f91d7230cacf4a547cc372 When you\u0026rsquo;re done with your additional control plane clsuters, you should see a success message with some instructions on setting up the KUBECONFIG file which we\u0026rsquo;ll cover later.\nJoin Worker Nodes to the Cluster At this point we should have three control plane nodes working in our cluster. Let\u0026rsquo;s add the worker nodes now by using the other kubeadm join command presented to use after setting up our first control plane node.\nAgain, yours will be different, but for example purposes mine was:\nkubeadm join kube.hollow.local:6443 --token v6gohm.lzzh9bgjgiwtnh5h \\ --discovery-token-ca-cert-hash sha256:70f35ce8c79d7e4ea189e61cc5459d1071a3ab906fd9cede7a77b070f204c5c8 Run the command on your worker nodes\nSetup KUBECONFIG Log back into your first Kubernetes control plane node and we\u0026rsquo;ll setup KUBECONFIG so we can issue some commands against our cluster and ensure that it\u0026rsquo;s working properly.\nRun the following to configure your KUBECONFIG file for use:\nexport KUBECONFIG=/etc/kubernetes/admin.conf When you\u0026rsquo;re done, you can run:\nkubectl get nodes We can see here that we have a cluster created, but the status is not ready. This is because we\u0026rsquo;re missing a CNI.\nDeploy a CNI There are a variety of Networking interfaces that could be deployed. For this simple example I\u0026rsquo;ve used calico. Simply apply this manifest from one of your nodes.\nkubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml When you\u0026rsquo;re done, you should have a working cluster.\nSummary At this point you should have a very basic Kubernetes cluster up and running and should be able to use storage classes with your vSphere environment. Your next steps should be building cool things on Kubernetes or tinkering around with the builds to use different CNIs, Container Runtimes, and automate it! Good luck!\n","permalink":"https://theithollow.com/2020/01/08/deploy-kubernetes-on-vsphere/","summary":"\u003cp\u003eIf you\u0026rsquo;re struggling to deploy Kubernetes (k8s) clusters, you\u0026rsquo;re not alone. There are a bunch of different ways to deploy Kubernetes and there are different settings depending on what cloud provider you\u0026rsquo;re using. This post will focus on installing Kubernetes on vSphere with Kubeadm. At the end of this post, you should have what you need to manually deploy k8s in a vSphere environment on ubuntu.\u003c/p\u003e\n\u003ch2 id=\"prerequisites\"\u003ePrerequisites\u003c/h2\u003e\n\u003cp\u003e\u003cstrong\u003eNOTE:\u003c/strong\u003e This tutorial uses the \u0026ldquo;in-tree\u0026rdquo; cloud provider for vSphere. This is not the preferred method for deployment going forward. More details can be found \u003ca href=\"https://cloud-provider-vsphere.sigs.k8s.io/concepts/in_tree_vs_out_of_tree.html\"\u003ehere\u003c/a\u003e for reference.\u003c/p\u003e","title":"Deploy Kubernetes on vSphere"},{"content":"Sometimes we need to run a container to do a specific task, and when its completed, we want it to quit. Many containers are deployed and continuously run, such as a web server. But other times we want to accomplish a single task and then quit. 
This is where a Job is a good choice.\nJobs and CronJobs - The Theory Perhaps, we need to run a batch process on demand. Maybe we built an automation routine for something and want to kick it off through the use of a container. We can do this by submitting a job to the Kubernetes API. Kubernetes will run the job to completion and then quit.\nNow, thats kind of handy, but what if we want to run this job we\u0026rsquo;ve automated at a specific time every single day. Maybe you need to perform some batch processing at the end of a workday, or perhaps you want to destroy pods at the end of the day to free up resources. There could be a myriad of things that you\u0026rsquo;d like to be done at different times during the day or night.\nA Kubernetes CronJob lets you schedule a container to be run at a time, defined by you, much the same way that a linux server would schedule a cron job. Think of the CronJob as a Job with a schedule.\nJobs and CronJobs - In Action First, we\u0026rsquo;ll deploy a simple job that will run curl against this blog just to see if it will work. I\u0026rsquo;ve created a simple alpine container with the curl installed on it. We\u0026rsquo;ll use this container image to run curl against this site. The manifest below will create our simple curl job.\napiVersion: batch/v1 kind: Job metadata: name: hollow-curl spec: template: spec: containers: - name: hollow-curl image: theithollow/hollowapp-blog:curl args: - /bin/sh - -c - curl https://theithollow.com restartPolicy: Never backoffLimit: 4 Apply the job manifest with the following command:\nkubectl apply -f [job manifest file].yml Once the job has been submitted, we can see that a job was run by running a \u0026ldquo;get\u0026rdquo; on the job object. We can see that a job was run and how long it took.\nTo go a bit deeper, we can run a describe on our job to see additional details about it. Some interesting information is shown below for the job I submitted to my cluster.\nAccording to the events, we can see that our job completed successfully already. Next, lets just check if there are any pods deployed. Sure enough, a pod was deployed and it has a status of completed.\nNow, what if we want to run the same job on a set schedule? For this example, we\u0026rsquo;ll create a simple CronJob that will run curl against this blog at the top of the hour. We can pretend that we\u0026rsquo;re checking uptime on the website if we want it to feel more like a real world example, but your cron job could execute anything you could dream up and drop into a container.\nLet\u0026rsquo;s build a manifest file to deploy a CronJob with the appropriate schedule, which in this case will be every hour, at the top of the hour. The manifest to create this CronJob is shown below.\napiVersion: batch/v1beta1 kind: CronJob metadata: name: hollow-curl spec: schedule: \u0026#34;0 * * * *\u0026#34; jobTemplate: spec: template: spec: containers: - name: hollow-curl image: theithollow/hollowapp-blog:curl args: - /bin/sh - -c - curl https://theithollow.com restartPolicy: OnFailure Deploy the CronJob by running:\nkubectl apply -f [manifest file name].yml Once deployed, we can check to see if we can list the CronJob by running a \u0026ldquo;get\u0026rdquo; on the cronjob object.\nNow, at the top of the hour, we should be able to run this again and see that its been run successfully. In my case, I ran a \u0026ndash;watch so that I could see what happened at the top of the hour. 
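For reference, that check was nothing fancier than the commands below; the hollow-curl name comes from the CronJob manifest above, and the Jobs it spawns get a timestamp-style suffix appended, which is why the log command uses a placeholder:

kubectl get cronjob hollow-curl --watch    # ACTIVE and LAST SCHEDULE update at the top of the hour
kubectl get jobs --watch                   # each scheduled run shows up as its own Job

# once a run finishes, the curl output is in the pod logs for that Job
kubectl logs job/hollow-curl-<suffix-from-get-jobs>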
We can see that it went from not scheduled, to active, to completed.\nSummary Jobs and CronJobs may seem like a really familiar solution to what we\u0026rsquo;ve done to run batch jobs for many years. Now we can take some of our jobs that were stored on servers and replace them with a container and drop those in our Kubernetes cluster as well. Good luck with your jobs!\n","permalink":"https://theithollow.com/2019/12/16/kubernetes-jobs-and-cronjobs/","summary":"\u003cp\u003eSometimes we need to run a container to do a specific task, and when its completed, we want it to quit. Many containers are deployed and continuously run, such as a web server. But other times we want to accomplish a single task and then quit. This is where a Job is a good choice.\u003c/p\u003e\n\u003ch2 id=\"jobs-and-cronjobs---the-theory\"\u003eJobs and CronJobs - The Theory\u003c/h2\u003e\n\u003cp\u003ePerhaps, we need to run a batch process on demand. Maybe we built an automation routine for something and want to kick it off through the use of a container. We can do this by submitting a job to the Kubernetes API. Kubernetes will run the job to completion and then quit.\u003c/p\u003e","title":"Kubernetes - Jobs and CronJobs"},{"content":"One of my least favorite parts of computers is dealing with certificate creation. In fact, ya know those tweets about what you\u0026rsquo;d tweet if you were kidnapped and didn\u0026rsquo;t want to tip off the kidnapers?\nYeah, I\u0026rsquo;d tweet about how I love working with certificates. They are just not a fun thing for me. So when I found a new project where I needed certificates created, I was not really excited.\nWhat I found though, was a neat little project called jetstack cert-manager, that will automatically create and apply certificates to Kubernetes resources and it was really simple to setup and use.\nOverview Cert-manager is a Kubernetes add on that works with a variety of issuers including Hashicorp Vault and LetsEncrypt Certificate Authorities. You can use this to create your own certificates by issuing kubectl commands, but an even more powerful way to use it is to have it automatically create certificates when an ingress resource is deployed.\nWhen a new ingress resources is created to pass traffic through a reverse proxy, cert-manager responds by requesting a new certificate and validates it with the CA, then applies the certificate to secure your traffic. So in a sense, you can set this up one time, and then have all your web servers protected with an SSL cert just by adding a couple of lines to your ingress resources.\nLet\u0026rsquo;s see it in action in my AWS cluster using the LetsEncrypt. We\u0026rsquo;ll deploy cert-manager and issue a staging cert which is a temporary cert to make sure things are working properly, and then later a production certificate which is fully trusted by most browsers.\nPrerequisites Before we deploy cert-manager, let\u0026rsquo;s get some basics setup and configured. We\u0026rsquo;ll need an ingress resource that is available on the Internet so that LetsEncrypt can issue the http-01 or dns-01 challenges to validate the domain names. We\u0026rsquo;ll also need an ingress controller and obviously a Kubernetes cluster with some capacity in it for our resources.\nHere is a snapshot of the lab I\u0026rsquo;ll be working with:\nInstall Cert-Manager The installation of cert-manager is pretty simple. 
We need to apply a couple of manifests.\nFirst we\u0026rsquo;ll create a new namespace where the cert-manager resources will live.\nkubectl create namespace cert-manager Next up, its time to deploy cert-manager.\nkubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml I don\u0026rsquo;t want to minimize the work we\u0026rsquo;ve done here, but the deployment is pretty much done. You can see if everything looks ok by running:\nkubectl get all -namespace cert-manager There should be a list of pods and services similar the the example below.\nConfigure Cert-Manager Now we need to configure cert-manager to issue new certificates. We do this by deploying issuers. There are two type of issuers: Issuers and ClusterIssuers. Just like with Roles and ClusterRoles, the difference is the scope in which they work. Since I want to be able to share this service across namespaces, I\u0026rsquo;ll be deploying cluster issuers. The ClusterIssuer used in my cluster is shown below. You may notice that there are two different issuers being deployed. We\u0026rsquo;ll use staging first which has a more lenient rate limit, and production when we\u0026rsquo;ve got it working.\napiVersion: cert-manager.io/v1alpha2 #v11 kind: ClusterIssuer metadata: name: letsencrypt-staging namespace: cert-manager spec: acme: email: user@domain.com #use your own email address - change as needed solvers: - selector: {} http01: ingress: class: contour #ingress controller in use - change as needed privateKeySecretRef: name: letsencrypt-staging server: https://acme-staging-v02.api.letsencrypt.org/directory #letsencrypt url --- apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-production namespace: cert-manager spec: acme: email: user@domain.com #use your own email address - change as needed solvers: - selector: {} http01: ingress: class: contour #ingress controller in use - change as needed privateKeySecretRef: name: letsencrypt-production server: https://acme-v02.api.letsencrypt.org/directory #letsencrypt url Deploy the manifest with the apply command:\nkubectl apply -f [manifest file].yaml\nOnce deployed, you should be able to query them through kubectl.\nIf you want to go one level deeper, run a kubectl describe on the ClusterIssuers. You should see that the account was registered correctly.\nDeploy Staging App In this part of the post, I\u0026rsquo;m deploying a custom app of mine that has a backend database and things that aren\u0026rsquo;t shown here. 
However, the app pieces are shown below and your own app should work with the cert-manager additions.\nIn this phase we\u0026rsquo;ll first deploy our web app and make sure we can get a certificate installed, even if it isn\u0026rsquo;t a trusted cert.\nHere is the app manifest I\u0026rsquo;m deploying, complete with the ingress rules needed for cert-manager.\n--- apiVersion: apps/v1 kind: Deployment metadata: labels: app: hollowapp spec: replicas: 3 selector: matchLabels: app: hollowapp strategy: type: Recreate template: metadata: labels: app: hollowapp spec: containers: - name: hollowapp image: eshanks16/hollowapp:latest imagePullPolicy: Always ports: - containerPort: 5000 env: - name: SECRET_KEY value: \u0026#34;my-secret-key\u0026#34; - name: DATABASE_URL valueFrom: secretKeyRef: name: hollow-secret key: db.string --- apiVersion: v1 kind: Service metadata: name: hollowapp labels: app: hollowapp spec: type: ClusterIP ports: - port: 5000 protocol: TCP targetPort: 5000 selector: app: hollowapp --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: hollowapp annotations: cert-manager.io/cluster-issuer: \u0026#34;letsencrypt-staging\u0026#34; #Name of the cluster issuer goes here!!!!! kubernetes.io/ingress.class: contour #ingress controller in use kubernetes.io/tls-acme: \u0026#34;true\u0026#34; #acme cert being requested labels: app: hollowapp spec: tls: #TLS Spec listed here. - hosts: - hollowapp.theithollowlab.com #External DNS Name secretName: hollowapp-tls-stage rules: - host: hollowapp.theithollowlab.com http: paths: - path: / backend: serviceName: hollowapp servicePort: 5000 The main pieces of the above manifest are:\ncert-manager.io/cluster-issuer: \u0026ldquo;letsencrypt-staging\u0026rdquo; - specify the ClusterIssuer\nkubernetes.io/ingress.class: contour - change based on your ingress controller in use.\nhost: hollowapp.theithollowlab.com - change to your own DNS name.\nsecretName: hollowapp-tls-stage - a secret name meaningful to you. This is where your app will store the certificate returned by cert-manager.\nApply your app manifest and wait a moment or two before checking the https://URL. It does take a little time for LetsEncrypt to validate the VM and provide the appropriate certificate information. You can see that my https site came up, although the certificate is not valid, as seen by the \u0026ldquo;Not Secure\u0026rdquo; moniker.\nNow that our test certificate worked, let\u0026rsquo;s change the issuer requested in our ingress manifest to use the production ClusterIssuer instead of staging. Re-apply that manifest and again wait a moment before the certificates are applied. Here we can see that a fully trusted certificate was automatically applied to my app.\nSummary Jetstack cert-manager is a pretty simple solution to install in your Kubernetes cluster that will automatically add new certificates for your ingress resources. The ingress resource gets applied with the proper specifications and cert-manager does the rest, to give you a fully trusted certificate. What exactly will you do with all the time you got back from not having to do certificate requests over and over again?\n","permalink":"https://theithollow.com/2019/12/02/jetstack-cert-manager/","summary":"\u003cp\u003eOne of my least favorite parts of computers is dealing with certificate creation. 
In fact, ya know those tweets about what you\u0026rsquo;d tweet if you were kidnapped and didn\u0026rsquo;t want to tip off the kidnapers?\u003c/p\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2019/11/certs-tweet-1024x293.png\"/\u003e \n\u003c/figure\u003e\n\n\u003cp\u003eYeah, I\u0026rsquo;d tweet about how I love working with certificates. They are just not a fun thing for me. So when I found a new project where I needed certificates created, I was not really excited.\u003c/p\u003e","title":"Jetstack Cert-Manager"},{"content":"Securing and hardening our Kubernetes clusters is a must do activity. We need to remember that containers are still just processes running on the host machines. Sometimes these processes can get more privileges on the Kubernetes node than they should, if you don\u0026rsquo;t properly setup some pod security. This post explains how this could be done for your own clusters.\nPod Security Policies - The Theory Pod Security policies are designed to limit what can be run on a Kubernetes cluster. Typical things that you might want to limit are: pods that have privileged access, pods with access to the host network, and pods that have access to the host processes just to name a few. Remember that a container isn\u0026rsquo;t as isolated as a VM so we should take care to ensure our containers aren\u0026rsquo;t adversely affecting our nodes\u0026rsquo;s health and security.\nPod Security Policies (PSP) are an optional admission controller added to a cluster. These admission controllers are an added check that determines if a pod should be admitted to the cluster. This is an additional check that comes after both authentication and authorization have been checked for the api call. A pod security policy uses the admission controller to check and see if our pod meets our extra level of scrutiny before being added to our cluster.\nPod Security Policies - In Action To demonstrate how Pod Security Policies work, we\u0026rsquo;ll create a policy that blocks pods from having privileged access to the host operating system. Now, the first thing we\u0026rsquo;ll do here is to enable the admission controller in our API server specification.\nNOTE: enabling the admission controller does not have to be the first step. In fact, once you enable the admission controller on the API server, no pods will be able to be deployed (or redeployed) because there will not be a policy that matches. By default, EVERY pod deployment will be blocked. It may be a good idea to apply your PSPs first so that it doesn\u0026rsquo;t interrupt operations.\nTo demonstrate, we\u0026rsquo;ll first create a pod specification that does not have any privileged [escalated] access to our cluster. The container specification of the manifest below has the \u0026ldquo;allowPrivilegeEscalation: false\u0026rdquo;. If you deploy this before enabling the admission controller. Everything should work fine. 
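If you are not sure whether the PodSecurityPolicy admission plugin is already on in your cluster, one quick way to check (a sketch, assuming a kubeadm-style cluster where the API server runs as a static pod labeled component=kube-apiserver) is to look at the flags of the running API server:

kubectl -n kube-system describe pod -l component=kube-apiserver | grep enable-admission-plugins
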
Save this file as not-escalated.yaml.\napiVersion: apps/v1 kind: Deployment metadata: name: not-escalated labels: app: not-escalated spec: replicas: 1 selector: matchLabels: app: not-escalated template: metadata: labels: app: not-escalated spec: containers: - name: not-escalated image: busybox command: [\u0026#34;sleep\u0026#34;, \u0026#34;3600\u0026#34;] securityContext: allowPrivilegeEscalation: false Now apply the manifest to your cluster with the command:\nkubectl create -f not-escalated.yaml Check to see if your pods got deployed by running:\nkubectl get pods At this point, all we\u0026rsquo;ve done is prove that a simple pod works on our cluster before we enable our admission controller. Hang on to this pod manifest, because we\u0026rsquo;ll use it again later.\nNow we\u0026rsquo;ll enable the admission control plugin for PodSecurityPolicy. You can see the api-server flag that I used below. NOTE: applying this during your cluster bootstrapping may prevent even the cricital kube-system pods from starting up. In this case, I\u0026rsquo;ve applied this configuration setting after the initial bootstrap phase were my kube-system pods are ALREADY running.\n- kube-apiserver - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy Once the PodSecurityPolicy admission control plugin is enabled, we can try to apply the not-escalated.yaml manifest again. Once deployed, we can check our replica set and we should notice that the pod was not deployed. Remember that without an appropriate policy set, NO PODS will be able to be deployed.\nIn the screenshot below you can see that no pods were found after I deployed the same manifest file. To dig deeper, I check the replica set that should have created the pods. There I see a desired count of \u0026ldquo;1\u0026rdquo; but no pods deployed.\nThe reason for this is that there is no PodSecurityPolicy that would allow a pod of ANY kind to be deployed. Let\u0026rsquo;s fix that next, but delete that deployment for now.\nOK, so now the next step is to create a psp that will let us deploy pods that don\u0026rsquo;t require privilege escalation (like the one we just did) but not let us deploy a pod that does require escalation.\nRight. Now lets apply our first pod security policy that allows pods to be deployed if they don\u0026rsquo;t require any special access. Below is a PSP that allows non-privileged pods to be deployed.\napiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: default-restricted spec: privileged: false hostNetwork: false allowPrivilegeEscalation: false #This is the main setting we\u0026#39;re looking at in this blog post. defaultAllowPrivilegeEscalation: false hostPID: false hostIPC: false runAsUser: rule: RunAsAny fsGroup: rule: RunAsAny seLinux: rule: RunAsAny supplementalGroups: rule: RunAsAny volumes: - \u0026#39;configMap\u0026#39; - \u0026#39;downwardAPI\u0026#39; - \u0026#39;emptyDir\u0026#39; - \u0026#39;persistentVolumeClaim\u0026#39; - \u0026#39;secret\u0026#39; - \u0026#39;projected\u0026#39; allowedCapabilities: - \u0026#39;*\u0026#39; Deploy that PodSecurity Policy to your cluster with kubectl:\nkubctl create -f [pspfilename].yaml You might think that\u0026rsquo;s all we need to do, but it isn\u0026rsquo;t. 
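You can see the gap with a quick authorization check (a sketch; it asks whether the default service account in the default namespace may use the new policy, and at this stage the answer should be no):

kubectl get psp default-restricted
kubectl auth can-i use podsecuritypolicy/default-restricted --as=system:serviceaccount:default:default
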
The next step is to give the replica-controller access to this policy through the use of a ClusterRole and ClusterRoleBinding.\nFirst, the cluster role is create to link the \u0026ldquo;psp\u0026rdquo; resource to the \u0026ldquo;use\u0026rdquo; verb.\nkind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: default-restrictedrole rules: - apiGroups: - extensions resources: - podsecuritypolicies resourceNames: - default-restricted verbs: - use Apply the ClusterRole by apply the:\nkubectl create -f [clusterRoleManifest].yaml Next deploy the ClusterRoleBinding which links the previously created cluster role to the service accounts.\nkind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: default-psp-rolebinding subjects: - kind: Group name: system:serviceaccounts namespace: kube-system roleRef: kind: ClusterRole name: default-restrictedrole apiGroup: rbac.authorization.k8s.io Deploy the ClusterRoleBinding with the command:\nkubectl create -f [clusterRoleBindingManifest].yaml If everything has gone right, then we should have granted the replica-set controller the permissions needed to use the PodSecurityPolicy that allows our non-priviledged pods to be deployed. Time to test that by deploying that not-escalated.yaml manifest again.\nOur pod was deployed correctly! Now we need to test one more thing. We\u0026rsquo;ll change the option on that manifest file to AllowPriviliged access on our container. The manifest below has that flag flipped for you.\napiVersion: apps/v1 kind: Deployment metadata: name: escalated labels: app: escalated spec: replicas: 1 selector: matchLabels: app: escalated template: metadata: labels: app: escalated spec: containers: - name: not-escalated image: busybox command: [\u0026#34;sleep\u0026#34;, \u0026#34;3600\u0026#34;] securityContext: allowPrivilegeEscalation: true Now save that manifest as \u0026ldquo;escalated.yaml\u0026rdquo; and we\u0026rsquo;ll apply it to our cluster and then check the replicasets and pods again. This container is the one we\u0026rsquo;re trying to prevent from being deployed in our cluster.\nkubectl create -f escalated.yaml You can see from the screenshot that the priviledged container could not run in our cluster with our PodSecurityPolicy.\nJust a reminder, this policy on its own might not be good for a production environment. If you have any pods in your kube-system namespace for instance, that actually need privileged access, this policy will block those too. Keep that in mind even if everything seems to be working fine right now. Leaving this policy in place will also keep them from restarting if they fail, or the node is restarted. So be careful with them.\nSummary Pod Security Policies can be an important part to your cluster health since they can reduce unwanted actions being taken on your host. PSPs require an admission controller to be enabled on the kube-api server. After which all pods will be denied until a PSP that allows them is created, and permissions given to the appropriate service account.\n","permalink":"https://theithollow.com/2019/11/19/kubernetes-pod-security-policies/","summary":"\u003cp\u003eSecuring and hardening our Kubernetes clusters is a must do activity. We need to remember that containers are still just processes running on the host machines. Sometimes these processes can get more privileges on the Kubernetes node than they should, if you don\u0026rsquo;t properly setup some pod security. 
This post explains how this could be done for your own clusters.\u003c/p\u003e\n\u003ch2 id=\"pod-security-policies---the-theory\"\u003ePod Security Policies - The Theory\u003c/h2\u003e\n\u003cp\u003ePod Security policies are designed to limit what can be run on a Kubernetes cluster. Typical things that you might want to limit are: pods that have privileged access, pods with access to the host network, and pods that have access to the host processes just to name a few. Remember that a container isn\u0026rsquo;t as isolated as a VM so we should take care to ensure our containers aren\u0026rsquo;t adversely affecting our nodes\u0026rsquo;s health and security.\u003c/p\u003e","title":"Kubernetes - Pod Security Policies"},{"content":"There are a myraid of ways to deploy Kubernetes clusters these days.\nKubernetes the Hard Way Cluster API Kubeadm Kubespray kops Those are just a few of the ways and I\u0026rsquo;m sure you\u0026rsquo;ll have a favorite. But for the work I\u0026rsquo;ve been doing lately, I don\u0026rsquo;t want to spend a bunch of time cloning repos, updating configs, running ansible scripts and the like, just to get another clean kubernetes cluster in my lab to break. So, I took the individual parts of a Kubernetes build and created a list of ordered jobs in my Jenkins server.\nPutting jobs in a Jenkins server is certainly not a new concept, but I thought some folks might find inspiration in a step through list of my Kubernetes items. Here\u0026rsquo;s what my jobs are doing:\n0 - Terraform Cluster Destroy - Runs terraform destroy to tear down any previous Kubernetes cluster resources I may have already deployed. This is my \u0026ldquo;Start Over\u0026rdquo; button. 1 - Terraform k8s cluster Build - Deploy my virtual machines in my vSphere environment complete with control plane and worker nodes. Every time I run this I get fresh VMs for whatever type of k8s install I want to lay down on top of them. 2 - Build with Wardroom - This is where I use kubeadm along with ansible from the wardroom project. This sets up Kubernetes on the nodes deployed in the previous step. 3 - Sonobuoy Conformance Tests - This job runs every time my previous job completes. A test to see if my newly deployed Kubernetes cluster is conformant. We do this before deploying any other resources on the cluster. If this is broken, no point in moving on to app deployments. 4 - Storage Class Install - Some of my apps require having a storage class setup for my cluster. I don\u0026rsquo;t always need a storage class, but its nice to have one ready if I need it instead of having to stop my application testing to go configure a storage class quick. Lets do it from the start. 5 - Metal Load Balancer - I\u0026rsquo;m running on vSphere and I would still like to be able to use the Kubernetes LoadBalancer resources like I can with a Cloud Provider. Metal LB lets me do this. Again, not always needed, but lets set it up anyway. 6 - Contour Ingress Deployment - I\u0026rsquo;ve got a load balancer setup, so I might as well deploy an ingress controller as well. Contour just went to version 1.0 and seemed like a decent choice for a default ingress controller in my lab. Connected through my Metal LB deployed in the previous step. 7 - Install Helm and Tiller - This job deploys tiller into my Kubernetes cluster so I might deploy any Helm charts I find interesting. 8 - Install Prometheus, Grafana, Alertmanager - Next, I install a monitoring stack so I have some basic monitoring I can mess with. 
9 - Install Elasticsearch, FluentD, Kibana - Lastly, I have a job to install an EFK stack for logging of my pods and critical Kubernetes components like the kubelets. Now, I won\u0026rsquo;t always run this whole list when I\u0026rsquo;m building a cluster. It really depends on what I\u0026rsquo;m trying to do. For example, some times I\u0026rsquo;ll deploy my nodes via the Terraform job, so that I can then use Kubeadm manually and test configs. Or maybe I\u0026rsquo;ll get Kubernetes fully deployed, but not deploy Contour. I can manually deploy nginx or something else. Breaking these steps down into small chucks lets me pick and choose how far up the stack I want to go so that I can begin my testing.\nIn an automated production deployment, it probably makes sense to link all these jobs together, but for a training lab its really convenient to pick and choose the level of completeness of a build. Maybe this will inspire one of your projects.\n","permalink":"https://theithollow.com/2019/11/11/modularized-kubernetes-environments-with-jenkins/","summary":"\u003cp\u003eThere are a myraid of ways to deploy Kubernetes clusters these days.\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/kelseyhightower/kubernetes-the-hard-way\"\u003eKubernetes the Hard Way\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"/2019/11/04/clusterapi-demystified/\"\u003eCluster API\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"/2019/11/04/clusterapi-demystified/https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/\"\u003eKubeadm\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/kubernetes-sigs/kubespray\"\u003eKubespray\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://github.com/kubernetes/kops\"\u003ekops\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eThose are just a few of the ways and I\u0026rsquo;m sure you\u0026rsquo;ll have a favorite. But for the work I\u0026rsquo;ve been doing lately, I don\u0026rsquo;t want to spend a bunch of time cloning repos, updating configs, running ansible scripts and the like, just to get another clean kubernetes cluster in my lab to break. So, I took the individual parts of a Kubernetes build and created a list of ordered jobs in my Jenkins server.\u003c/p\u003e","title":"Modularized Kubernetes Environments with Jenkins"},{"content":"Deploying Kubernetes clusters may be the biggest hurdle in learning Kubernetes and one of the challenges in managing Kubernetes. ClusterAPI is a project designed to ease this burden and make the management and deployment of Kubernetes clusters simpler.\nThe Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes.\nkubernetes-sigs/cluster-api\nThis post is designed to dive into ClusterAPI to investigate how it works, and how you can use it.\nLogical Architecture Let\u0026rsquo;s take a look at what ClusterAPI might look like in your environment when it was fully configured.\nClusterAPI uses a management cluster and a series of components deployed within it (we\u0026rsquo;ll discuss those a bit later) to manage many \u0026ldquo;workload\u0026rdquo; clusters across different providers. Think of it this way, the management cluster builds workload clusters for you across vSphere, AWS, and other platforms. 
All you have to do is apply a desired state configuration file to the management cluster, and it does the rest.\nYeah, thats pretty neat.\nHow Does it Work? OK, so we have a management cluster setup. How do we use it to build our workload clusters? Well, we get to leverage the power of control loops which has been discussed here. The ClusterAPI setup can be broken into three phases.\n1 - Install ClusterAPI into the Management Cluster First things first, your management cluster needs some components installed so that a plain old Kubernetes cluster can become a CAPI enabled management cluster. This is done by applying a Kubernetes manifest that has the components defined. The process for this is listed here https://cluster-api.sigs.k8s.io/tasks/installation.html\nThe install manifest will install the following components:\nnamespace - cluster-api-system custom resource definition - clusters custom resource definition - machinedeployments custom resource definition - machines custom resource definition - machinesets role - CAPI - leader election cluster role - CAPI manager role binding - CAPI leader election deployment - CAPI controller manager The components above will deploy the appropriate components that make up the ClusterAPI control loops.\n2 - Install Bootstrap Components The bootstrap components will be responsible for turning infrastructure nodes (servers, vms, etc) into a Kuberenetes node through the use of cloud-init. Again, to deploy the bootstrap components, just apply the manifest to the management cluster. When the bootstrap components are deployed, the following resources will be created in the management cluster.\nnamespace - cabpk-system custom resource definition - kubeadmconfigs.bootstrap custom resource definition - kubeadmconfigtemplates.bootstrap role - cabpk-leader-election-role cluster role - cabpk-manager-role cluster role - cabpk-proxy-role role binding - cabpk-leader-election-rolebinding cluster role binding - cabpk-manager-rolebinding cluster role binding - capbk-proxy-rolebinding service - cabpk-controller-manager deployment - capbk-controller-manager At this point, the components necessary to configure workload clusters is ready to go, the next piece is needed to actually deploy the infrastructure resources for us to run the bootstrap components on.\n3 - Install Infrastructure Components Now, after the ClusterAPI components are installed the next step is to install the infrastructure components. These are specific to the cloud provider that you\u0026rsquo;ll be installing workload clusters on. 
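The mechanics here are the same as the previous two phases: grab the provider's release manifest, apply it to the management cluster, and confirm the provider's controller manager pod comes up. A minimal sketch (the file name is a placeholder for whichever provider release you download):

kubectl apply -f infrastructure-components.yaml
kubectl get pods --all-namespaces
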
This means you\u0026rsquo;ll need to get the right bootstrap components for the particular cloud you plan to install clusters on.\nAWS If you plan to deploy workloads on an AWS cloud provider you will find the following resources:\nnamespace - capa-system custom resource definition - awsclusters.infrastructure custom resource definition - awsmachines.infrastructure custom resource definition - awsmachinetemplates.infrastructure role - capa-leader-election-role cluster role - capa-manager-role cluster role - capa-proxy-role role binding capa-leader-election-rolebinding cluster role binding - capa-manager-rolebinding cluster role binding - capa-proxy-rolebinding secret - capa-manager-bootstrap-credentials service - capa-controller-manager-metrics-service deployment - capa-controller-manager vSphere If you plan to deploy vSphere workload clusters then the following components will be deployed in the management cluster:\nnamespace - capv-system custom resource definition - vsphereclusters.infrastructure custom resource definition - vspheremachines.infrastructure custom resource definition - vspheremachinetemplates.infrastructure role - capv-leader-election-role cluster role - capv-manager-role cluster role - capv-proxy-role role binding - capv-leader-election-rolebinding cluster role binding - capv-manager-rolebinding cluster role binding - capv-proxy-rolebinding service - capv-controller-manager-metrics-service deployment - capv-controller-manager Deploy a Workload Cluster At this point our management cluster should be ClusterAPI enabled and control loops patiently waiting for some instructions. The next step would be to provide some configurations to the management cluster and have them spring into action and deploy our clusters.\nTo do this, we want to provide several more Kubernetes manifests specific to the target cloud provider. 
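Whichever provider you target, the workflow for the workload cluster itself looks the same: apply the cluster, control plane, and machine deployment manifests to the management cluster, then watch the new objects converge. A rough sketch (the file names are placeholders for the manifests shown below):

kubectl apply -f cluster.yaml
kubectl apply -f controlplane.yaml
kubectl apply -f machinedeployment.yaml
kubectl get clusters
kubectl get machines
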
Here is a look at some examples for a vSphere cluster.\nvSphere Manifests First the Cluster manifest that describes our Kubernetes cluster object to be created:\napiVersion: cluster.x-k8s.io/v1alpha2 kind: Cluster metadata: name: vsphere-cluster-1 namespace: default spec: clusterNetwork: pods: cidrBlocks: - 100.96.0.0/11 serviceDomain: cluster.local services: cidrBlocks: - 100.64.0.0/13 infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: VSphereCluster name: vsphere-cluster-1 namespace: default --- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: VSphereCluster metadata: name: vsphere-cluster-1 namespace: default spec: cloudProviderConfiguration: global: insecure: true secretName: cloud-provider-vsphere-credentials secretNamespace: kube-system network: name: VMs-170 providerConfig: cloud: controllerImage: gcr.io/cloud-provider-vsphere/cpi/release/manager:v1.0.0 storage: attacherImage: quay.io/k8scsi/csi-attacher:v1.1.1 controllerImage: gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1 livenessProbeImage: quay.io/k8scsi/livenessprobe:v1.1.0 metadataSyncerImage: gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.1 nodeDriverImage: gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.1 provisionerImage: quay.io/k8scsi/csi-provisioner:v1.2.1 registrarImage: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0 virtualCenter: 10.10.50.11: datacenters: HollowLab workspace: datacenter: HollowLab datastore: Synology02-NFS01 folder: kubernetes resourcePool: \u0026#39;HollowCluster/Resources/capv-workload\u0026#39; server: 10.10.50.11 server: 10.10.50.11 Next, the control plane object which will define our Kubernetes control-plane nodes and configurations:\napiVersion: bootstrap.cluster.x-k8s.io/v1alpha2 kind: KubeadmConfig metadata: name: vsphere-cluster-1-controlplane-0 namespace: default spec: clusterConfiguration: apiServer: extraArgs: cloud-provider: external controllerManager: extraArgs: cloud-provider: external imageRepository: k8s.gcr.io initConfiguration: nodeRegistration: criSocket: /var/run/containerd/containerd.sock kubeletExtraArgs: cloud-provider: external name: \u0026#39;{{ ds.meta_data.hostname }}\u0026#39; preKubeadmCommands: - hostname \u0026#34;{{ ds.meta_data.hostname }}\u0026#34; - echo \u0026#34;::1 ipv6-localhost ipv6-loopback\u0026#34; \u0026gt;/etc/hosts - echo \u0026#34;127.0.0.1 localhost {{ ds.meta_data.hostname }}\u0026#34; \u0026gt;\u0026gt;/etc/hosts - echo \u0026#34;{{ ds.meta_data.hostname }}\u0026#34; \u0026gt;/etc/hostname users: - name: capv sshAuthorizedKeys: - ssh-rsa OMITTED sudo: ALL=(ALL) NOPASSWD:ALL --- apiVersion: cluster.x-k8s.io/v1alpha2 kind: Machine metadata: labels: cluster.x-k8s.io/cluster-name: vsphere-cluster-1 cluster.x-k8s.io/control-plane: \u0026#34;true\u0026#34; name: vsphere-cluster-1-controlplane-0 namespace: default spec: bootstrap: configRef: apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2 kind: KubeadmConfig name: vsphere-cluster-1-controlplane-0 namespace: default infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: VSphereMachine name: vsphere-cluster-1-controlplane-0 namespace: default version: 1.15.3 --- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: VSphereMachine metadata: labels: cluster.x-k8s.io/cluster-name: vsphere-cluster-1 cluster.x-k8s.io/control-plane: \u0026#34;true\u0026#34; name: vsphere-cluster-1-controlplane-0 namespace: default spec: datacenter: HollowLab diskGiB: 50 memoryMiB: 2048 network: devices: - dhcp4: true dhcp6: false networkName: 
VMs-170 numCPUs: 2 template: ubuntu-1804-kube-v1.15.3 And then finally the machine deployments which lists our worker nodes for this cluster:\napiVersion: bootstrap.cluster.x-k8s.io/v1alpha2 kind: KubeadmConfigTemplate metadata: name: vsphere-cluster-1-md-0 namespace: default spec: template: spec: joinConfiguration: nodeRegistration: criSocket: /var/run/containerd/containerd.sock kubeletExtraArgs: cloud-provider: external name: \u0026#39;{{ ds.meta_data.hostname }}\u0026#39; preKubeadmCommands: - hostname \u0026#34;{{ ds.meta_data.hostname }}\u0026#34; - echo \u0026#34;::1 ipv6-localhost ipv6-loopback\u0026#34; \u0026gt;/etc/hosts - echo \u0026#34;127.0.0.1 localhost {{ ds.meta_data.hostname }}\u0026#34; \u0026gt;\u0026gt;/etc/hosts - echo \u0026#34;{{ ds.meta_data.hostname }}\u0026#34; \u0026gt;/etc/hostname users: - name: capv sshAuthorizedKeys: - ssh-rsa OMITTED sudo: ALL=(ALL) NOPASSWD:ALL --- apiVersion: cluster.x-k8s.io/v1alpha2 kind: MachineDeployment metadata: labels: cluster.x-k8s.io/cluster-name: vsphere-cluster-1 name: vsphere-cluster-1-md-0 namespace: default spec: replicas: 3 selector: matchLabels: cluster.x-k8s.io/cluster-name: vsphere-cluster-1 template: metadata: labels: cluster.x-k8s.io/cluster-name: vsphere-cluster-1 spec: bootstrap: configRef: apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2 kind: KubeadmConfigTemplate name: vsphere-cluster-1-md-0 namespace: default infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: VSphereMachineTemplate name: vsphere-cluster-1-md-0 namespace: default version: 1.15.3 --- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: VSphereMachineTemplate metadata: name: vsphere-cluster-1-md-0 namespace: default spec: template: spec: datacenter: HollowLab diskGiB: 50 memoryMiB: 2048 network: devices: - dhcp4: true dhcp6: false networkName: VMs-170 numCPUs: 2 template: ubuntu-1804-kube-v1.15.3 The end result of applying these manifests to the management cluster is that CAPI will deploy our new workload cluster components as we\u0026rsquo;ve specified. The below screenshot is the result of applying these manifests to the management cluster to create a workload cluster in my vSphere environment.\nAWS Manifests The AWS Manifests are very similar to the vSphere manifests but of course the underlying infrastructure is different so they must be modified a bit. Here are the AWS manifests used in my cluster. 
NOTE: these manifests have some info from my environment like my ssh-key name so you can\u0026rsquo;t use them as is.\nKubernetes Cluster Objects:\n--- apiVersion: cluster.x-k8s.io/v1alpha2 kind: Cluster metadata: name: aws-cluster-1 spec: clusterNetwork: pods: cidrBlocks: [\u0026#34;192.168.0.0/16\u0026#34;] infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: AWSCluster name: aws-cluster-1 --- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: AWSCluster metadata: name: aws-cluster-1 spec: region: us-east-2 sshKeyName: vmc-cna-admin Control Plane Manifest:\napiVersion: cluster.x-k8s.io/v1alpha2 kind: Machine metadata: name: aws-cluster-1-controlplane-0 labels: cluster.x-k8s.io/control-plane: \u0026#34;true\u0026#34; cluster.x-k8s.io/cluster-name: \u0026#34;aws-cluster-1\u0026#34; spec: version: 1.15.3 bootstrap: configRef: apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2 kind: KubeadmConfig name: aws-cluster-1-controlplane-0 infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: AWSMachine name: aws-cluster-1-controlplane-0 --- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: AWSMachine metadata: name: aws-cluster-1-controlplane-0 spec: instanceType: t2.medium ami: id: ami-0ca9e222761de069b iamInstanceProfile: \u0026#34;control-plane.cluster-api-provider-aws.sigs.k8s.io\u0026#34; sshKeyName: \u0026#34;vmc-cna-admin\u0026#34; --- apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2 kind: KubeadmConfig metadata: name: aws-cluster-1-controlplane-0 spec: initConfiguration: nodeRegistration: name: \u0026#39;{{ ds.meta_data.hostname }}\u0026#39; kubeletExtraArgs: cloud-provider: aws clusterConfiguration: apiServer: extraArgs: cloud-provider: aws controllerManager: extraArgs: cloud-provider: aws --- apiVersion: cluster.x-k8s.io/v1alpha2 kind: Machine metadata: name: aws-cluster-1-controlplane-1 labels: cluster.x-k8s.io/control-plane: \u0026#34;true\u0026#34; cluster.x-k8s.io/cluster-name: \u0026#34;aws-cluster-1\u0026#34; spec: version: 1.15.3 bootstrap: configRef: apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2 kind: KubeadmConfig name: aws-cluster-1-controlplane-1 infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: AWSMachine name: aws-cluster-1-controlplane-1 --- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: AWSMachine metadata: name: aws-cluster-1-controlplane-1 spec: instanceType: t2.medium ami: id: ami-0ca9e222761de069b iamInstanceProfile: \u0026#34;control-plane.cluster-api-provider-aws.sigs.k8s.io\u0026#34; sshKeyName: \u0026#34;vmc-cna-admin\u0026#34; --- apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2 kind: KubeadmConfig metadata: name: aws-cluster-1-controlplane-1 spec: joinConfiguration: controlPlane: {} nodeRegistration: name: \u0026#39;{{ ds.meta_data.hostname }}\u0026#39; kubeletExtraArgs: cloud-provider: aws --- apiVersion: cluster.x-k8s.io/v1alpha2 kind: Machine metadata: name: aws-cluster-1-controlplane-2 labels: cluster.x-k8s.io/control-plane: \u0026#34;true\u0026#34; cluster.x-k8s.io/cluster-name: \u0026#34;aws-cluster-1\u0026#34; spec: version: 1.15.3 bootstrap: configRef: apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2 kind: KubeadmConfig name: aws-cluster-1-controlplane-2 infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: AWSMachine name: aws-cluster-1-controlplane-2 --- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: AWSMachine metadata: name: aws-cluster-1-controlplane-2 spec: instanceType: t2.medium ami: id: ami-0ca9e222761de069b 
iamInstanceProfile: \u0026#34;control-plane.cluster-api-provider-aws.sigs.k8s.io\u0026#34; sshKeyName: \u0026#34;vmc-cna-admin\u0026#34; --- apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2 kind: KubeadmConfig metadata: name: aws-cluster-1-controlplane-2 spec: joinConfiguration: controlPlane: {} nodeRegistration: name: \u0026#39;{{ ds.meta_data.hostname }}\u0026#39; kubeletExtraArgs: cloud-provider: aws Machine Deployments:\napiVersion: cluster.x-k8s.io/v1alpha2 kind: MachineDeployment metadata: name: aws-cluster-1-md-0 labels: cluster.x-k8s.io/cluster-name: aws-cluster-1 nodepool: nodepool-0 spec: replicas: 3 selector: matchLabels: cluster.x-k8s.io/cluster-name: aws-cluster-1 nodepool: nodepool-0 template: metadata: labels: cluster.x-k8s.io/cluster-name: aws-cluster-1 nodepool: nodepool-0 spec: version: 1.15.3 bootstrap: configRef: name: aws-cluster-1-md-0 apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2 kind: KubeadmConfigTemplate infrastructureRef: name: aws-cluster-1-md-0 apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: AWSMachineTemplate --- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2 kind: AWSMachineTemplate metadata: name: aws-cluster-1-md-0 spec: template: spec: instanceType: t2.medium ami: id: ami-0ca9e222761de069b iamInstanceProfile: \u0026#34;nodes.cluster-api-provider-aws.sigs.k8s.io\u0026#34; sshKeyName: \u0026#34;vmc-cna-admin\u0026#34; --- apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2 kind: KubeadmConfigTemplate metadata: name: aws-cluster-1-md-0 spec: template: spec: joinConfiguration: nodeRegistration: name: \u0026#39;{{ ds.meta_data.hostname }}\u0026#39; kubeletExtraArgs: cloud-provider: aws Once those manifests had been deployed to my management cluster, my AWS account started spinning up a workload cluster.\nManagement Cluster Objects At this point, a pair of clusters are deployed and we can see this from running the kubectl get clusters command in our management Kubernetes cluster.\nAnd if we look at our machines object through the Kubernetes API we can see our machines that are deployed.\nThe cool part here is we can modify our manifests and let the Kubernetes Control Loops fix our clusters. Think about if we need to add nodes or remove nodes!\nIn fact, this blog post is over and I should spin down my AWS nodes so I don\u0026rsquo;t have to keep paying for them when I\u0026rsquo;m not using those resources. I\u0026rsquo;ll just delete the resources out of my management cluster and I\u0026rsquo;ll spin them up again later.\nkubectl delete -f machinedeployment.yaml kubectl delete -f controlplane.yaml kubectl delete -f aws-cluster.yaml Bootstrapping Management Clusters I know what you\u0026rsquo;re thinking. This solution builds Kubernetes clusters for me, but I need a Kubernetes cluster built before I can even use this. Well, you\u0026rsquo;re right, but there is a workaround for that if you need to get your management cluster setup.\nRight from our own laptop, we can use the \u0026ldquo;kind\u0026rdquo; project that I\u0026rsquo;ve written about previously to deploy our management components.\nFrom your laptop you\u0026rsquo;d install a tool like clusterctl which leverages kind to build a Kubernetes cluster on docker images inside our laptop. The ClusterAPI components discussed above are deployed into this cluster and they in turn build our management cluster and \u0026ldquo;pivot\u0026rdquo; (or move) the components from our kind cluster to the management cluster. 
Basically we spin up a temporary cluster on our laptop which deploys a management cluster and copies the components to it for us to then manage our workloads. The process looks similar to the diagram below for setting up a management cluster.\nSummary ClusterAPI will let you manage your Kubernetes clusters via desired state configuration manifests just like the applications you deploy on top of these clusters, giving users a familiar experience. The project provides a quick way to stand up, modify, and delete clusters in your environment to alleviate the headaches around cluster management.\n","permalink":"https://theithollow.com/2019/11/04/clusterapi-demystified/","summary":"\u003cp\u003eDeploying Kubernetes clusters may be the biggest hurdle in learning Kubernetes and one of the challenges in managing Kubernetes. ClusterAPI is a project designed to ease this burden and make the management and deployment of Kubernetes clusters simpler.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eThe Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes.\u003c/p\u003e\n\u003cp\u003ekubernetes-sigs/cluster-api\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cp\u003eThis post is designed to dive into ClusterAPI to investigate how it works, and how you can use it.\u003c/p\u003e","title":"ClusterAPI Demystified"},{"content":"In the traditional server world, we\u0026rsquo;ve taken great lengths to ensure that we can micro-segment our servers instead of relying on a few select firewalls at strategically defined chokepoints. What do we do in the container world though? This is where network policies come into play.\nNetwork Policies - The Theory In a default deployment of a Kubernetes cluster, all of the pods deployed on the nodes can communicate with each other. Some security folks might not like to hear that, but never fear, we have ways to limit the communications between pods and they\u0026rsquo;re called network policies.\nI still see it in many places today where companies have a perimeter firewall protecting their critical IT infrastructure, but if an attacker were to get through that perimeter, they\u0026rsquo;d have pretty much free reign to connect to whatever system they wanted. We introduced micro-segmentation with technologies like VMware NSX to make a whole bunch of smaller zones even to the point of a zone for each virtual machine NIC. This added security didn\u0026rsquo;t come easily, but it is a significant improvement over a few perimeter firewalls protecting everything.\nNow before we go forward with this post, I should say that a Kubernetes Network Policy is not a firewall, but it does restrict the traffic between pods so that they can\u0026rsquo;t all communicate with each other.\nWe\u0026rsquo;ve defined our applications as code already, so why not code in some network security as well by only allowing certain pods to communicate with our pods. For instance if you have a three tiered application, maybe you\u0026rsquo;d set policies so your web pods could only communicate with the app pods, and the app pods communicate with the database pods, but not the web directly to the database. This is a more sound security design since we\u0026rsquo;re removing attack vectors in our pods.\nNow, setting up a network policy is still done in the standard Kubernetes methodology of applying manifests in a yaml format. 
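To get a feel for the shape of these manifests before the real example below, here is about the simplest policy there is, a default deny of all ingress traffic for every pod in a namespace (a common starting point, shown as a sketch):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}   # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress         # no ingress rules are listed, so all inbound traffic is denied
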
If you apply the correct network policies to the Kubernetes API, the network plugin will apply the proper rules for you to restrict the traffic. Not all network plugins can do this however so ensure you\u0026rsquo;re using the proper network plugin such as Calico. NOTE: flannel is not capable of applying network policies as of the time of this writing.\nNetwork Policies can restrict, Ingress or Egress or both if you need. You might want to consider a strategy to keep your rules straight though or you could have a hard time troubleshooting later on. I prefer to only restrict ingress traffic if I can swing it, just to limit the complexity.\nNetwork Policies - In Action In this section we\u0026rsquo;ll apply a network policy to limit access to a backend MySQL database pod. In this example, I have an app pod that requires access to a backend MySQL database, but I don\u0026rsquo;t want some other \u0026ldquo;rogue\u0026rdquo; pod to get deployed and have access to that database as well. It COULD have super secret information in it that I need to protect.\nBelow is a picture of what we\u0026rsquo;ll be testing.\nBelow is a Network Policy that will allow ingress traffic to a pod with label app:hollowdb from any pods with a label of app:hollowapp over port 3306.\n--- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: hollow-db-policy #policy name spec: podSelector: matchLabels: app: hollowdb #pod applied to policyTypes: - Ingress #Ingress and/or Egress ingress: - from: - podSelector: matchLabels: app: hollowapp #pod allowed ports: - protocol: TCP port: 3306 #port allowed Once the manifest was applied with:\nkubectl apply -f [manifest file].yml And our database has been deployed of course, then we can test out the policy. To do that we\u0026rsquo;ll deploy a mysql container with an allowed label to ensure we can connect to the backend database over 3306. The interactive command below will deploy the container and get us a mysql prompt.\nkubectl run hollowapp --image=mysql -it --labels=\u0026#34;app=hollowapp\u0026#34; -- bash From there we\u0026rsquo;ll try to connect to the hollowdb container with our super secret root password. As you can see we can login and show the databases in the container. So the network policy is allowing traffic as intended.\nNow, lets deploy our example rouge container the same with but use a label that is not \u0026ldquo;hollowapp\u0026rdquo; as seen below.\nkubectl run rogue1 --image=mysql -it --labels=\u0026#34;app=rogue1\u0026#34; -- bash This time, when we get our mysql prompt we are unable to connect to the MySQL database.\nSummary Kubernetes clusters allow traffic between all pods by default, but if you\u0026rsquo;ve got a network plugin capable of using Network Policies, then you can start locking down that traffic. Build your Network Policy manifests and start including them with your application manifests.\n","permalink":"https://theithollow.com/2019/10/21/kubernetes-network-policies/","summary":"\u003cp\u003eIn the traditional server world, we\u0026rsquo;ve taken great lengths to ensure that we can micro-segment our servers instead of relying on a few select firewalls at strategically defined chokepoints. What do we do in the container world though? This is where network policies come into play.\u003c/p\u003e\n\u003ch2 id=\"network-policies---the-theory\"\u003eNetwork Policies - The Theory\u003c/h2\u003e\n\u003cp\u003eIn a default deployment of a Kubernetes cluster, all of the pods deployed on the nodes can communicate with each other. 
Some security folks might not like to hear that, but never fear, we have ways to limit the communications between pods and they\u0026rsquo;re called network policies.\u003c/p\u003e","title":"Kubernetes - Network Policies"},{"content":"In programming, we sometime set breakpoints as a way of debugging our code. Maybe a small piece of our routine isn\u0026rsquo;t functioning optimally and we want the program to pause, part way through, so we can identify the issues with that one section of code.\nThese breakpoints might be great for coding, but we can apply this to our own lives as well. I\u0026rsquo;ve recently switched jobs and between ending my previous job and starting the new one, I took some time off. My own personal breakpoint where I paused the larger routine (in this metaphor, the routine is my work life) so that I could focus on pieces of my life that might need more attention.\nI\u0026rsquo;m sure that I\u0026rsquo;m not alone when I say that sometimes the stress of work deadlines, task lists, emails, slack messages, etc. can sometimes be so heavy that we can\u0026rsquo;t even see why we decided to go into our chosen field to begin with. I\u0026rsquo;ve had points in my life where the thought of sitting down behind a keyboard was just the thing that I did everyday, and not the thing I was excited about. It goes with any job in any career, I suppose. Some days, you just don\u0026rsquo;t want to go to work but you push through.\nA breakpoint like a vacation gives you a chance to reset. During my time off, I set aside time to do nothing and just reflect on the things that haven\u0026rsquo;t gotten enough attention. Some house work, some outdoor (non-screen time) activities, some exercise, and non-work stuff. I was tempted many times to sit down and study something new, not because I wanted to, but because of a feeling of being left behind in some way. The thoughts that my skills were somehow obsolete because I took time off were ever present, but I resisted the urge to act on them.\nEventually, I did do a little studying towards the end of my vacation, but it was because it was something I wanted to do. How about that? I took time away from work and by the end of it, I wanted to get back to studying, coding, writing blog posts, consulting, and the rest of the technology hamster wheel many of us run on.\nSummary We shouldn\u0026rsquo;t think of vacations as a time when we\u0026rsquo;re suddenly standing still and everyone else is passing by us. Instead these breakpoints should be considered to be a vital part of our career. After taking some time to step back, I feel more energized and enthusiastic to return back to work. I\u0026rsquo;m sure the results that this enthusiasm will provide will be well worth the time spent away from the office.\n","permalink":"https://theithollow.com/2019/10/14/set-your-breakpoints-vacations/","summary":"\u003cp\u003eIn programming, we sometime set breakpoints as a way of debugging our code. Maybe a small piece of our routine isn\u0026rsquo;t functioning optimally and we want the program to pause, part way through, so we can identify the issues with that one section of code.\u003c/p\u003e\n\u003cp\u003eThese breakpoints might be great for coding, but we can apply this to our own lives as well. I\u0026rsquo;ve recently switched jobs and between ending my previous job and starting the new one, I took some time off. 
My own personal breakpoint where I paused the larger routine (in this metaphor, the routine is my work life) so that I could focus on pieces of my life that might need more attention.\u003c/p\u003e","title":"Set Your Breakpoints - Vacations"},{"content":"I\u0026rsquo;m not going to lie to you, as of the time of this writing, maybe the biggest hurdle to learning Kubernetes is getting a cluster stood up. Right now there are a myriad of ways so stand up a cluster, but none of them are really straight forward yet. If you\u0026rsquo;re interested in learning how Kubernetes works, and just want to setup a basic cluster to poke around in, this post is for you.\nNormally, when building a Kubernetes cluster, you\u0026rsquo;ll need some virtual machines or physical servers. You\u0026rsquo;d need some machines for the control plane and some machines to run your workloads and sometimes these servers are the same. But using an open source project called, \u0026quot; Kind\u0026quot;, lets us spin up a Kubernetes cluster on docker containers using your own laptop instead of needing a bunch of VMs.\nNow as you can imagine, using docker containers as your Kubernetes nodes is probably not a good thing to run your production workloads on, but a great solution to do some labbing on your laptop. There are other projects popping up that use Kind as a temporary Kubernetes cluster. See the details on the Cluster API project.\nInstall Kind Let\u0026rsquo;s install Kind on a mac laptop to show how it works. First we need to make sure that we\u0026rsquo;ve got a Docker daemon up and running on our laptop. You can install the Docker desktop from the Docker Community Edition site.\nOnce we\u0026rsquo;ve got Docker up and running, it\u0026rsquo;s time to download the Kind binary to our laptop and then put that binary into our system path. We can do this by opening up a terminal on our laptop and pulling down the binary using curl, and then changing the permissions so we can execute the binary.\nNOTE: change the path based on which release you\u0026rsquo;re looking for. This release is from v0.5.1 but check the Kind project release page for the latest changes.\ncurl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.5.1/kind-$(uname)-amd64 chmod +x ./kind Once we\u0026rsquo;ve downloaded the binary we need to move it to a location on our laptop that is in the PATH variable. If you aren\u0026rsquo;t sure where you can place the binary, run:\necho $PATH The above command will show what locations are in the system path. Move your binary to one of these locations either through the command line or a finder window. When you\u0026rsquo;re done, you should be able to check the version with the command:\nkind version If everything is setup, we should see something similar to the image below.\nI should also mention that in order to interact with any clusters built on your laptop, you\u0026rsquo;ll still need the kubectl binary on your laptop to interact with the cluster.\nBuild a Cluster Once you\u0026rsquo;re at this stage, building a new cluster is pretty simple. Just run:\nkind create cluster After running this command, a new Kubernetes cluster will be stood up on your laptop using docker containers. You\u0026rsquo;ll notice that at the end of the create cluster routine, it lists the command needed to get the kubeconfig file and export it so that it can be used with the kubectl binary and start interacting with the cluster. 
In my case it was:\nexport KUBECONFIG=\u0026#34;$(kind get kubeconfig-path --name=\u0026#34;kind\u0026#34;)\u0026#34; At this point you\u0026rsquo;ve got a working cluster. You can now start issuing your kubectl commands once you\u0026rsquo;re run the export command above. What you do with your Kubernetes cluster from there on out is your business and outside the goal of this post.\nIf you want to list your clusters, you can run:\nkind get clusters This command will show you the list of all kind clusters on your system. You\u0026rsquo;ll notice by default that the name of your cluster is also \u0026ldquo;kind\u0026rdquo;. If you\u0026rsquo;d like to create a cluster with a specific name, you can use the command:\nkind create cluster --name [cluster_name_here] When you\u0026rsquo;re done with your cluster, you can delete the cluster by running:\nkind delete cluster --name [cluster_name_here] Customize Kind If you followed the instructions above, you\u0026rsquo;ve deployed a cluster with a single node. Thats great, but isn\u0026rsquo;t going to cut it if we need to do more extensive testing/learning.\nLuckily, we can pass kind a configuration file to customize the way our kind cluster is build. For this example, let\u0026rsquo;s deploy a six node cluster with three control plane nodes and three worker nodes.\nFirst, we need to create a configuration file which is built in YAML. The config file I\u0026rsquo;ve used is listed below. Save the file on your laptop so it can be passed to the next command.\n#6 node cluster kind: Cluster apiVersion: kind.sigs.k8s.io/v1alpha3 nodes: - role: control-plane - role: control-plane - role: control-plane - role: worker - role: worker - role: worker Once you\u0026rsquo;ve created the config file, the create command can be run again with the config file passed as a parameter.\nkind create cluster --name [Cluster_name_here] --config [config_file_name.yml] Once the cluster is built, and you\u0026rsquo;ve exported your KUBECONFIG again, you\u0026rsquo;ll see that the cluster matches the configs.\nSummary Kind is a really useful tool that can be helpful when we are learning Kubernetes. Standing up a new cluster right on your laptop so that you can issue commands to it is a great way to learn things. But the applications of a tool like Kind go far beyond that. We\u0026rsquo;ll look at leveraging Kind to bootstrap a production cluster in another post.\n","permalink":"https://theithollow.com/2019/10/07/a-kind-way-to-learn-kubernetes/","summary":"\u003cp\u003eI\u0026rsquo;m not going to lie to you, as of the time of this writing, maybe the biggest hurdle to learning Kubernetes is getting a cluster stood up. Right now there are a myriad of ways so stand up a cluster, but none of them are really straight forward yet. If you\u0026rsquo;re interested in learning how Kubernetes works, and just want to setup a basic cluster to poke around in, this post is for you.\u003c/p\u003e","title":"A Kind Way to Learn Kubernetes"},{"content":"So Long AHEAD I have been fortunate to work for a fantastic company the past five and half years. While starting at AHEAD I had ambitions to be a top caliber VMware expert and work with people who would challenge me. Part of my decision to join the AHEAD team was to see how good I really was. AHEAD had plenty of talent and three VCDXs when I started and I needed to know how I stacked up. In the end, I think I did OK.\nAHEAD provided me with a lot of opportunity for professional growth. 
When I started, I was configuring VMware SRM deployments and deploying VCE VBlocks. By the end of my tenure, I obtained two VCDX certifications of my own, helped to build an automation practice and become an expert in vRealize Automation, and later an Architect for Amazon Web Services. I led some of the largest consulting engagements in AHEAD\u0026rsquo;s history and started taking on more responsibility in solution development.\nI truly value the time I spent at AHEAD and will miss the folks that I got to work with there on a daily basis. This was sincerely a great place to work and learn. To my former AHEADian employees, I would like to say thank you for the opportunity to work together.\nHello VMware Based on my background, it probably doesn\u0026rsquo;t surprise many people that I\u0026rsquo;m taking a position at VMware. Next week, I start a new position as a Senior Field Engineer - Cloud Native Applications.\nI expect to put my experience with VMware virtualization, Cloud Management, and the consulting work I\u0026rsquo;ve done previously, and apply what I know to the growing interest in Kubernetes for VMware.\nImage result for kubernetes logo\nI have to tell you that I\u0026rsquo;m really excited for the new job opportunity. Much like when I joined AHEAD, I\u0026rsquo;m looking to join a group that consists of experts in their field. The team I\u0026rsquo;m joining not only works to help customers improve their Kubernetes environments, but is also actively involved in writing the Kubernetes open source code. The story seems similar to my last job change, where I\u0026rsquo;ve got my work cut out for me to see if I can hang with some of what I perceive to be, the best in the biz.\nThank You Many of you have asked where I\u0026rsquo;m headed, and I apologize for not replying but I\u0026rsquo;m doing my best to take some time off between jobs and ignore social media. I really do thank you for all of your well wishes. I have received a lot of compliments and encouragement from people who have benefitted from my blog or videos over the past few years. Thank you for saying such nice things and I hope to do a lot more in the coming months around cloud native applications and the Kubernetes platform. I hope you stay with me on my journey.\n","permalink":"https://theithollow.com/2019/09/30/a-change-of-scenery/","summary":"\u003ch2 id=\"so-long-ahead\"\u003eSo Long AHEAD\u003c/h2\u003e\n\u003cp\u003eI have been fortunate to work for a fantastic company the past five and half years. While starting at \u003ca href=\"http://thinkahead.com\"\u003eAHEAD\u003c/a\u003e I had ambitions to be a top caliber VMware expert and work with people who would challenge me. Part of my decision to join the AHEAD team was to see how good I really was. AHEAD had plenty of talent and three VCDXs when I started and I needed to know how I stacked up. In the end, I think I did OK.\u003c/p\u003e","title":"A Change of Scenery"},{"content":"If you\u0026rsquo;ve just gotten started with Kubernetes, you might be curious to know how the desired state is achieved? Think about it, you pass a YAML file to the API server and magically stuff happens. Not only that, but when disaster strikes (e.g. pod crashes) Kubernetes also makes it right again so that it matches the desired state.\nThe mechanism that allows for Kubernetes to enforce this desired state is the control loop. The basics of this are pretty simple. A control loop can be though of in three stages.\nObserve - What is the desired state of our objects? 
Check Differences - What is the current state of our objects, and how does it differ from the desired state? Take Action - Make the current state look like the desired state. Repeat - Repeat this over and over again. OK, control loops look simple enough, right? Now Kubernetes doesn\u0026rsquo;t have a single control loop, but many loops all running simultaneously. Each of these loops has a specific set of tasks to handle. For example, the Deployment controller watches for new Deployments to be created and then does its job of creating ReplicaSets. The ReplicaSet controller has its own loop that sees a new ReplicaSet is desired and does its job to create new Pod specs. The scheduler and the kubelet then do their jobs to place those Pods on nodes and run them. You can see how these control loops can be strung together to converge on a desired state.\nThe control loop isn\u0026rsquo;t the only mechanism used to drive a desired state. You might be thinking to yourself, \u0026ldquo;Hey, all these control loops running at the same time seems inefficient and probably puts a lot of strain on the API server to return these checks.\u0026rdquo; Well, you\u0026rsquo;d be right, if these loops kept happening every second. The API server would be overwhelmed with requests. So in order for these checks to happen, but not overwhelm the API server, they are run less frequently, such as at five-minute intervals.\nOK, a check every five minutes shouldn\u0026rsquo;t overwhelm the API server, but now you have a new question. \u0026ldquo;We can\u0026rsquo;t wait 5 minutes for each of these checks to happen; that\u0026rsquo;s way too slow for us to fix issues and deploy new resources. So how does it work faster?\u0026rdquo;\nThe answer is an informer. Think of an informer like an event hook that notifies something about a change that has happened. The informer can speed up the whole process. Take the example of the Deployment again. Once the deployment is created, the informer can trigger the Deployment control loop to fire so the process starts immediately without waiting for the schedule to kick it off again. Once the deployment control loop finishes and creates the ReplicaSet specification, an informer can tell the ReplicaSet control loop to fire again, and so on.\nBetween the schedule, which ensures the control loop runs periodically and catches anything that might have been missed, and the informer, which tells the loops to run \u0026ldquo;right now,\u0026rdquo; we have a pretty powerful solution.\nBuild Native K8s Apps These control loops can do more than just manage our Kubernetes clusters. They can be a powerful mechanism for you to develop your own applications that run natively on Kubernetes. Perhaps you would create your own control loops to take action whenever a custom resource is created? If you\u0026rsquo;d like to see an example of this in action, take a peek at what\u0026rsquo;s going on with the Cluster API project.\nThe Cluster API project plans to use Kubernetes constructs such as Control Loops, Custom Resource Definitions, and pods to deploy new k8s clusters when provided with a configuration. Pretty slick, huh?\nWhat else could you use a control loop to build? And should you leverage the power of Kubernetes to run it?\n","permalink":"https://theithollow.com/2019/09/16/kubernetes-desired-state-and-control-loops/","summary":"\u003cp\u003eIf you\u0026rsquo;ve just gotten started with Kubernetes, you might be curious to know how the desired state is achieved?
Think about it, you pass a YAML file to the API server and magically stuff happens. Not only that, but when disaster strikes (e.g. pod crashes) Kubernetes also makes it right again so that it matches the desired state.\u003c/p\u003e\n\u003cp\u003eThe mechanism that allows for Kubernetes to enforce this desired state is the control loop. The basics of this are pretty simple. A control loop can be though of in three stages.\u003c/p\u003e","title":"Kubernetes - Desired State and Control Loops"},{"content":"I don\u0026rsquo;t know about you, but I learn things best when I have a visual to reference. Many of my posts in this blog are purposefully built with visuals, not only because I think its helpful for the readers to \u0026ldquo;get the picture\u0026rdquo;, but also because that\u0026rsquo;s how I learn.\nKubernetes can feel like a daunting technology to start learning, especially since you\u0026rsquo;ll be working with code and the command line for virtually all of it. That can be a scary proposition to an operations person who is trying to break into something brand new. But last week I was introduced to a project from VMware called Octant, that helps visualize whats actually going on in our Kubernetes cluster.\nOctant gives us a graphical user interface to view whats going on in our Kubernetes cluster. The project runs on your local workstation, so you don\u0026rsquo;t need a web server or anything provisioned first, and when you\u0026rsquo;re done using it, you can stop it. Once the service has been installed, you can run it whenever you want just by typing \u0026ldquo;octant\u0026rdquo; in your shell.\nOnce the tool is running, you can open a browser to the dashboard and begin poking around your own Kubernetes cluster. Octant uses the kubeconfig file that was configured to connect to your cluster, so if you\u0026rsquo;re missing permissions to a certain namespace for example, you won\u0026rsquo;t be able to view it with octant either.\nOn the left hand side of the dashboard you\u0026rsquo;ll have a list of objects to view. Selecting one of those will show a table with the objects and some details about them. For example, you can see the \u0026ldquo;Selector\u0026rdquo; configured for a particular deployment in my example below.\nOnce you find an interesting object to look into further, simply click the object for find more details. For instance I can see my metadata about my pods, and their status.\nOK, right now it still kind of looks like a web GUI to show us code again, but take a look at the resource viewer tab. Here, we can see just how our pods, services, ingress rules, replica sets and deployments are all hooked together. What a great way to quickly view resources in your cluster.\nIf you\u0026rsquo;re trying to learn Kubernetes, or have just been stuck running dozens of \u0026ldquo;kubectl\u0026rdquo; commands over and over just to troubleshoot something, I urge you to take a peak at octant and see if that helps.\nLastly, if you\u0026rsquo;d like to know more about octant, this TGIK (Thank God it\u0026rsquo;s Kubernetes) session with Bryan Lisles was a really great show.\n","permalink":"https://theithollow.com/2019/08/20/kubernetes-visually-with-vmware-octant/","summary":"\u003cp\u003eI don\u0026rsquo;t know about you, but I learn things best when I have a visual to reference. 
Many of my posts in this blog are purposefully built with visuals, not only because I think its helpful for the readers to \u0026ldquo;get the picture\u0026rdquo;, but also because that\u0026rsquo;s how I learn.\u003c/p\u003e\n\u003cp\u003eKubernetes can feel like a daunting technology to start learning, especially since you\u0026rsquo;ll be working with code and the command line for virtually all of it. That can be a scary proposition to an operations person who is trying to break into something brand new. But last week I was introduced to a project from VMware called \u003ca href=\"https://github.com/vmware/octant\"\u003eOctant\u003c/a\u003e, that helps visualize whats actually going on in our Kubernetes cluster.\u003c/p\u003e","title":"Kubernetes Visually - With VMware Octant"},{"content":"DaemonSets can be a really useful tool for managing the health and operation of the pods within a Kubernetes cluster. In this post we\u0026rsquo;ll explore a use case for a DaemonSet, why we need them, and an example in the lab.\nDaemonSets - The Theory DaemonSets are actually pretty easy to explain. A DaemonSet is a Kubernetes construct that ensures a pod is running on every node (where eligible) in a cluster. This means that if we were to create a DaemonSet on our six node cluster (3 master, 3 workers), the DaemonSet would schedule the defined pods on each of the nodes for a total of six pods. Now, this assumes there are either no taints on the nodes, or there are tolerations on the DaemonSets.\nIn reality, DaemonSets behave very similarly to a Kubernetes deployment, with the exception that they will be automatically distributed to ensure the pods are deployed on each node in the cluster. Also, if a new node is deployed to your cluster after the DaemonSet has been deployed, the new node will also get the DaemonSet deployed by the scheduler, after the join occurs.\nSo why would we use a DaemonSet? Well, a common use for DaemonSets would be for logging. Perhaps we need to ensure that our log collection service is deployed on each node in our cluster to collect the logs from that particular node. This might be a good use case for a DaemonSet. Think of this another way; we could run and install services on each of our Kubernetes nodes by installing the app on the OS. But now that we\u0026rsquo;ve got a container orchestrator available to use, lets take advantage of that and build those tools into a container and automatically schedule them on all nodes within the cluster, in one fell swoop.\nDaemonSets - In Practice For this example, we\u0026rsquo;ll deploy a simple container as a DaemonSet to show that they are distributed on each node. While this example, is just deploying a basic container, it could be deploying critical services like a log collector. 
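Before creating a new DaemonSet, it can be useful to see which ones are already running in the cluster; most clusters ship with a few, like kube-proxy. A couple of quick, illustrative checks (your output will differ):
kubectl get daemonsets --all-namespaces
kubectl get pods -n kube-system -o wide   #shows the per-node spread of the system DaemonSets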
The manifest below contains the simple Daemonset manifest.\napiVersion: apps/v1 kind: DaemonSet metadata: name: daemonset-example labels: app: daemonset-example spec: selector: matchLabels: app: daemonset-example template: metadata: labels: app: daemonset-example spec: tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule containers: - name: busybox image: busybox args: - sleep - \u0026#34;10000\u0026#34; Note that the DaemonSet has a toleration so that we can deploy this container on our master nodes as well, which are tainted.\nWe apply the manifest with the:\nkubectl apply -f [file name].yaml After we apply it, lets take a look at the results by running:\nkubectl get pods --selector=app=daemonset-example -o wide You can see from the screenshot above, there are six pods deployed. Three on master nodes and three more on worker nodes. Great, so we didn\u0026rsquo;t have to use any affinity rules to do this, the DaemonSet made sure we had one pod per node as we had hoped.\nWhat happens if we add another worker nodes to the cluster? Well, let\u0026rsquo;s try it.\nI\u0026rsquo;ve used kubeadm to join another node to my cluster. Shortly after the join completed, I noticed the DaemonSets started deploying another pod without my intervention.\nAnd, as you\u0026rsquo;d hoped, a minute later the container was up and running on my new node \u0026ldquo;k8s-worker-3\u0026rdquo;.\nSummary DaemonSets can make pretty short work of deploying resources throughout your k8s nodes. You\u0026rsquo;ll see many applications use DaemonSets to ensure that each node of your cluster is being properly managed.\n","permalink":"https://theithollow.com/2019/08/13/kubernetes-daemonsets/","summary":"\u003cp\u003eDaemonSets can be a really useful tool for managing the health and operation of the pods within a Kubernetes cluster. In this post we\u0026rsquo;ll explore a use case for a DaemonSet, why we need them, and an example in the lab.\u003c/p\u003e\n\u003ch2 id=\"daemonsets---the-theory\"\u003eDaemonSets - The Theory\u003c/h2\u003e\n\u003cp\u003eDaemonSets are actually pretty easy to explain. A DaemonSet is a Kubernetes construct that ensures a pod is running on every node (where eligible) in a cluster. This means that if we were to create a DaemonSet on our six node cluster (3 master, 3 workers), the DaemonSet would schedule the defined pods on each of the nodes for a total of six pods. Now, this assumes there are either no \u003ca href=\"/?p=9736\"\u003etaints on the nodes, or there are tolerations\u003c/a\u003e on the DaemonSets.\u003c/p\u003e","title":"Kubernetes - DaemonSets"},{"content":"Today Sysdig announced a new update to their Cloud Native Visibility and Security Platform, with the release of Sysdig Secure 2.4.\nThe new version of the Secure product includes some pretty nifty enhancements.\nRuntime profiling with machine learning - New containers will be profiled after deployment to give insights into the processes, file system activity, networking and system calls. Once the profiling is complete, these profiles can be used to create policy sets for the expected behavior. Sysdig also offers a confidence level of the profile. Consistent behavior generating a higher confidence level whereas variable behavior would have a lower level.\nFalco rule builder - If you\u0026rsquo;ve been using the open source version of Falco, you may really like the rule builder which places a neat UI for configuring these rules. 
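For context on what that UI saves you from writing, a hand-authored Falco rule is a chunk of YAML along these lines. This is only a rough sketch based on the open source project's rule format; the condition shown is purely illustrative:
- rule: Shell spawned in a container
  desc: Detect a shell process started inside a container
  condition: container.id != host and proc.name = bash
  output: "Shell spawned in a container (user=%user.name container=%container.name)"
  priority: WARNING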
The rule builder eliminates the need for deep knowledge of Falco expressions and just lets you get stuff done.\nAdditional vulnerability management - Sysdig already does vulnerability management, but now there are new features such as the vulnerability reporter where you can create custom queries for your environment. Also, the scan results UI gets a face lift and the alerting mechanisms were updated to notify users about CVE exposures and changes to images.\nWhat is Sysdig Secure? If you\u0026rsquo;re not familiar with Sysdig Secure and you\u0026rsquo;re running a Kubernetes cluster, you might want to check it out. https://sysdig.com/products/secure/\nSysdig Secure aims to help \u0026ldquo;secure\u0026rdquo; (you didn\u0026rsquo;t see that coming did you) your container environment by providing vulnerability management, compliance, runtime security and forensics tools. Sysdig Secure is the other half of their Cloud-Native Visibility and Security Platform. As you may recall, I tried out the Sysdig Monitor program and you can read that review here.\nSecure Vulnerability Management We have to harden our containers just like we harden our virtual machine images. Sysdig Secure can scan our containers within our registry, or through CI/CD to tell us about what nasty bugs we might have in our images. Luckily for me, my customized app is not mission critical or hold important data.\nSecure Compliance If you need to baseline your environment with some standard compliance frameworks like the Center for Internet Security (CIS) then the compliance piece of Sysdig Secure will get you started. Here you can quickly view if you\u0026rsquo;ve deviated from the industry best practices. Users are also given data as metrics so each compliance run is a full audit trail. You can see my k8s API server compliance. Failures in my lab were created for illustrative purposes only of course :)\nSecure Runtime Security We\u0026rsquo;ve got to constantly be reviewing our environment for changes to ensure that our hardened environment hasn\u0026rsquo;t been weakened along the way. The Runtime security from Sysdig keeps us appraised of changes to our environment like the addition of a privileged clusterrole for example.\nSecure Forensics Despite our best efforts, sometimes we still have issues in the environment that need investigated. Or, hopefully, we just need to do some investigation for an audit to prove that we really are highly secured. Sysdig Secure provides the ability to capture whats happening and dive into the details over defined period.\nSummary If you\u0026rsquo;re not sure how you\u0026rsquo;re going to protect your Kubernetes environment, maybe go have a discussion with a Sysdig representative or partner to see what can be done to get you started in your journey. Good luck!\n","permalink":"https://theithollow.com/2019/08/06/sysdig-secure-2-4-announced/","summary":"\u003cp\u003eToday Sysdig announced a new update to their Cloud Native Visibility and Security Platform, with the release of Sysdig Secure 2.4.\u003c/p\u003e\n\u003cp\u003eThe new version of the Secure product includes some pretty nifty enhancements.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eRuntime profiling with machine learning -\u003c/strong\u003e New containers will be profiled after deployment to give insights into the processes, file system activity, networking and system calls. Once the profiling is complete, these profiles can be used to create policy sets for the expected behavior. 
Sysdig also offers a confidence level of the profile. Consistent behavior generating a higher confidence level whereas variable behavior would have a lower level.\u003c/p\u003e","title":"Sysdig Secure 2.4 Announced"},{"content":"One of the best things about Kubernetes, is that I don\u0026rsquo;t have to think about which piece of hardware my container will run on when I deploy it. The Kubernetes scheduler can make that decision for me. This is great until I actually DO care about what node my container runs on. This post will examine one solution to pod placement, through taints and tolerations.\nTaints - The Theory Suppose we had a Kubernetes cluster where we didn\u0026rsquo;t want any pods to run on a specific node. You might need to do this for a variety of reasons, such as:\none node in the cluster is reserved for special purposes because it has specialized hardware like a GPU one node in the cluster isn\u0026rsquo;t licensed for some software running on it one node is in a different network zone for compliance reasons one node is in timeout for doing something naughty Whatever the particular reason, we need a way to ensure our pods are not placed on a certain node. That\u0026rsquo;s where a taint comes in.\nTaint\u0026rsquo;s are a way to put up a giant stop sign in front of the K8s scheduler. You can apply a taint to a k8s node to tell the scheduler you\u0026rsquo;re not available for any pods.\nTolerations - The Theory How about use case where we had really slow spinning disks in a node. We applied a taint to that node so that our normal pods won\u0026rsquo;t be placed on that piece of hardware, due to it\u0026rsquo;s poor performance, but we have some pods that don\u0026rsquo;t need fast disks. This is where Tolerations could come into play.\nA toleration is a way of ignoring a taint during scheduling. Tolerations aren\u0026rsquo;t applied to nodes, but rather the pods. So, in the example above, if we apply a toleration to the PodSpec, we could \u0026ldquo;tolerate\u0026rdquo; the slow disks on that node and still use it.\nTaints - In Action Let\u0026rsquo;s apply a taint to our Kubernetes cluster. But first, you might check to see if you have a taint applied already. Depending upon how you deployed your cluster, your master node(s) might have a taint applied to them to keep pods from running on the master nodes. You can run the:\nkubectl describe node [k8s master node] OK, now lets apply a taint to a couple of nodes in our cluster. I\u0026rsquo;ll create a taint with a key/value pair of \u0026ldquo;hardware:slow\u0026rdquo; to identify nodes that should not run my pods any longer because of their slow hardware specifications.\nkubectl taint nodes [node name] [key=value]:NoSchedule In my case I ran this twice because I tainted two nodes. I should mention that this can be done through labels as well to quickly taint multiple nodes. Also, we ran the command with the \u0026ldquo;NoSchedule\u0026rdquo; effect which keeps the scheduler from choosing this node, but you could also use other effects like \u0026ldquo;PreferNoSchedule\u0026rdquo; or \u0026ldquo;NoExecute\u0026rdquo; as well.\nAt this point, two of my three available worker nodes are tainted with the \u0026ldquo;hardware\u0026rdquo; key pair. Lets deploy some pods and see how they\u0026rsquo;re scheduled. I\u0026rsquo;ll deploy nginx pods to my workers and I\u0026rsquo;ll deploy three pods which ordinarily we\u0026rsquo;d expect to be deployed evenly across my cluster. 
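If you want to double check which nodes actually carry the taint before deploying anything, a quick look at the node objects will confirm it. These commands are just illustrative, using the node names from my lab:
kubectl describe node k8s-worker-1 | grep Taints
kubectl describe node k8s-worker-2 | grep Taints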
The manifest file below is what will be deployed.\napiVersion: apps/v1 #version of the API to use kind: Deployment #What kind of object we\u0026#39;re deploying metadata: #information about our object we\u0026#39;re deploying name: nginx-deployment #Name of the deployment labels: #A tag on the deployments created app: nginx spec: #specifications for our object strategy: type: RollingUpdate rollingUpdate: #Update Pods a certain number at a time maxUnavailable: 1 #Total number of pods that can be unavailable at once maxSurge: 1 #Maximum number of pods that can be deployed above desired state replicas: 3 #The number of pods that should always be running selector: #which pods the replica set should be responsible for matchLabels: app: nginx #any pods with labels matching this I\u0026#39;m responsible for. template: #The pod template that gets deployed metadata: labels: #A tag on the replica sets created app: nginx spec: containers: - name: nginx-container #the name of the container within the pod image: nginx:1.7.9 #which container image should be pulled ports: - containerPort: 80 #the port of the container within the pod After applying the nginx deployment, we\u0026rsquo;ll check our pods and see which nodes they are running on. To do this run:\nkubectl get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name As you can see, I\u0026rsquo;ve got three pods deployed and they\u0026rsquo;re all on k8s-worker-0. This is the only node that wasn\u0026rsquo;t tainted in my cluster, so this confirms that the taints on k8s-worker-1 and k8s-worker-2 are working correctly.\nTolerations - In Action Now I\u0026rsquo;m going to delete that deployment and deploy a new deployment that tolerates our \u0026ldquo;hardware\u0026rdquo; taint.\nI\u0026rsquo;ve created a new manifest file that is the same as we ran before, except this time I added a toleration for the taint we applied to our nodes.\napiVersion: apps/v1 #version of the API to use kind: Deployment #What kind of object we\u0026#39;re deploying metadata: #information about our object we\u0026#39;re deploying name: nginx-deployment #Name of the deployment labels: #A tag on the deployments created app: nginx spec: #specifications for our object strategy: type: RollingUpdate rollingUpdate: #Update Pods a certain number at a time maxUnavailable: 1 #Total number of pods that can be unavailable at once maxSurge: 1 #Maximum number of pods that can be deployed above desired state replicas: 3 #The number of pods that should always be running selector: #which pods the replica set should be responsible for matchLabels: app: nginx #any pods with labels matching this I\u0026#39;m responsible for. template: #The pod template that gets deployed metadata: labels: #A tag on the replica sets created app: nginx spec: tolerations: - key: \u0026#34;hardware\u0026#34; operator: \u0026#34;Equal\u0026#34; value: \u0026#34;slow\u0026#34; effect: \u0026#34;NoSchedule\u0026#34; containers: - name: nginx-container #the name of the container within the pod image: nginx:1.7.9 #which container image should be pulled ports: - containerPort: 80 #the port of the container within the pod Lets apply this new manifest to our cluster and see what happens to the pod placement decisions by the scheduler.\nWell, look there. 
Now those same three pods were distributed across the three nodes evenly for this deployment that tolerated the node taints.\nSummary Its hard to say what needs you might have for scheduling pods on specific nodes in your cluster, but by using taints and tolerations you can adjust where these pods are deployed.\nTaints are applied at the node level and prevent nodes from being used. Tolerations are applied at the pod level and can tell the scheduler which taints they are able to withstand.\n","permalink":"https://theithollow.com/2019/07/29/kubernetes-taints-and-tolerations/","summary":"\u003cp\u003eOne of the best things about Kubernetes, is that I don\u0026rsquo;t have to think about which piece of hardware my container will run on when I deploy it. The Kubernetes scheduler can make that decision for me. This is great until I actually DO care about what node my container runs on. This post will examine one solution to pod placement, through taints and tolerations.\u003c/p\u003e\n\u003ch2 id=\"taints---the-theory\"\u003eTaints - The Theory\u003c/h2\u003e\n\u003cp\u003eSuppose we had a Kubernetes cluster where we didn\u0026rsquo;t want any pods to run on a specific node. You might need to do this for a variety of reasons, such as:\u003c/p\u003e","title":"Kubernetes - Taints and Tolerations"},{"content":"You\u0026rsquo;ve been dabbling in the world of Kubernetes for a while now and have probably noticed there are a whole lot of vendors packaging their own version of Kubernetes.\nYou might be having a fun time comparing the upstream Kubernetes version vs the packaged versions put out by Redhat, VMware, and others. But how do we know that those packaged versions are supporting the required APIs so that all Kubernetes clusters have the same baseline of features?\nThe Cloud Native Computing Foundation (CNCF) runs the certification program to ensure vendors are supporting a \u0026ldquo;high level of common functionality.\u0026rdquo; - https://github.com/cncf/k8s-conformance\nLuckily for us, we can also use this testing program to check our own clusters to make sure they are conformant. To do this, I\u0026rsquo;ll be using the Heptio Sonobuoy project to test my own Kubernetes cluster, and you can test whichever cluster you\u0026rsquo;d like using the same tool.\nRun Sonobuoy on Against Your Cluster To get started, download sonobuoy from the link here and place the sonobuoy binary into your PATH.\nNow we can run the sonobuoy tool assuming that our KUBECONFIG is setup and we can issue standard kubectl commands against our existing cluster already. If not, you\u0026rsquo;ve gotta do that first.\nSo let\u0026rsquo;s run the sonobuoy commands and test our cluster.\nsonobuoy run --wait --mode quick I\u0026rsquo;ve run the sonobuoy command with the \u0026ndash;mode quick switch so that the test will do some very basic checks. You can remove this for the full test which will of course take longer.\nThe tests have been run and now we want to view the results.\nresults=$(sonobuoy retrieve) sonobuoy e2e $results As you can see from my test, I have no failures with my existing Kubernetes cluster which I\u0026rsquo;m ecstatic about. Of course it shouldn\u0026rsquo;t fail any tests, I\u0026rsquo;m deploying Kubernetes through the Heptio Wardroom project.\nIf you\u0026rsquo;re really curious, the sonobuoy tests create a zip file with details about your cluster. 
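One aside: the quick mode used here finishes in a few minutes, but a full conformance run can take an hour or more. If you drop the --wait flag, you can keep an eye on a long run from another terminal with the status subcommand (illustrative; check the Sonobuoy docs for current flags):
sonobuoy status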
After running it you\u0026rsquo;ll notice this new compressed file in your working directory as I have.\nIf you were to unzip that file you\u0026rsquo;d see a series of directories related to the tests the tool performed. Below is a screenshot showing a subset of the directories and files.\nWhen you\u0026rsquo;re all done running your conformance tests, be sure to run the delete command to clean up the resources used to run the tests.\nsonobuoy delete --wait Summary It\u0026rsquo;s hard to say which of the packaged versions of Kubernetes will be the most popular a few years from now. I think many people are still trying to figure this out and have questions/concerns about the future. But, the good news is that a common set of features should be available from any certified Kubernetes package. And the better news is that you can test it yourself with the conformance tests. Be sure to run a conformance test before pushing any cluster into a production environment.\n","permalink":"https://theithollow.com/2019/07/16/test-your-kubernetes-cluster-conformance/","summary":"\u003cp\u003eYou\u0026rsquo;ve been dabbling in the world of Kubernetes for a while now and have probably noticed there are a whole lot of vendors packaging their own version of Kubernetes.\u003c/p\u003e\n\u003cp\u003eYou might be having a fun time comparing the upstream Kubernetes version vs the packaged versions put out by Redhat, VMware, and others. But how do we know that those packaged versions are supporting the required APIs so that all Kubernetes clusters have the same baseline of features?\u003c/p\u003e","title":"Test Your Kubernetes Cluster Conformance"},{"content":"Any system that\u0026rsquo;s going to be deployed for the enterprise needs to have at least a basic level of monitoring in place to manage it. Kubernetes is no exception to this rule. When we, as a community, underwent the shift from physical servers to virtual infrastructure, we didn\u0026rsquo;t ignore the new VMs and just keep monitoring the hardware, we had to come up with new products to monitor our infrastructure. Sysdig is building these new solutions for the Kubernetes world.\nI tried out the solution with their free 14 day free trial and it seems pretty great. Sysdig runs a SaaS platform where you login to see your metrics that are sent to the service via their agents which are installed in your Kubernetes nodes. To get started on your own, you can visit Sysdig.com to setup your own account.\nI won\u0026rsquo;t take you through the installation process which includes deploying agents to your Kubernetes nodes and authenticating them with the portal, but when you setup your account the wizard will take you through a few steps and point you to the installation documentation.\nOnce you\u0026rsquo;re set up, you should start to see some metrics in your Sysdig Monitor dashboards.\nNow, this is usually the time I freak out, when all the things that I might need to be paying attention to, rush through my mind. 
But Sysdig is giving me nice graphs about a lot of metrics I\u0026rsquo;d commonly need to be watching in my Kubernetes cluster so my containers are healthy.\nOne of the things I immediately noticed is that if I move my mouse curser over any of the graphs, a pop out window shows me more details, and the time that I\u0026rsquo;ve selected lines up for all the graphs, so if I\u0026rsquo;m correlating items, I can easily see whats happening at the same time in different metrics.\nCool, we can see pretty graphs about utilization which will really help us in troubleshooting, and in resource management in general, but what about alerting us about issues? Of course we can do that.\nThere are plenty of alerts that you can configure from the console. Many of which you just need to turn on, but you can also modify the alerts to your liking so that you don\u0026rsquo;t get spammed with emails\u0026hellip; or slack messages!\nWhoa whoa whoa, wait a second! Did he just say Slack messages? Thats right, Sysdig can configure your alerts to send message to a slack channel if you prefer that method. (Email\u0026rsquo;s dead right?) As you can tell from the screenshot below, you can actually configure your alerts to go to many locations including AWS SNS, PagerDuty, etc.\nConfigure your alerts that meet your requirements and sit back and watch your Kubernetes cluster cruise along, knowing that it\u0026rsquo;s all under control. Yeah, we know its harder than that, but it does make us feel better to see whats going on and know we\u0026rsquo;ll get a message when we\u0026rsquo;re not paying attention right?\nOK, before we sign off, I wanted to show one more custom dashboard that I was able to very quickly create to show network traffic between my linux nodes. I don\u0026rsquo;t know why but staring at these network maps just gives me a good feeling. As with all these dashboards, you can edit the scope to narrow down the time the metrics are displaying. This is really a must with this much data. Narrowing down the scope to a time when you know events happened, is a really important feature for troubleshooting issues.\nSummary Sysdig is doing some pretty cool stuff with monitoring and thats good news because we need some good tools to monitor our new container infrastructure. There is a lot more that they are doing such as with their \u0026ldquo;Secure\u0026rdquo; product for auditing and compliance which I didn\u0026rsquo;t talk about in this post. If you want the real low-down on what Sysdig\u0026rsquo;s up to, go check out a trial for yourself or contact a sales rep. Have fun!\n","permalink":"https://theithollow.com/2019/06/23/monitoring-kubernetes-with-sysdig-monitor/","summary":"\u003cp\u003eAny system that\u0026rsquo;s going to be deployed for the enterprise needs to have at least a basic level of monitoring in place to manage it. Kubernetes is no exception to this rule. When we, as a community, underwent the shift from physical servers to virtual infrastructure, we didn\u0026rsquo;t ignore the new VMs and just keep monitoring the hardware, we had to come up with new products to monitor our infrastructure. \u003ca href=\"https://sysdig.com/\"\u003eSysdig\u003c/a\u003e is building these new solutions for the Kubernetes world.\u003c/p\u003e","title":"Monitoring Kubernetes with Sysdig Monitor"},{"content":"We\u0026rsquo;re getting into the habit of tagging everything these days. 
It\u0026rsquo;s been drilled into our heads that we don\u0026rsquo;t care about names of our resources anymore because we can add our own metadata to resources to later identify them, or to use for automation. But up until June 6th, AWS wouldn\u0026rsquo;t let us tag one of the most important resources of all, our accounts.\nOn June 6th though, our cloud world changed when AWS announced that we can now add tags to our accounts through organizations.\nWhy would we want to tag accounts? Well, that\u0026rsquo;s a bit of an open ended question. You may be tagging accounts to identify what the purpose of that account was for, who\u0026rsquo;s responsible for it, whether it has sensitive data in it, and about another thousand things that you might come up with. The world is your oyster as they say.\nTo tag your AWS Account simple login to your AWS Organizations portal where you can view your list of accounts. Select the account to update and then click the \u0026ldquo;EDIT TAGS\u0026rdquo; link on the right hand side of the screen.\nCreate your tags based on whatever rules your organization has and what you\u0026rsquo;re using the accounts for. Below are some common examples that could be applied at the account level. Save your changes.\nWhen you go back to the accounts screen, you\u0026rsquo;ll now see the tags listed when you select the account. This might be an easy way to identify who owns the account.\nIf you\u0026rsquo;ve got an up to date version of the awscli, you can also list tags for your resource within the organizations command.\nSummary Now we can use tags on our AWS Accounts just like the rest of our AWS resources. This may seem like a trivial update that AWS announced, but this is important functionality for a cloud to have and for organizations to take advantage of. What tags will you use on your AWS Accounts? Post your suggestions in the comments.\n","permalink":"https://theithollow.com/2019/06/17/aws-account-tagging/","summary":"\u003cp\u003eWe\u0026rsquo;re getting into the habit of tagging everything these days. It\u0026rsquo;s been drilled into our heads that we don\u0026rsquo;t care about names of our resources anymore because we can add our own metadata to resources to later identify them, or to use for automation. But up until June 6th, AWS wouldn\u0026rsquo;t let us tag one of the most important resources of all, our accounts.\u003c/p\u003e\n\u003cp\u003eOn June 6th though, our cloud world changed when \u003ca href=\"https://aws.amazon.com/about-aws/whats-new/2019/06/aws-organizations-now-supports-tagging-and-untagging-of-aws-acco/\"\u003eAWS announced\u003c/a\u003e that we can now add tags to our accounts through organizations.\u003c/p\u003e","title":"AWS Account Tagging"},{"content":"The Kubernetes series has now ventured into some non-native k8s discussions. Helm is a relatively common tool used in the industry and it makes sense to talk about why that is. This post covers the basics of Helm so we can make our own evaluations about its use in our Kubernetes environment.\nHelm - The Theory So what is Helm? In the most simplest terms its a package manager for Kubernetes.\nThink of Helm this way, Helm is to Kubernetes as yum/apt is to Linux. Yeah, sounds pretty neat now doesn\u0026rsquo;t it?\nJust like yum and apt, Helm is configured with a public repository where shared software packages are maintained for users to quickly access and install. For Helm, this is called the \u0026ldquo;stable\u0026rdquo; repo and it\u0026rsquo;s managed by Google. 
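On a fresh Helm v2 client, listing the configured repos shows just that default. The output below is a rough sketch from around the time of writing, so treat the URL as illustrative rather than authoritative:
helm repo list
NAME     URL
stable   https://kubernetes-charts.storage.googleapis.com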
You can of course add additional repositories if you\u0026rsquo;re managing your own Helm charts.\nHelm (as of version 2) uses a Kubernetes pod named \u0026ldquo;Tiller\u0026rdquo; as a server that interacts with the K8s api. Helm also has its own client which as you might guess interacts with Tiller to take actions on the Kubernetes cluster. The example below shows the basic setup and interactions with Kubernetes.\nHelm uses charts to define the details about software packages such as default values or variables, dependencies, templates, readmes and licenses, some of which are optional. We won\u0026rsquo;t go into much detail about this in an introductory post, but the file structure for creating Helm charts is shown below.\n/chartname Chart.yaml #file containing information about the chart requirements.yaml #optional file used for listing dependencies values.yaml #default parameter values for the chart charts/ #any dependent charts would be stored here templates/ #templates here are combined with values to generate k8s manifests LICENSE #optional README.md #optional The big piece to understanding what Helm is doing is in the templates. When you combine the Helm templates with Helm values it creates the valid k8s manifests for the Kubernetes API to deploy. This gives us quite a bit of flexibility to do things like calculate values dynamically before deploying a static manifest to the Kubernetes cluster.\nOK, Helm sounds pretty neat, why isn\u0026rsquo;t it always used? Well, there are a couple of drawbacks at the moment. The first of which is around Tiller, which needs to be able to act as a Kubernetes administrator via RBAC. For Tiller to deploy our Deployments, Secrets, Services, etc., it will need access to all of those components. When a user uses the helm client to interact with Tiller, we\u0026rsquo;ve basically give our clients Admin access to the cluster which is a problem. Helm v3 (when released) aims to fix this RBAC by removing tiller from the k8s cluster. There are also some things to think about such as whether or not you really want to be able to use dynamic variables in your YAML or not.\nHelm - In Action For the lab portion of this post, we\u0026rsquo;ll install the Helm components and deploy a package from a repo. If you like the idea of Helm, you could continue your learning by writing your own Helm charts, but its outside the scope of this post.\nTo get started, we need to deploy Tiller to our Kubernetes cluster. To do this, we\u0026rsquo;ll start by deploying a manifest like we\u0026rsquo;ve done many times before. This manifest sets up a service account and role for Tiller to perform actions against the cluster with appropriate (well, maybe too much) permissions.\nCopy the manifest code below to a yml file and apply it against your k8s cluster as we did in the rest of this series.\napiVersion: v1 kind: ServiceAccount metadata: name: tiller namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tiller roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: tiller namespace: kube-system Next up, we\u0026rsquo;ll need to install the Helm client on our workstation. I am using a Mac so I used homebrew to do this, but it can be done for other operating systems as well. 
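On Linux, for example, grabbing a released binary and dropping it into your PATH looks roughly like this; the version number is just an example from around the time this was written, so check the releases page for something current:
curl -LO https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
tar -xzf helm-v2.14.3-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm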
Please see the official documentation if you aren\u0026rsquo;t using homebrew on Mac.\nhttps://helm.sh/docs/using_helm/#installing-helm\nI\u0026rsquo;m running the command:\nbrew install kubernetes-helm Now that the Helm client is installed, we can use it do initialize Helm which deploys Tiller for us, using the service account we created through the previously mentioned manifest file. To do this, run:\nhelm init --service-account [service account name] After the init process finishes, you\u0026rsquo;re pretty much ready to use Helm to deploy packages. Lets take a look at some other Helm stuff first.\nIf we run \u0026ldquo;helm repo list\u0026rdquo; we can see a list of the available repositories where our packages could be stored. You\u0026rsquo;ll notice taht out of the box, the \u0026ldquo;stable\u0026rdquo; repo is already configured for us.\nIts usually a good idea to update your repos, just like you might do with yum or apt so lets run a helm repo update before we try to apply anything.\nOK, now lets go find some software to deploy. We can use the search feature to look for software we might want to deploy. Below I\u0026rsquo;ve used search to find both Jenkins as well as mysql. You\u0026rsquo;ll notice there are many versions of mysql but only one version of Jenkins at the moment.\nNow as a deployment test, we\u0026rsquo;ll try to deploy Jenkins through Helm. Here we\u0026rsquo;ll run the helm install command and the name of the package + our common name that we\u0026rsquo;ll use for the package. For example:\nhelm install [repo]/[package] --name [common name] You\u0026rsquo;ll notice from the screenshot above, that jenkins was deployed into the default namespace and there are a list of resources (the full list is truncated) that were deployed in our k8s cluster. You\u0026rsquo;ll also notice that at the bottom of the return data there are some notes.\nThese notes are pretty helpful in getting stated with the package we just deployed. For example in this case it shows how to start using Jenkins and the URL to access.\nIf we were to use the kubectl client, we can see our new pod and service in our cluster.\nAnd even better, we can see that we can access the Jenkins service and the application.\nSummary Helm, may be a pretty great way to deploy common packages from vendors just like apt/yum have been for linux. Its also a great tool if you want to dynamically update parameters for your Kubernetes manifests at deployment time without re-writing your static manifest files for each environment. Play around with Helm and see what you think, and watch out for Helm v3 if permissions are a concern of yours.\n","permalink":"https://theithollow.com/2019/06/10/kubernetes-helm/","summary":"\u003cp\u003eThe Kubernetes series has now ventured into some non-native k8s discussions. Helm is a relatively common tool used in the industry and it makes sense to talk about why that is. This post covers the basics of Helm so we can make our own evaluations about its use in our Kubernetes environment.\u003c/p\u003e\n\u003ch2 id=\"helm---the-theory\"\u003eHelm - The Theory\u003c/h2\u003e\n\u003cp\u003eSo what is Helm? In the most simplest terms its a package manager for Kubernetes.\u003cbr\u003e\nThink of Helm this way, Helm is to Kubernetes as yum/apt is to Linux. Yeah, sounds pretty neat now doesn\u0026rsquo;t it?\u003c/p\u003e","title":"Kubernetes - Helm"},{"content":"The focus of this post is on pod based backups, but this could also go for Deployments, replica sets, etc. 
This is not a post about how to backup your Kubernetes cluster including things like etcd, but rather the resources that have been deployed on the cluster. Pods have been used as an example to walk through how we can take backups of our applications once deployed in a Kubernetes cluster.\nPod Backups - The Theory Typically, when I\u0026rsquo;m talking about backups, the \u0026ldquo;theory\u0026rdquo; part is almost self-explanatory. If you deploy a production workload, you must ensure that it\u0026rsquo;s also backed up in case there is a problem with the application. However, with Kubernetes, you can certainly make the argument that a backup is not necessary.\nNo, this doesn\u0026rsquo;t mean that you can finally stop worrying about backups altogether, but Kubernetes resources are usually designed to handle failure. If a pod dies, it can be restarted using replica-sets etc. Also, it\u0026rsquo;s possible that a human makes an error and a deployment gets removed or something. Again, this is probably easily fixed by re-deploying the pods through a Kubernetes manifest file stored in version control for safe keeping.\nWhat about state though? Sometimes our pods contain state data for our application. This is really where a backup seems most appropriate to me in a Kubernetes cluster because that state data can\u0026rsquo;t be restored through redeploying your pods from code. Users may have logged into your app, created a profile, etc and this is important data that needs to be restored if the app is to be considered production. My personal views on this (yours may vary) is that if you have stateful data, perhaps that data should be stored outside of your Kubernetes cluster. For example if your app writes to a database, maybe a services such as Amazon RDS might be a better place to store that data than a pod with persistent volumes. I\u0026rsquo;ll let you make this decision.\nIt is also possible to use backups for reasons other than disaster recovery. Maybe we\u0026rsquo;ve got a bunch of pods deployed but are migrating them to a new cluster. Backup/restore processes might be an excellent way for migrating those apps.\nPod Backups - In Action For the demo section of this post, we\u0026rsquo;re going to use a tool called \u0026ldquo;Velero\u0026rdquo; (formerly named Ark) which is open-sourced software provided by Heptio. The tool will backup our pod(s) to object storage, in this case an S3 bucket in AWS, and then restore this pod to our Kubernetes cluster once disaster strikes.\nSetup Velero Before we can do anything, we need to install the velero client on our workstation. Much like we installed the kubectl client, we need a client for velero as well. Download the binaries and place them in your system path.\nOnce the client has been installed, we need to run the setup, which does things like deploy a new namespace named velero and a pod. We\u0026rsquo;ll also configure our authentication to AWS so we can use a pre-created S3 bucket for storing our backups.\nThe code below shows the command run from my workstation to setup Velero in my lab. 
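One prerequisite before the install command that follows: the S3 bucket has to exist already. If you still need to create one, the AWS CLI makes quick work of it (the bucket name below is just a placeholder, and bucket names must be globally unique):
aws s3 mb s3://my-velero-backups --region us-east-1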
You\u0026rsquo;ll notice I\u0026rsquo;ve set a provider to AWS, entered the name of an S3 bucket created, specified my aws credentials file with access to the s3 bucket and then specified the AWS region to be used.\nvelero install \\ --provider aws \\ --bucket theithollowvelero \\ --secret-file ~/.aws/credentials \\ --backup-location-config region=us-east-1 After running this command, we can checkout whats happened in our kubernetes cluster by checking out the velero namespace. You can see there is a deployment created in that namespace now thanks to the velero install command.\nOK, velero should be ready to go now. Before we can demonstrate it we\u0026rsquo;ll need a test pod to restore. I\u0026rsquo;ve deployed a simple nginx pod in a new namespace called \u0026ldquo;velerotesting\u0026rdquo; that we\u0026rsquo;ll use to backup and restore. You can see the details below.\nBackup Pod Time to run a backup! This is really simple to do with Velero. To run a one-time backup of our pod, we can run:\nvelero backup create [backup job name] --include-namespaces [namespace to backup] That\u0026rsquo;s it! Our backup is being created and if I take a look at the S3 bucket where our backups are stored, I now see some data in this bucket.\nI wouldn\u0026rsquo;t be doing Velero enough justice to leave it at this though. Velero can also run backups on a schedule and instead of backing up an entire namespace, backups can be created based on labels of your pods as well. To find more information about all the different capabilities of Velero, please see their official documentation. This post is focusing on a 1 time backup of an entire namespace.\nRestore Pod Everything seems fine, so let\u0026rsquo;s change that for a second. I\u0026rsquo;ll delete the namespace (and the included pods) from the cluster.\nHere is our nginx container that is no longer working in our cluster.\nOh NO! We need to quickly restore our nginx container ASAP. Declare an emergency and prep the bridge call!\nTo do a restore of our pods and namespace, we\u0026rsquo;ll run:\nvelero restore create --from-backup [backup name] Just in case you can\u0026rsquo;t remember the name of your backup, you can also use velero to show your backups via the get command if you need to review your backups.\nLet\u0026rsquo;s check on the restore now. Here\u0026rsquo;s our nginx site.\nSummary I know this seems like a silly example since our nginx pod is really basic, but it should give you an idea of what can be done with Velero to backup your workloads. A quick note, that if you are using a snapshot provider to check and see if its supported. https://velero.io/docs/v1.0.0/support-matrix/\nIf you plan to use your Kubernetes cluster in production, you should plan to have a way to protect your pods whether through version control of stateless applications or with backups and tools like Velero.\n","permalink":"https://theithollow.com/2019/06/03/kubernetes-pod-backups/","summary":"\u003cp\u003eThe focus of this post is on pod based backups, but this could also go for Deployments, replica sets, etc. This is not a post about how to backup your Kubernetes cluster including things like etcd, but rather the resources that have been deployed on the cluster. 
Pods have been used as an example to walk through how we can take backups of our applications once deployed in a Kubernetes cluster.\u003c/p\u003e","title":"Kubernetes - Pod Backups"},{"content":"As with all systems, we need to be able to secure a Kubernetes cluster so that everyone doesn\u0026rsquo;t have administrator privileges on it. I know this is a serious drag because no one wants to deal with a permission denied error when we try to get some work done, but permissions are important to ensure the safety of the system. Especially when you have multiple groups accessing the same resources. We might need a way to keep those groups from stepping on each other\u0026rsquo;s work, and we can do that through role based access controls.\nRole Based Access - The Theory Before we dive too deep, lets first understand the three pieces in a Kubernetes cluster that are needed to make role based access work. These are Subjects, Resources, and Verbs.\nSubjects - Users or processes that need access to the Kubernetes API. Resources - The k8s API objects that you\u0026rsquo;d grant access to Verbs - List of actions that can be taken on a resource These three items listed above are used in concert to grant permissions such that a user ( Subject) is allowed access to take an action ( verb) on an object ( Resource).\nNow we need to look at how we tie these three items together in Kubernetes. The first step will be to create a Role or a ClusterRole. Now both of these roles will be used to tie the Resources together with a Verb, the difference between them is that a Role is used at a namespace level whereas a ClusterRole is for the entire cluster.\nOnce you\u0026rsquo;ve created your Role or your Cluster Role, you\u0026rsquo;ve tied the Resource and Verb together and are only missing the Subject now. To tie the Subject to the Role, a RoleBinding or ClusterRoleBinding is needed. As you can guess the difference between a RoleBinding or a ClusterRoleBinding is whether or not its done at the namespace or for the entire Cluster, much like the Role/ClusterRole described above.\nIt should be noted that you can tie a ClusterRole with a RoleBinding that lives within a namespace. This enables administrators to use a common set of roles for the entire cluster and then bind them to a specific namespace for use.\nRole Based Access - In Action Let\u0026rsquo;s see some of this in action. In this section we\u0026rsquo;ll create a service account ( Subject) that will have full permissions ( Verbs) on all objects ( Resource) in a single namespace.\nWe can assume that multiple teams are using our k8s cluster and we\u0026rsquo;ll separate them by namespaces so they can\u0026rsquo;t access each others pods in a namespace that is not their own. I\u0026rsquo;ve already created a namespace named \u0026ldquo;hollowteam\u0026rdquo; to set our permissions on.\nLets first start by creating our service account or Subject. Here is a manifest file that can be deployed for our cluster to create the user. Theres not too much to this manifest, but a ServiceAccount with name \u0026ldquo;hollowteam-user\u0026rdquo; and the namespace it belongs to.\n--- apiVersion: v1 kind: ServiceAccount metadata: name: hollowteam-user namespace: hollowteam Next item is the Role which will tie our user \u0026ldquo;hollowteam-user\u0026rdquo; to the verbs. In the below manifest, we need to give the role a name and attach it to a namespace. Below this, we need to add the rules. The rules are going to specify the resource and the verb to tie together. 
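As a point of contrast with the full-access role we are about to build, a deliberately narrow rule set, say read-only access to pods, would look something like this (illustrative only):
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]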
You can see we\u0026rsquo;re tying resources of \u0026ldquo;*\u0026rdquo; [wildcard for all] with a verb.\n--- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: hollowteam-full-access namespace: hollowteam rules: - apiGroups: [\u0026#34;\u0026#34;, \u0026#34;extensions\u0026#34;, \u0026#34;apps\u0026#34;] resources: [\u0026#34;*\u0026#34;] verbs: [\u0026#34;*\u0026#34;] Then finally we tie the Role to the Service Account through a RoleBinding. We give the RoleBinding a name and assign the namespace. After this we list the Subjects. You could have more than one subject, but in this case we have one, and it is the Service Account we created earlier (hollowteam-user) and the namespace its in. Then the last piece is the roleRef which is the reference to the Role we created earlier which was named \u0026ldquo;hollowteam-full-access\u0026rdquo;.\n--- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: hollowteam-role-binding namespace: hollowteam subjects: - kind: ServiceAccount name: hollowteam-user namespace: hollowteam roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: hollowteam-full-access At this point I assume that you have deployed all these manifest files with the command below:\nkubectl apply -f [manifestfile.yml] At this point things should be ready to go, but you\u0026rsquo;re likely still logging in with administrator privileges through the admin.conf KUBECONFIG file. We need to start logging in with different credentials; the hollowteam-user credentials. To do this we\u0026rsquo;ll create a new KUBECONFIG file and use it as part of our connection information.\nTo do this for our hollowteam-user, we need to run some commands to generate the details for our KUBECONFIG file. To start, we need to get the ServiceAccount token for our hollowteam-user. To do this run:\nkubectl describe sa [user] -n [namespace] Now we can see the details from the example user I created earlier named \u0026ldquo;hollowteam-user\u0026rdquo;. Note down the service account token which in my case is hollowteam-user-token-b25n4.\nNext up, we need to grab the client token. To do this we need to get the secret used for our hollowteam-user in base64. To do this run:\nkubectl get secret [user token] -n [namespace] -o \u0026#34;jsonpath={.data.token}\u0026#34; | base64 -D The output of running this command on my user token is shown below. Note down this client token for use later.\nThe last thing we need to gather is the Client Certificate info. To do this, we\u0026rsquo;ll use our user token again and run the following command:\nkubectl get secret [user token] -n [namespace] -o \u0026#34;jsonpath={.data[\u0026#39;ca\\.crt\u0026#39;]}\u0026#34; The details of my certificate are shown below. Again, copy down this output for use in our KUBECONFIG file.\nNext, we can take the data we\u0026rsquo;ve gathered and throw it in a new config.conf file. Use the following as a template and place your details in the file.\napiVersion: v1 kind: Config preferences: {} users: - name: [user here] user: token: [client token here] clusters: - cluster: certificate-authority-data: [client certificate here] server: https://[ip or dns entry here for cluster:6443 name: [cluster name.local] contexts: - context: cluster: [cluster name.local] user: [user here] namespace: [namespace] name: [context name] current-context: [set context] The final product from my config is shown below for reference. 
(Don\u0026rsquo;t worry, you won\u0026rsquo;t be able to use this to access my cluster.)\napiVersion: v1 kind: Config preferences: {} users: - name: hollowteam-user user: token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJob2xsb3d0ZWFtIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImhvbGxvd3RlYW0tdXNlci10b2tlbi1iMjVuNCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJob2xsb3d0ZWFtLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhYjUyYzc4Yi03NWU0LTExZTktYTQ1Ny0wMDUwNTY5MzQyNTAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6aG9sbG93dGVhbTpob2xsb3d0ZWFtLXVzZXIifQ.vpqfaqRq1DI55lRcB6YwSn7sqdMspHHPEI7wWQ2XmVB8SqXiF8OW0e1lXc169Z1RcjTQUhSZfWuORm_pGXZBRuh6r1vS7tCxZR-MiCAM144A9_a6I1Q8F2WfAE5bT1q0YvfKMUiWaHLtewWSZG6pCK_USCWAvFP4tgCa5h83WU_Br-fYKt4n7JT2CglC5qnIk8RPxrY7Kj13NthUkKHyVdsCt43zbh82zg8tJqX6yCqcglLKXNxSRkYVKxREF-PLmA0S2lc4UE_GWLlDM5r69_ZyRTN3-qUIqV4k3EJTqdNMPt13SzZ1kguX-hT6NkadZ2VSXG3aKeasC6TVE1T53Q clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNU1EVXdNVEF5TURJeE5Wb1hEVEk1TURReU9EQXlNREl4TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTXZzCnN0alYyQjJ2YW5pa1BGZDErMVNKaEpUR1lJUkRXR0xjcGhValg4cnZ5cndMcE1zdGlRZFFCK2prTjdZWmk4QmYKVHRVeWp4RXZYUnlmbndGWUhDK3huTUFPaTREbzR5MHpoemdibkxyY2NlVUJoV1pwcnBPeXdUOE1aakltaUI5LwpYL1M4bUFHdU8xRVphOS9kNjhUSXVHelkxZzBlQWUvOG93Rkx4MDBPNkY3dUd1RmwrNitpdEgxdzlUNnczbjFZCnpZbzdoMzlDeElDZGd1YjFMNTV4cWlTbVAyYnpJT2UybmsyclRxOGlKR3VveEM3Q01qaUxzQWFqNjRwemFHNWEKVHNXeXdqY3d0aWgxZGxTTzhjR3oveUNrdHRwTHJqZHhLbGdjRjRCL1pOb0NQQ3FRbU9iYmlid2dvUEZpZnpQdgpVY2lGWWNZUm1PT0FuRmYxRUNFQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJcjEvUDljRFFjNDRPaHdkUjQzeG51VnlFM0cKL2FwbTdyTzgrSW54WGc1ZkkzSHdDTjNXSGowYXZYS0VkeHhnWFd5bjNkMVNIR0Y2cnZpYWVvMFNEMjNoUTZkcQp4SW9JaDVudi9WSUZ4OWdXQ1hLMnVGV3RqclUvUTI0aHZVcnRzSTVWV1Uwc0EzWjdoNDVVUUhDa25RVkN4N3NMCjZQYmJiY3F0dm1aQzk0TEhnS0VCUEtPMWpGWjRHcEF6d0ZxSStmWUQ2aHF3dk5kWC9PQVpDRUtLejJSOFZHT2MKS2N4b2k4cVRYV0NYL0x4OU1JSEFEcG1wUjFqT3p1Q3FEQ0RLc0VJdEhidUtWRHdHZzFlMFZPY3Q2Y0Fsc2dsawp5NkNpYW45aE9oYnVBTm00eHZGbTFpK2tBeFNRTkczb0x1d3VrU0NMMHkyNlYyT3FZcFMzdGFYVGkwdz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= server: https://10.10.50.160:6443 name: hollowcluster.local contexts: - context: cluster: hollowcluster.local user: hollowteam-user namespace: hollowteam name: hollowteam current-context: hollowteam Save this file and then update your KUBECONFIG info which is usually done by updating your KUBECONFIG environment variable. On my Mac laptop I used:\nexport KUBECONFIG=~/.kube/newfile.conf Now its time to test. If we did everything right, the logged in entity should only have access to the hollowteam namespace. Lets first test to make sure that we can access our hollowteam namespaces resources by doing a get pods command.\nThe first command works and shows that we can access the pod (that I pre-created for this demo) and everything looks fine there. What about if we try a command that affects things outside of the namespace?\nHere we see that we get an error message when we try to access pods that are outside of our namespace. 
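As a side note, an administrator can sanity-check these bindings without switching kubeconfig files at all by asking the API server to evaluate access on behalf of the service account. A rough sketch, run from the admin context:
kubectl auth can-i get pods -n hollowteam --as system:serviceaccount:hollowteam:hollowteam-user   # returns yes
kubectl auth can-i get pods -n default --as system:serviceaccount:hollowteam:hollowteam-user      # returns no
And for reference, the failure above typically looks something like the following, though the exact wording varies by Kubernetes version:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:hollowteam:hollowteam-user" cannot list resource "pods" in API group "" in the namespace "default"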
Again, this was the desired result.\nSummary Role Based Access controls are part of the basics for most platforms and Kubernetes is no different. Its useful to carefully plan out your roles so that they can be reused for multiple use cases but I\u0026rsquo;ll leave this part up to you. I hope the details in this post will help you to segment your existing cluster by permissions, but don\u0026rsquo;t forget that you can always create more clusters for separation purposes as well. Good luck in your container endeavors.\n","permalink":"https://theithollow.com/2019/05/20/kubernetes-role-based-access/","summary":"\u003cp\u003eAs with all systems, we need to be able to secure a Kubernetes cluster so that everyone doesn\u0026rsquo;t have administrator privileges on it. I know this is a serious drag because no one wants to deal with a permission denied error when we try to get some work done, but permissions are important to ensure the safety of the system. Especially when you have multiple groups accessing the same resources. We might need a way to keep those groups from stepping on each other\u0026rsquo;s work, and we can do that through role based access controls.\u003c/p\u003e","title":"Kubernetes - Role Based Access"},{"content":"Learning new things can be pretty exciting, and lucky for IT Professionals, there is no lack of things to learn. But this exciting world of endless configurations, code snippets, routes, and processes can have a demoralizing effect as well when you\u0026rsquo;re constantly bombarded with things you don\u0026rsquo;t know.\nGrowth Hurts a Little I\u0026rsquo;m not immune to the feelings of stupidity. I work with some smart folks in my day job as well as smart customers. I see what people are doing on twitter and realize that no matter what I already know, there is so much more that I could know.\nEvery time that I start learning a new concept I feel this feeling of dread, where I know I\u0026rsquo;m going to spend some time feeling like an idiot. Usually my process begins with reading blog posts and documentation online and then trying stuff in my lab. When I inevitably fail, I begin googling or start asking questions from whatever colleague I know who\u0026rsquo;s done this before.\nTo me, this is really the most painful part. Admitting to a colleague that you don\u0026rsquo;t understand something is hard to do, especially if you have to ask multiple questions in sequence which shows that you really don\u0026rsquo;t understand how it works. I think this is hard to deal with, especially if you\u0026rsquo;ve achieved a lot in another area where you are considered an authority on a different subject. Now that you\u0026rsquo;re learning something new, you\u0026rsquo;re starting at the bottom again.\nFrom time to time I see evidence online that I\u0026rsquo;m not alone with these thoughts. Take the tweet below from Jeffrey Snover from Microsoft.\nFirst, I hope Mr. Snover doesn\u0026rsquo;t mind me using his tweet as an example, but I don\u0026rsquo;t think he\u0026rsquo;ll take offense to it. If you\u0026rsquo;re not familiar with Mr. Snover\u0026rsquo;s work directly, you\u0026rsquo;ve probably seen or used the results of his career. If you look at Mr. Snover\u0026rsquo;s linkedin page you\u0026rsquo;ll see that he\u0026rsquo;s very credentialed. 
Distinguished Engineer/Lead Architect for Windows Server, Technical Fellow/Chief Architect for Azure Infrastructure and Management Team, and Technical Fellow/Architect for Office 365 Substrate. Oh, I almost forgot, if you google \u0026ldquo;The Father of PowerShell\u0026rdquo;, you\u0026rsquo;ll find his name there too. By all accounts a smart person who has enough confidence in his abilities in the areas he does know, to be self-deprecating on twitter with a tweet like that.\nIt\u0026rsquo;s all part of the process\u0026hellip; Feeling stupid isn\u0026rsquo;t a feeling I\u0026rsquo;m very comfortable with. In fact, that feeling of stupidity is sometimes the very thing that motivates me. I don\u0026rsquo;t like it and so the only way to fix it is to learn more about that topic so I don\u0026rsquo;t have to feel that way anymore. But I have to remind myself that this is all part of the process.\nThink of learning in terms of exercising. If you haven\u0026rsquo;t been on a consistent exercise regimen, and then begin training, you\u0026rsquo;ll probably feel very tired and sore after working out. You might not like this feeling very much, but know that it\u0026rsquo;s part of the process of getting fit. Why wouldn\u0026rsquo;t we consider that feeling stupid is also part of the normal process of learning?\nAs you exercise more, you can run farther at a time and feel less sore. As you keep studying, you learn more about your subject matter and feel less stupid. The question really comes down to what you\u0026rsquo;re going to do when you have those feelings.\nPushing Forward You\u0026rsquo;ll inevitably need to learn something that you know nothing about. You\u0026rsquo;ll probably feel stupid for a little while, or a long while. You might have to admit your ignorance to colleagues to get yourself over some hurdles so you can go tackle the next one. But the growth you\u0026rsquo;ll gain from admitting that you don\u0026rsquo;t know something is worth the effort.\nKeep trying, admit that you don\u0026rsquo;t know the material and work to find the right answers. Eventually those skills will become second nature to you and you\u0026rsquo;ll forget about all the pain it took to get you there.\nYour colleagues won\u0026rsquo;t think you\u0026rsquo;re stupid for asking questions. Especially if you\u0026rsquo;re asking the right questions and showing progress. Hopefully you\u0026rsquo;re in a situation where it\u0026rsquo;s OK to ask questions and admit your weaknesses.\nLimit your Learning in Progress It\u0026rsquo;s good to learn new things, but try not to learn too many new things at one time. Think of it this way: what if you try to learn a bunch of new things all at the same time and they all make you feel stupid? The feeling stupid part might be natural, but if every task you attempt makes you feel stupid, then this can have a demoralizing effect.\nIf every task you start makes you feel stupid, you might just wonder if you ARE stupid. Don\u0026rsquo;t do this to yourself. Limit your \u0026ldquo;Learning in Progress\u0026rdquo; (LIP) to a couple of things at a time. Just as too much Work in Progress (WIP) is bad for completing tasks, too much Learning in Progress can also be bad. Start tackling a subject and when you feel more comfortable with that subject, then you can move on. One thing at a time.\nSummary The unfortunate truth is that you probably will feel stupid if you\u0026rsquo;re doing it right. 
It\u0026rsquo;s simply part of the process and if you feel stupid, you\u0026rsquo;re probably on the right path. That feeling won\u0026rsquo;t last forever if you\u0026rsquo;ve got the guts to keep asking the right questions.\nWell, until you move on to the next topic you don\u0026rsquo;t understand at least.\n","permalink":"https://theithollow.com/2019/04/08/should-i-feel-this-stupid/","summary":"\u003cp\u003eLearning new things can be pretty exciting, and lucky for IT Professionals, there is no lack of things to learn. But this exciting world of endless configurations, code snippets, routes, and processes can have a demoralizing effect as well when you\u0026rsquo;re constantly bombarded with things you don\u0026rsquo;t know.\u003c/p\u003e\n\u003ch2 id=\"growth-hurts-a-little\"\u003eGrowth Hurts a Little\u003c/h2\u003e\n\u003cp\u003eI\u0026rsquo;m not immune to the feelings of stupidity. I work with some smart folks in my day job as well as smart customers. I see what people are doing on twitter and realize that no matter what I already know, there is so much more that I could know.\u003c/p\u003e","title":"Should I Feel this Stupid?"},{"content":"We love deployments and replica sets because they make sure that our containers are always in our desired state. If a container fails for some reason, a new one is created to replace it. But what do we do when the deployment order of our containers matters? For that, we look for help from Kubernetes StatefulSets.\nStatefulSets - The Theory StatefulSets work much like a Deployment does. They contain identical container specs but they ensure an order for the deployment. Instead of all the pods being deployed at the same time, StatefulSets deploy the containers in sequential order where the first pod is deployed and ready before the next pod starts. (NOTE: it is possible to deploy pods in parallel if you need them to, but this might confuse your understanding of StatefulSets for now, so ignore that.) Each of these pods has its own identity and is named with a unique ID so that it can be referenced.\nOK, now that we know how the deployment of a StatefulSet works, what about the failure of a pod that is part of a StatefulSet? Well, the order is preserved. If we lose pod-2 due to a host failure, the StatefulSet won\u0026rsquo;t just deploy another pod, it will deploy a new \u0026ldquo;pod-2\u0026rdquo; because the identity matters in a StatefulSet.\nStatefulSets currently require a \u0026ldquo;headless\u0026rdquo; service to manage the identities. This is a service that has the same selectors that you\u0026rsquo;re used to, but won\u0026rsquo;t receive a clusterIP address, meaning that you can\u0026rsquo;t route traffic to the containers through this service object.\nStatefulSets - In Action The example that this blog post will use for a StatefulSet comes right from the Kubernetes website for managing a mysql cluster. In this example, we\u0026rsquo;ll deploy a pair of pods with some mysql containers in them. This example also uses a sidecar container called xtrabackup which is used to aid in mysql replication between mysql instances. So why a StatefulSet for mysql? Well, the order matters of course. Our first pod that gets deployed will contain our mysql master database where reads and writes are completed. The additional containers will contain the mysql replicated data but can only be used for read operations.
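A quick note on what that stable identity gives you in practice: because of the headless service, each pod in the set is reachable at a predictable DNS name of the form [pod-name].[service-name], so other workloads can target a specific member such as mysql-0 directly. Once the pods below are running, a minimal way to see this (the busybox image and pod name are illustrative assumptions, run in the same namespace as the StatefulSet) is:
kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup mysql-0.mysql
That per-pod address is also how writes will reach the master later in this example, while reads can go through the normal mysql-read service.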
The diagram below shows the setup of the application we\u0026rsquo;ll be creating.\nThe diagram below shows our application (which is also made up of a set of containers and services, but doesn\u0026rsquo;t matter in this case) and it connects directly to one of the containers. There is a headless service used to maintain the network identity of the pods, and another service that provides read access to the pods. The Pods have a pair of containers (mysql as the main container, and xtrabackup as a sidecar for replication). And we are also creating persistent storage based on the storage class we created in the Cloud Providers and Storage Classes post.\nBefore we get to deploying anything, we will create a new configmap. This config map has configuration information used by the containers to get an identity at boot time. The mysql configuration data below ensures that the master mysql container becomes master and the other containers are read-only.\napiVersion: v1 kind: ConfigMap metadata: name: mysql labels: app: mysql data: master.cnf: | # Apply this config only on the master. [mysqld] log-bin slave.cnf: | # Apply this config only on slaves. [mysqld] super-read-only Deploy the manifest above by running:\nkubectl apply -f [manifest file].yml Next, we\u0026rsquo;ll create our mysql services. Here we\u0026rsquo;ll create the headless service for use with our StatefulSet to manage the identities, as well as a service that handles traffic for mysql reads.\n# Headless service for stable DNS entries of StatefulSet members. apiVersion: v1 kind: Service metadata: name: mysql labels: app: mysql spec: ports: - name: mysql port: 3306 clusterIP: None selector: app: mysql --- # Client service for connecting to any MySQL instance for reads. # For writes, you must instead connect to the master: mysql-0.mysql. apiVersion: v1 kind: Service metadata: name: mysql-read labels: app: mysql spec: ports: - name: mysql port: 3306 selector: app: mysql Deploy the above manifest file by running:\nkubectl apply -f [manifest file].yml Once the services are deployed, we can check them by looking at the command:\nkubectl get svc Notice that the CLUSTER-IP for the mysql service is \u0026ldquo;None\u0026rdquo;. This is our headless service for our StatefulSet.\nLastly, we deploy our StatefulSet. This manifest includes several configs that we haven\u0026rsquo;t talked about including initcontainers. The init containers spin up prior to the normal pods and take an action. In this case, they reference the pod ID and create a mysql config based on that value to ensure the pods know if they are the master pod, or the secondary pods that are read-only.\nYou\u0026rsquo;ll also see that there are scripts written into this manifest file that setup the replication by using the xtrabackup containers.\napiVersion: apps/v1 kind: StatefulSet metadata: name: mysql spec: selector: matchLabels: app: mysql serviceName: mysql replicas: 2 template: metadata: labels: app: mysql spec: initContainers: - name: init-mysql image: mysql:5.7 command: - bash - \u0026#34;-c\u0026#34; - | set -ex # Generate mysql server-id from pod ordinal index. [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 ordinal=${BASH_REMATCH[1]} echo [mysqld] \u0026gt; /mnt/conf.d/server-id.cnf # Add an offset to avoid reserved server-id=0 value. echo server-id=$((100 + $ordinal)) \u0026gt;\u0026gt; /mnt/conf.d/server-id.cnf # Copy appropriate conf.d files from config-map to emptyDir. 
if [[ $ordinal -eq 0 ]]; then cp /mnt/config-map/master.cnf /mnt/conf.d/ else cp /mnt/config-map/slave.cnf /mnt/conf.d/ fi volumeMounts: - name: conf mountPath: /mnt/conf.d - name: config-map mountPath: /mnt/config-map - name: clone-mysql image: gcr.io/google-samples/xtrabackup:1.0 command: - bash - \u0026#34;-c\u0026#34; - | set -ex # Skip the clone if data already exists. [[ -d /var/lib/mysql/mysql ]] \u0026amp;\u0026amp; exit 0 # Skip the clone on master (ordinal index 0). [[ `hostname` =~ -([0-9]+)$ ]] || exit 1 ordinal=${BASH_REMATCH[1]} [[ $ordinal -eq 0 ]] \u0026amp;\u0026amp; exit 0 # Clone data from previous peer. ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql # Prepare the backup. xtrabackup --prepare --target-dir=/var/lib/mysql volumeMounts: - name: data mountPath: /var/lib/mysql subPath: mysql - name: conf mountPath: /etc/mysql/conf.d containers: - name: mysql image: mysql:5.7 env: - name: MYSQL_ALLOW_EMPTY_PASSWORD value: \u0026#34;1\u0026#34; ports: - name: mysql containerPort: 3306 volumeMounts: - name: data mountPath: /var/lib/mysql subPath: mysql - name: conf mountPath: /etc/mysql/conf.d resources: requests: cpu: 500m memory: 1Gi livenessProbe: exec: command: [\u0026#34;mysqladmin\u0026#34;, \u0026#34;ping\u0026#34;] initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 5 readinessProbe: exec: # Check we can execute queries over TCP (skip-networking is off). command: [\u0026#34;mysql\u0026#34;, \u0026#34;-h\u0026#34;, \u0026#34;127.0.0.1\u0026#34;, \u0026#34;-e\u0026#34;, \u0026#34;SELECT 1\u0026#34;] initialDelaySeconds: 5 periodSeconds: 2 timeoutSeconds: 1 - name: xtrabackup image: gcr.io/google-samples/xtrabackup:1.0 ports: - name: xtrabackup containerPort: 3307 command: - bash - \u0026#34;-c\u0026#34; - | set -ex cd /var/lib/mysql # Determine binlog position of cloned data, if any. if [[ -f xtrabackup_slave_info ]]; then # XtraBackup already generated a partial \u0026#34;CHANGE MASTER TO\u0026#34; query # because we\u0026#39;re cloning from an existing slave. mv xtrabackup_slave_info change_master_to.sql.in # Ignore xtrabackup_binlog_info in this case (it\u0026#39;s useless). rm -f xtrabackup_binlog_info elif [[ -f xtrabackup_binlog_info ]]; then # We\u0026#39;re cloning directly from master. Parse binlog position. [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1 rm xtrabackup_binlog_info echo \u0026#34;CHANGE MASTER TO MASTER_LOG_FILE=\u0026#39;${BASH_REMATCH[1]}\u0026#39;,\\ MASTER_LOG_POS=${BASH_REMATCH[2]}\u0026#34; \u0026gt; change_master_to.sql.in fi # Check if we need to complete a clone by starting replication. if [[ -f change_master_to.sql.in ]]; then echo \u0026#34;Waiting for mysqld to be ready (accepting connections)\u0026#34; until mysql -h 127.0.0.1 -e \u0026#34;SELECT 1\u0026#34;; do sleep 1; done echo \u0026#34;Initializing replication from clone position\u0026#34; # In case of container restart, attempt this at-most-once. mv change_master_to.sql.in change_master_to.sql.orig mysql -h 127.0.0.1 \u0026lt;\u0026lt;EOF $(\u0026lt;change_master_to.sql.orig), MASTER_HOST=\u0026#39;mysql-0.mysql\u0026#39;, MASTER_USER=\u0026#39;root\u0026#39;, MASTER_PASSWORD=\u0026#39;\u0026#39;, MASTER_CONNECT_RETRY=10; START SLAVE; EOF fi # Start a server to send backups when requested by peers. 
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \\ \u0026#34;xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root\u0026#34; volumeMounts: - name: data mountPath: /var/lib/mysql subPath: mysql - name: conf mountPath: /etc/mysql/conf.d resources: requests: cpu: 100m memory: 100Mi volumes: - name: conf emptyDir: {} - name: config-map configMap: name: mysql volumeClaimTemplates: - metadata: name: data spec: accessModes: [\u0026#34;ReadWriteOnce\u0026#34;] resources: requests: storage: 10Gi Deploy the manifest file above by running:\nkubectl apply -f [manifest file].yml Once the containers have been deployed, we can check on them with the kubectl get pods command.\nkubectl get pods Here, you can see that we have two mysql pods and they contain 2 containers. Each are running and they are numbered with the [podname]-X naming structure where we have a mysql-0 and a mysql-1. Our application needs to be able to write data, so it will be configured to write to mysql-0. However, if we have other reports that are being reviewed, those read only reports could be build based on the data from the other containers to reduce load on the writable container.\nWe can deploy our application where it is writing directly to the mysql-0 pod and the application comes up as usual.\nSummary Sometimes, you need to treat your containers more like pets than cattle. In some cases not all containers are the same, like the example in this post where our mysql-0 server is our master mysql instance. When the order of deployment matters, and not all containers are identical, a StatefulSet might be your best bet.\n","permalink":"https://theithollow.com/2019/04/01/kubernetes-statefulsets/","summary":"\u003cp\u003eWe love deployments and replica sets because they make sure that our containers are always in our desired state. If a container fails for some reason, a new one is created to replace it. But what do we do when the deployment order of our containers matters? For that, we look for help from Kubernetes StatefulSets.\u003c/p\u003e\n\u003ch2 id=\"statefulsets---the-theory\"\u003eStatefulSets - The Theory\u003c/h2\u003e\n\u003cp\u003eStatefulSets work much like a Deployment does. They contain identical container specs but they ensure an order for the deployment. Instead of all the pods being deployed at the same time, StatefulSets deploy the containers in sequential order where the first pod is deployed and ready before the next pod starts. (NOTE: it is possible to deploy pods in parallel if you need them to, but this might confuse your understanding of StatefulSets for now, so ignore that.) Each of these pods has its own identity and is named with a unique ID so that it can be referenced.\u003c/p\u003e","title":"Kubernetes - StatefulSets"},{"content":"Whenever I talk cloud with a customer, there is inevitably a discussion around how much the cloud costs vs what is in the data center. 
The conversation usually starts with one of several declarations.\n\u0026ldquo;The Cloud is more expensive than on-premises but we want the capabilities anyway.\u0026rdquo;\n\u0026ldquo;We need the Cloud so we can drive down our costs.\u0026rdquo;\nWell yes, if you\u0026rsquo;ve paid attention, those are two different arguments about why you need cloud, and both of them came to different conclusions about whether or not the public cloud is more expensive or less expensive than running your own data center.\nThis post is designed to show a simple example of how the public cloud is both more expensive and less expensive, but it depends on your application architecture.\nAn Example with IaaS For the example I\u0026rsquo;m about to use, I\u0026rsquo;ve chosen a sample three-tier application that might be running on a hyper-converged solution on-premises. It includes web, app, and database virtual machines and for simplicity have all been sized at 2 vCPUs, 8 GB of virtual memory, and 30 GB of disk.\nNow before I go any further, I know that a HUGE benefit to cloud is a switch to PaaS offerings which can drive down TCO because we don\u0026rsquo;t need to maintain the underlying pieces, but I\u0026rsquo;m ignoring that argument for this example. Trust me, it IS a consideration if you\u0026rsquo;re moving to cloud. For this example IaaS is being used, because you can much more easily do IaaS on-premises than PaaS and I\u0026rsquo;m trying to show a close equivalent to the app in the cloud regardless of if you swap in PaaS for parts of it.\nSo the first step in this example is to estimate the on-premises costs for our virtual machines per month and the equivalent on AWS for the same sized VMs. Assume we\u0026rsquo;re about to do a \u0026ldquo;lift and shift\u0026rdquo; for those VMs. Lets see what the costs broke out to be.\nSo the on-premises VMs came out to about $25 per month for a 2 vCPU / 8 GB vMem / 30 GB vDisk on a popular hyper-converged solution. We then used the AWS pricing calculator to determine what those same sized VMs would cost on AWS with the standard on-demand hourly pricing. You can see that the public cloud is in fact more expensive if you run your infrastructure this way.\nI also want to note that the price on-premises doesn\u0026rsquo;t include data center overhead such as physical security, HVAC, power, property taxes, etc. It only counts the hardware it runs on. Those items are tough to calculate since every region has different power prices, data centers are different sized so cooling is hard to calculate, etc., so we\u0026rsquo;ve ignored them. Clearly the price of the instances in AWS have these additional items included in the price so the AWS price looks inflated already, but you get the idea.\nNow, lets start making some changes to the application in AWS to take advantage of its features while still keeping the application IaaS based on EC2 only.\nAdd Reserved Instances The simplest way to decrease your costs is to add a Reserved Instance (RI). An AWS RI is a way for you to commit to running this instance long term and you can get a discount. These RIs come in 1 year or 3 year terms and since the calculations for the on-premises VMs used a three year depreciation schedule, I\u0026rsquo;ve used a three year RI to see what happens to the price. Note that no architectural changes were made to the app at this point.\nWe didn\u0026rsquo;t do anything other than commit to running these instances full time for the next three years and purchased an RI. 
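To make that concrete with purely hypothetical numbers (actual RI discounts vary by instance family, region, term, and payment option, so treat this strictly as an illustration): if one of these instances ran about $60 per month on demand, a three-year reservation in the commonly cited 40 to 60 percent discount range would bring it down to somewhere around $25 to $35 per month, without touching the application at all.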
You can see the price of those AWS EC2 instances came way down and they are almost as cost effective as the VMs on-prem now. If you factor in the data center overhead, they probably are more cost effective now but based on our premise, we\u0026rsquo;re still cheaper to run on-premises than in the cloud.\nChange the Architecture Let\u0026rsquo;s make one more change to our application to add in some elasticity. This is still an IaaS play because we\u0026rsquo;re only running on EC2 instances. Meaning, we haven\u0026rsquo;t swapped out the application server with Lambda [serverless] functions, or the database with RDS [database PaaS solution] or anything like that.\nHere we\u0026rsquo;ve taken the three-tier app, resized it and added an autoscaling group for the web and app tier. Let\u0026rsquo;s break down what happened.\nWe resized our Web and App EC2 instances from 2 vCPU and 8 GB of memory to 1 vCPU and 2 GB of memory. After we did that we added 3 year Reserved Instances to those to further drive down the price but we know those instances will run 100% of the time. Our database remained the same size with its own 3 year RI as well.\nThe blue boxes are the interesting parts. We obviously sized the VMs on-prem for a reason, most likely to hit maximum spikes in demand. In the cloud we can scale out our application when we need extra capacity. So we\u0026rsquo;re allowing our application tier and web tier to scale (they scale the same in this example) and the instances aren\u0026rsquo;t always provisioned. In the example I have one Web EC2 instance running 100% of the time. A second Web instance runs 25% of the time, a third runs 10% of the time and a fourth runs 5% of the time to hit those top peaks in demand. The same is done for the application tier, and you can now see that the web service and the application layer scale independently from each other.\nWe finally have an application that is cheaper to run in the cloud than on-premises even though we didn\u0026rsquo;t use PaaS services, or take into account data center overhead like HVAC, physical security, power or taxes.\nSummary The decision about whether the cloud is cheaper or more expensive really comes down to how you are building your applications more than price per unit. AWS needs to make a profit someplace so a straight lift and shift probably is more expensive than running that workload on-premises. But if you\u0026rsquo;re committed to re-architecting your applications into a more elastic pattern, you can save money in the cloud as well as providing higher availability for those applications through scaling and automation.\n","permalink":"https://theithollow.com/2019/03/19/its-up-to-you-to-decide-if-apps-are-cheaper-in-the-cloud/","summary":"\u003cp\u003eWhenever I talk cloud with a customer, there is inevitably a discussion around how much the cloud costs vs what is in the data center. 
The conversation usually starts with one of several declarations.\u003c/p\u003e\n\u003chr\u003e\n\u003cblockquote\u003e\n\u003cp\u003e\u0026ldquo;The Cloud is more expensive than on-premises but we want the capabilities anyway.\u0026rdquo;\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003chr\u003e\n\u003cblockquote\u003e\n\u003cp\u003e\u0026ldquo;We need the Cloud so we can drive down our costs.\u0026rdquo;\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003chr\u003e\n\u003cp\u003eWell yes, if you\u0026rsquo;ve paid attention, those are two different arguments about why you need cloud, and both of them came to different conclusions about whether or not the public cloud is more expensive or less expensive than running your own data center.\u003c/p\u003e","title":"Its Up to You to Decide if Apps are Cheaper in the Cloud"},{"content":"In the previous post we covered Persistent Volumes (PV) and how we can use those volumes to store data that shouldn\u0026rsquo;t be deleted if a container is removed. The big problem with that post is that we have to manually create the volumes and persistent volume claims. It would sure be nice to have those volumes spun up automatically wouldn\u0026rsquo;t it? Well, we can do that with a storage class. For a storage class to be really useful, we\u0026rsquo;ll have to tie our Kubernetes cluster in with our infrastructure provider like AWS, Azure or vSphere for example. This coordination is done through a cloud provider.\nCloud Providers - The Theory As we\u0026rsquo;ve learned in some of our other posts within this series, there are some objects that you can deploy through the kubectl client that don\u0026rsquo;t exist inside the Kubernetes cluster. Things like a load balancer for example. If your k8s cluster is running on EC2 instances within AWS, you can have AWS deploy a load balancer that points to your ingress controller. When it comes to storage, we can take that a step further and request EBS volumes be attached to our Kubernetes cluster for our persistent volumes.\nThis configuration can all be controlled from the kubectl client, if you setup a cloud provider. Later in this post we\u0026rsquo;ll see cloud providers in action when we setup our Cloud Provider for our vSphere environment. We won\u0026rsquo;t be able to use load balancers, but we will be able to use our vSphere storage.\nStorage Classes - The Theory Storage Classes are a nice way of giving yourself a template for the volumes that might need to be created in your Kubernetes cluster. Instead of creating a new PV for every pod, its much nicer to have a Storage Class that defines what a PV might look like. Then when the pods spin up with a persistent volume claim (PVC), it will create those PVs as they are needed from a template. Storage Classes are the mechanism that lets us deploy PVs from a template. In our Kubernetes cluster we\u0026rsquo;ve deployed PVs in a previous post, but we can\u0026rsquo;t do this with a storage class at this time because dynamic provisioning isn\u0026rsquo;t available yet. However, if we use a storage class connected with a Cloud Provider, we can use a storage class to automatically provision new vSphere storage for our cluster.\nWhen we combine storage classes and Cloud Providers together, we can get some pretty neat results.\nCloud Provider - In Action Before we can do much with our cloud provider, we need to set it up. I will be doing this on my vSphere hosted k8s cluster that was deployed in this previous post. 
To setup the cloud provider, I\u0026rsquo;ll be rebuilding this cluster with the kubeadm reset command to reset the cluster and then performing the kubeadm init process from scratch.\nSidebar: I had some help with this configuration and wanted to send a shout-out to Tim Carr who is a Kubernetes Architect at VMware as well as an all around good guy.\nvSphere Setup Before creating any Kubernetes configs, we need to make sure our vSphere environment is ready to be used by k8s. I\u0026rsquo;ve created an administrator accounts which permissions to the cluster, VMs, and datastores. Here is a look at my hosts and clusters view so you can see how I created config files later in the post.\nNow don\u0026rsquo;t go to far, you\u0026rsquo;ll need to make sure that an advanced configuration setting is added to your Kubernetes nodes. This setting ensures that Kubernetes can identify the disks that it might need to add to a pod from vSphere. To do this, power off your k8s cluster and modify the vm properties of the nodes. Go to VM Options and click the Edit Configurations link next to Configuration Parameters in the Advanced section.\nFrom there, add the key \u0026ldquo;disk.EnableUUID\u0026rdquo; and the corresponding value of \u0026ldquo;True\u0026rdquo;.\nI\u0026rsquo;ve also created a folder called \u0026ldquo;kubevol\u0026rdquo; in one of my vSphere datastores where new Kubernetes persistent volumes will live.\nLastly, all of my k8s nodes live within a single virtual machine folder, which is important.\nKubeadm init Setup First, we need to create a config file that will have configuration information for the connection to the vSphere environment. It will include the location of the Kubernetes hosts, datastores and connection strings shown in the above screenshots. Here is the example file I used in my lab. I saved it as vsphere.conf for now and placed it in the /etc/kubernetes directory on my master node.\nNOTE: It is possible to use a Base64 encoded password, which I didn\u0026rsquo;t do for this post. Just note that it can be done and should be used over plain text for any production environments. This is a getting stated post so we\u0026rsquo;ve eliminated some of the complexity here.\n[Global] user = \u0026#34;k8s@hollow.local\u0026#34; password = \u0026#34;Password123\u0026#34; port = \u0026#34;443\u0026#34; insecure-flag = \u0026#34;1\u0026#34; datacenters = \u0026#34;HollowLab\u0026#34; [VirtualCenter \u0026#34;vcenter1.hollow.local\u0026#34;] [Workspace] server = \u0026#34;vcenter1.hollow.local\u0026#34; datacenter = \u0026#34;HollowLab\u0026#34; default-datastore=\u0026#34;Synology02-NFS01\u0026#34; resourcepool-path=\u0026#34;HollowCluster/Resources\u0026#34; folder = \u0026#34;Kubernetes\u0026#34; [Disk] scsicontrollertype = pvscsi If you\u0026rsquo;re wondering what this config file contains, here are some descriptions of the fields.\nuser - Username with admin access to the vSphere cluster to create storage password - The credentials for the user to login port - The port used to connect to the vCenter server insecure-flag - Setting this to \u0026ldquo;1\u0026rdquo; means it will accept the certificate if it isn\u0026rsquo;t trusted datacenters - The datacenter within vCenter where Kubernetes nodes live server - The vCenter to connect to default datastore - Datastore where persistent volumes will be created resourcepool-path - The resource pool where the Kubernetes nodes live. folder - the folder where your Kubernetes nodes live. They should be in their own folder within vSphere. 
scsicontrollertype - Which type of vSphere scsi controller type should be used The config file you created in the previous step is the connection information for vSphere. However, when you run the Kubeadm init configuration, you need a configuration file that\u0026rsquo;s used to bootstrap your cluster.\nWe\u0026rsquo;ll need to create a yaml file that will be used during the kubeadm init process. Here is the config file for the kubeadm init. Note that the cloud provider is \u0026ldquo;vsphere\u0026rdquo; and and I\u0026rsquo;ve updated the cloud-config value to the path to the vsphere.conf file we just created in the previous step. I\u0026rsquo;ve placed both this config file and the vsphere.conf file in /etc/kubernetes/ directory on the k8s master.\napiVersion: kubeadm.k8s.io/v1beta1 kind: InitConfiguration bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token token: cba33r.3f565pcpa8vbxdpu #generate your own token here prior or use an autogenerated token ttl: 0s usages: - signing - authentication nodeRegistration: kubeletExtraArgs: cloud-provider: \u0026#34;vsphere\u0026#34; cloud-config: \u0026#34;/etc/kubernetes/vsphere.conf\u0026#34; --- apiVersion: kubeadm.k8s.io/v1beta1 kind: ClusterConfiguration kubernetesVersion: v1.13.4 apiServer: extraArgs: cloud-provider: \u0026#34;vsphere\u0026#34; cloud-config: \u0026#34;/etc/kubernetes/vsphere.conf\u0026#34; extraVolumes: - name: cloud hostPath: \u0026#34;/etc/kubernetes/vsphere.conf\u0026#34; mountPath: \u0026#34;/etc/kubernetes/vsphere.conf\u0026#34; controllerManager: extraArgs: cloud-provider: \u0026#34;vsphere\u0026#34; cloud-config: \u0026#34;/etc/kubernetes/vsphere.conf\u0026#34; extraVolumes: - name: cloud hostPath: \u0026#34;/etc/kubernetes/vsphere.conf\u0026#34; mountPath: \u0026#34;/etc/kubernetes/vsphere.conf\u0026#34; networking: podSubnet: \u0026#34;10.244.0.0/16\u0026#34; Now, we have the details for our Kubernetes cluster to be re-initialized with the additional connection information for our vSphere environment. Lets re-run the kubeadm init command now with our new configuration file.\nkubeadm init --config /etc/kubernetes/[filename].yml The result will be the setup of your Kubernetes cluster again on the master node. After this is done, we still need to have our other worker nodes join the cluster much like we did in the original kubeadm post. Before we do this though, we want to create another config file on the worker nodes so that they know they are joining a cluster and have the vSphere provider available to them. We will also need some information from the master node that will be passed along during the join. To get this information run:\nkubectl -n kube-public get configmap cluster-info -o jsonpath=\u0026#39;{.data.kubeconfig}\u0026#39; \u0026gt; config.yml Take this config.yml file that was created on the master and place it on the k8s worker nodes. This discovery file provides information needed to join the workers to the cluster properly.\nCreate another yaml file in the /etc/kubernetes directory called vsphere-join.yml. This file should contain the join token presented at the last step of the kubeadm init provided earlier.\napiVersion: kubeadm.k8s.io/v1alpha3 kind: JoinConfiguration discoveryFile: config.yml token: cba33r.3f565pcpa8vbxdpu #token used to init the k8s cluster. 
Yours should be different nodeRegistration: kubeletExtraArgs: cloud-provider: vsphere #cloud provider is enabled on worker Run the kubeadm joint command like this on the worker nodes.\nkubeadm join --config /etc/kubernetes/vsphere-join.yml When the cluster is setup, there will be a connection to your vSphere environment that is housing the k8s nodes. Don\u0026rsquo;t forget to set your KUBECONFIG path to the admin.conf file and then deploy your flannel networking pods with the:\nkubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml Let\u0026rsquo;s move on to storage classes for now that the cluster has been initialized and in a ready state.\nStorage Classes - In Action Before we create a Storage Class, login to your vSphere environment and navigate to your datastore where new disks should be created. Create a kubevols folder in that datastore.\nLets create a manifest file for a storage class. Remember that this class acts like a template to create PVs so the format may look familiar to you. I\u0026rsquo;ve created a new class named \u0026ldquo;thin-disk\u0026rdquo; which will create a thin provisioned disk provided by vSphere. I\u0026rsquo;ve also made this StorageClass the default, which you can of course only have 1 at a time. Note the provisioner listed here, as it is specific to the cloud provider being used.\nkind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: vsphere-disk #name of the default storage class annotations: storageclass.kubernetes.io/is-default-class: \u0026#34;true\u0026#34; #Make this a default storage class. There can be only 1 or it won\u0026#39;t work. provisioner: kubernetes.io/vsphere-volume parameters: diskformat: thin Deploy the Kubernetes manifest file with the apply command.\nkubectl apply -f [manifest file].yml You can check the status of your StorageClass by running:\nkubectl get storageclass Now to test out that storage class and see if it will create new PVs if a claim is issued. Create a new persistent volume claim and apply it. Here is one you can test with:\nkind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-claim annotations: volume.beta.kubernetes.io/storage-class: vsphere-disk spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi Apply it with the kubectl apply -f [manifest].yml command.\nWhen you apply the PVC you can check the status with a kubectl get PVC command to see whats happening. You can see from the screenshot below that good things are happening because the status is \u0026ldquo;Bound.\u0026rdquo;\nNow, for a quick look in the vSphere environment, we can look in the kubevols folder in the datastore we specified. That claim that was deployed used the Storage Class to deploy a vmdk for use by the Kubernetes cluster.\nSummary Persistent Volumes are necessary, and dynamically deploying them with a storage class may become necessary depending on your application. Adding a cloud provider can really increase the usability of your Kubernetes cluster, especially if its located within a public cloud where you can use a variety of services with your containers.\n","permalink":"https://theithollow.com/2019/03/13/kubernetes-cloud-providers-and-storage-classes/","summary":"\u003cp\u003eIn the \u003ca href=\"/?p=9598\"\u003eprevious post\u003c/a\u003e we covered Persistent Volumes (PV) and how we can use those volumes to store data that shouldn\u0026rsquo;t be deleted if a container is removed. 
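If you\u0026rsquo;d like to confirm the claim is actually usable by a workload, a minimal test pod along these lines will mount it. The busybox image, pod name, and mount path here are illustrative assumptions, not part of this lab:
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]  # keep the container alive so we can inspect the mount
    volumeMounts:
    - name: data
      mountPath: /data                   # the vSphere-backed volume appears here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-claim              # the claim created above
Once it\u0026rsquo;s running, kubectl exec pvc-test -- df -h /data should show the 2Gi volume mounted inside the container.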
The big problem with that post is that we have to manually create the volumes and persistent volume claims. It would sure be nice to have those volumes spun up automatically wouldn\u0026rsquo;t it? Well, we can do that with a storage class. For a storage class to be really useful, we\u0026rsquo;ll have to tie our Kubernetes cluster in with our infrastructure provider like AWS, Azure or vSphere for example. This coordination is done through a cloud provider.\u003c/p\u003e","title":"Kubernetes - Cloud Providers and Storage Classes"},{"content":"Containers are often times short lived. They might scale based on need, and will redeploy when issues occur. This functionality is welcomed, but sometimes we have state to worry about and state is not meant to be short lived. Kubernetes persistent volumes can help to resolve this discrepancy.\nVolumes - The Theory In the Kubernetes world, persistent storage is broken down into two kinds of objects. A Persistent Volume (PV) and a Persistent Volume Claim (PVC). First, lets tackle a Persistent Volume.\nPersistent Volumes Persistent Volumes are simply a piece of storage in your cluster. Similar to how you have a disk resource in a server, a persistent volume provides storage resources for objects in the cluster. At the most simple terms you can think of a PV as a disk drive. It should be noted that this storage resource exists independently from any pods that may consume it. Meaning, that if the pod dies, the storage should remain intact assuming the claim policies are correct. Persistent Volumes are provisioned in two ways, Statically or Dynamically.\nStatic Volumes - A static PV simply means that some k8s administrator provisioned a persistent volume in the cluster and it\u0026rsquo;s ready to be consumed by other resources.\nDynamic Volumes - In some circumstances a pod could require a persistent volume that doesn\u0026rsquo;t exist. In those cases it is possible to have k8s provision the volume as needed if storage classes were configured to demonstrate where the dynamic PVs should be built. This post will focus on static volumes for now.\nPersistent Volume Claims Pods that need access to persistent storage, obtain that access through the use of a Persistent Volume Claim. A PVC, binds a persistent volume to a pod that requested it.\nWhen a pod wants access to a persistent disk, it will request access to the claim which will specify the size , access mode and/or storage classes that it will need from a Persistent Volume. Indirectly the pods get access to the PV, but only through the use of a PVC.\nClaim Policies We also reference claim policies earlier. A Persistent Volume can have several different claim policies associated with it including:\nRetain - When the claim is deleted, the volume remains.\nRecycle - When the claim is deleted the volume remains but in a state where the data can be manually recovered.\nDelete - The persistent volume is deleted when the claim is deleted.\nThe claim policy (associated at the PV and not the PVC) is responsible for what happens to the data on when the claim has been deleted.\nVolumes - In Action For the demonstration in the lab, we\u0026rsquo;ll begin by deploying something that looks like the diagram below. The application service and pod won\u0026rsquo;t change from what we\u0026rsquo;ve done before, but we need a front end to our application. However, the database pod will use a volume claim and a persistent volume to store the database for our application. 
Also, if you\u0026rsquo;re following my example exactly, I\u0026rsquo;m using an ingress controller for the application, but however you present your application outside of the Kubernetes cluster is fine.\nFirst, we\u0026rsquo;ll start by deploying a persistent volume through a manifest file. Remember that you can deploy these manifest files by running:\nkubectl apply -f [manifest file].yml Here is a sample manifest file for the persistent volume. This is a static persistent volume.\napiVersion: v1 kind: PersistentVolume metadata: name: mysqlvol spec: storageClassName: manual capacity: storage: 10Gi #Size of the volume accessModes: - ReadWriteOnce #type of access hostPath: path: \u0026#34;/mnt/data\u0026#34; #host location After you deploy your persistent volume, you can view it by running:\nkubectl get pv Now that the volume has been deployed, we can deploy our claim.\nNOTE: you can deploy the pv, pvc, pods, services, etc within the same manifest file, but for the purposes of this blog I\u0026rsquo;ll often break them up so we can focus on one part over the other.\n--- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysqlvol spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 10Gi Once your claim has been created, we can look for those claims by running:\nkubectl get pvc Great, the volume is setup and a claim ready to be used. Now we can deploy our database pod and service. The database pod will mount the volume via the claim and we\u0026rsquo;re specifying in our pod code, that the volume will be mounted in the /var/lib/mysql directory so it can store our database for mysql.\napiVersion: apps/v1 kind: Deployment metadata: name: hollowdb labels: app: hollowdb spec: replicas: 1 selector: matchLabels: app: hollowdb strategy: type: Recreate template: metadata: labels: app: hollowdb spec: containers: - name: mysql image: theithollow/hollowapp-blog:dbv1 imagePullPolicy: Always ports: - containerPort: 3306 volumeMounts: - name: mysqlstorage mountPath: /var/lib/mysql volumes: - name: mysqlstorage persistentVolumeClaim: claimName: mysqlvol --- apiVersion: v1 kind: Service metadata: name: hollowdb spec: ports: - name: mysql port: 3306 targetPort: 3306 protocol: TCP selector: app: hollowdb And now that we\u0026rsquo;ve got a working mysql container with persistent storage for the database, we can deploy our app.\nNOTE: In this example, my application container, checks to see if there is a database for the app created already. 
If there is, it will use that database, if there isn\u0026rsquo;t, it will create a database on the mysql server.\nAlso, I\u0026rsquo;m using a secret for the connection string as we\u0026rsquo;ve discussed in a previous post.\napiVersion: apps/v1 kind: Deployment metadata: labels: app: hollowapp name: hollowapp spec: replicas: 3 selector: matchLabels: app: hollowapp strategy: type: Recreate template: metadata: labels: app: hollowapp spec: containers: - name: hollowapp image: theithollow/hollowapp-blog:allin1 imagePullPolicy: Always ports: - containerPort: 5000 env: - name: SECRET_KEY value: \u0026#34;my-secret-key\u0026#34; - name: DATABASE_URL valueFrom: secretKeyRef: name: hollow-secret key: db.string --- apiVersion: v1 kind: Service metadata: name: hollowapp labels: app: hollowapp spec: type: ClusterIP ports: - port: 5000 protocol: TCP targetPort: 5000 selector: app: hollowapp --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: hollowapp labels: app: hollowapp spec: rules: - host: hollowapp.hollow.local http: paths: - path: / backend: serviceName: hollowapp servicePort: 5000 Once the application has been deployed, it should connect to the database pod and set it up and then start presenting our application. Let\u0026rsquo;s check by accessing it from our ingress controller.\nAs you can see from the screenshot below, my app came up and I\u0026rsquo;m registering a user within my application as a test. This proves that I can submit data to my form and have it stored in the database pod.\nAfter I register a user I can then submit a post just to show that we\u0026rsquo;re posting data and displaying it through the application. You can see that the first post is successful and it foreshadows our next step.\nNow that the app works, lets test the database resiliency. Remember that with replica set, Kubernetes will make sure that we have a certain number of pods always running. If one fails, it will be rebuilt. Great when there is no state involved. Now we have a persistent volume with our database in it. Therefore, we should be able to kill that database pod and a new one will take its place and attach to the persistent storage. The net result will be an outage, but when it comes back up, our data should still be there. The diagram below demonstrates what will happen.\nSo lets break some stuff!\nLets kill our database pod from the command line.\nkubectl delete pod [database pod name] In the screenshot above you see that the pod was delete, and then I ran a \u0026ldquo;get pod\u0026rdquo; command to see whats happening. my DB pod is Terminating and a new one is in a running status already.\nLet\u0026rsquo;s check the state of our application. NOTE: depending on what app you have here, it may or may not handle the loss of a database connection well or not. Mine did fine in this case.\nBack in my application, I\u0026rsquo;m able to login with the user that I registered earlier, which is a good sign.\nAnd once I\u0026rsquo;m logged in, I can see my previous post which means my database is functioning even though its in a new pod. The volume still stored the correct data and was re-attached to the new pod.\nSummary Well persistent volumes aren\u0026rsquo;t the most interesting topic to cover around Kubernetes, but if state is involved, they are critical to the resiliency of your applications. 
If you\u0026rsquo;re designing your applications, consider whether a pod with persistent volumes will suffice, or if maybe an external service like a cloud database is the right choice for your applications.\n","permalink":"https://theithollow.com/2019/03/04/kubernetes-persistent-volumes/","summary":"\u003cp\u003eContainers are often times short lived. They might scale based on need, and will redeploy when issues occur. This functionality is welcomed, but sometimes we have state to worry about and state is not meant to be short lived. Kubernetes persistent volumes can help to resolve this discrepancy.\u003c/p\u003e\n\u003ch2 id=\"volumes---the-theory\"\u003eVolumes - The Theory\u003c/h2\u003e\n\u003cp\u003eIn the Kubernetes world, persistent storage is broken down into two kinds of objects. A Persistent Volume (PV) and a Persistent Volume Claim (PVC). First, lets tackle a Persistent Volume.\u003c/p\u003e","title":"Kubernetes - Persistent Volumes"},{"content":"Secret, Secret, I\u0026rsquo;ve got a secret! OK, enough of the Styx lyrics, this is serious business. In the previous post we used ConfigMaps to store a database connection string. That is probably not the best idea for something with a sensitive password in it. Luckily Kubernetes provides a way to store sensitive configuration items and its called a \u0026ldquo;secret\u0026rdquo;.\nSecrets - The Theory The short answer to understanding secrets would be to think of a ConfigMap, which we have discussed in a previous post in this series, but with non-clear text.\nConfigMaps would be good to store configuration data that is not sensitive. If it is sensitive information that not everyone should see, then a \u0026ldquo;secret\u0026rdquo; should be chosen over a ConfigMap. Passwords, keys, or private information should be stored as a secret instead of a ConfigMap.\nOne thing to note is that secrets can be stored as either \u0026ldquo;data\u0026rdquo; or \u0026ldquo;stringData\u0026rdquo; maps. Data would be used to store a secret in base64 format which you would provide. You can also use stringData, where you\u0026rsquo;d provide an unencoded string, but it would be stored as a base64 string when the secret is created. This is a valuable tool when your deployment creates a secret as part of the build process.\nSecrets - In Action Since a secret is just a more secured version of a ConfigMap, the demo will be the same as the last post, with the exception that we\u0026rsquo;ll use a secret over a ConfigMap to store our connection string. This is a better way to store a connection string over a ConfigMap because it does have a password in it which should be protected.\nFirst, we’ll deploy the database container and service. The database container has already been configured with a new database with the appropriate username and password. The manifest file to deploy the DB and service is listed below.\napiVersion: apps/v1 kind: Deployment metadata: name: hollowdb labels: app: hollowdb spec: replicas: 1 selector: matchLabels: app: hollowdb template: metadata: labels: app: hollowdb spec: containers: - name: mysql image: theithollow/hollowapp-blog:dbv1 imagePullPolicy: Always ports: - containerPort: 3306 --- apiVersion: v1 kind: Service metadata: name: hollowdb spec: ports: - name: mysql port: 3306 targetPort: 3306 protocol: TCP selector: app: hollowdb We\u0026rsquo;ll deploy a new secret from a manifest after we take our connection string and convert it to base64.
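(Side note: if you\u0026rsquo;d rather skip the manual encoding, the stringData field mentioned in the theory section accepts the plain string and Kubernetes stores it base64-encoded for you. A rough equivalent of the secret built below would look like this, using the same connection string shown in the next step:
apiVersion: v1
kind: Secret
metadata:
  name: hollow-secret
stringData:
  db.string: "mysql+pymysql://hollowapp:Password123@hollowdb:3306/hollowapp"
For this walkthrough, though, we\u0026rsquo;ll stick with the data field and do the encoding ourselves.)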
I took the following connection string:\nmysql+pymysql://hollowapp:Password123@hollowdb:3306/hollowapp then ran it through a bash command of:\necho -n \u0026#39;mysql+pymysql://hollowapp:Password123@hollowdb:3306/hollowapp\u0026#39; | base64 I took the result of that command and placed int in the Secret manifest under the db.string map.\napiVersion: v1 kind: Secret metadata: name: hollow-secret data: db.string: bXlzcWwrcHlteXNxbDovL2hvbGxvd2FwcDpQYXNzd29yZDEyM0Bob2xsb3dkYjozMzA2L2hvbGxvd2FwcA== Deploy the secret via a familiar command we\u0026rsquo;ve used for deploying our manifests.\nkubectl apply -f [manifest file].yml Now that the secret is deployed, we can deploy our application pods which will connect to the backend database just as we did with a ConfigMap in a previous post. The important difference between the Deployment manifest using a ConfigMap and the Deployment manifest using a secret is this section.\n- name: DATABASE_URL valueFrom: secretKeyRef: name: hollow-secret key: db.string The full deployment file that will read from the secret and use that secret is listed below.\napiVersion: apps/v1 kind: Deployment metadata: labels: app: hollowapp name: hollowapp spec: replicas: 3 selector: matchLabels: app: hollowapp strategy: type: Recreate template: metadata: labels: app: hollowapp spec: containers: - name: hollowapp image: theithollow/hollowapp-blog:allin1 imagePullPolicy: Always ports: - containerPort: 5000 env: - name: SECRET_KEY value: \u0026#34;my-secret-key\u0026#34; - name: DATABASE_URL valueFrom: secretKeyRef: name: hollow-secret key: db.string --- apiVersion: v1 kind: Service metadata: name: hollowapp labels: app: hollowapp spec: type: ClusterIP ports: - port: 5000 protocol: TCP targetPort: 5000 selector: app: hollowapp --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: hollowapp labels: app: hollowapp spec: rules: - host: hollowapp.hollow.local http: paths: - path: / backend: serviceName: hollowapp servicePort: 5000 Deploy the manifest using:\nkubectl apply -f [manifest file].yml The result is that the app reads from the secret file, and uses that string as part of the connection to the backend database.\nSummary Sometimes you\u0026rsquo;ll want to store non-sensitive data and a ConfigMap is the easy way to store that data. In other cases, the data shouldn\u0026rsquo;t be available to everyone, like in the case of a password or a connection string. When those situations occur, a secret may be your method to store this data.\n","permalink":"https://theithollow.com/2019/02/25/kubernetes-secrets/","summary":"\u003cp\u003eSecret, Secret, I\u0026rsquo;ve got a secret! OK, enough of the Styx lyrics, this is serious business. In the \u003ca href=\"/?p=9583\"\u003eprevious post we used ConfigMaps\u003c/a\u003e to store a database connection string. That is probably not the best idea for something with a sensitive password in it. 
Luckily Kubernetes provides a way to store sensitive configuration items and its called a \u0026ldquo;secret\u0026rdquo;.\u003c/p\u003e\n\u003ch2 id=\"secrets---the-theory\"\u003eSecrets - The Theory\u003c/h2\u003e\n\u003cp\u003eThe short answer to understanding secrets would be to think of a ConfigMap, which we have discussed in a \u003ca href=\"/?p=9583\"\u003eprevious post\u003c/a\u003e in this \u003ca href=\"/2019/01/26/getting-started-with-kubernetes/\"\u003eseries\u003c/a\u003e, but with non-clear text.\u003c/p\u003e","title":"Kubernetes - Secrets"},{"content":"Sometimes you need to add additional configurations to your running containers. Kubernetes has an object to help with this and this post will cover those ConfigMaps.\nConfigMaps - The Theory Not all of our applications can be as simple as the basic nginx containers we\u0026rsquo;ve deployed earlier in this series. In some cases, we need to pass configuration files, variables, or other information to our apps.\nThe theory for this post is pretty simple, ConfigMaps store key/value pair information in an object that can be retrieved by your containers. This configuration data can make your applications more portable.\nFor example, you could have a key value pair of \u0026ldquo;environment:dev\u0026rdquo; in a configmap for your development Kubernetes cluster. When you deploy your apps, you can key some of your logic off of the \u0026ldquo;environment\u0026rdquo; variable and do different things for production vs. development environments. I\u0026rsquo;m not sure this is necessary, but it\u0026rsquo;s just an example to get you thinking about it. Let\u0026rsquo;s see these ConfigMaps in action and I think you\u0026rsquo;ll get the picture.\nConfigMaps - In Action For a ConfigMap example, we\u0026rsquo;ll take a simple two tier app (lovingly called hollowapp) and we\u0026rsquo;ll configure the database connection string through a ConfigMap object. Try to put out of your mind how insecure this is for the moment, it\u0026rsquo;s just an example to prove the point.\nHere is a high level diagram of the lab we\u0026rsquo;ll be building. As you can see some of the items we\u0026rsquo;ve talked about in previous posts have been generalized. The main part is the App container using the ConfigMap to configure the connection string to the database service.\nFirst, we\u0026rsquo;ll deploy the database container and service. The database container has already been configured with new database with the appropriate username and password. The manifest file to deploy the DB and service is listed below.\napiVersion: apps/v1 kind: Deployment metadata: name: hollowdb labels: app: hollowdb spec: replicas: 1 selector: matchLabels: app: hollowdb template: metadata: labels: app: hollowdb spec: containers: - name: mysql image: theithollow/hollowapp-blog:dbv1 imagePullPolicy: Always ports: - containerPort: 3306 --- apiVersion: v1 kind: Service metadata: name: hollowdb spec: ports: - name: mysql port: 3306 targetPort: 3306 protocol: TCP selector: app: hollowdb To deploy the manifests, we run our familiar command:\nkubectl apply -f [manifest file].yml OK, the Database is deployed and ready to go. Now we need to deploy a ConfigMap with the database connection string.\nThe ConfigMap is listed below in another manifest file. Most of the configuration in the manifest should look familiar to you. The kind has been updated to a type of \u0026ldquo;ConfigMap\u0026rdquo;, but the important part is the data field. We have a key named. 
db.string and the value of that key is our connection string (yes, with the clear text super secret password).\napiVersion: v1 kind: ConfigMap metadata: name: hollow-config data: db.string: \u0026#34;mysql+pymysql://hollowapp:Password123@hollowdb:3306/hollowapp\u0026#34; #Key Value pair Value being a database connection string Deploy the ConfigMap using the same apply command as before and change the manifest file name.\nkubectl apply -f [manifest file].yml Now you can run a get command to list the ConfigMap.\nkubectl get configmap Now, before we deploy our app, lets deploy a test container to prove that we can read that key value pair from the ConfigMap.\nThe manifest below has an environment variable named DATABASE_URL and we\u0026rsquo;re telling it to get the value of that environment variable from the ConfigMap named hollow-config. Within the hollow-config ConfigMap, we\u0026rsquo;re looking for the key named db.string. The result that will be the value stored in our DATABASE_URL variable.\napiVersion: v1 kind: Pod metadata: name: shell-demo spec: containers: - name: nginx image: nginx env: - name: DATABASE_URL valueFrom: configMapKeyRef: name: hollow-config key: db.string Deploy the test shell-demo container with the kubectl apply command.\nkubectl apply -f [manifest file].yml Once it\u0026rsquo;s deployed, you can get an interactive shell into that container by running the following command.\nkubectl exec -it shell-demo -- /bin/bash Once we have a shell session, we can do an echo on our DATABASE_URL environment variable and it should show the string from our ConfigMap.\nYou can exit the shell session and then we\u0026rsquo;re ready to deploy our app. The manifest for the app is shown below. NOTE: it does require that you have your ingress controller running if you plan to access it through a browser.\napiVersion: apps/v1 kind: Deployment metadata: labels: app: hollowapp name: hollowapp spec: replicas: 3 selector: matchLabels: app: hollowapp strategy: type: Recreate template: metadata: labels: app: hollowapp spec: containers: - name: hollowapp image: eshanks16/k8s-hollowapp:v2 imagePullPolicy: Always ports: - containerPort: 5000 env: - name: SECRET_KEY value: \u0026#34;my-secret-key\u0026#34; - name: DATABASE_URL valueFrom: configMapKeyRef: name: hollow-config key: db.string --- apiVersion: v1 kind: Service metadata: name: hollowapp labels: app: hollowapp spec: type: ClusterIP ports: - port: 5000 protocol: TCP targetPort: 5000 selector: app: hollowapp --- apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: hollowapp labels: app: hollowapp spec: rules: - host: hollowapp.hollow.local http: paths: - path: / backend: serviceName: hollowapp servicePort: 5000 Deploy the app with the same command.\nkubectl apply -f [manifest file].yml The result is that our app is communicating correctly with our back end database container all because of our ConfigMap.\nSummary So this isn\u0026rsquo;t the most secure way to pass connection information to your containers, but it is a pretty effective way of storing parameters that you might need for your applications. What will you think of to use ConfigMaps for?\n","permalink":"https://theithollow.com/2019/02/20/kubernetes-configmaps/","summary":"\u003cp\u003eSometimes you need to add additional configurations to your running containers. 
Kubernetes has an object to help with this and this post will cover those ConfigMaps.\u003c/p\u003e\n\u003ch2 id=\"configmaps---the-theory\"\u003eConfigMaps - The Theory\u003c/h2\u003e\n\u003cp\u003eNot all of our applications can be as simple as the basic nginx containers we\u0026rsquo;ve deployed earlier in \u003ca href=\"/2019/01/26/getting-started-with-kubernetes/\"\u003ethis series\u003c/a\u003e. In some cases, we need to pass configuration files, variables, or other information to our apps.\u003c/p\u003e\n\u003cp\u003eThe theory for this post is pretty simple, ConfigMaps store key/value pair information in an object that can be retrieved by your containers. This configuration data can make your applications more portable.\u003c/p\u003e","title":"Kubernetes - ConfigMaps"},{"content":"DNS is a critical service in any system. Kubernetes is no different, but Kubernetes will implement its own domain naming system that\u0026rsquo;s implemented within your Kubernetes cluster. This post explores the details that you need to know to operate a k8s cluster properly.\nKubernetes DNS - The theory I don\u0026rsquo;t want to dive into DNS too much since it\u0026rsquo;s a core service most should be familiar with. But at a really high level, DNS translates an IP address that might be changing, with an easily remember-able name such as \u0026ldquo;theithollow.com\u0026rdquo;. Every network has a DNS server, but Kubernetes implements their own DNS within the cluster to make connecting to containers a simple task.\nThere are two implementations of DNS found within Kubernetes clusters. Kube-dns and CoreDNS. The default used with kubeadm after version 1.13 is to use CoreDNS which is managed by the Cloud Native Computing Foundation. Since I used kubeadm to setup our cluster and it\u0026rsquo;s the default version, this post will focus on CoreDNS.\nThis topic can be pretty detailed, but lets distill it into the basics. Each time you deploy a new service or pod, the DNS service sees the calls made to the Kube API and adds DNS entries for the new object. Then other containers within a Kubernetes cluster can use these DNS entries to access the services within it by name.\nThere are two objects that you could access via DNS within your Kubernetes cluster and those are Services and Pods. Now Pods can get a dns entry, but you\u0026rsquo;d be wondering why, since we\u0026rsquo;ve learned that accessing pods should be done through a Service. For this reason, pods don\u0026rsquo;t get a DNS entry by default with CoreDNS.\nServices get a DNS entry of the service name appended to the namespace and then appended to \u0026ldquo;svc.cluster.local\u0026rdquo;. Similarly pods get entries of PodIPAddress appended to the namespace and then appended to .pod.cluster.local if Pod DNS is enabled in your cluster.\nThe diagram below shows a pair of services and a pair of pods that are deployed across two different namespaces. Below this are the DNS entries that you can expect to be available for these objects, again assuming Pod DNS is enabled for your cluster.\nAny pods looking for a service within the same namespace can just use the common name of the \u0026ldquo;Service\u0026rdquo; instead of the FQDN. Kubernetes will add the proper DNS suffix to the request if one is not given. In fact, when you deploy new pods, Kubernetes specifies your DNS server for you unless you override it within a manifest file.\nKubernetes DNS - In Action Lets take a quick look at how DNS is working in our own Kubernetes cluster. 
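Before we deploy anything, it helps to keep the naming pattern from the theory section in mind. Assuming a Service named web and a pod with the IP 10.244.1.17, both living in a namespace called team-a (all three made up for this example), the records would look roughly like this:
web.team-a.svc.cluster.local    #Service record
10-244-1-17.team-a.pod.cluster.local    #Pod record, only present if Pod DNS is enabled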
Feel free to deploy any Kubernetes manifest that has a service and pod in the deployment. If you need one, you can use mine below.\napiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx-container image: nginx ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: nginxsvc spec: type: NodePort ports: - name: http port: 80 targetPort: 80 nodePort: 30001 protocol: TCP selector: app: nginx You can deploy the file with:\nkubectl apply -f [manifest file].yml Now lets look at some information about our service which we\u0026rsquo;ll check later in this post.\nkubectl get svc Once our example pods and services are deployed, let\u0026rsquo;s deploy another container where we can run some DNS lookups. To do this we\u0026rsquo;ll deploy the shell-demo container and get an interactive shell by using the commands below. We\u0026rsquo;ll also install busybox once we\u0026rsquo;ve got a shell.\nKubectl create -f https://k8s.io/examples/application/shell-demo.yaml kubectl exec -it shell-demo -- /bin/bash Apt-get update Apt-get install busybox -y Now that busybox is installed and we\u0026rsquo;re in an interactive shell session, lets check to see what happens when we run a dns lookup on our example service.\nbusybox nslookup nginxsvc The results of our nslookup on the nginxsvc that we deployed early on in this post shows the ClusterIP address of the Service. Also, pay close attention to the FQDN. The FQDN is the [service name] +\u0026quot;.\u0026quot; + [namespace name] + \u0026ldquo;.\u0026rdquo; + svc.cluster.local.\nSummary DNS is a basic service that we rely on for almost all systems. Kubernetes is no different and it makes accessing other services very simple and straightforward. You can manipulate your DNS configs by update manifest files if you need, but for the basic operations, you should be good to go without even having to configure much. And remember\u0026hellip; it\u0026rsquo;s ALWAYS DNS.\nTo clean up your cluster run:\nkubectl delete -f [manifest file].yml #manifest used during the deploy of the nginxsvc kubectl delete pod shell-demo ","permalink":"https://theithollow.com/2019/02/18/kubernetes-dns/","summary":"\u003cp\u003eDNS is a critical service in any system. Kubernetes is no different, but Kubernetes will implement its own domain naming system that\u0026rsquo;s implemented within your Kubernetes cluster. This post explores the details that you need to know to operate a k8s cluster properly.\u003c/p\u003e\n\u003ch2 id=\"kubernetes-dns---the-theory\"\u003eKubernetes DNS - The theory\u003c/h2\u003e\n\u003cp\u003eI don\u0026rsquo;t want to dive into DNS too much since it\u0026rsquo;s a core service most should be familiar with. But at a really high level, DNS translates an IP address that might be changing, with an easily remember-able name such as \u0026ldquo;theithollow.com\u0026rdquo;. Every network has a DNS server, but Kubernetes implements their own DNS within the cluster to make connecting to containers a simple task.\u003c/p\u003e","title":"Kubernetes - DNS"},{"content":"It\u0026rsquo;s time to look closer at how we access our containers from outside the Kubernetes cluster. We\u0026rsquo;ve talked about Services with NodePorts, LoadBalancers, etc., but a better way to handle ingress might be to use an ingress-controller to proxy our requests to the right backend service. 
This post will take us through how to integrate an ingress-controller into our Kubernetes cluster.\nIngress Controllers - The Theory Lets first talk about why we\u0026rsquo;d want to use an ingress controller in the first place. Take an example web application like you might have for a retail store. That web application might have an index page at \u0026ldquo;http://store-name.com/\u0026quot; and a shopping cart page at \u0026ldquo;http://store-name.com/cart\u0026quot; and an api URI at \u0026ldquo;http://store-name.com/api\u0026quot;. We could build all these in a single container, but perhaps each of those becomes their own set of pods so that they can all scale out independently. If the API needs more resources, we can just increase the number of pods and nodes for the api service and leave the / and the /cart services alone. It also allows for multiple groups to work on different parts simultaneously but we\u0026rsquo;re starting to drift off the point which hopefully you get now.\nOK, so assuming we have these different services, if we\u0026rsquo;re in an on-prem Kubernetes service we\u0026rsquo;d need to expose each of those services to the external networks. We\u0026rsquo;ve done this in the past with a node port. The problem is, we can\u0026rsquo;t expose each app individually and have it work well because each service would need a different port. Imagine your users having to know which ports to use for each part of your application. It would be unusable like the example below.\nA much simpler solution would be to have a single point of ingress (you see where I\u0026rsquo;m going) and have this \u0026ldquo;ingress-controller\u0026rdquo; define where the traffic should be routed.\nIngress with a Kubernetes cluster comes in two parts.\nIngress Controller Ingress Resource The ingress-controller is responsible for doing the routing to the right places and can be thought of like a load balancer. It can route requests to the right place based on a set of rules applied to it. These rules are called an \u0026ldquo;ingress resource.\u0026rdquo; If you see ingress as part of a Kubernetes manifest, it\u0026rsquo;s likely not an ingress controller, but a rule that should be applied on the controller to route new requests. So the ingress controller likely is running in the cluster all the time and when you have new services, you just apply the rule so that the controller knows where to proxy requests for that service.\nIngress Controllers - In Action For this scenario we\u0026rsquo;re going to deploy the objects depicted in this diagram. We\u0026rsquo;ll have an ingress controller, a default backend, and two apps have different host names. Also notice, that we\u0026rsquo;ll be using an NGINX ingress controller and that it is setup in a different namespace which helps keep the cluster secure and clean for anyone that shouldn\u0026rsquo;t need to see that controller deployment.\nFirst, we\u0026rsquo;ll set the state by deploying our namespace where the ingress controller will live. 
To do that lets first start with a manifest file for our namespace.\n--- apiVersion: v1 kind: Namespace metadata: name: ingress Once you deploy your namespace, we can move on to our configmap that has information about our environment, used by our ingress controller when it starts up.\n--- apiVersion: v1 kind: ConfigMap metadata: name: nginx-ingress-controller-conf labels: app: nginx-ingress-lb namespace: ingress data: enable-vts-status: \u0026#39;true\u0026#39; Lastly, before we get to our ingress objects, we need to deploy a service account with permissions to deploy and read information from the Kubernetes cluster. Here is a manifest that can be used.\nNOTE: If you\u0026rsquo;re following this series, you may not know what a service account is yet. For now, think of the service account as an object with permissions attached to it. It\u0026rsquo;s a way for the Ingress controller to get permissions to interact with the Kubernetes API.\n--- apiVersion: v1 --- apiVersion: v1 kind: ServiceAccount metadata: name: nginx namespace: ingress --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: nginx-role namespace: ingress rules: - apiGroups: - \u0026#34;\u0026#34; resources: - configmaps - endpoints - nodes - pods - secrets verbs: - list - watch - apiGroups: - \u0026#34;\u0026#34; resources: - nodes verbs: - get - apiGroups: - \u0026#34;\u0026#34; resources: - services verbs: - get - list - update - watch - apiGroups: - extensions resources: - ingresses verbs: - get - list - watch - apiGroups: - \u0026#34;\u0026#34; resources: - events verbs: - create - patch - apiGroups: - extensions resources: - ingresses/status verbs: - update --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: nginx-role namespace: ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: nginx-role subjects: - kind: ServiceAccount name: nginx namespace: ingress Now that those prerequisites are deployed, we\u0026rsquo;ll deploy a default backend for our controller. This is just a pod that will handle any requests where there is an unknown route. If someone accessed our cluster at http://clusternodeandport/ bananas we wouldn\u0026rsquo;t have a route that handled that so we\u0026rsquo;d point it to the default backend with a 404 error in it. NGINX has a sample backend that you can use and that code is listed below. Just deploy the manifest to the cluster.\n--- apiVersion: apps/v1 kind: Deployment metadata: name: default-backend namespace: ingress spec: replicas: 1 selector: matchLabels: app: default-backend template: metadata: labels: app: default-backend spec: terminationGracePeriodSeconds: 60 containers: - name: default-backend image: gcr.io/google_containers/defaultbackend:1.0 livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 ports: - containerPort: 8080 resources: limits: cpu: 10m memory: 20Mi requests: cpu: 10m memory: 20Mi --- apiVersion: v1 kind: Service metadata: name: default-backend namespace: ingress spec: ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: default-backend kubectl apply -f [manifest file].yml Now that the backend service and pods are configured, it\u0026rsquo;s time to deploy the ingress controller through another deployment file. 
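Before doing that, it does not hurt to confirm the default backend actually came up in the ingress namespace. Something like the commands below should show a single default-backend pod in a Running state along with its service; the exact names and ages will vary in your cluster.
kubectl get pods --namespace=ingress
kubectl get svc --namespace=ingress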
Here is a deployment of the Ingress Controller for NGINX.\n--- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-ingress-controller namespace: ingress spec: replicas: 1 selector: matchLabels: app: nginx-ingress-lb revisionHistoryLimit: 3 template: metadata: labels: app: nginx-ingress-lb spec: terminationGracePeriodSeconds: 60 serviceAccount: nginx containers: - name: nginx-ingress-controller image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 imagePullPolicy: Always readinessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP livenessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 timeoutSeconds: 5 args: - /nginx-ingress-controller - --default-backend-service=$(POD_NAMESPACE)/default-backend - --configmap=\\$(POD_NAMESPACE)/nginx-ingress-controller-conf - --v=2 env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace ports: - containerPort: 80 - containerPort: 18080 To deploy your manifest, it\u0026rsquo;s again the:\nkubectl apply -f [manifest file].yml Congratulations, your controller is deployed, lets now deploy a Service that exposes the ingress controller to the outside world and in our case through a NodePort.\n--- apiVersion: v1 kind: Service metadata: name: nginx namespace: ingress spec: type: NodePort ports: - port: 80 name: http nodePort: 32000 - port: 18080 name: http-mgmt selector: app: nginx-ingress-lb Deploy the Service with:\nkubectl apply -f [manifest file].yml The service just deployed publishes itself on port 32000. This was hard coded into the yaml manifest that was just deployed. If you\u0026rsquo;d like to check the service to make sure, run:\nkubectl get svc --namespace=ingress As you can see from my screenshot, the outside port for http traffic is 32000. This is because I\u0026rsquo;ve hard coded the 32000 port as a NodePort into the ingress-svc manifest. Be careful doing this as this is the only service that can use this port. You can remove the nodeport but you will need to lookup the port assigned to this service if you do. Hard coding this port into the manifest was used to simplify these instructions and make things more clear as you follow along.\nNow we\u0026rsquo;ll deploy our application called hollowapp. The manifest file below has standard Deployments and a Service (that isn\u0026rsquo;t exposed externally). It also has a new resource of kind \u0026ldquo;ingress\u0026rdquo; which is our ingress rule to be applied on our controller. The main thing is anyone who access the ingress-controller with a host name of hollowapp.hollow.local will route traffic to our service. This means we need to setup a DNS record to point at our Kubernetes cluster for this resource. 
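If you do not have a DNS server handy in your lab, a hosts file entry on the client you are testing from will do the job. The node address below is just an example, so substitute the IP of one of your own Kubernetes nodes. Also remember that in this setup the controller is still exposed on a NodePort (32000 in this walkthrough), so include that port in the URL unless you have placed a load balancer in front of the cluster.
10.10.50.61   hollowapp.hollow.local    #example entry, use one of your own node IPs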
I\u0026rsquo;ve done this in my lab and you can change this to meet your own needs in your lab with your own app.\napiVersion: apps/v1 kind: Deployment metadata: labels: app: hollowapp name: hollowapp spec: replicas: 1 selector: matchLabels: app: hollowapp strategy: type: Recreate template: metadata: labels: app: hollowapp spec: containers: - name: hollowapp image: theithollow/hollowapp-blog:allin1-v2 imagePullPolicy: Always ports: - containerPort: 5000 env: - name: SECRET_KEY value: \u0026#34;my-secret-key\u0026#34; --- apiVersion: v1 kind: Service metadata: name: hollowapp labels: app: hollowapp spec: type: ClusterIP ports: - port: 5000 protocol: TCP targetPort: 5000 selector: app: hollowapp --- apiVersion: networking.k8s.io/v1beta1 kind: Ingress #ingress resource metadata: name: hollowapp labels: app: hollowapp spec: rules: - host: hollowapp.hollow.local #only match connections to hollowapp.hollow.local. http: paths: - path: / #root path backend: serviceName: hollowapp servicePort: 5000 Apply the manifest above, or a modified version for your own environment with the command below:\nkubectl apply -f [manifest file].yml Now, we can test out our deployment in a web browser. First lets make sure that something is being returned when we hit the Kubernetes ingress controller via the web browser and I\u0026rsquo;ll use the IP Address in the URL.\nSo that is the default backend providing a 404 error, as it should. We don\u0026rsquo;t have an ingress rule for access the controller via an IP Address so it used the default backend. Now try that again by using the hostname that we used in the manifest file which in my case was http://hollowapp.hollow.local.\nAnd before we do that here\u0026rsquo;s a screenshot showing that the host name maps to the EXACT same IP address we used to test the 404 error.\nWhen we access the cluster by the hostname, we get our new application we\u0026rsquo;ve deployed.\nSummary The examples shown in this post can be augmented by adding a load balancer outside the cluster but you should get a good idea of what an ingress controller can do for you. Once you\u0026rsquo;ve got it up and running it can provide a single resource to access from outside the cluster and many service running behind it. Other advanced controllers exist as well, so this post should just serve as an example. Ingress controllers can do all sorts of things including handling TLS, monitoring, handling session persistence and others. Feel free to checkout all your ingress options from all sorts of sources including NGINX, Heptio, Traefik and others.\n","permalink":"https://theithollow.com/2019/02/13/kubernetes-ingress/","summary":"\u003cp\u003eIt\u0026rsquo;s time to look closer at how we access our containers from outside the Kubernetes cluster. We\u0026rsquo;ve talked about Services with NodePorts, LoadBalancers, etc., but a better way to handle ingress might be to use an ingress-controller to proxy our requests to the right backend service. This post will take us through how to integrate an ingress-controller into our Kubernetes cluster.\u003c/p\u003e\n\u003ch2 id=\"ingress-controllers---the-theory\"\u003eIngress Controllers - The Theory\u003c/h2\u003e\n\u003cp\u003eLets first talk about why we\u0026rsquo;d want to use an ingress controller in the first place. Take an example web application like you might have for a retail store. 
That web application might have an index page at \u0026ldquo;\u003ca href=\"http://store-name.com/%22\"\u003ehttp://store-name.com/\u0026quot;\u003c/a\u003e and a shopping cart page at \u0026ldquo;\u003ca href=\"http://store-name.com/cart%22\"\u003ehttp://store-name.com/cart\u0026quot;\u003c/a\u003e and an api URI at \u0026ldquo;\u003ca href=\"http://store-name.com/api%22\"\u003ehttp://store-name.com/api\u0026quot;\u003c/a\u003e. We could build all these in a single container, but perhaps each of those becomes their own set of pods so that they can all scale out independently. If the API needs more resources, we can just increase the number of pods and nodes for the api service and leave the / and the /cart services alone. It also allows for multiple groups to work on different parts simultaneously but we\u0026rsquo;re starting to drift off the point which hopefully you get now.\u003c/p\u003e","title":"Kubernetes - Ingress"},{"content":"You\u0026rsquo;ve been working with Kubernetes for a while now and no doubt you have lots of clusters and namespaces to deal with now. This might be a good time to introduce Kubernetes KUBECONFIG files and context so you can more easily use all of these different resources.\nKUBECONFIG and Context - The Theory When you first setup your Kubernetes cluster you created a config file likely stored in your $HOME/.kube directory. This is the KUBECONFIG file and it is used to store information about your connection to the Kubernetes cluster. When you use kubectl to execute commands, it gets the correct communication information from this KUBECONFIG file. This is why you would\u0026rsquo;ve needed to add this file to your $PATH variable so that it could be used correctly by the kubectl commands.\nThe KUBECONFIG file contains several things of interest including the cluster information so that kubectl is executing commands on the correct cluster. It also stores authentication information such as username/passwords, certificates or tokens. Lastly, the KUBECONFIG file stores contexts.\nContexts group access information under an easily recognizable name. So a context would include a cluster, a user and a namespace. Remember in the previous post where we talked about namespaces and how we could logically separate our Kubernetes cluster? Now we can use KUBECONFIG and context to set our default namespaces. No more logging in and being dumped into the default namespace.\nKUBECONFIG and Context - In Action Lets start by deploying a new namespace and an example pod so that we have something to work with.\nkind: Namespace apiVersion: v1 metadata: name: hollow-namespace labels: name: hollow-namespace --- apiVersion: v1 kind: Pod metadata: name: nginx namespace: hollow-namespace labels: name: nginx spec: containers: - name: nginx image: nginx kubectl apply -f {manifest file].yml Now we\u0026rsquo;ve got a new namespace named \u0026ldquo;hollow-namespace\u0026rdquo; that we can use to test out setting up our context in our KUBECONFIG file. Before we do that, lets take a look at our current context by running:\nkubectl config current-context Based on my setup my context is named kubernetes-admin@kubernetes. Lets take a closer look at the configuration by running:\nkubectl config view The output from the previous command shows us a good deal of detail about our current configuration. In fact, you can probably find this same information if you open your KUBECONFIG file in a text editor. NOTE: the KUBECONFIG file is probably not named \u0026ldquo;kubeconfig\u0026rdquo;. 
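In practice kubectl reads whichever file the KUBECONFIG environment variable points at, falling back to $HOME/.kube/config if the variable is not set, so the actual file name does not matter. Pointing kubectl at a differently named file, using a made up path here, looks like this:
export KUBECONFIG=$HOME/mycluster.conf
kubectl config view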
Mine was named admin.conf but the name isn\u0026rsquo;t really important.\nWe can see that in our config file, we have a \u0026ldquo;Contexts\u0026rdquo; section. This section sets our default cluster, namespace and user that kubectl will use with its commands.\nNow let\u0026rsquo;s update our KUBECONFIG with our new namespace that we created. We can do this via kubectl by running the following command:\nkubectl config set-context theithollow --namespace=hollow-namespace --cluster=kubernetes --user=kubernetes-admin You can create your own context name and change the namespace as you see fit. Since this is for theITHollow, I\u0026rsquo;ve used that as the context.\nOnce we\u0026rsquo;ve set a new context, we can re-run the \u0026ldquo;config view\u0026rdquo; command to see if our context changed at all. You should see the context section has been updated.\nSo if we run our get pods right now, we shouldn\u0026rsquo;t see any pods because we\u0026rsquo;re still in the default namespace. Let\u0026rsquo;s change our context so that we\u0026rsquo;re using the new namespace. In my case the hollow-namespace. Then we\u0026rsquo;ll view our current-context again after changing the context in use.\nkubectl config use-context theithollow kubectl config current-context Lets run a get pods command and if we did everything right, we should see our pod from the \u0026ldquo;hollow-namespace\u0026rdquo; instead of the pods from the default namespace.\nSummary After this post you should feel more comfortable not only with namespaces and how they can be set to default, but how to configure connections to multiple Kubernetes clusters. If you\u0026rsquo;d like to reset your cluster we can run the following commands to delete our namespace and pods, as well as removing the context information that was added and setting our context back to normal.\nkubectl config use-context kubernetes-admin@kubernetes kubectl config unset contexts.theithollow #replace theithollow with your context name kubectl delete -f [manifest name].yml #Manifest is the file used to deploy the namespace and the naked pod. ","permalink":"https://theithollow.com/2019/02/11/kubernetes-kubeconfig-and-context/","summary":"\u003cp\u003eYou\u0026rsquo;ve been working with Kubernetes for a while now and no doubt you have lots of clusters and namespaces to deal with now. This might be a good time to introduce Kubernetes KUBECONFIG files and context so you can more easily use all of these different resources.\u003c/p\u003e\n\u003ch2 id=\"kubeconfig-and-context---the-theory\"\u003eKUBECONFIG and Context - The Theory\u003c/h2\u003e\n\u003cp\u003eWhen you first setup your Kubernetes cluster you created a config file likely stored in your $HOME/.kube directory. This is the KUBECONFIG file and it is used to store information about your connection to the Kubernetes cluster. When you use kubectl to execute commands, it gets the correct communication information from this KUBECONFIG file. This is why you would\u0026rsquo;ve needed to add this file to your $PATH variable so that it could be used correctly by the kubectl commands.\u003c/p\u003e","title":"Kubernetes - KUBECONFIG and Context"},{"content":"In this post we\u0026rsquo;ll start exploring ways that you might be able to better manage your Kubernetes cluster for security or organizational purposes. 
Namespaces become a big piece of how your Kubernetes cluster operates and who sees what inside your cluster.\nNamespaces - The Theory The easiest way to think of a namespace is that its a logical separation of your Kubernetes Cluster. Just like you might have segmented a physical server into several virtual severs, we can segment our Kubernetes cluster into namespaces. Namespaces are used to isolate resources within the control plane. For example if we were to deploy a pod in two different namespaces, an administrator running the \u0026ldquo;get pods\u0026rdquo; command may only see the pods in one of the namespaces. The pods could communicate with each other across namespaces however.\nThe Kubernetes cluster we\u0026rsquo;ve been working with should have a few namespaces built in to our default deployment. These include:\nDefault - This is the namespace where all our pods and services run unless we specify a different one. This is the namespace our work has been completed in up until this point. kube-public - The public namespace is available to everyone with access to the Kubernetes cluster Kube-System - The system namespace is being used by your cluster right now. It stores objects related to the management of the Kubernetes cluster. It\u0026rsquo;s probably smart to leave the kube-system namespace alone. Here be dragons. So why would we create additional namespaces? Namespaces can be used for security purposes so that you can couple it with role based access control (RBAC) for your users. Instead of building multiple Kubernetes clusters which might waste resources, we can build a single cluster and then carve it up into namespaces if we need to give different teams their own space to work.\nYour cluster might also be carved up into namespaces by environment, such as Production and Development. Service Names can be re-used if they are placed in different namespaces so your code can be identical between the namespaces and not conflict with each other.\nIt\u0026rsquo;s also possible that you just want to segment some of your containers so that not everyone sees them. Maybe you\u0026rsquo;ve got a shared service that would be used between teams and don\u0026rsquo;t want it to show up in each team\u0026rsquo;s work space. Namespaces can be a great tool for Kubernetes hygiene.\nNamespaces - In Action To get our hands dirty, lets start by listing the namespaces that currently exist in our Kubernetes cluster. We can do this by running:\nkubectl get namespaces There are the three out of the box namespaces we discussed in our theory section. Now let\u0026rsquo;s create a new namespace with our desired state manifest files as we have in previous posts. We\u0026rsquo;ll also deploy a naked pod within this container to show that we can do it, and to later show how namespaces segment our resources.\nkind: Namespace apiVersion: v1 metadata: name: hollow-namespace labels: name: hollow-namespace --- apiVersion: v1 kind: Pod metadata: name: nginx namespace: hollow-namespace labels: name: nginx spec: containers: - name: nginx image: nginx Deploy the manifest file above with the:\nkubectl apply -f [manifest file].yml Once the namespace and pod have been created, lets run a quick get pod command as well, just to see what\u0026rsquo;s happening.\nThe screenshot shows that we created a new namespace and we created a new pod, but when we did our get pods command, nothing was listed. Thats because our context is still set to the default namespace, as it is by default. 
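If you want to double check that, kubectl can show you which namespace each of your contexts is pointed at. An empty NAMESPACE column means that context is using the default namespace.
kubectl config get-contexts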
To see our pods in other namespaces we have a couple of options.\nFirst we can list all pods across all namespaces if our permissions allow. We can do this by running:\nkubectl get pod --all-namespaces Wow, there are a lot of other pods running besides the ones we\u0026rsquo;ve deployed! Most of these pods are running in the kube-system namespace though and we\u0026rsquo;ll leave them alone.\nRunning our commands across all namespaces can be too much to deal with, what if we just want to see all pods in a single namespace instead? For this we can run:\nkubectl get pod --namespace=[namespace name] Summary Now we\u0026rsquo;ve seen how we can logically segment our cluster into different areas for our teams to work. In future posts we\u0026rsquo;ll discuss namespaces more including how to set the current context so that we are using the correct namespace when we log in to our cluster.\nTo delete the pod and namespace used in this post run the following command:\nkubectl delete -f [manifest file].yml ","permalink":"https://theithollow.com/2019/02/06/kubernetes-namespaces/","summary":"\u003cp\u003eIn this post we\u0026rsquo;ll start exploring ways that you might be able to better manage your Kubernetes cluster for security or organizational purposes. Namespaces become a big piece of how your Kubernetes cluster operates and who sees what inside your cluster.\u003c/p\u003e\n\u003ch2 id=\"namespaces---the-theory\"\u003eNamespaces - The Theory\u003c/h2\u003e\n\u003cp\u003eThe easiest way to think of a namespace is that its a logical separation of your Kubernetes Cluster. Just like you might have segmented a physical server into several virtual severs, we can segment our Kubernetes cluster into namespaces. Namespaces are used to isolate resources within the control plane. For example if we were to deploy a pod in two different namespaces, an administrator running the \u0026ldquo;get pods\u0026rdquo; command may only see the pods in one of the namespaces. The pods could communicate with each other across namespaces however.\u003c/p\u003e","title":"Kubernetes - Namespaces"},{"content":"A critical part of deploying containers within a Kubernetes cluster is understanding how they use the network. In previous posts we\u0026rsquo;ve deployed pods and services and were able to access them from a client such as a laptop, but how did that work exactly? I mean, we had a bunch of ports configured in our manifest files, so what do they mean? And what do we do if we have more than one pod that wants to use the same port like 443 for https?\nThis post will cover three options for publishing our services for accessing our applications.\nClusterIP - The Theory Whenever a service is created a unique IP address is assigned to the service and that IP Address is called the \u0026ldquo;ClusterIP\u0026rdquo;. Now since we need our Services to stay consistent, the IP address of that Service needs to stay the same. Remember that pods come and go, but services will need to be pretty consistent so that we have an address to always access for our applications or users.\nOk, big deal right. Services having an IP assigned to them probably doesn\u0026rsquo;t surprise anyone, but what you should know is that this ClusterIP isn\u0026rsquo;t accessible from outside of the Kubernetes cluster. 
This is an internal IP only meaning that other pods can use a Services Cluster IP to communicate between them but we can\u0026rsquo;t just put this IP address in our web browser and expect to get connected to the service in our Kubernetes cluster.\nNodePort - The Theory NodePort might seem familiar to you. Thats because we used NodePort when we deployed our sample deployment in the Services and Labels post. A NodePort exposes a service on each node\u0026rsquo;s IP address on a specific port. NodePort doesn\u0026rsquo;t replace ClusterIP however, all it does is direct traffic to the ClusterIP from outside the cluster.\nSome important information about NodePorts is that you can just publish your service on any port you want. For example if you have a web container deployed you\u0026rsquo;d likely be tempted to use a NodePort of 443 so that you can use a standard https port. This won\u0026rsquo;t work since NodePort must be within the port range 30000-32767. You can specify which port you want to use as long as its in this range, which is what we did in our previous post, but be sure it doesn\u0026rsquo;t conflict with another service in your cluster. If you don\u0026rsquo;t care what port it is, don\u0026rsquo;t specify one and your cluster will randomly assign one for you.\nLoadBalancer - The Theory LoadBalancers are not going to be covered in depth in this post. What you should know about them right now is that they won\u0026rsquo;t work if your cluster is build on-premises, like on a vSphere enviornment. If you\u0026rsquo;re using a cloud service like Amazon Elastic Container Service for Kubernetes (EKS) or other cloud provider\u0026rsquo;s k8s solution, then you can specify a load balancer in your manifest file. What it would do is spin up a load balancer in the cloud and point the load balancer to your service. This would allow you to use port 443 for example on your load balancer and direct traffic to one of those 30000 or higher ports.\nClusterIP - In Action As we mentioned, we can\u0026rsquo;t access our containers from outside our cluster by using just a ClusterIP so we\u0026rsquo;ll deploy a test container within the Kubernetes cluster and run a curl command against our nginx service. The picture below describes the process we\u0026rsquo;ll be testing.\nFirst, lets deploy our manifest which includes our service and nginx pod and container.\n--- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx-container image: nginx ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: ingress-nginx spec: type: ClusterIP ports: - name: http port: 80 targetPort: 80 protocol: TCP selector: app: nginx We can deploy the manifest via:\nkubectl apply -f [manifest file].yml Once our pod and service has been deployed we can look at the service information to find the ClusterIP\nWe can see that the ClusterIP for our ingress-nginx service is 10.101.199.109. So next, we\u0026rsquo;re going to deploy another container just to run a curl command from within the cluster. 
To do this we\u0026rsquo;ll run some imperative commands instead of the declarative manifest files.\nkubectl create -f https://k8s.io/examples/application/shell-demo.yaml kubectl exec -it shell-demo -- /bin/bash Once you\u0026rsquo;ve run the two commands above, you\u0026rsquo;ll have a new pod named shell-demo and you\u0026rsquo;ve gotten an interactive terminal session into the container. Now we need to update the container and install curl.\napt-get update apt-get install curl curl [CLUSTERIP] As you can see, we can communicate with this service by using the ClusterIP from within the cluster.\nTo stop this test you can exit the interactive shell by ctrl+d. Then we can delete the test pod by running:\nkubectl delete pod shell-demo Then we can remove our service and pod by running:\nkubectl delete -f [manifest file].yml NodePort - In Action In this test, we\u0026rsquo;ll access our backend pods through a NodePort from outside our Kubernetes cluster. The diagram below should give you a good idea of where the ClusterIP, NodePort and Containers fit in. Additional containers (app) were added to help better understand how they might fit in.\nLets deploy another service and pod where we\u0026rsquo;ll also specify a NodePort of 30001. Below is another declarative manifest files.\napiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx-container image: nginx ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: ingress-nginx spec: type: NodePort ports: - name: http port: 80 targetPort: 80 nodePort: 30001 protocol: TCP selector: app: nginx We can deploy our manifest file with:\nkubectl apply -f [manifest file].yml Once it\u0026rsquo;s deployed we can look at the services again. You\u0026rsquo;ll notice this is the same command as we ran earlier but this time there are two ports listed. The second port is whats mapped to our NodePort.\nIf we access the IP Address of one of our nodes with the port we specified, we can see our nginx page.\nAnd thats it. Now we know how we can publish our services both externally and internally. To delete your pods run:\nkubectl delete -f [manifest file].yml Summary There are different ways to publish your services depending on what your goals are. We\u0026rsquo;ll learn about other ways to expose your services externally in a future post, but for now we\u0026rsquo;ve got a few weapons in our arsenal to expose our pods to other pods or externally to the clients.\n","permalink":"https://theithollow.com/2019/02/05/kubernetes-service-publishing/","summary":"\u003cp\u003eA critical part of deploying containers within a Kubernetes cluster is understanding how they use the network. In \u003ca href=\"/2019/01/26/getting-started-with-kubernetes/\"\u003eprevious posts\u003c/a\u003e we\u0026rsquo;ve deployed pods and services and were able to access them from a client such as a laptop, but how did that work exactly? I mean, we had a bunch of ports configured in our manifest files, so what do they mean? And what do we do if we have more than one pod that wants to use the same port like 443 for https?\u003c/p\u003e","title":"Kubernetes - Service Publishing"},{"content":"It\u0026rsquo;s quite possible that you could have a Kubernetes cluster but never have to know what an endpoint is or does, even though you\u0026rsquo;re using them behind the scenes. 
Just in case you need to use one though, or if you need to do some troubleshooting, we\u0026rsquo;ll cover the basics of Kubernetes endpoints in this post.\nEndpoints - The Theory During the post where we first learned about Kubernetes Services, we saw that we could use labels to match a frontend service with a backend pod automatically by using a selector. If any new pods had a specific label, the service would know how to send traffic to it. Well the way that the service knows to do this is by adding this mapping to an endpoint. Endpoints track the IP Addresses of the objects the service send traffic to. When a service selector matches a pod label, that IP Address is added to your endpoints and if this is all you\u0026rsquo;re doing, you don\u0026rsquo;t really need to know much about endpoints. However, you can have Services where the endpoint is a server outside of your cluster or in a different namespace (which we haven\u0026rsquo;t covered yet).\nWhat you should know about endpoints is that there is a list of addresses your services will send traffic and its managed through endpoints. Those endpoints can be updated automatically through labels and selectors, or you can manually configure your endpoints depending on your use case.\nEndpoints - In Action Let\u0026rsquo;s take a look at some endpoints that we\u0026rsquo;ve used in our previous manifests. Lets deploy this manifest as we did in our previous posts and take a look to see what our endpoints are doing.\napiVersion: apps/v1 #version of the API to use kind: Deployment #What kind of object we\u0026#39;re deploying metadata: #information about our object we\u0026#39;re deploying name: nginx-deployment #Name of the deployment labels: #A tag on the deployments created app: nginx spec: #specifications for our object replicas: 2 #The number of pods that should always be running selector: #which pods the replica set should be responsible for matchLabels: app: nginx #any pods with labels matching this I\u0026#39;m responsible for. template: #The pod template that gets deployed metadata: labels: #A tag on the replica sets created app: nginx spec: containers: - name: nginx-container #the name of the container within the pod image: nginx #which container image should be pulled ports: - containerPort: 80 #the port of the container within the pod --- apiVersion: v1 #version of the API to use kind: Service #What kind of object we\u0026#39;re deploying metadata: #information about our object we\u0026#39;re deploying name: ingress-nginx #Name of the service spec: #specifications for our object type: NodePort #Ignore for now discussed in a future post ports: #Ignore for now discussed in a future post - name: http port: 80 targetPort: 80 nodePort: 30001 protocol: TCP selector: #Label selector used to identify pods app: nginx We can deploy this manifest running:\nkubectl apply -f [manifest file].yml Now that its done we should be able to see the endpoints that were automatically created when our service selector matched our pod label. To see this we can query our endpoints through kubectl.\nkubectl get endpoints Your results will likely have different addresses than mine, but for reference, here are the endpoints that were returned.\nThe ingress-nginx endpoint is the one we\u0026rsquo;re focusing on and you can see it has two endpoints listed, both on port 80.\nNow those endpoints should be the IP addresses of our pods that we deployed in our manifest. 
To test this, lets use the get pods command with the -o wide switch to show more output.\nkubectl get pods -o wide You can see that the IP addresses associated with the pods matches the endpoints. So we proved that the endpoints are matching under the hood.\nHow about if we want to manually edit our endpoints if we don\u0026rsquo;t have a selector? Maybe we\u0026rsquo;re trying to have a resource to access an external service that doesn\u0026rsquo;t live within our Kubernetes cluster? We could create our own endpoint to do this for us. A great example might be an external database service for our web or app containers.\nLet\u0026rsquo;s look at an example where we\u0026rsquo;re using an endpoint to access an external resource from a container. In this case we\u0026rsquo;ll access a really simple web page just for a test. For reference, I accessed this service from my laptop first to prove that it\u0026rsquo;s working. If you\u0026rsquo;re doing this in your lab, you\u0026rsquo;ll need to spin up a web server and modify the IP Addresses accordingly.\nI know that it\u0026rsquo;s not very exciting, but it\u0026rsquo;ll get the job done. Next up we\u0026rsquo;ll deploy our Endpoint and a service with no selector. The following manifest should do the trick. Notice the ports used and the IP Address specified in the endpoint.\nkind: \u0026#34;Service\u0026#34; apiVersion: \u0026#34;v1\u0026#34; metadata: name: \u0026#34;external-web\u0026#34; spec: ports: - name: \u0026#34;apache\u0026#34; protocol: \u0026#34;TCP\u0026#34; port: 80 targetPort: 80 --- kind: \u0026#34;Endpoints\u0026#34; apiVersion: \u0026#34;v1\u0026#34; metadata: name: \u0026#34;external-web\u0026#34; subsets: - addresses: - ip: \u0026#34;10.10.50.53\u0026#34; #The IP Address of the external web server ports: - port: 80 name: \u0026#34;apache\u0026#34; Then we\u0026rsquo;ll deploy the manifest file to get our service and endpoint built.\nkubectl apply -f [manifest file] Let\u0026rsquo;s check the endpoint just to make sure it looks correct.\nkubectl get endpoints Once the service and endpoint are deployed, we\u0026rsquo;ll deploy a quick container into our cluster so we can use it to curl our web page as a test. We\u0026rsquo;ll use the imperative commands instead of a manifest file this time.\nkubectl create -f https://k8s.io/examples/application/shell-demo.yaml kubectl exec -it shell-demo -- /bin/bash apt-get update apt-get install curl curl external-web The curl command worked when performing a request against the \u0026ldquo;external-web\u0026rdquo; service, so we know it\u0026rsquo;s working!\nSummary If you\u0026rsquo;ve been following the series so far, you\u0026rsquo;ve already been using endpoints but just didn\u0026rsquo;t know it. You can add your own endpoints manually if you have a use case such as accessing a remote service like a database but for now its probably enough just to get the idea that they exist and that you can modify them if necessary.\n","permalink":"https://theithollow.com/2019/02/04/kubernetes-endpoints/","summary":"\u003cp\u003eIt\u0026rsquo;s quite possible that you could have a Kubernetes cluster but never have to know what an endpoint is or does, even though you\u0026rsquo;re using them behind the scenes. 
Just in case you need to use one though, or if you need to do some troubleshooting, we\u0026rsquo;ll cover the basics of Kubernetes endpoints in this post.\u003c/p\u003e\n\u003ch2 id=\"endpoints---the-theory\"\u003eEndpoints - The Theory\u003c/h2\u003e\n\u003cp\u003eDuring the \u003ca href=\"/?p=9427\"\u003epost\u003c/a\u003e where we first learned about Kubernetes Services, we saw that we could use labels to match a frontend service with a backend pod automatically by using a selector. If any new pods had a specific label, the service would know how to send traffic to it. Well the way that the service knows to do this is by adding this mapping to an endpoint. Endpoints track the IP Addresses of the objects the service send traffic to. When a service selector matches a pod label, that IP Address is added to your endpoints and if this is all you\u0026rsquo;re doing, you don\u0026rsquo;t really need to know much about endpoints. However, you can have Services where the endpoint is a server outside of your cluster or in a different namespace (which we haven\u0026rsquo;t covered yet).\u003c/p\u003e","title":"Kubernetes - Endpoints"},{"content":"If you\u0026rsquo;ve been following the series, you may be thinking that we\u0026rsquo;ve built ourselves a problem. You\u0026rsquo;ll recall that we\u0026rsquo;ve now learned about Deployments so that we can roll out new pods when we do upgrades, and replica sets can spin up new pods when one dies. Sounds great, but remember that each of those containers has a different IP address. Now, I know we haven\u0026rsquo;t accessed any of those pods yet, but you can imagine that it would be a real pain to have to go lookup an IP Address every time a pod was replaced, wouldn\u0026rsquo;t it? This post covers Kubernetes Services and how they are used to address this problem, and at the end of this post, we\u0026rsquo;ll access one of our pods \u0026hellip; finally.\nServices - The Theory In the broadest sense, Kubernetes Services tie our pods together and provide a front end resource to access. You can think of them like a load balancer that automatically knows which servers it is trying to load balance. Since our pods may be created and destroyed even without our intervention, we\u0026rsquo;ll need a stable way to access them at a single address every time. Services give us a static resource to access that abstracts the pods behind them.\nOK, we probably don\u0026rsquo;t have a hard time understanding that a Service sits in front of our pods and distributes requests to them, but we might be asking how the service knows which pods it should be providing a front end for. I mean of course we\u0026rsquo;ll have pods for different reasons running in our cluster and we\u0026rsquo;ll need to assign services to our pods somehow. Well that introduces a really cool specification called labels. If you were paying close attention in previous posts, we had labels on some of our pods already. Labels are just a key value pair, or tag, that provides metadata on our objects. A Kubernetes Service can select the pods it is supposed to abstract through a label selector. Neat huh?\nTake a look at the example diagram below. Here we have a single Service that is front-ending two of our pods. The two pods have labels named \u0026ldquo;app: nginx\u0026rdquo; and the Service has a label selector that is looking for those same labels. 
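Stripped down to just the matching pieces, that relationship is nothing more than the same key/value pair showing up in two places. The fragment below is only for illustration and is not a complete manifest.
#on the pod template
metadata:
  labels:
    app: nginx
#on the service
spec:
  selector:
    app: nginx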
This means that even though the pods might change addresses, as long as they are labeled correctly, the service, which stays with a constant address, will send traffic to them.\nYou might also notice that there is a Pod3 that has a different label. The Service 1 service won\u0026rsquo;t front end that pod so we\u0026rsquo;d need another service that would take care of that for us. Now we\u0026rsquo;ll use a service to access our nginx pod later in this post, but remember that many apps are multiple tiers. Web talks to app which talks to database. In that scenario all three of those tiers may need a consistent service for the others to communicate properly all while pods are spinning up and down.\nThe way in which services are able to send traffic to the backend pods is through the use of the kube-proxy. Every node of our Kubernetes cluster runs a proxy called the kube-proxy. This proxy listens to the master node API for services as well as endpoints (covered in a later post). Whenever it finds a new service, the kube-proxy opens a random port on the node in which it belongs. This port proxies connections to the backend pods.\nServices and Labels - In Action I know you\u0026rsquo;re eager to see if your cluster is really working or not, so let\u0026rsquo;s get to deploying our deployment manifest like we built in a previous post and then a Service to front-end that deployment. When we\u0026rsquo;re done, we\u0026rsquo;ll pull it open in a web browser to see an amazing webpage.\nLet\u0026rsquo;s start off by creating a new manifest file and deploying it to our Kubernetes cluster. The file is below and has two objects, Deployment \u0026amp; Service within the same file.\napiVersion: apps/v1 #version of the API to use kind: Deployment #What kind of object we\u0026#39;re deploying metadata: #information about our object we\u0026#39;re deploying name: nginx-deployment #Name of the deployment labels: #A tag on the deployments created app: nginx spec: #specifications for our object replicas: 2 #The number of pods that should always be running selector: #which pods the replica set should be responsible for matchLabels: app: nginx #any pods with labels matching this I\u0026#39;m responsible for. template: #The pod template that gets deployed metadata: labels: #A tag on the replica sets created app: nginx spec: containers: - name: nginx-container #the name of the container within the pod image: nginx #which container image should be pulled ports: - containerPort: 80 #the port of the container within the pod --- apiVersion: v1 #version of the API to use kind: Service #What kind of object we\u0026#39;re deploying metadata: #information about our object we\u0026#39;re deploying name: ingress-nginx #Name of the service spec: #specifications for our object type: NodePort #Ignore for now discussed in a future post ports: #Ignore for now discussed in a future post - name: http port: 80 targetPort: 80 nodePort: 30001 protocol: TCP selector: #Label selector used to identify pods app: nginx We can deploy this by running a command we should be getting very familiar with at this point.\nkubectl apply -f [manifest file].yml After the the manifest has been deployed we can look at the pods, replica sets or deployments like we have before and now we can also look at our services by running:\nkubectl get services Neat! Now we have a Service named ingress-nginx which we defined in our manifest file. We also have a Kubernetes Service which, for now, we\u0026rsquo;ll ignore. Just know this is used to run the cluster. 
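If you want a closer look at what that new service is actually doing, kubectl describe will show the label selector it's using, the pod IPs it resolved as endpoints, and the NodePort it was handed. This assumes the ingress-nginx name from the manifest above.
kubectl describe service ingress-nginx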
But do take a second to notice the ports column. Our ingress-ngnix service shows 80:30001/TCP. This will be discussed further in a future post, but the important thing is that the port after the colon \u0026ldquo;:\u0026rdquo; is the port we\u0026rsquo;ll access the service on from our computer.\nHere\u0026rsquo;s the real test, can we open a web browser and put in an IP Address of one of our Kubernetes nodes on port 30001 and get an nginx page?\nSummary Well the result isn\u0026rsquo;t super exciting. We have a basic nginx welcome page which isn\u0026rsquo;t really awe inspiring, but we did finally access an app on our Kubernetes cluster and it was by using Services and Labels coupled with our pods that we\u0026rsquo;ve been learning. Stay tuned for the next post where we dive deeper into Kubernetes.\n","permalink":"https://theithollow.com/2019/01/31/kubernetes-services-and-labels/","summary":"\u003cp\u003eIf you\u0026rsquo;ve been following \u003ca href=\"/2019/01/26/getting-started-with-kubernetes/\"\u003ethe series\u003c/a\u003e, you may be thinking that we\u0026rsquo;ve built ourselves a problem. You\u0026rsquo;ll recall that we\u0026rsquo;ve now learned about Deployments so that we can roll out new pods when we do upgrades, and replica sets can spin up new pods when one dies. Sounds great, but remember that each of those containers has a different IP address. Now, I know we haven\u0026rsquo;t accessed any of those pods yet, but you can imagine that it would be a real pain to have to go lookup an IP Address every time a pod was replaced, wouldn\u0026rsquo;t it? This post covers Kubernetes Services and how they are used to address this problem, and at the end of this post, we\u0026rsquo;ll access one of our pods \u0026hellip; finally.\u003c/p\u003e","title":"Kubernetes - Services and Labels"},{"content":"After following the previous posts, we should feel pretty good about deploying our pods and ensuring they are highly available. We\u0026rsquo;ve learned about naked pods and then replica sets to make those pods more HA, but what about when we need to create a new version of our pods? We don\u0026rsquo;t want to have an outage when our pods are replaced with a new version do we? This is where \u0026ldquo;Deployments\u0026rdquo; comes into play.\nDeployments - The Theory If you couldn\u0026rsquo;t tell from the introduction to this post, Deployments are an amazing object for us to handle changes to our applications. While replica sets are used to ensure a desired number of pods are always running and handle our high availability concerns, Deployments ensure that we can safely rollout new versions of our pods safely and without outages. They also make it possible to rollback a deployment if there is some terrible issue with the new version.\nNow, I want to make sure that we don\u0026rsquo;t think that Deployments replace replica sets because they don\u0026rsquo;t. Deployments are a construct a level above replica sets and actually manage the replica set objects.\nSo Deployments manage replica sets and replica sets manage pods and pods manage containers.\nDeployments - In Action Just as we have done with the other posts in this series we\u0026rsquo;ll start with creating a manifest file of our desired state configuration. Kubernetes will ensure that this desired state is applied and any items that need to be orchestrated will be handled to meet this configuration.\nWe\u0026rsquo;ll start by adding Deployment information to the replica set manifest we built previously. 
Remember that a Deployment sits at a level above replica sets so we can add the new construct to our manifest file we created in the replica set post. As usual, comments have been added to the file so it\u0026rsquo;s easier to follow.\napiVersion: apps/v1 #version of the API to use kind: Deployment #What kind of object we\u0026#39;re deploying metadata: #information about our object we\u0026#39;re deploying name: nginx-deployment #Name of the deployment labels: #A tag on the deployments created app: nginx spec: #specifications for our object replicas: 2 #The number of pods that should always be running selector: #which pods the replica set should be responsible for matchLabels: app: nginx #any pods with labels matching this I\u0026#39;m responsible for. template: #The pod template that gets deployed metadata: labels: #A tag on the replica sets created app: nginx spec: containers: - name: nginx-container #the name of the container within the pod image: nginx #which container image should be pulled ports: - containerPort: 80 #the port of the container within the pod Now we\u0026rsquo;ll deploy this configuration to our kubernetes cluster by running:\nkubectl apply -f [manifest file].yml Once the deployment has been applied to our cluster, we\u0026rsquo;ll run:\nkubectl get deployments The results of our deployment should look similar to the following screenshot.\nThere are several columns listed here but the gist of it is:\nDESIRED - 2 replicas of the application were in our configuration. CURRENT - 2 replicas are currently running. UP-TO-DATE - 2 replicas that have been updated to get to the configuration we specified. AVAILABLE - 2 replicas are available for use This information may not seem to interesting at the moment, but remember that Deployments can take an existing set and perform a rolling update on them. When this occurs the information shown here may be more important.\nLet\u0026rsquo;s try to update our deployment and see what happens. Let\u0026rsquo;s modify our deployment manifest file to increase the number of replicas and also change the version of nginx that is being deployed. The file below makes those changes for you.\napiVersion: apps/v1 #version of the API to use kind: Deployment #What kind of object we\u0026#39;re deploying metadata: #information about our object we\u0026#39;re deploying name: nginx-deployment #Name of the deployment labels: #A tag on the deployments created app: nginx spec: #specifications for our object strategy: type: RollingUpdate rollingUpdate: #Update Pods a certain number at a time maxUnavailable: 1 #Total number of pods that can be unavailable at once maxSurge: 1 #Maximum number of pods that can be deployed above desired state replicas: 6 #The number of pods that should always be running selector: #which pods the replica set should be responsible for matchLabels: app: nginx #any pods with labels matching this I\u0026#39;m responsible for. template: #The pod template that gets deployed metadata: labels: #A tag on the replica sets created app: nginx spec: containers: - name: nginx-container #the name of the container within the pod image: nginx:1.7.9 #which container image should be pulled ports: - containerPort: 80 #the port of the container within the pod We can apply the configuration file again and then check our Deployment status using some familiar commands:\nkubectl apply -f [new manifest file].yml kubectl get deployments Now we see some more interesting information. 
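It's also worth peeking at the replica sets themselves while the update runs, because the Deployment is really scaling a new replica set up while it scales the old one down behind the scenes.
kubectl get replicasets
If you're quick enough, you'll briefly see two replica sets for nginx-deployment, one shrinking and one growing.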
Depending on how fast you were with your commands, you might get different results so use the screenshot below for this discussion.\nSince I ran the get deployments command before the whole thing was finished, my desired state doesn\u0026rsquo;t match the current state. In fact, the current state has MORE replicas than are desired. The reason for this is that the new pods are deployed first and then the old pods are removed.\nIn the example below, you can see that we\u0026rsquo;re going to remove the version 1 containers while we spin up version 2. To do this we spin up one pod with the new version and then start terminating the old pods. When this is done, the next pod will be created in version 2 and eventually the last pod in version 1 will be removed.\nThe number of pods being deployed above desired state is configured in our manifest in the maxSurge specification. We could have made this number 2 and then two pods would be created at a time and two removed.\nNow, sometimes you have a bigger update than the one we used for a demonstration. In that case you might not want to keep running the get deployments command. You can run:\nkubectl rollout status deployment [deployment name] This command will show you a running list of whats happening with the rollout. I ran this during the deploy and here is an example of what you might see.\nSummary Deployments are just another step along the way in learning Kubernetes. We\u0026rsquo;ve gotten to a pretty good point here and now we know how we can deploy and update our deployments in our cluster. We STILL haven\u0026rsquo;t accessed our containers yet, but just hold on, we\u0026rsquo;re at the cusp of having a working container.\nWhen you\u0026rsquo;re done messing around with your cluster you can delete the deployment by running:\nkubectl delete -f [manifest file].yml ","permalink":"https://theithollow.com/2019/01/30/kubernetes-deployments/","summary":"\u003cp\u003eAfter following the previous posts, we should feel pretty good about deploying our \u003ca href=\"/2019/01/21/kubernetes-pods/\"\u003epods\u003c/a\u003e and ensuring they are highly available. We\u0026rsquo;ve learned about naked pods and then \u003ca href=\"/2019/01/28/kubernetes-replica-sets/\"\u003ereplica sets\u003c/a\u003e to make those pods more HA, but what about when we need to create a new version of our pods? We don\u0026rsquo;t want to have an outage when our pods are replaced with a new version do we? This is where \u0026ldquo;Deployments\u0026rdquo; comes into play.\u003c/p\u003e","title":"Kubernetes - Deployments"},{"content":"In a previous post we covered the use of pods and deployed some \u0026ldquo;naked pods\u0026rdquo; in our Kubernetes cluster. In this post we\u0026rsquo;ll expand our use of pods with Replica Sets.\nReplica Sets - The Theory One of the biggest reasons that we don\u0026rsquo;t deploy naked pods in production is that they are not trustworthy. By this I mean that we can\u0026rsquo;t count on them to always be running. Kubernetes doesn\u0026rsquo;t ensure that a pod will continue running if it crashes. A pod could die for all kinds of reasons such as a node that it was running on had failed, it ran out of resources, it was stopped for some reason, etc. If the pod dies, it stays dead until someone fixes it which is not ideal, but with containers we should expect them to be short lived anyway, so let\u0026rsquo;s plan for it.\nReplica Sets are a level above pods that ensures a certain number of pods are always running. 
A Replica Set allows you to define the number of pods that need to be running at all times and this number could be \u0026ldquo;1\u0026rdquo;. If a pod crashes, it will be recreated to get back to the desired state. For this reason, replica sets are preferred over a naked pod because they provide some high availability.\nIf one of the pods thats is part of a replica set crashes, one will be created to take its place.\nReplica Sets - In Action Let\u0026rsquo;s deploy a Replica Set from a manifest file so we can see what happens during the deployment. First, we\u0026rsquo;ll need a manifest file to deploy. The manifest below will deploy nginx just like we did with the pods, except this time we\u0026rsquo;ll use a Replica Set and specify that we should always have 2 replicas running in our cluster.\napiVersion: apps/v1 #version of the API to use kind: ReplicaSet #What kind of object we\u0026#39;re deploying metadata: #information about our object we\u0026#39;re deploying name: nginx-replicaset spec: #specifications for our object replicas: 2 #The number of pods that should always be running selector: #which pods the replica set should be responsible for matchLabels: app: nginx #any pods with labels matching this I\u0026#39;m responsible for. template: #The pod template that gets deployed metadata: labels: #A tag on the pods created app: nginx spec: containers: - name: nginx-container #the name of the container within the pod image: nginx #which container image should be pulled ports: - containerPort: 80 #the port of the container within the pod Let\u0026rsquo;s deploy the Replica Sets from the manifest file by running a kubectl command and then afterwards we\u0026rsquo;ll run a command to list our pods.\nkubectl apply -f [manifest file].yml kubectl get pods As you can see from my screenshot, we now have two pods running as we were expecting. Note: if you run the command too quickly, the pods might still be in a creating state. If this happens wait a second and run the get pod command again.\nI wonder what would happen if we manually deleted one of those pods that was in the Replica Set? Let\u0026rsquo;s try it by running:\nkubectl delete pod [pod name] Run another \u0026ldquo;get pod\u0026rdquo; command quickly to see what happens after you delete one of your pods.\nAs you can see from my screenshot, I deleted a pod and then immediately ran another get pods command. You can see that one of my pods is in a Terminating status, but there is a new pod that is now running and thats because our Replica Set specified that two pods should be available. It is ensuring that we have that many at all times, even if one fails.\nSummary Now you know the basics of Replica Sets and why you\u0026rsquo;d use them over naked pods. Again, don\u0026rsquo;t worry that you can\u0026rsquo;t access your nginx container yet, we still haven\u0026rsquo;t gotten to that yet but we\u0026rsquo;re getting there. We\u0026rsquo;re going to learn a few more things before this happens but we\u0026rsquo;re well on our way now.\nIf you\u0026rsquo;re done with your testing you can remove the Replica Set we created by running:\nkubectl delete -f [mainfest file].yml ","permalink":"https://theithollow.com/2019/01/28/kubernetes-replica-sets/","summary":"\u003cp\u003eIn a \u003ca href=\"/2019/01/21/kubernetes-pods/\"\u003eprevious post\u003c/a\u003e we covered the use of pods and deployed some \u0026ldquo;naked pods\u0026rdquo; in our Kubernetes cluster. 
In this post we\u0026rsquo;ll expand our use of pods with Replica Sets.\u003c/p\u003e\n\u003ch2 id=\"replica-sets---the-theory\"\u003eReplica Sets - The Theory\u003c/h2\u003e\n\u003cp\u003eOne of the biggest reasons that we don\u0026rsquo;t deploy naked pods in production is that they are not trustworthy. By this I mean that we can\u0026rsquo;t count on them to always be running. Kubernetes doesn\u0026rsquo;t ensure that a pod will continue running if it crashes. A pod could die for all kinds of reasons such as a node that it was running on had failed, it ran out of resources, it was stopped for some reason, etc. If the pod dies, it stays dead until someone fixes it which is not ideal, but with containers we should expect them to be short lived anyway, so let\u0026rsquo;s plan for it.\u003c/p\u003e","title":"Kubernetes - Replica Sets"},{"content":"","permalink":"https://theithollow.com/homepage/","summary":"","title":"Homepage"},{"content":" The following posts are meant to get a beginner started with the process of understanding Kubernetes. They include basic level information to start understanding the concepts of the Kubernetes service and include both theory and examples.\nTo follow along with the series, a Kubernetes cluster should be deployed and admin permissions are needed to perform many of the steps. If you wish to follow along with each of the posts, a cluster with cloud provider integration may be needed. In some cases we need a Load Balancer and elastic storage options.\nIf you would like to follow along, there is a github project corresponding to this guide to save from copying and pasting code snippets.\nSetup a Kubernetes Cluster - Pick 1 Option Deploy Kubernetes on vSphere Deploy Kubernetes on AWS Using Kubernetes 1. Pods 2. Replica Sets 3. Deployments 4. Services and Labels 5. Endpoints 6. Service Publishing 7. Namespaces 8. Context 9. Ingress 10. DNS 11. ConfigMaps 12. Secrets 13. Persistent Volumes 14. Cloud Providers and Storage Classes 15. Stateful Sets 16. Role Based Access 17. Pod Backups 18. Helm Charts 19. Taints and Tolerations 20. DaemonSets 21. Network Policies 22. Pod Security Policies 23. Resource Requests and Limits 24. Pod Autoscaling 25. Liveness and Readiness Probes 26. Validating Admission Controllers 27. Jobs and CronJobs ","permalink":"https://theithollow.com/2019/01/26/getting-started-with-kubernetes/","summary":"\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2019/01/kubernetesguide-1024x610.png\"/\u003e \n\u003c/figure\u003e\n\n\u003cp\u003eThe following posts are meant to get a beginner started with the process of understanding Kubernetes. They include basic level information to start understanding the concepts of the Kubernetes service and include both theory and examples.\u003c/p\u003e\n\u003cp\u003eTo follow along with the series, a Kubernetes cluster should be deployed and admin permissions are needed to perform many of the steps. If you wish to follow along with each of the posts, a cluster with cloud provider integration may be needed. In some cases we need a Load Balancer and elastic storage options.\u003c/p\u003e","title":"Getting Started with Kubernetes"},{"content":" Amazon Web Services has released yet another service designed to improve the lives of people administering an AWS environment. 
There is a new backup service, cleverly named, AWS Backup.\nThis new service allows you to create a backup plan for Elastic Block Store (EBS) volumes, Elastic File System (EFS), DynamoDB, Relational Database Services (RDS), and Storage Gateway.\nNow we can build plans to automatically backup, tier and expire old backups automatically based on our own criteria.\nCreate a Backup Plan To get started, just login to your AWS console and look for the AWS Backup Service. You\u0026rsquo;ll get a familiar splash screen where you\u0026rsquo;ll click \u0026ldquo;Create Backup Plan\u0026rdquo;.\nFrom there, you can pick how to get started. For this post we\u0026rsquo;ll just build a new plan.\nGive the plan a name. I\u0026rsquo;ve named mine Daily-Evenings meaning that it will run at night. Pick your own schedule, but do notice that times are in UTC format so do your own math. Pick the backup window when these backups can be taken.\nAfter this you may add life cycle options to tier your backups onto lower cost storage. If you do this, you will want to pick an expiration period as well which must be 90 days or more from the time it was tiered. Then you may select which vault to use. I\u0026rsquo;ve selected the default which will be created for me automatically.\nJust like about everything else in AWS you can add your own tags. When you\u0026rsquo;re done, click \u0026ldquo;Create plan.\u0026rdquo;\nYou should see a success message with a link to assign resources to this backup plan. Click that link.\nGive the resource assignment a descriptive name. And then decide how the resources will be assigned to this backup routine. In my case I\u0026rsquo;m using a tag with a key of \u0026ldquo;backup\u0026rdquo; and a value of \u0026ldquo;evening\u0026rdquo;. This means any EBS volumes I create with a key value pair matching these will be backed up.\nNow you must wait for your backup to run. If you look under the jobs link, you\u0026rsquo;ll see that a job ran during the time frame that you specified.\nI created an EC2 instance with an EBS volume tagged with my key value pair specified earlier. Under protected resources, you can see that I have an EBS volume listed.\nIf I select the resource ID, I\u0026rsquo;m taken to a page with the snapshots created of that EBS volume. You\u0026rsquo;ll also notice that you can create an on-demand backup whenever you need, in case your schedule needs to be interrupted for an important upgrade or something.\nRestore Now to restore one of your backups, just find the resource and the associated backup shown earlier, select it and click the restore button.\nYou\u0026rsquo;ll be asked some questions like what kind of resource type will be restored, volume type and the size. Notice that the size I\u0026rsquo;m restoring is 100 Gibibytes. This is note worthy because my EBS volume was only 8 Gibibytes. Then we need to select the availability zone to restore it to.\nSelect which IAM role should be doing the restore. Default has the permissions it needs, but you can specify your own if default isn\u0026rsquo;t going to work with your companies security policies.\nOnce you\u0026rsquo;ve created the restore you can view it in the Jobs panel.\nOnce the restore is complete, go check it out in your console.\nYou can see in my EBS volumes, that I now have a 100 Gibibyte volume in the Availability Zone I selected.\nSummary There are plenty of ways to backup your resources in AWS including rolling your own snapshot routine with AWS Lambda, or using RDS snapshots natively. 
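As a side note, opting a resource into a tag-based plan like the one above doesn't require the console at all. Assuming the backup/evening key and value pair used earlier and a placeholder volume ID, a single CLI call will tag the volume so the plan picks it up.
aws ec2 create-tags --resources vol-0123456789abcdef0 --tags Key=backup,Value=evening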
But this new tool lets administrators set schedules across the environment pretty easily and have a single portal to manage them. Another great service from AWS.\n","permalink":"https://theithollow.com/2019/01/22/aws-native-backups/","summary":"\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2019/01/awsbackup1-1024x298.png\"/\u003e \n\u003c/figure\u003e\n\n\u003cp\u003eAmazon Web Services has released yet another service designed to improve the lives of people administering an AWS environment. There is a new backup service, cleverly named, AWS Backup.\u003c/p\u003e\n\u003cp\u003eThis new service allows you to create a backup plan for Elastic Block Store (EBS) volumes, Elastic File System (EFS), DynamoDB, Relational Database Services (RDS), and Storage Gateway.\u003c/p\u003e\n\u003cp\u003eNow we can build plans to automatically backup, tier and expire old backups automatically based on our own criteria.\u003c/p\u003e","title":"AWS Native Backups"},{"content":"We\u0026rsquo;ve got a Kubernetes cluster setup and we\u0026rsquo;re ready to start deploying some applications. Before we can deploy any of our containers in a kubernetes environment, we\u0026rsquo;ll need to understand a little bit about pods.\nPods - The Theory In a docker environment, the smallest unit you\u0026rsquo;d deal with is a container. In the Kubernetes world, you\u0026rsquo;ll work with a pod and a pod consists of one or more containers. You cannot deploy a bare container in Kubernetes without it being deployed within a pod.\nThe pod provides several things to the containers running within it. These include options on how the container should run, any storage resources and a network namespace. The pod encapsulates these for the containers that run inside them.\nSingle Container Pods The simplest way to get your containers deployed in Kubernetes is the one container per pod approach. When you deploy your applications and you\u0026rsquo;ve got a web container and an app container, each of them could be in their own pods. When you need to scale out either the web or app containers, you would then deploy additional pods.\nMulti-container Pods Sometimes you\u0026rsquo;ll want to deploy more than one container per pod. In general, this is done when the containers are very tightly coupled. One good reason to do this is if the containers are sharing the same storage resource between the two containers. If they are, this is a good time to use a multi-container pod. Another common reason to use multi-container pods is for things like a service mesh where a sidecar container is deployed alongside your application containers to link them together.\nIt is important to note however, that all containers that run within a pod will share an network namespace so this means that the containers are sharing an IP address and thus will need to be on different ports for them to be accessed. Containers within the same pod may access each other by using the localhost address.\nIt\u0026rsquo;s also worth noting that a pod will have all containers up or none of them. Your pods will not be considered healthy until all containers that run within them are ready to go, so you can\u0026rsquo;t have one healthy container and one unhealthy container in the same pod. The pod deployment will fail if this happens.\nThose are the pod basics. 
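To make the multi-container idea a little more concrete, here is a rough sketch of what such a manifest looks like. The busybox sidecar and its command are just placeholders to illustrate two containers sharing one pod and talking to each other over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web #the main application container
    image: nginx
    ports:
    - containerPort: 80 #the port of the container within the pod
  - name: sidecar #a helper container sharing the pod's network namespace
    image: busybox
    command: ["sh", "-c", "while true; do wget -qO- http://localhost > /dev/null; sleep 30; done"] #polls the web container over localhost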
Next up, let\u0026rsquo;s look at how to deploy a container within a pod.\nPods - In Action Before we dive too far into this, you should know that this section shows some basics about deploying your first pods in a Kubernetes cluster, but these are considered \u0026ldquo;naked pods\u0026rdquo; and should be avoided in production. This section is to get you comfortable first. In a later post we\u0026rsquo;ll discuss how to use pods with ReplicaSets or Deployments to make your pods more highly available and more useful.\nI also want to mention that there are two ways to deploy objects in Kubernetes. The first way is through the command line and the second is through a manifest file. You will likely spend time in both of them where the manifest file is your configuration stored in version control and the command line would be used for troubleshooting purposes. The focus for this blog post will be on the manifests files.\nDeploy an Nginx pod from Manifest To deploy nginx we first need to create a manifest file in YAML format. The code below will deploy our naked nginx container on our Kubernetes cluster. The file has comments for important lines so that you can see what each piece accomplishes.\napiVersion: v1 #version of the API to use kind: Pod #What kind of object we\u0026#39;re deploying metadata: #information about our object we\u0026#39;re deploying name: nginx-pod spec: #specifications for our object containers: - name: nginx-container #the name of the container within the pod image: nginx #which container image should be pulled ports: - containerPort: 80 #the port of the container within the pod Save the file and then we\u0026rsquo;ll deploy it by running the following command from our cli:\nkubectl apply -f [manifest file].yml Once deployed we should have a pod environment that looks similar to this although the IP address is likely different from mine.\nTo check on our pods, we can run:\nkubectl get pods To get even more details on the pod that was deployed, we can run:\nkubectl describe pod [pod name] The screenshot below omits a lot of the data (because it was difficult to fit on this page mainly) but it does show the events for the pod listed at the bottom. When you run this on your own you should see a wealth of knowledge about the pod.\nSummary That\u0026rsquo;s it for pods. I should note that you probably can\u0026rsquo;t connect to your container. Don\u0026rsquo;t worry, you\u0026rsquo;re not supposed to be able to yet. We\u0026rsquo;ll continue this conversation in a future post where we learn about other objects including Deployments, ReplicaSets and Services.\nIf you\u0026rsquo;re all done, you can delete your pod from your cluster by running:\nkubectl delete -f [mainfest file].yml ","permalink":"https://theithollow.com/2019/01/21/kubernetes-pods/","summary":"\u003cp\u003eWe\u0026rsquo;ve got a Kubernetes cluster setup and we\u0026rsquo;re ready to start deploying some applications. Before we can deploy any of our containers in a kubernetes environment, we\u0026rsquo;ll need to understand a little bit about pods.\u003c/p\u003e\n\u003ch2 id=\"pods---the-theory\"\u003ePods - The Theory\u003c/h2\u003e\n\u003cp\u003eIn a docker environment, the smallest unit you\u0026rsquo;d deal with is a container. In the Kubernetes world, you\u0026rsquo;ll work with a pod and a pod consists of one or more containers. 
You cannot deploy a bare container in Kubernetes without it being deployed within a pod.\u003c/p\u003e","title":"Kubernetes - Pods"},{"content":"I\u0026rsquo;ve been wanting to have a playground to mess around with Kubernetes (k8s) deployments for a while and didn\u0026rsquo;t want to spend the money on a cloud solution like AWS Elastic Container Service for Kubernetes or Google Kubernetes Engine . While these hosted solutions provide additional features such as the ability to spin up a load balancer, they also cost money every hour they\u0026rsquo;re available and I\u0026rsquo;m planning on leaving my cluster running. Also, from a learning perspective, there is no greater way to learn the underpinnings of a solution than having to deploy and manage it on your own. Therefore, I set out to deploy k8s in my vSphere home lab on some CentOS 7 virtual machines using Kubeadm. I found several articles on how to do this but somehow I got off track a few times and thought another blog post with step by step instructions and screenshots would help others. Hopefully it helps you. Let\u0026rsquo;s begin.\nPrerequisites This post will walk through the deployment of Kubernetes version 1.13.2 (latest as of this posting) on three CentOS 7 virtual machines. One virtual machine will be the Kubernetes master server where the control plane components will be run and two additional nodes where the containers themselves will be scheduled. To begin the setup, deploy three CentOS 7 servers into your environment and be sure that the master node has 2 CPUs and all three servers have at least 2 GB of RAM on them. Remember the nodes will run your containers so the amount of RAM you need on those is dependent upon your workloads.\nAlso, I\u0026rsquo;m logging in as root on each of these nodes to perform the commands. Change this as you need if you need to secure your environment more than I have in my home lab.\nPrepare Each of the Servers for K8s On all three (or more if you chose to do more nodes) of the servers we\u0026rsquo;ll need to get the OS setup to be ready to handle our kubernetes deployment via kubeadm. Lets start with stopping and disabling firewalld by running the commands on each of the servers:\nsystemctl stop firewalld systemctl disable firewalld Next, let\u0026rsquo;s disable swap. Kubeadm will check to make sure that swap is disabled when we run it, so lets turn swap off and disable it for future reboots.\nswapoff -a sed -i.bak -r \u0026#39;s/(.+ swap .+)/#\\1/\u0026#39; /etc/fstab Now we\u0026rsquo;ll need to disable SELinux if we have that enabled on our servers. I\u0026rsquo;m leaving it on, but setting it to Permissive mode.\nsetenforce 0 sed -i \u0026#39;s/^SELINUX=enforcing$/SELINUX=permissive/\u0026#39; /etc/selinux/config Next, we\u0026rsquo;ll add the kubernetes repository to yum so that we can use our package manager to install the latest version of kubernetes. To do this we\u0026rsquo;ll create a file in the /etc/yum.repos.d directory. 
The code below will do that for you.\ncat \u0026lt;\u0026lt;EOF \u0026gt; /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg exclude=kube* EOF Now that we\u0026rsquo;ve updated our repos, lets install some of the tools we\u0026rsquo;ll need on our servers including kubeadm, kubectl, kubelet, and docker.\nyum install -y docker kubelet kubeadm kubectl --disableexcludes=kubernetes After installing docker and our kubernetes tools, we\u0026rsquo;ll need to enable the services so that they persist across reboots, and start the services so we can use them right away.\nsystemctl enable docker \u0026amp;\u0026amp; systemctl start docker systemctl enable kubelet \u0026amp;\u0026amp; systemctl start kubelet Before we run our kubeadm setup we\u0026rsquo;ll need to enable iptables filtering so proxying works correctly. To do this run the following to ensure filtering is enabled and persists across reboots.\ncat \u0026lt;\u0026lt;EOF \u0026gt; /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 EOF sysctl --system Setup Kubernetes Cluster on Master Node We\u0026rsquo;re about ready to initialize our kubernetes cluster but I wanted to take a second to mention that we\u0026rsquo;ll be using Flannel as the network plugin to enable our pods to communicate with one another. You\u0026rsquo;re free to use other network plugins such as Calico or Cillium but this post focuses on the Flannel plugin specifically.\nLet\u0026rsquo;s run kubeadm init on our master node with the \u0026ndash;pod-network switch needed for Flannel to work properly.\nkubeadm init --pod-network-cidr=10.244.0.0/16 When you run the init command several pre-flight checks will take place including making sure swap is off and iptables filtering is enabled.\nAfter the initialization is complete you should have a working kubernetes master node setup. The last few lines output from the kubeadm init command are important to take note of since it shows the commands to run on the worker nodes to join the cluster.\nIn my case, it showed that I should use the following command to join this cluster which we\u0026rsquo;ll use in the following section.\nkubeadm join 10.10.50.50:6443 --token zzpnsn.jaf69dl0fb0z8yop --discovery-token-ca-cert-hash sha256:8f98d7e7fc701b2d686da84d7708376f4b59d83c0173b68a459a86cbe4922562 Before we start working on the other nodes, lets make sure we can run the kubectl commands from the master node, in case we want to use it later on.\nexport KUBECONFIG=/etc/kubernetes/admin.conf After setting our KUBECONFIG variable, we can run kubectl get nodes just to verify that the cluster is serving requests to the API. You can see that I have 1 node of type master setup now.\nConfigure Member Nodes Now that the master is setup, lets focus on joining our two member nodes to the cluster and setup the networking. To begin, lets join the nodes to the cluster using the command that was output from the kubeadm init command we ran on the master. This is the command that I used, but your cluster will have a different value. 
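If you didn't copy that output down, don't panic. A fresh join command, with a new token, can be printed from the master node at any time by running:
kubeadm token create --print-join-command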
Please use your own and not the one shown below as an example.\nkubeadm join 10.10.50.50:6443 --token zzpnsn.jaf69dl0fb0z8yop --discovery-token-ca-cert-hash sha256:8f98d7e7fc701b2d686da84d7708376f4b59d83c0173b68a459a86cbe4922562 Once you run the command on all of your nodes you should get a success message that lets you know that you\u0026rsquo;ve joined the cluster successfully.\nAt this point I\u0026rsquo;m going to copy the admin.conf file over to my worker nodes and set KUBECONFIG to use it for authentication so that I can run kubectl commands on the worker nodes as well. This is step is optional if you are going to run kubectl commands elsewhere, like from your desktop computer.\nscp root@[MASTERNODEIP]:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf export KUBECONFIG=/etc/kubernetes/admin.conf Now if you run the kubectl get nodes command again, we should see that there are three nodes in our cluster.\nIf you\u0026rsquo;re closely paying attention you\u0026rsquo;ll notice that the nodes show a status of \u0026ldquo;NotReady\u0026rdquo; which is a concern. This is because we haven\u0026rsquo;t finished deploying our network components for the Flannel plugin. To do that we need to deploy the flannel containers in our cluster by running:\nkubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml Give your cluster a second and re-run the kubectl get nodes command. You should see the nodes are all in a Ready status now.\nConfigure a Mac Client You\u0026rsquo;ve got your cluster up and running and are ready to go now. The last step might be to setup your laptop to run the kubectl commands against your k8s cluster. To do this, be sure to install the kubectl binaries on your mac and then copy the admin.conf file to your laptop by running this in a terminal window. Be sure to replace the IP with the master node IP address.\nscp root@[MasterNodeIP]:/etc/kubernetes/admin.conf ~/.kube/admin.conf export KUBECONFIG=~/.kube/admin.conf Once done, you should be able to run kubectl get nodes from your laptop and start deploying your configurations.\n","permalink":"https://theithollow.com/2019/01/14/deploy-kubernetes-using-kubeadm-centos7/","summary":"\u003cp\u003eI\u0026rsquo;ve been wanting to have a playground to mess around with Kubernetes (k8s) deployments for a while and didn\u0026rsquo;t want to spend the money on a cloud solution like \u003ca href=\"https://aws.amazon.com/eks/?nc2=h_m1\"\u003eAWS Elastic Container Service for Kubernetes\u003c/a\u003e or \u003ca href=\"https://cloud.google.com/kubernetes-engine/\"\u003eGoogle Kubernetes Engine\u003c/a\u003e . While these hosted solutions provide additional features such as the ability to spin up a load balancer, they also cost money every hour they\u0026rsquo;re available and I\u0026rsquo;m planning on leaving my cluster running. Also, from a learning perspective, there is no greater way to learn the underpinnings of a solution than having to deploy and manage it on your own. Therefore, I set out to deploy k8s in my vSphere home lab on some CentOS 7 virtual machines using Kubeadm. I found several articles on how to do this but somehow I got off track a few times and thought another blog post with step by step instructions and screenshots would help others. Hopefully it helps you. Let\u0026rsquo;s begin.\u003c/p\u003e","title":"Deploy Kubernetes Using Kubeadm - CentOS7"},{"content":"Okay, I\u0026rsquo;m scared of change just like everyone else. 
I have been building Visios for a pretty long time and know where all the menus are so I\u0026rsquo;m pretty fast with it. But I do use a Macbook when I travel and firing up Fusion just to run Visio is frustrating. I thought since it\u0026rsquo;s a new year I should try Lucidchart and see what I though. Now I\u0026rsquo;m still kind of fond of Visio, but the Integrations feature with Lucidchart on top of the web interface allowing me to use it anywhere, is enough to make me drop Visio for the long haul.\nSo what are integrations? Well as you might guess its a way for a third party to use Lucidchart to build diagrams. The example that I\u0026rsquo;ll show in this post is using the Amazon Web Services integration to automatically build a diagram of my environment but there are a whole list of integrations available for you to utilize. Simple go to the Integrations link in your Lucidchart account, select and install the integration you\u0026rsquo;re interested in and you\u0026rsquo;re ready to get started.\nAWS Integration Example To get started create a new diagram from the \u0026ldquo;More Templates\u0026rdquo; option in Lucidchart account.\nSince I installed the AWS integration I\u0026rsquo;m going to build a network diagram of my AWS account(s) by selecting the AWS Network Diagram template which is found under the \u0026ldquo;Network\u0026rdquo; category.\nOnce your new template is open, go to File \u0026ndash;\u0026gt; Import Diagram and then choose AWS Architecture.\nOn the import AWS Architecture screen that opens, you\u0026rsquo;ll want to click the option to import your AWS Data, or if you just want to see a sample, choose the sample data to give you an idea of what it might be able to display.\nNext, you\u0026rsquo;ll need to give Lucidchart permissions to read data from your AWS account. As you can see there are several ways to accomplish this including the preferred method of using a Cross-Account Role. This screen will show you the permissions needed and generate the policy code to deploy these permissions in your AWS account. At this point you should go setup this Cross-Account Role in your AWS environment based on the instructions from Lucidchart.\nThe role permissions can be directly copied and pasted into the JSON tab in the AWS IAM console to make setting up the IAM role a quick and easy procedure.\nWhen you\u0026rsquo;re done, you\u0026rsquo;ll need to enter the Role ARN that you created and an account name for description purposes.\nYou can perform this Cross-Account role task several times so that you can add multiple accounts to your LucidChart portal. You only need to complete this one time (per account) and the credentials can be reused. This is particularly handy if you want to periodically refresh your network diagrams. You can see frommy screenshot below that I\u0026rsquo;ve added five AWS accounts to my Lucidchart portal. Now, when you\u0026rsquo;re ready to build a diagram, select the account you want to connect to and then the region from the drop down.\nThe import process will begin and you\u0026rsquo;ll see a progress bar.\nWhen the import is completed, you can then select how you want them to be diagrammed. I chose the \u0026ldquo;Auto Layout\u0026rdquo; so that it creates the relationships for my automatically.\nThe result is that my AWS VPCs, subnets, Availability zones, etc are all diagrammed for me including the name tags I\u0026rsquo;ve configured. 
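Going back to the cross-account role for a moment, if you'd rather script that piece instead of clicking through IAM, the trust side of it boils down to a single sts:AssumeRole statement for Lucidchart's account. The account ID below is only a placeholder; use the values and permissions that Lucidchart's setup screen generates for you.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}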
This is a really simple diagram since my account is mostly a landing zone for new workloads to test out, but if you had more complicated networking in place you might see additional resources listed. If you\u0026rsquo;re looking really close at the picture below, you can see that I added tabs and re-did this process for each of my accounts, giving me a full view of my AWS account network infrastructure.\nSummary If you haven\u0026rsquo;t tried Lucidchart\u0026rsquo;s integrations, you might want to jump on and give it a whirl. It\u0026rsquo;s really nice to have some of these diagrams built for you automatically because it may very well point out things you didn\u0026rsquo;t realize were in your environment. Building a Visio diagram manually will show you exactly what you \u0026ldquo;think\u0026rdquo; the environment looks like. Having an integration like this may point out some things to you as well as being great for cutting down on your time documenting. If you also use multiple operating systems like I do, having a browser based diagramming tool is an added bonus.\n","permalink":"https://theithollow.com/2019/01/08/lucidchart-integrations-with-aws/","summary":"\u003cp\u003eOkay, I\u0026rsquo;m scared of change just like everyone else. I have been building Visios for a pretty long time and know where all the menus are so I\u0026rsquo;m pretty fast with it. But I do use a Macbook when I travel and firing up Fusion just to run Visio is frustrating. I thought since it\u0026rsquo;s a new year I should try Lucidchart and see what I though. Now I\u0026rsquo;m still kind of fond of Visio, but the Integrations feature with Lucidchart on top of the web interface allowing me to use it anywhere, is enough to make me drop Visio for the long haul.\u003c/p\u003e","title":"Lucidchart Integrations with AWS"},{"content":"A primary concern for companies moving to the cloud is whether or not their workloads will remain secure. While that debate still happens, AWS has made great strides to assuage customer\u0026rsquo;s concerns by adding services to ensure workloads are well protected. At re:Invent 2018 another service named AWS Security Hub was added. Security Hub allows you to setup some basic security guardrails and get compliance information for multiple accounts within a single service. Amazon seems to have realized that enabling customers to very easily see their security recommendations for all environments in a single place has great value to their businesses.\nSetup AWS Security Hub for Multiple Accounts To setup AWS Security Hub, we first have to pick an account where our portal will live. Login to your AWS account of choice and navigate to \u0026ldquo;Security Hub\u0026rdquo; in the AWS console. Once you\u0026rsquo;ve logged in, you\u0026rsquo;ll need to enable the security hub service by clicking the button on the splash screen.\nOnce you click that button, you\u0026rsquo;ll be asked again to add Security Hub which will update some policies to give AWS permission to aggregate findings and read information from your accounts.\nOnce enabled, you\u0026rsquo;ll see a summary screen with some very uninteresting information on it at this point. To make Security Hub work really well, you\u0026rsquo;ll need to enable some things which will then be aggregated into the Security Hub console. To begin, we\u0026rsquo;ll enable the CIS Benchmarks which are a good baseline for how your cloud should be protected. 
Now, the important thing here is that CIS benchmarks are going to use AWS Config rules to ensure that specific cloud security metrics are monitored. Before you enable CIS Standards on the standards menu, be sure to enable AWS Config in the account you\u0026rsquo;re monitoring. This can be done via CloudFormation or through the console but be sure to enable AWS Config to record events for the region and globally.\nOnce Config has been enabled, it\u0026rsquo;s OK to enable the CIS Standards from the standards menu.\nOnce the CIS Standards have been enabled, a series of AWS Config Rules will be deployed. These might take a few minutes to show any data, but do note that these config rules cost $2 per account per region to use. Once Config has evaluated the rules you should see some data in the AWS Config console if you look at that service. You can see from my screenshot below that there are AWS Config Rules with a prefix of \u0026ldquo;securityhub\u0026rdquo; listed in my compliance rules list. You\u0026rsquo;ll also notice that I have some noncompliant resources, which were intentionally left noncompliant for demonstration purposes ;) .\nIf we look back in Security Hub the Summary screen will now start showing some useful data about our compliance metrics.\nWe\u0026rsquo;ll also see that these benchmarks now show up in my security hub findings with a status of either FAILED or PASSED. It also shows the CIS benchmark title for which benchmark has been missed if any of them are failed.\nThere are also additional providers that can be added to the Security Hub to make it more extensible. Out of the box there are three more services that AWS will aggregate in your findings list which are:\nGuardDuty Inspector Macie You can set those three services up in your AWS Account to have their findings aggregated within the Security Hub service console. There are also third party services that can be added to the console and this can be done by going into the \u0026ldquo;Settings\u0026rdquo; menu and enabling them from the providers screen. This makes Security Hub a \u0026ldquo;single pane of glass\u0026rdquo; for aggregating your compliance and security findings.\nAdd Additional Accounts So far we\u0026rsquo;ve done all the configuration within a single account, but what if we\u0026rsquo;ve designed our AWS environments with multiple accounts for billing or security reasons? Not a problem, we can go back into the settings of our Security Hub console and we can invite other accounts. To do this go to the Accounts tab and invite another account. When you invite another account nothing will happen until the member accounts accept the invitation.\nTo accept, we\u0026rsquo;ll login to the member account and go into the Security Hub console just as we did with our master account. Then under settings, we\u0026rsquo;ll see an invitation from the master. Click the Accept slider button to accept the invitation. Once this is complete, the results for the member account will be displayed in the master account\u0026rsquo;s Security Hub console. Be sure to enable config, GuardDuty, Inspector, etc on the member accounts too so that all your findings are being sent along correctly.\nSummary AWS Security Hub is a really nice to have service to bring all the individual compliance and security tools AWS offers into a single view for administrators. As of the time of this writing, the Security Hub service pricing is not available yet, but you will be charged for the services it relies on such as AWS Config and GuardDuty. 
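If you end up repeating the invite and accept dance across a lot of member accounts, it can be scripted rather than clicked through. The commands below are rough equivalents of the console steps shown above, with placeholder account IDs, so double check them against your CLI version before relying on them.
aws securityhub enable-security-hub
aws securityhub create-members --account-details AccountId=222222222222
aws securityhub invite-members --account-ids 222222222222
Then, from the member account, list the pending invitation and accept it.
aws securityhub list-invitations
aws securityhub accept-invitation --master-id 111111111111 --invitation-id [invitation id from the previous command]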
If you\u0026rsquo;re setting up a production AWS environment, Security Hub should be part of your basic deployment routine.\n","permalink":"https://theithollow.com/2018/12/17/aws-security-hub/","summary":"\u003cp\u003eA primary concern for companies moving to the cloud is whether or not their workloads will remain secure. While that debate still happens, AWS has made great strides to assuage customer\u0026rsquo;s concerns by adding services to ensure workloads are well protected. At re:Invent 2018 another service named \u003ca href=\"https://aws.amazon.com/security-hub/\"\u003eAWS Security Hub\u003c/a\u003e was added. Security Hub allows you to setup some basic security guardrails and get compliance information for multiple accounts within a single service. Amazon seems to have realized that enabling customers to very easily see their security recommendations for all environments in a single place has great value to their businesses.\u003c/p\u003e","title":"AWS Security Hub"},{"content":"Amazon announced a new service at re:Invent 2018 in Las Vegas, called the AWS Transit Gateway. The Transit Gateway allows you to connect multiple VPCs together as well as VPN tunnels to on-premises networks through a single gateway device. As a consultant, I talk with customers often, about how they will plan to connect their data center with the AWS cloud, and how to interconnect all of those VPCs. In the past a solution like Aviatrix or a Cisco CSR transit gateway was used which leveraged some EC2 instances that lived within a VPC. You\u0026rsquo;d then connect spoke VPCs together via the use of VPN tunnels. With this new solution, there is a native service from AWS that allows you to do this without the need for VPN tunnels between spoke VPCs and you can use the AWS CLI/CloudFormation or console to deploy everything you need. This post takes you through an example of the setup of the AWS Transit Gateway in my own lab environment.\nArchitecture Before we talk about the setup steps, lets take a look at what we\u0026rsquo;ll be building. The diagram below shows the overall configuration. At the bottom of the diagram we have my on-premises home lab which will be inter-connected to the AWS cloud through a VPN connection terminating a on the Transit Gateway. The Transit Gateway will live within Account 1 and will need to be attached not only to a VPN tunnel, but also to a VPC within the same account and another VPC in a second (spoke) account. The rest of the setup depends on route tables to ensure that EC2 instances will direct traffic destined between VPCs or Data Centers through this transit gateway service.\nSetup Transit Gateway Between AWS VPC First, lets take a look at setting up the Transit Gateway so that machines in different VPCs can communicate with each other through it. To begin, login to the AWS console under the account you want your Transit Gateway to be owned, and look for the Transit Gateways menu under the VPCs window. Click the \u0026ldquo;Create Transit Gateway\u0026rdquo; button.\nGo through the create transit gateway wizard and fill in the information. Enter a name and description so that its easily identifiable. After that you can specify the Amazon ASN or leave it the default. Next, decide if you want to allow DNS to be used through the Transit Gateway, as well as multi-pathing and route options. Lastly, select whether you want to automatically accept any shared attachments. 
I\u0026rsquo;ve disabled this so that you can see what happens if you don\u0026rsquo;t select this option in your environment. Click the \u0026ldquo;Create Transit Gateway\u0026rdquo; button.\nWhen complete, you should get a success message. Click close.\nOnce the Gateway is created we\u0026rsquo;ll need to attach that gateway to one or more Virtual Private Clouds (VPCs), and then later one or more VPN connections. We\u0026rsquo;ll start with the VPCs. Now attaching the gateway to a VPC within the same account is pretty simple, but before you attach it to a VPC in another account, you must first share the Transit Gateway resource with the other account(s). To do this, follow the instructions in a previous post where I covered the new Resource Access Manager (RAM) service. The steps to share a Transit Gateway can be located here on this post.\nNow, assuming you\u0026rsquo;ve shared your Transit Gateway to at least one more account, lets continue with the attachment of the the gateway to a pair of VPCs. To do this, we\u0026rsquo;ll stay logged into the account we deployed the gateway with and navigate to the \u0026ldquo;Transit Gateway Attachments\u0026rdquo; menu under the VPCs console. Click \u0026ldquo;Create Transit Gateway Attachment\u0026rdquo; button.\nFirst, Select which transit gateway that should be attached. If you\u0026rsquo;re just getting started, select the one we built earlier. Under attachment type, select VPC. For the attachment section, give the attachment a name and identify if you want to allow DNS and IPv6 over this gateway. Then select which VPC this Transit Gateway should be connected. Once you select the VPC from the drop down, you\u0026rsquo;ll also need to select which subnets it will be attached to. It\u0026rsquo;s best to pick one subnet in each Availability Zone to provide better uptime, but you you may ONLY select one subnet from each AZ. Don\u0026rsquo;t worry, it can be accessed from the rest of the subnets within the same AZ, but you can only attach it to one subnet per AZ. Click the \u0026ldquo;Create Attachment\u0026rdquo; button.\nNow, we\u0026rsquo;ll switch over to the other account where we have another VPC. This should be the account you shared the TGW with through the Resource Access Manager. In this account we need to go through the same steps as above, but notice that your only attachment type here is \u0026ldquo;VPC\u0026rdquo;. You may only attach a Transit Gateway with a VPN that exists within the same account as of the time of this blog post. Fill out your attachment information with the secondary VPC as you did before.\nIf you selected the option to \u0026ldquo;auto accept shared attachments\u0026rdquo; when we created the Transit Gateway, the attachments are done. If you didn\u0026rsquo;t select that, then you need to go back to the account where you deployed the TGW and accept the attachment from the \u0026ldquo;Transit Gateway Attachment\u0026rdquo; menu. You can see below that the attachment isn\u0026rsquo;t valid until it\u0026rsquo;s been accepted from this account.\nThe Transit Gateway should now be ready to go for your workloads to transport data between VPCs. The last step you need to do to make that work, is to update your subnet route tables so that the traffic destined for the opposite VPC has a target of the Transit Gateway that was attached.\nSetup Transit Gateway with VPN Now, our VPC should be connected properly, lets connect a VPN to the Transit Gateway as well. 
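One quick aside on that route table step before we move on: if you manage your VPCs from the CLI, pointing a subnet route table at the Transit Gateway is a one-liner per route table. The IDs and CIDR below are placeholders for your own values.
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 10.1.0.0/16 --transit-gateway-id tgw-0123456789abcdef0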
To do this, we\u0026rsquo;ll go back to the account where the Transit Gateway was created and navigate to the \u0026ldquo;Transit Gateway Attachments\u0026rdquo; menu. Within this screen we\u0026rsquo;ll create another attachment.\nThis time, when we create the transit gateway attachment, we\u0026rsquo;ll select VPN. When you select VPN, you\u0026rsquo;ll then see options for setting up the VPN with an existing Customer Gateway (public data center endpoint for the other end of the VPN) or to create a brand new CGW. You then have the option of using dynamic routing via BGP, or adding static routes. After this, you can specify some specifics for the Tunnel IP or pre-shared keys if you would like but it\u0026rsquo;s not necessary. Click the \u0026ldquo;Create attachment\u0026rdquo; button.\nYou\u0026rsquo;ll be able to see the VPN Connection information in the same place you would if you were using a VGW which is under the VPN connection menu. You\u0026rsquo;ll need the Public endpoint etc to create the VPN connection from the data center side. While the tunnel is coming up though, we\u0026rsquo;ll also need to add a static route if you didn\u0026rsquo;t use BGP (like I did), so we\u0026rsquo;ll go to the \u0026ldquo;Transit Gateway Route Tables\u0026rdquo; menu and click the \u0026ldquo;Create route\u0026rdquo; button after selecting our Transit Gateway.\nNow, I\u0026rsquo;ll add a route for my on-premises infrastructure so that my VPC workloads can access my data center through the Transit Gateway VPN connection.\nSummary It may seem like there were a bunch of steps here, but this is really pretty simple to get setup. Especially considering this can be done through CloudFormation and not having to build a VPN tunnel for every spoke VPC that is deployed. It\u0026rsquo;s a really nice solution over VPC Peering and over the old Transit VPC solution provided by third parties. Good luck with your configurations.\n","permalink":"https://theithollow.com/2018/12/12/setup-aws-transit-gateway/","summary":"\u003cp\u003eAmazon announced a new service at re:Invent 2018 in Las Vegas, called the \u003ca href=\"https://aws.amazon.com/transit-gateway/\"\u003eAWS Transit Gateway\u003c/a\u003e. The Transit Gateway allows you to connect multiple VPCs together as well as VPN tunnels to on-premises networks through a single gateway device. As a consultant, I talk with customers often, about how they will plan to connect their data center with the AWS cloud, and how to interconnect all of those VPCs. In the past a solution like Aviatrix or a Cisco CSR transit gateway was used which leveraged some EC2 instances that lived within a VPC. You\u0026rsquo;d then connect spoke VPCs together via the use of VPN tunnels. With this new solution, there is a native service from AWS that allows you to do this without the need for VPN tunnels between spoke VPCs and you can use the AWS CLI/CloudFormation or console to deploy everything you need. This post takes you through an example of the setup of the AWS Transit Gateway in my own lab environment.\u003c/p\u003e","title":"Setup AWS Transit Gateway"},{"content":"At AWS re:Invent this year in Las Vegas, Amazon announced a ton of services, but one that caught my eye was the AWS Resource Access Manager. This is a service that facilitates the sharing of some resources between AWS accounts so that they can be used or referenced across account boundaries. 
Typically, an AWS account is used as a control plane boundary (or billing boundary) between environments, but even then resources will need to communicate with each other occasionally. Now with AWS Resource Access Manager (RAM) we can share Hosted DNS zones, Transit Gateways and other objects. This list will undoubtedly grow over time. This post will show you how you can share another new service, the AWS Transit Gateway, across multiple accounts within your organization.\nEnable Resource Access Manager for your Organization Before you can start sharing any resources in your accounts, you\u0026rsquo;ll need to login to your root account within your organization. In the organizations screen, you will have to enable the AWS Resource Access Manager by clicking the \u0026ldquo;Enable access\u0026rdquo; button.\nOnce you click enable access, you\u0026rsquo;ll get an \u0026ldquo;are you sure\u0026rdquo; message where you\u0026rsquo;ll again need to click the \u0026ldquo;Enable access for AWS Resource Access Manager\u0026rdquo; button.\nShare your Resources with Another Account Now that you\u0026rsquo;ve enabled access to the service for your organization, you can start sharing your resources. To begin, I\u0026rsquo;ve logged into one of my accounts where I\u0026rsquo;ve created a Transit Gateway already. I want to share this Transit Gateway with another account so I can do some transitive routing between the two accounts. First, I\u0026rsquo;ll find the Resource Access Manager Service from the AWS console.\nAfter logging in, we\u0026rsquo;ll want to go to the Shared resources menu.\nUnder the Shared resources menu, click the \u0026ldquo;Create resource share\u0026rdquo; button.\nIn the resource sharing screen, give the resource a name that will make it easily identifiable. Next, under the resources section, select the type of resource that you want to share. I\u0026rsquo;ve selected \u0026ldquo;Transit Gateways\u0026rdquo; and after selecting that, a list of our existing Transit Gateways is displayed. Select the Transit Gateway that you want to share and add it to the list of resources. Below this section is the Principals section. Here you can enter the account numbers that you wish to share the resources with, or you can add the Organization ID so that all accounts in a specific OU or Organization automatically get access to this resource. Lastly, you can add any tags that you\u0026rsquo;d like for this resource share; these tags can be used for IAM permissions later if you\u0026rsquo;d like.\nWhen you\u0026rsquo;re done, you\u0026rsquo;ll notice that the resource is shared with a status of Active. You\u0026rsquo;ll also see the owner (identified by account number) and whether it\u0026rsquo;s available to other accounts.\nAt this point, we can login to one of the other accounts where the resource has been shared. We\u0026rsquo;ll again login to the AWS Resource Access Manager and we\u0026rsquo;ll see that under the \u0026ldquo;Shared with me\u0026rdquo; menu, we have a resource share with an invitation pending. Click on the resource that was shared.\nIn the resource, select the \u0026ldquo;Accept resource share\u0026rdquo; button to accept the sharing of this resource.\nOnce done, you\u0026rsquo;ll see that a resource has been shared with this account and is available for use.
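If you\u0026rsquo;d rather script the share than click through the console, the same flow maps to a couple of RAM CLI calls; a rough sketch with placeholder account numbers and resource IDs:
# From the account that owns the Transit Gateway, create the share
aws ram create-resource-share \
  --name tgw-share \
  --resource-arns arn:aws:ec2:us-east-1:111111111111:transit-gateway/tgw-0123456789abcdef0 \
  --principals 222222222222

# From the receiving account, find and accept the pending invitation
aws ram get-resource-share-invitations
aws ram accept-resource-share-invitation \
  --resource-share-invitation-arn <invitation-arn-from-previous-command>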
To prove that this is working, go to the area where the resource should exist if it was deployed within this account. In my case I am looking in the VPC Dashboard under Transit Gateways. I can see a Transit Gateway listed and if I select it, I will see that there is an Owner Account listed and that it\u0026rsquo;s shared. I can now take actions on this resource from the shared account. For example, I might add routes to this Transit Gateway from my VPC subnets.\nSummary I think we\u0026rsquo;ll see some cool things that you can do with this new service. I\u0026rsquo;m excited that we can now share resources very simply between accounts without having to use complicated IAM permissions with multiple principals listed. I expect that the list of resources that can be shared between accounts will continue to grow as this service matures.\n","permalink":"https://theithollow.com/2018/12/10/aws-resource-access-manager/","summary":"\u003cp\u003eAt AWS re:Invent this year in Las Vegas, Amazon announced a ton of services, but one that caught my eye was the AWS Resource Access Manager. This is a service that facilitates the sharing of some resources between AWS accounts so that they can be used or referenced across account boundaries. Typically, an AWS account is used as a control plane boundary (or billing boundary) between environments, but even then resources will need to communicate with each other occasionally. Now with AWS Resource Access Manager (RAM) we can share Hosted DNS zones, Transit Gateways and other objects. This list will undoubtedly grow over time. This post will show you how you can share another new service, the AWS Transit Gateway, across multiple accounts within your organization.\u003c/p\u003e","title":"AWS Resource Access Manager"},{"content":"If you\u0026rsquo;re getting started with VMware Cloud on AWS then you should be aware of all the points in which you can block traffic with a firewall. Or, if you look at it another way, the places where you might need to create allow rules for traffic to traverse your cloud. This post is used to show where those choke points live both within your VMware Cloud on AWS SDDC, as well as the Amazon VPC in which your SDDC lives.\nThe diagram below shows each of the firewalls that might live between a virtual machine within your VMware Cloud on AWS environment and an Amazon EC2 instance in a subnet within the same VPC.\nLet\u0026rsquo;s take a look at each of the items to discuss what they are and where they are configured.\n1. VMC - Distributed Firewall The first firewall is within the VMware Cloud on AWS environment and is the distributed firewall provided by NSX-T. Much like NSX on premises, you can create firewall rules at the NIC level of the virtual machines in the VMware environment. This firewall is optional and doesn\u0026rsquo;t need to be configured to allow traffic by default. It also requires that you have the NSX Advanced Add-On feature for your VMware Cloud on AWS environment. Again, this firewall is optional.\nTo configure these firewall rules you would use your SDDC Console for VMC. 2. VMC - Gateway Firewall The second firewall we\u0026rsquo;ll discuss is the gateway firewall. This is like a perimeter firewall for your VMware Cloud on AWS environment. If you want to be able to access your vCenter and your workloads, this firewall will need to be configured. The gateway firewall consists of two parts, a management gateway firewall and the compute gateway firewall. The rules created for vCenter, SRM, and any of the components that VMware manages for you in the cloud are built in the management gateway firewall.
Any of your workloads that would get deployed in the VMC would be done in the compute gateway firewall.\nEach of these are configured in the SDDC console.\n3. AWS - Elastic Network Interface for VMC Security Group VMware Cloud on AWS lives within an AWS Virtual Private Cloud (VPC). The gateway we discussed in the previous section is connected to the AWS VPC with an elastic network interface. Think of this as a VM\u0026rsquo;s network adapter that lives in AWS. This is the network bridge between your VMware environment and your AWS environment. We discussed that we need to create firewall rules in the gateway, but we also need to create the security group rules on the network interface on the AWS side.\nThis is configured in the AWS console under the EC2 screen. It should also be noted that the VMC may create several of these ENIs when its first built, so you\u0026rsquo;ll need a security group attached to all of them with your configured rules. By default, VMware creates the rules for you but it uses a default security group from AWS. Be aware of this because many customers don\u0026rsquo;t want to use a default security group and want to create their own.\n4. AWS - Network Access Control Lists This item is another optional component. An AWS Network Access Control List (NACL) is a stateless firewall that is applied at the subnet level of an AWS VPC. These NACLs provide an extra level of protection within a VPC to block traffic for resources within the entire subnet.\nAWS NACLs are configured in the AWS Console under the VPC screen.\n5. AWS - EC2 Workloads The last one is an example of a resource in AWS that is trying to communicate to VMC resources. AWS requires that a security group (even an allow all group) be assigned to each EC2 instance that is deployed. In fact as of the time of this post, you can add five security groups to an EC2 instance at a time. This is the same type of security group that was applied earlier to the ENI for the VMware Cloud Gateway. This firewall rule is optional unless you want your EC2 instances to be able to communicate with your VMC environment. If you require VM to EC2 instance connectivity it must be configured.\nAgain security groups can be added in the AWS console under the EC2 screen.\nSummary and Final Thoughts Hopefully this post gave you a good idea of the different firewalls that might be in place in your VMware Cloud on AWS environment and AWS VPC. My suggestion would be to carefully plan out if you need all of these firewalls including the optional ones and then afterwards decide if you need granular rules in each choke point or if you will just allow all traffic through some. For instance maybe your compute gateway firewall allows all traffic by default and you leave the firewall responsibilities to the distributed firewall. That may be an extreme case, but it is likely that some of these firewalls will have more open rules than others.\nOh, if your having connection issues still even after reading this post, don\u0026rsquo;t forget about your client firewalls as well. A Windows firewall can ruin your whole day sometimes\u0026hellip; or so I\u0026rsquo;ve heard.\n","permalink":"https://theithollow.com/2018/11/28/vmware-cloud-on-aws-firewalls-overview/","summary":"\u003cp\u003eIf you\u0026rsquo;re getting started with VMware Cloud on AWS then you should be aware of all the points in which you can block traffic with a firewall. 
Or, if you look at it another way, the places where you might need to create allow rules for traffic to traverse your cloud. This post is used to show where those choke points live both within your VMware Cloud on AWS SDDC, as well as the Amazon VPC in which your SDDC lives.\u003c/p\u003e","title":"VMware Cloud on AWS Firewalls Overview"},{"content":"Today, AWS announced the release of the long anticipated drift detection feature for CloudFormation. This feature has been a common feature request for many of the AWS customers that I speak with to ensure their deployments are configured as expected. This post will take you through why this is an important feature and how you can use it.\nWhats the Big Deal? If you\u0026rsquo;re not familiar with it already, CloudFormation is a free service from AWS that lets you describe your infrastructure through a YAML or JSON file and deploy the configuration. Simply define your desired state and CloudFormation will deploy the resources and arrange them so that dependent services are (usually) deployed in the right order. If you\u0026rsquo;re familiar with Ansible, Chef, or Puppet, this concept of a desired state shouldn\u0026rsquo;t be new.\nAs a consultant I\u0026rsquo;m often trying to drive the point home that this tool is great and some sort of IaC should be used to manage the environment. Thats usually easy at the start, but inevitably something will happen or an emergency will happen where a manual change gets made to the environment and the code doesn\u0026rsquo;t get updated. Maybe that isn\u0026rsquo;t a big deal, but what happens when someone then updates the code for another change and applies a change set. That\u0026rsquo;s right, that manual change that was made in the middle probably got wiped out and whatever issue happened before is probably back.\nIntroducing Drift Detection Now that the new feature has been released, lets take a look at it in a lab. The screenshot below is a list of CloudFormation templates that have been applied in one of my lab environments. Obviously, I\u0026rsquo;m always updating my infrastructure through code so I should have nothing to worry about, but lets test this new feature out anyway, just for fun.\nI\u0026rsquo;ve selected one of my stacks and then hit the Actions drop down and selected the \u0026ldquo;Detect Drift\u0026rdquo; option. Notice that this is also a nested stack and it will work just fine.\nAfter you select \u0026ldquo;Detect drift\u0026rdquo; a notification bar will be displayed letting you know that drift detection was initiated.\nSelect the stack again and then click the \u0026ldquo;View drift results\u0026rdquo; dropdown from the Action button.\nThe drift detection page opens to show\u0026hellip; wait a second! Apparently I\u0026rsquo;d made a manual change to a security group somewhere along the way and the drift detection capability has shown me what doesn\u0026rsquo;t match my template any more.\nIf we select the resource that has drift, we can select the \u0026ldquo;View drift details\u0026rdquo; button to get more information about the drift. As we can see from the following screen, I have replaced one of my source addresses with an anywhere (0.0.0.0/0) rule.\nIf we scroll further down, we\u0026rsquo;ll get to see the current configuration vs the template used to deploy the resource. 
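The same check can also be kicked off from the CLI if you want to fold drift detection into your operational scripts; a rough sketch, with the stack name as a placeholder:
# Start drift detection on a stack and note the detection id that is returned
aws cloudformation detect-stack-drift --stack-name my-lab-stack

# Check whether the detection run has finished
aws cloudformation describe-stack-drift-detection-status \
  --stack-drift-detection-id <detection-id-from-previous-command>

# List only the resources that no longer match the template
aws cloudformation describe-stack-resource-drifts \
  --stack-name my-lab-stack \
  --stack-resource-drift-status-filters MODIFIED DELETED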
This might be a great tool to get some manual changes that were made to the environment, into the CloudFormation templates stored in your version control repo.\nSummary CloudFormation has been a great tool to manage your AWS infrastructure and I\u0026rsquo;ve often said that it should be the only way that you interact with the AWS infrastructure. The ability to keep a desired state is enough reason to use it over the CLI or the AWS web console. Now that we can review the infrastructure with what we deployed, this tool is even more powerful. I hope you will implement drift detection as part of your AWS operations.\n","permalink":"https://theithollow.com/2018/11/14/using-aws-cloudformation-drift-detection/","summary":"\u003cp\u003eToday, AWS announced the release of the long anticipated drift detection feature for CloudFormation. This feature has been a common feature request for many of the AWS customers that I speak with to ensure their deployments are configured as expected. This post will take you through why this is an important feature and how you can use it.\u003c/p\u003e\n\u003ch1 id=\"whats-the-big-deal\"\u003eWhats the Big Deal?\u003c/h1\u003e\n\u003cp\u003eIf you\u0026rsquo;re not familiar with it already, CloudFormation is a free service from AWS that lets you describe your infrastructure through a YAML or JSON file and deploy the configuration. Simply define your desired state and CloudFormation will deploy the resources and arrange them so that dependent services are (usually) deployed in the right order. If you\u0026rsquo;re familiar with Ansible, Chef, or Puppet, this concept of a desired state shouldn\u0026rsquo;t be new.\u003c/p\u003e","title":"Using AWS CloudFormation Drift Detection"},{"content":"If you\u0026rsquo;ve been doing application development for long, having tools in place to check the health of your code is probably not a new concept. However, if you\u0026rsquo;re jumping into something like Cloud and you\u0026rsquo;ve been an infrastructure engineer, this may be a foreign concept to you. Isn\u0026rsquo;t it bad enough that you\u0026rsquo;ve started learning Git, JSON, YAML, APIs etc on top of your existing skill sets? Well, take some lessons from the application teams and you may well find that you\u0026rsquo;re improving your processes and reducing the technical debt and time to provision infrastructure as code resources as well.\nWe need to house our Infrastructure-as-Code (IaC) as part of a version control repository. Code in a file server isn\u0026rsquo;t going to cut it in the new world. Version control will be a great step, but how about we test our code after we make a new commit? How about we ensure the code is validated before we merge it to our release branch? How about we review our code to make sure that it is properly formatted so that everyone working on the code base is using the same techniques?\nOverview This post is a very basic solution to test your AWS CloudFormation templates after you commit them to your git repository. The diagram below shows what will be the pieces that are part of the build.\nMy code is stored in the Atlassian Bitbucket Cloud for source control. I\u0026rsquo;ll be making commits directly to the Develop branch for this post and when I\u0026rsquo;m satisfied with the outcomes, will merge it with the Master branch through a pull request and approval by an administrator. Before that happens, my code in the develop branch will be validated by a Jenkins server. 
Jenkins is going to perform two very basic tests just to make sure that the code is in working order.\nThe first test Jenkins will run is to execute the validate-template from the AWS CLI. This tests the code to make sure that it could run if I deployed it to AWS. The second test will be to lint the code through a plugin in SonarQube for YAML files. SonarQube community edition was used to get a view of how the code looks, or better said, \u0026ldquo;Smells\u0026rdquo;. SonarQube might not know if the template could be deployed to AWS, but it will shows us some information about the quality of the code that has been written.\nHere is my Jenkisfile which triggers the builds. Again this is a very basic pipeline. The pieces in the \u0026ldquo;Code Review\u0026rdquo; stage are specific to the SonarQube setup including a login which I\u0026rsquo;m happy to share with you for this post. It won\u0026rsquo;t work if you try to use it.\npipeline { agent any stages { stage(\u0026#39;CloudFormation Validate\u0026#39;) { steps { script { if (env.BRANCH_NAME == \u0026#39;develop\u0026#39;) { sh \u0026#39;aws cloudformation validate-template --template-body file://./ec2sample.yml\u0026#39; } } } } stage(\u0026#39;Code Review\u0026#39;) { steps { script { if (env.BRANCH_NAME == \u0026#39;develop\u0026#39;) { sh \u0026#39;sonar-scanner \\ -Dsonar.projectKey=SampleCFN \\ -Dsonar.sources=. \\ -Dsonar.host.url=http://sonarqube.hollow.local:9000 \\ -Dsonar.login=7876126e70638d8e575eb2a892c41c5437ed20c6\u0026#39; } } } } } } The First Run I took a very basic CloudFormation Template and ran it through my Jenkins pipeline. The CloudFormation template that I used is here for reference.\nAWSTemplateFormatVersion: 2010-09-09 Description: Standard Template to Deploy EC2 Instances with proper tagging methodologies. Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \u0026#34;AWS EC2 Instance\u0026#34; Parameters: - Ami - InstanceType - KeyPair - Subnet - SecurityGroup - SnapshotSchedule - SnapshotRetention - InstanceStartupTime - InstanceShutdownTime - Label: default: \u0026#34;AWS EC2 Tagging\u0026#34; Parameters: - Name - Account - Environment - Application - Role - CostCenter - BusinessUnit - Contact Parameters: # EC2 Properties Ami: Type: String Description: Amazon Image Name to Deploy. AllowedValues: - Amazon-Linux - Amazon-Linux-2 - Red-Hat-Enterprise-Linux-7.5 - Ubuntu-16.04 - Windows-Server-2016 - Windows-Server-2012-r2 InstanceType: Type: String Description: Instance size to be used by EC2 Instance AllowedValues: - t2.micro - t2.small - t2.medium - t2.large - m4.large - m4.xlarge - m4.2xlarge - m5.large - m5.xlarge - m5.2xlarge - c4.large - c4.xlarge - c5.large - c5.xlarge KeyPair: Type: AWS::EC2::KeyPair::KeyName Description: AWS EC2 Key Pair name for instance. Subnet: Type: AWS::EC2::Subnet::Id Description: Subnet to place EC2 Instance in. SecurityGroup: Type: AWS::EC2::SecurityGroup::Id Description: Security Group to add EC2 Instance to. Must be located in the same VPC as the Subnet. # EC2 Tagging Name: Type: String Description: EC2 Instance Name Tag. Account: Type: String Description: AWS Account Name for the EC2 instance. AllowedValues: - Production - Non-Production Environment: Type: String Description: AWS Environment for the EC2 instance. AllowedValues: - Production - Non-Production Application: Type: String Description: Application running on EC2 Instance. Role: Type: String Description: Roles running on EC2 Instance. 
Mappings: # RegionMap maps AWS::Region in Us-East-1 format to a useable format for the Images mapping. RegionMap: us-east-1: \u0026#34;shortname\u0026#34;: \u0026#34;useast1\u0026#34; us-east-2: \u0026#34;shortname\u0026#34;: \u0026#34;useast2\u0026#34; us-west-1: \u0026#34;shortname\u0026#34;: \u0026#34;uswest1\u0026#34; us-west-2: \u0026#34;shortname\u0026#34;: \u0026#34;uswest2\u0026#34; # Latest Images as of 7/10 Images: Amazon-Linux: \u0026#34;useast1\u0026#34;: \u0026#34;ami-cfe4b2b0\u0026#34; \u0026#34;useast2\u0026#34;: \u0026#34;ami-40142d25\u0026#34; \u0026#34;uswest1\u0026#34;: \u0026#34;ami-0e86606d\u0026#34; \u0026#34;uswest2\u0026#34;: \u0026#34;ami-0ad99772\u0026#34; Amazon-Linux-2: \u0026#34;useast1\u0026#34;: \u0026#34;ami-b70554c8\u0026#34; \u0026#34;useast2\u0026#34;: \u0026#34;ami-8c122be9\u0026#34; \u0026#34;uswest1\u0026#34;: \u0026#34;ami-e0ba5c83\u0026#34; \u0026#34;uswest2\u0026#34;: \u0026#34;ami-a9d09ed1\u0026#34; Red-Hat-Enterprise-Linux-7.5: \u0026#34;useast1\u0026#34;: \u0026#34;ami-6871a115\u0026#34; \u0026#34;useast2\u0026#34;: \u0026#34;ami-03291866\u0026#34; \u0026#34;uswest1\u0026#34;: \u0026#34;ami-18726478\u0026#34; \u0026#34;uswest2\u0026#34;: \u0026#34;ami-28e07e50\u0026#34; Ubuntu-16.04: \u0026#34;useast1\u0026#34;: \u0026#34;ami-a4dc46db\u0026#34; \u0026#34;useast2\u0026#34;: \u0026#34;ami-6a003c0f\u0026#34; \u0026#34;uswest1\u0026#34;: \u0026#34;ami-8d948ced\u0026#34; \u0026#34;uswest2\u0026#34;: \u0026#34;ami-db710fa3\u0026#34; Windows-Server-2016: \u0026#34;useast1\u0026#34;: \u0026#34;ami-0327667c\u0026#34; \u0026#34;useast2\u0026#34;: \u0026#34;ami-6a003c0f\u0026#34; \u0026#34;uswest1\u0026#34;: \u0026#34;ami-b236d2d1\u0026#34; \u0026#34;uswest2\u0026#34;: \u0026#34;ami-3703414f\u0026#34; Windows-Server-2012-r2: \u0026#34;useast1\u0026#34;: \u0026#34;ami-b8f3b5c7\u0026#34; \u0026#34;useast2\u0026#34;: \u0026#34;ami-da003ebf\u0026#34; \u0026#34;uswest1\u0026#34;: \u0026#34;ami-832acee0\u0026#34; \u0026#34;uswest2\u0026#34;: \u0026#34;ami-aeffbcd6\u0026#34; Resources: AwsEc2Instance: Type: \u0026#34;AWS::EC2::Instance\u0026#34; Properties: ImageId: !FindInMap [ Images, !Ref Ami, !FindInMap [RegionMap, !Ref \u0026#34;AWS::Region\u0026#34;, shortname ] ] InstanceType: !Ref InstanceType SecurityGroupIds: - !Ref SecurityGroup SubnetId: !Ref Subnet KeyName: !Ref KeyPair Tags: - Key: Name Value: !Ref Name - Key: Account Value: !Ref Account - Key: Environment Value: !Ref Environment - Key: Application Value: !Ref Application - Key: Role Value: !Ref Role Outputs: Ec2Id: Value: !Ref AwsEc2Instance Description: AWS EC2 Instance ID Ec2PrivateIp: Value: !GetAtt AwsEc2Instance.PrivateIp Description: AWS EC2 Instance Private IP Address Ec2AvailabilityZone: Value: !GetAtt AwsEc2Instance.AvailabilityZone Description: AWS EC2 Instance Availability Zone Through a Bitbucket web hook, my Jenkins server ran the newly commit code through my pipeline. First validating the CFn against the AWS CLI. Since it passed the first gate, the code was also then reviewed through SonarQube. If the first stage failed, the second stage wouldn\u0026rsquo;t run. This is by design to keep unnecessary processes from running. If your stage fails, all efforts to fix that code should be prioritized until its working again.\nObviously, I\u0026rsquo;m ecstatic that the code checked out, but now that it did, I can also take a peak in the SonarQube console to see how it did from a quality perspective. 
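As an aside, both gates are easy to approximate locally before a commit ever reaches Jenkins; something like the following, assuming the AWS CLI is configured and the open-source cfn-lint tool is installed (cfn-lint isn\u0026rsquo;t part of the pipeline above, it\u0026rsquo;s just another linting option):
# Same API-level validation the Jenkins stage runs
aws cloudformation validate-template --template-body file://ec2sample.yml

# Optional static linting of the template
pip install cfn-lint
cfn-lint ec2sample.yml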
You can see from the screenshot below that there aren\u0026rsquo;t any bugs or vulnerabilities, so that\u0026rsquo;s great news. But if you look under \u0026ldquo;Code Smells\u0026rdquo; you\u0026rsquo;ll see that it believes that I have 48 mins of technical debt that I\u0026rsquo;ve acquired from this code. This comes from 10 \u0026ldquo;code smells\u0026rdquo;.\nIf I look into the code smells I\u0026rsquo;ll see which lines were causing the problem and why they were reported. This makes it pretty simple to go clean up my code and make it less stinky. Keeping a clean code base will make it much easier for a team to work on it since they\u0026rsquo;ll all be judged based on the same nose, if you\u0026rsquo;ll pardon the analogy. Anytime anyone commits code, these tests can be run to see how poorly formatted that code is.\nWhat\u0026rsquo;s better is that if the code smells really bad, a quality gate can be added to prevent the code from being merged to the Bitbucket master branch. This takes some extra configuration which isn\u0026rsquo;t shown here, but I\u0026rsquo;ve got an \u0026ldquo;A\u0026rdquo; rating here anyway. You can view the things that affect your quality gates in a different SonarQube tab. You can also create your own Quality Gates that match your own rules.\nWhat\u0026rsquo;s Next Well, I\u0026rsquo;ve got an \u0026ldquo;A\u0026rdquo; rating and you can see my \u0026ldquo;Quality Gate\u0026rdquo; shows passed, so my code could be merged to my release branch, but that isn\u0026rsquo;t good enough. Let\u0026rsquo;s clean up that code and make it smell less. I ignored one of the recommendations from SonarQube since it was a false positive, and then I went in to update my code and re-commit it. I fixed all the issues except for two lines that were longer than the SonarQube YAML plugin\u0026rsquo;s line length limit, which I\u0026rsquo;m ignoring. You can see that the second run only had 2 code smells, which equals about 4 mins of technical debt.\nI also went in and updated my code one more time and made a big oops (don\u0026rsquo;t worry, it was on purpose for this post) and of course it did not get through my validate test that was run on my Jenkins server. The result was that this code didn\u0026rsquo;t even go through the SonarQube tests, so they won\u0026rsquo;t show up.\nFinal Thoughts The first step in moving to Infrastructure as Code is to get that code into version control. If you\u0026rsquo;re serious about managing lots of code, it\u0026rsquo;s worth setting up a process to test that code for validity and quality. The more manual tasks related to testing your code, the more time you\u0026rsquo;re wasting on technical debt. Take the time with new builds to set up a framework where any code being updated will pass through some gates before being released, and set it up so the process can be done automatically. If your IaC is broken, how successful will your application deploys be if they depend on it? It\u0026rsquo;s time to take a few lessons from the development teams and implement our own tool chain for infrastructure builds.\n","permalink":"https://theithollow.com/2018/11/05/quality-checking-infrastructure-as-code/","summary":"\u003cp\u003eIf you\u0026rsquo;ve been doing application development for long, having tools in place to check the health of your code is probably not a new concept. However, if you\u0026rsquo;re jumping into something like Cloud and you\u0026rsquo;ve been an infrastructure engineer, this may be a foreign concept to you.
Isn\u0026rsquo;t it bad enough that you\u0026rsquo;ve started learning Git, JSON, YAML, APIs etc on top of your existing skill sets? Well, take some lessons from the application teams and you may well find that you\u0026rsquo;re improving your processes and reducing the technical debt and time to provision infrastructure as code resources as well.\u003c/p\u003e","title":"Quality Checking Infrastructure-as-Code"},{"content":"I recently attended the Devops Enterprise Summit in Las Vegas so that I could keep up to date on the latest happenings around integrating devops for companies. This conference was nothing short of amazing, but what I wasn\u0026rsquo;t anticipating was a theme around IT burnout. The IT Revolutions team who puts on the conference started one of the keynotes on the topic of burnout, from Dr. Christina Maslach who is Professor of Psychology, Emerita University of California, Berkeley. In addition to this powerful session, there was another panel group that happened on Wednesday, that went further into the discussion including the ultimate consequence of burnout, which is suicide.\nIt\u0026rsquo;s obvious that burnout can affect many industries but we\u0026rsquo;re seeing higher numbers of burnout coming from the Information Technology industry and it\u0026rsquo;s time to address it as a group.\nWhat is Burnout? I firmly believe that sunshine is the best disinfectant. We have to know what we\u0026rsquo;re talking about and if we\u0026rsquo;re dealing with a real situation or if we\u0026rsquo;re imagining it. So, lets start by defining burnout.\nBurnout is prolonged response to chronic situational stressors on the job. -Maslach\nBurnout isn\u0026rsquo;t just this term we\u0026rsquo;ve made up either. It has its own ICD-10 code (International Statistical Code of Diseases from the medical industry). This is important to know because medical professionals have identified it as a real problem. This isn\u0026rsquo;t in something we\u0026rsquo;ve made up in our heads or a name we\u0026rsquo;ve given to a describe a fake phenomenon. Its a medical condition.\nSure, everyone has a tough day from time to time and that will be the case across every industry. But some industries such as healthcare, and now we\u0026rsquo;re finding IT workers, have stressors on a consistent basis. Think about it, mistakes as a healthcare worker may cause patients to be injured. Along those lines, mistakes in the IT industry can cause millions of dollars of damage by a simple misconfiguration.\nWhat Causes Burnout? When we think about burnout the first thing that comes to my mind is how much work is being piled up on you. But according to Dr. Maslach, there are six key categories that contribute to burnout and its not always the individual\u0026rsquo;s workload. You can get burned out even without a heavy amount of work to do. Maybe you work a steady 40 hour week but you can still be burned out due to other factors in your enviornment. Here are six key areas that can contribute to burnout with your job:\nWorkload - We\u0026rsquo;re working too hard or too many hours. Control - Do we have autonomy in our work? Reward - Salary, benefits, and appreciation. Community - Our relationships in the workplace Fairness - Do our policies affect everyone in our company fairly? Values - Do we feel a purpose or meaning in our work? The real key is when your body is unable to cope with the stress created by job demands. 
For short periods of time, your body deals with stress by releasing cortisol for that \u0026ldquo;fight or flight\u0026rdquo; reaction. Over a longer period of time your body has a difficult time sustaining that reaction, and when it does you can feel the effects of burnout.\nThe Effects of Burnout Burnout has some serious side effects including suicide. John Willis shared some his experiences with people within the IT industry which had affected him personally, and its worth a read from the itrevolution blog. This is a serious issue. Burnout can cause exhaustion, cynicism, depression and suicide if left for too long. If you\u0026rsquo;re experiencing burnout, you can begin to care less about job outcomes, lose confidence in your abilities, and feeling hopeless about your situation. In my opinion, workload might be the easiest of the categories to fix because its more easily identified.\nIt should become more evident that these effects aren\u0026rsquo;t going to just effect an employee (and their family), but the organization they work for will feel the effects as well. Think about what would happen to an IT organization when the members stop caring about outcomes. How about when they lose confidence in their own abilities? Those don\u0026rsquo;t seem like the types of competitive advantages that companies are trying to build. The point is that burnout is not a problem just for an individual but employers need to start taking notice of this as well.\nAm I Experiencing Burnout? Another problem I see with burnout is that you might not realize that you\u0026rsquo;re burned out. Remember that burnout happens over a long time so it can sneak up on you. A difficult workload or having little control over your situation may be something you\u0026rsquo;ve always dealt with at a job. So, if you start to feel depressed thoughts, you might not be able to pinpoint why you feel that way. Even if you identify that you\u0026rsquo;re feeling depressed, you probably won\u0026rsquo;t be able to figure out when it happened or whats changed because its been this way for so long. There isn\u0026rsquo;t a single event that you can point to as the reason for your feelings.\nIf you\u0026rsquo;re afraid you\u0026rsquo;re experiencing burnout, its worth talking to a support system. Talk to a friend, a co-worker or someone you trust. They might be able to help you identify whats happening. In addition, you might want to go take a survey from Dr. Maslach that helps you to identify if you\u0026rsquo;re experiencing the effects of burnout.\nThe Maslach Burnout Inventory (MBI) report is a survey of about twenty-two questions to help identify if you\u0026rsquo;re experiencing burnout. I took the report myself and it provides helpful information and a graphed scale of where you might be feeling burnout. This report costs $15 but well worth the money to find out how you\u0026rsquo;re doing. Employers, note that you can purchase these licenses and send them to employees if you want to know how your own employees are doing with burnout. Remember that they probably won\u0026rsquo;t tell you if they feel this way on their own.\nCall to Action for Employers Burnout can\u0026rsquo;t always be addressed by the employee. Managers must look out for their employees safety in the workplace and this includes their mental health state. Things like fairness and control aren\u0026rsquo;t usually things that an employee can tackle on their own.\nAs an example, some companies provide a reward program to identify people doing a great job. 
But if that reward must come from the employee\u0026rsquo;s manager, and some managers can\u0026rsquo;t take the time to submit their own employees, those employees may feel undervalued and that the process is unfair.\nEmployees who feel the effects of burnout may begin doing the bare minimum to keep their jobs, which ultimately hurts the company. If the effects get worse, those employees could hurt themselves. During our DOES roundtable, this is exactly what happened to two team members at one company. Imagine how it must feel to be the manager for those employees. So take care of your workers. Maybe you should roll out the Maslach Burnout Inventory to your employees to get a pulse of the burnout in your own organization.\nCall to Action for Employees Take care of yourselves. If at any point you\u0026rsquo;re having bad thoughts please seek help immediately. From a co-worker, a friend, a doctor, a family member or anyone who might listen. No one wants to see bad things happen to you and this can be fixed. It is not hopeless and we can heal.\nMental health days are not just a fun thing to do to blow off work. They may be an important tool for staving off burnout. You have to take care of yourself as well.\nDon\u0026rsquo;t think that if you\u0026rsquo;re not working 60 hour weeks or pulling all nighters, you\u0026rsquo;re not working hard enough. We\u0026rsquo;ve had a bad habit in the IT industry of thinking that we need to work so hard we\u0026rsquo;re falling asleep at our desk. You\u0026rsquo;ve probably even seen this in movies before. But, it\u0026rsquo;s ok to work a normal week and make time for yourself. It does not make you weak to do this. It probably makes you better. Rest is just as important for your body as exercise is, so don\u0026rsquo;t ignore it when you feel like you need a break.\nBe Vulnerable with Each Other This is a call to action for all of us in the industry. Be willing to admit when you made a mistake.\nThink of it this way: if employees only ever see an executive being a strong leader with all the right answers, how often do you suppose those employees will be willing to say they need help? If the leaders never make mistakes, the employees may feel like they are inadequate and need to hide their own issues for fear that they may seem weak or stupid. Employees may feel that they don\u0026rsquo;t know how to accomplish objectives laid out by their perfect leaders, so they\u0026rsquo;ll work twice as many hours and tell no one about it. This sort of behavior can be detrimental. Co-workers, be honest with your colleagues about your struggles as well. You\u0026rsquo;re probably not alone in your plight, and if you socialize this, others may come forward as well to discuss a problem affecting the whole company.\nDevops processes tell us that when we see a problem, the group stops and swarms on that problem until we resolve it. And then we move on. This is a group issue that we all need to be involved in, and when we see an employee struggling, it\u0026rsquo;s our duty to the process to fix it. A burned out employee is not functioning efficiently in our system and we need to fix that issue before moving on.\nSummary I AM NOT A MEDICAL PROFESSIONAL! IF YOU FEEL DARK THOUGHTS, SEEK PROFESSIONAL HELP!\nI want people to start understanding burnout so that they can get the help they need. Understanding the problem is a big step toward resolving it and I hope I\u0026rsquo;m helping this cause if only a little bit.
Please add your own comments if you\u0026rsquo;d like to share, and take the opportunity to watch the session from Dr. Maslach from the DOES \u0026lsquo;18 conference to get some background. I promise you that if you\u0026rsquo;re in the IT Industry, you\u0026rsquo;ll find it fascinating. Let\u0026rsquo;s figure out how to improve the lives of the IT industry workers.\nI\u0026rsquo;d also like to thank my friends and family for helping me talk through this post as well. You are just as important to my career as any training classes, or work I\u0026rsquo;ve done to get where I am today. Thank you for being my support system.\n","permalink":"https://theithollow.com/2018/10/25/this-is-not-fine/","summary":"\u003cp\u003eI recently attended the Devops Enterprise Summit in Las Vegas so that I could keep up to date on the latest happenings around integrating devops for companies. This conference was nothing short of amazing, but what I wasn\u0026rsquo;t anticipating was a theme around IT burnout. The \u003ca href=\"http://itrevolution.com\"\u003eIT Revolutions\u003c/a\u003e team who puts on the conference started one of the keynotes on the topic of burnout, from \u003ca href=\"https://psychology.berkeley.edu/people/christina-maslach\"\u003eDr. Christina Maslach\u003c/a\u003e who is Professor of Psychology, Emerita University of California, Berkeley. In addition to this powerful session, there was another panel group that happened on Wednesday, that went further into the discussion including the ultimate consequence of burnout, which is suicide.\u003c/p\u003e","title":"This is Not Fine!"},{"content":"A transit VPC is a pretty common networking pattern in an AWS environment. Transit VPCs can limit the number of peering connections required to connect all your VPCs by switching from a mesh topology of peers to a hub and spoke method with transit. While transit VPCs offer some nice features, they also require a bit more management overhead since you need to manage your own routers. Cisco makes the deployment of transit routers very easy, but sometimes you need to make some changes to the routers after they\u0026rsquo;re deployed, such as resizing them. Also, sometimes bad things happen and those routers can be destroyed by accident. This post shows how you can resize your Cisco CSRs and/or restore an old configuration from snapshot.\nTransit VPC Setup This post assumes that you\u0026rsquo;ve already gotten your Transit VPC and AWS environment set up and created. As you can see from my screenshot below, I\u0026rsquo;ve got a pair of Cisco CSRs up and running in my AWS environment. I\u0026rsquo;ve also got an EC2 instance in each of two different VPCs which are connected together through the transit network and subsequently these CSRs. These instances are set up to ping each other just to ensure connectivity exists between the two hosts.\nIn preparation for the upgrade to a larger EC2 instance type, I\u0026rsquo;m going to create an image of one of my CSRs that will be upgraded. I\u0026rsquo;m doing this manually and only once, but it might be a good idea to build in an image process on a regular schedule as a device backup solution.\nIf you\u0026rsquo;re going the manual route, select your CSR in the EC2 console and then click Actions \u0026ndash;\u0026gt; Image \u0026ndash;\u0026gt; Create Image.\nGive the Image a name and description.
Then click the \u0026ldquo;Create Image\u0026rdquo; button.\nAfter a few minutes you\u0026rsquo;ll see that your image has been created which should be visible in your images link in the EC2 portal.\nBefore we do anything else destructive, take a look at your elastic IP Addresses in the \u0026ldquo;Elastic IPs\u0026rdquo; link in the EC2 console. Pay attention to which EIP is associated with the router that is to be upgraded. We\u0026rsquo;ll need this information later.\nCreate new CSR We\u0026rsquo;re ready to build a new CSR from the image we created in the previous section. This could be to do our upgrade, or to recover from some sort of catastrophic configuration change. Navigate to the images where we created our CSR image. Select the image and click Actions \u0026ndash;\u0026gt; Launch to create a new EC2 instance from the snapshot. At this point the EC2 Launch wizard will start. Select the instance size that you prefer as well as names, security groups etc. For this demo I\u0026rsquo;ve selected an c4.xlarge to show that the size can be altered from the default c4.large sizing.\nOnce the images are launched, go back into your Elastic IPs portal and disassociate the old EIP with the CSR that was deleted, shutoff or broken. You can do this by selecting the EIP and going to Actions \u0026ndash;\u0026gt; Disassociate address.\nA window will open asking you to confirm where you\u0026rsquo;ll click the \u0026ldquo;Disassociate address\u0026rdquo; button.\nNow select that EIP again and go to Actions \u0026ndash;\u0026gt; Associate address so that we can attach that EIP to the new instance that we created from the image.\nIn the window that opens select the instance radio button and then select your new CSR from the instance drop down. Then select the private IP Address of the CSR that should get the new association. When you\u0026rsquo;re done click the \u0026ldquo;Associate\u0026rdquo; button.\nWhen done you\u0026rsquo;ll get a confirmation window. Click the \u0026ldquo;Close\u0026rdquo; button.\nResults Now that the EIP has been re-associated to our new CSR we should be able to use our Transit connections again with both CSRs. In the EC2 instance portal of my transit account I can now see my original Transit CSRs (one of them stopped which Is the one we replaced) as well as the CSR that I created from image. Notice that there are two CSRs running and the restored CSR is a larger size than the originals.\nAs I hop over to one of my spoke VPCs with my test EC2 instances running (and pinging each other) I can navigate to the VPNs. There I see that I have two VPNs available and are routing packets through my transit VPC.\nAs I went through this process I saw a small drop in connectivity. Also, in case you\u0026rsquo;re wondering I powered off the remaining CSR from the initial configuration to ensure that traffic was flowing through the transit and across the new router that was deployed.\nSummary This scenario showed that you can restore your Cisco CSRs used for transitive routing by snapshotting and restoring an image. This process can be used not only for Cisco CSRs but for any types of instances that are deployed in AWS. Maybe its an EC2 instance running a webapp, or maybe its a Palo Alto transit setup. In any case, this process should work to help you manage your EC2 instances. Thanks for reading.\n","permalink":"https://theithollow.com/2018/10/22/restore-or-resize-an-aws-transit-router/","summary":"\u003cp\u003eA transit VPC is a pretty common networking pattern in an AWS environment. 
[Transit VPCs](http://Should I use a Transit VPC in AWS?) can limit the number of peering connections required to connect all your VPCs by switching from a mesh topology of peers to a hub and spoke method with transit. While transit VPCs offer some nice features, it also requires a bit more management overhead since you need to manage your own routers. Cisco makes the deployment of transit routers very easy but sometimes you need to make some changes to the routers after they\u0026rsquo;re deployed like if you need to resize them. Also, sometimes bad things happen and those routers can be destroyed by accident. This post shows how you can resize your Cisco CSRs and/or restore an old configuration from snapshot.\u003c/p\u003e","title":"Restore or Resize an AWS Transit Router"},{"content":"Upgrading your vRealize Automation instance has some times been a painful exercise. But this was in the early days after VMware purchased the product from DynamicOps. It\u0026rsquo;s taken a while, but the upgrade process has improved for each and every version, in my opinion, and 7.5 is no exception. If you\u0026rsquo;re on a previous version, here is a quick rundown on the upgrade process from 7.4 to 7.5.\nNote: As always, please read the the official upgrade documentation. It includes prerequisites and steps that should always be followed. https://docs.vmware.com/en/vRealize-Automation/7.5/vrealize-automation-7172732to75upgrading.pdf\nUpgrade Prerequisites There are a few things that should commonly be checked before these upgrades. I\u0026rsquo;ve seen prerequisites listed before for software that mention making sure you have free space available and the right hardware components, blah blah. But you really should check these. I don\u0026rsquo;t know how many times I\u0026rsquo;ve gone to do a vRA upgrade only to find out the disk sizes have changed or i used up all the free space, so do yourself a favor and check to be sure you\u0026rsquo;ve covered these.\nvRealize Automation requirements:\n18 GB RAM 4 CPUs Disk1=50 GB Disk3=25 GB Disk4=50 GB IaaS Servers and SQL Database must also have at least 5 GB of free space available.\nBeyond these requirements, I highly recommend checking the Java version on your IaaS servers. vRA has required that the java version be upgraded between my vRA upgrades on the past few occasions and if you don\u0026rsquo;t upgrade them, it can bite you. Be sure that for 7.5 your Java version is version 8 update 161 or higher.\nAlso, it should make sense to you that your vRA 7.4 version is in good working condition before you upgrade to 7.5. If it isn\u0026rsquo;t, it\u0026rsquo;s unlikely that your upgrade will magically make everything work again. A couple of good things to test would be to check the services are all registered in your vRA VAMI console.\nAnother good thing to check is to make sure that your IaaS Management agent is communicating properly. I found out that changing my vRA root password (because I couldn\u0026rsquo;t remember it) caused my management agent to stop communicating. Check to make sure this works so that the upgrade process can not only update your vRA appliance, but then also seamlessly update your IaaS servers. Lastly, and I can\u0026rsquo;t stress this enough, make sure that you have proper backups and snapshots. In my lab I prefer to keep my SQL database on my IaaS Server so that snapshotting this server and the vRA appliance is all that I need to do. 
I\u0026rsquo;ve frequently had errors during upgrades (almost always because I didn\u0026rsquo;t thoroughly review the documentation) and the snapshots instantly get me back to my starting point.\nPerform the Upgrade To run the upgrade, login to the vRA appliance\u0026rsquo;s VAMI console and go to the update tab. From there click the check for updates (assuming the default repository is set under settings) and wait until you get a notification that 7.5 is available. After that click the \u0026ldquo;Install Updates\u0026rdquo; button.\nYou\u0026rsquo;ll be asked a second time if you\u0026rsquo;re ready to do this upgrade. Consider this the \u0026ldquo;Do you have valid backups?\u0026rdquo; message box. Click OK when you\u0026rsquo;re ready.\nYou\u0026rsquo;ll get a dialog box that tells you to please wait. And it may stay this way for quite some time.\nIf you\u0026rsquo;d like to get more details on what\u0026rsquo;s actually happening, I highly recommend SSHing into the vRA appliance and running a tail -f on /opt/vmware/var/log/vami/updatecli.log. Eventually, you\u0026rsquo;ll see that the upgrade is finished.\nThe VAMI console will also show that the upgrade is complete but you\u0026rsquo;ll need to reboot the vRA appliance before it\u0026rsquo;s finished.\nAfter the reboot, you should be able to log back into your tenant and will see the new HTML 5 interface and your \u0026ldquo;Services\u0026rdquo; menu should be gone.\nGood luck on your upgrade, and thanks for reading!\n","permalink":"https://theithollow.com/2018/10/08/upgrade-to-vra-7-5/","summary":"\u003cp\u003eUpgrading your vRealize Automation instance has some times been a painful exercise. But this was in the early days after VMware purchased the product from DynamicOps. It\u0026rsquo;s taken a while, but the upgrade process has improved for each and every version, in my opinion, and 7.5 is no exception. If you\u0026rsquo;re on a previous version, here is a quick rundown on the upgrade process from 7.4 to 7.5.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eNote:\u003c/strong\u003e As always, please read the the official upgrade documentation. It includes prerequisites and steps that should always be followed. https://docs.vmware.com/en/vRealize-Automation/7.5/vrealize-automation-7172732to75upgrading.pdf\u003c/p\u003e","title":"Upgrade to vRA 7.5"},{"content":"Amazon has released yet another Simple Systems Manager service to improve the management of EC2 instances. This time, it\u0026rsquo;s AWS Session Manager. Session Manager is a nifty little service that lets you assign permissions to users to access an instances\u0026rsquo;s shell. Now, you might be thinking, \u0026ldquo;Why would I need this? I can already add SSH keys to my instances at boot time to access my instances.\u0026rdquo; You\u0026rsquo;d be right of course, but think of how you might use Session Manager. Instead of having to deal with adding SSH keys, and managing access/distribution of the private keys, we can manage access through AWS Identity and Access Management permissions.\nSetup Session Manager As with the other System Manager services, you\u0026rsquo;ll need the instances to have the correct permissions by assigning a Systems Manager instance profile role.\nIf you\u0026rsquo;ve been following along with the rest of this series, you may need to add the following policy to your EC2SystemsManagerRole. Session Manager came out much later than some of the other services we\u0026rsquo;ve talked about already. 
So add these additional permissions to your SystemsManagerRole before we add the instance profile to the instance. I\u0026rsquo;d also mention that my user has Full Administrator permissions but if yours doesn\u0026rsquo;t, you\u0026rsquo;ll need to add more permissions to your user to use the Session Manager service on your EC2 instances.\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;ssmmessages:CreateControlChannel\u0026#34;, \u0026#34;ssmmessages:CreateDataChannel\u0026#34;, \u0026#34;ssmmessages:OpenControlChannel\u0026#34;, \u0026#34;ssmmessages:OpenDataChannel\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;*\u0026#34; }, { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;s3:GetEncryptionConfiguration\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;*\u0026#34; } ] } Once you\u0026rsquo;ve setup the appropriate instance profile permissions, you\u0026rsquo;ll need to spin up an instance. I\u0026rsquo;ve spun up one of my Linux instances that has the SSM Agent installed and assigned my EC2SystemsManagerRole attached.\nYou can also see that my security group that I\u0026rsquo;ve attached to my instance only has port 80 open. SSH is not required with this Session Manager service which is another benefit to your security profile.\nTest A Shell Session Once your instance has been spun up, you can look in the Systems Manager Service. Many of the EC2 Simple Systems Manager services are available from the EC2 console, but this one is not. To access it you\u0026rsquo;ll need to go to the Systems Manager service directly. On the session manager screen, click the \u0026ldquo;Start a Session\u0026rdquo; button. You\u0026rsquo;ll notice that from my screenshot, the version of my SSM Agent is not current. You need version 2.3.68.0 or later for it to work with Session Manager. Luckily, my Instance profile gives enough permissions so that I can use the Run Command service to update the agent. Session Manager lets you do this directly from its own interface. Click \u0026ldquo;Update SSM Agent\u0026rdquo; button if you see this screen.\nYou\u0026rsquo;ll be asked if you\u0026rsquo;re sure that you want to complete this operation. Click the \u0026ldquo;update SSM Agent\u0026rdquo; button again.\nNow we can go back to our Session Manager and click \u0026ldquo;Start session\u0026rdquo;. You\u0026rsquo;ll see a shell open in a new web browser window. Form there, I ran a pair of commands just to show it working. First, notice that the user that you login with on your EC2 instance is \u0026ldquo;ssm-user\u0026rdquo; and not ec2-user or root.\nWhen you\u0026rsquo;re done with your configurations, click the \u0026ldquo;Terminate\u0026rdquo; button on the top right hand corner. NOTE: it means terminate the session and not the instance. Just to show that the solution also works with Windows, you\u0026rsquo;ll get a PowerShell session open if using Windows.\nSummary Simple Systems Manager has a bunch of great tools to manage your EC2 instance fleet. Adding Session Manager can dramatically make your instances more easy to manage by removing the need for SSH key management, and increase your security posture by removing the need to provide port 22 access. 
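If you prefer a terminal over the browser shell, the same session can also be opened from the AWS CLI (this assumes the Session Manager plugin for the CLI is installed locally; the instance ID below is a placeholder):
# Confirm the instance has registered with Systems Manager
aws ssm describe-instance-information

# Open an interactive shell on the instance
aws ssm start-session --target i-0123456789abcdef0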
Try it out in your environment.\n","permalink":"https://theithollow.com/2018/10/01/aws-session-manager/","summary":"\u003cp\u003eAmazon has released yet another \u003ca href=\"/2017/10/02/aws-ec2-simple-systems-manager-reference/\"\u003eSimple Systems Manager\u003c/a\u003e service to improve the management of EC2 instances. This time, it\u0026rsquo;s AWS Session Manager. Session Manager is a nifty little service that lets you assign permissions to users to access an instances\u0026rsquo;s shell. Now, you might be thinking, \u0026ldquo;Why would I need this? I can already add SSH keys to my instances at boot time to access my instances.\u0026rdquo; You\u0026rsquo;d be right of course, but think of how you might use Session Manager. Instead of having to deal with adding SSH keys, and managing access/distribution of the private keys, we can manage access through AWS Identity and Access Management permissions.\u003c/p\u003e","title":"AWS Session Manager"},{"content":"Opening an AWS account is very easy to do. AWS makes it possible to create an account with an email address and a credit card. Even better, if you\u0026rsquo;re setting up a multi-account structure, you can use the API through organizations and you really only need an email address as an input. But closing an account is slightly more difficult. While closing accounts doesn\u0026rsquo;t happen quite as often as opening new ones, it does happen. Especially if you\u0026rsquo;re trying to fail fast and have made some organizational mistakes. When you want to clean those accounts up, you\u0026rsquo;ll need to jump through a couple of small hoops to do so. This post hopes to outline how to remove an account from an AWS Organization and then close it.\nRemove a Member Account from Organizations Login to the member account that you wish to remove as the root user.\nNote: You may need to reset the password if you haven\u0026rsquo;t done this already. Creating a new account from organizations does not require the password to be set.\nOnce logged in to the console, select the account name drop down and then select \u0026ldquo;My Account\u0026rdquo;.\nGo to Payment methods and add a Credit Card. Finish filling out the Credit Card Details and contact information associated with the card.\nNext, go back to the account drop down and select “My Organization”.\nYou can now select “Leave organization” where you’ll likely receive an error message about some steps that aren’t completed. Click the “Leave organization” button.\nYou’ll get a warning message asking if you’re sure you want to leave. Select the “Leave organization” link at the bottom of the warning message.\nYou’ll likely get a message preventing you from leaving the organization. Luckily the link at the bottom of this warning will show you the steps needed to finish setting up the member account and prep it for removal. Click the “Complete the account sign-up steps” link.\nThe first step is to verify your phone number. Enter your phone number and the captcha code and then click “Call me now”.\nThe screen will change and display a four digit number. You’ll also receive a call from AWS at the number you entered and will ask you to submit that number through your touch tone phone.\nAfter you enter the code, the screen will change again stating that your identity has been verified. Click Continue.\nAfter you verify the account, you’ll need to select a support plan. 
You can select the basic plan, which is free, if you plan to close the account.\nWhen you’re done you’ll see a message stating that the sign-in steps have been completed.\nAt this point you can select “Leave organization”.\nAgain you’ll get a warning. Select “Leave organization”.\nYou should get a message that the account was removed from your organization.\nClose the Member Account Go back to the Account drop down and select “My Account”.\nScroll to the bottom of that page and click the check box under “Close Account” stating that you understand the consequences of closing the account. Then click the “Close Account” button.\nVerify once again that you’re ready to close the account by clicking the “Close Account” button.\nYou should get a message that the account was removed. You can then sign out of the account.\nSummary Removing an account seems easy enough to accomplish, but I\u0026rsquo;ve seen strange issues from time to time with this process. Usually it\u0026rsquo;s something simple like a support plan or credit card that hasn\u0026rsquo;t been added. Other times you\u0026rsquo;ll see odd messages about a waiting period such as this one. If you see something similar to this or are having issues with closing your account, contact AWS Support. You should get a response within 24 hours even with the free plan.\n","permalink":"https://theithollow.com/2018/09/17/close-an-aws-account-belonging-to-an-organization/","summary":"\u003cp\u003eOpening an AWS account is very easy to do. AWS makes it possible to create an account with an email address and a credit card. Even better, if you\u0026rsquo;re setting up a multi-account structure, you can use the API through organizations and you really only need an email address as an input. But closing an account is slightly more difficult. While closing accounts doesn\u0026rsquo;t happen quite as often as opening new ones, it does happen. Especially if you\u0026rsquo;re trying to fail fast and have made some organizational mistakes. When you want to clean those accounts up, you\u0026rsquo;ll need to jump through a couple of small hoops to do so. This post hopes to outline how to remove an account from an AWS Organization and then close it.\u003c/p\u003e","title":"Close an AWS Account Belonging to an Organization"},{"content":"In a previous post, we covered how to use an AWS Custom Resource in a CloudFormation template to deploy a very basic Lambda function. To expand upon this ability, let\u0026rsquo;s use this knowledge to deploy something more useful than a basic Lambda function. How about we use it to create an AWS account? To my knowledge, the only way to create a new AWS account is to use the CLI or manually through the console. How about we use a custom resource to deploy a new account for us in our AWS Organization? Once this ability is available in a CloudFormation template, we could even publish it in the AWS Service Catalog and give our users an account vending machine capability.\nCreate the Lambda Function Just as we did in the previous post, we\u0026rsquo;ll create a Lambda function, zip it up and place it into our S3 bucket. 
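If you want to script that packaging step, it only takes a couple of commands. This is a rough sketch that assumes the function is saved as create-account.py and reuses the bucket and key defaults from the CloudFormation template later in this post.
# Package the Lambda handler and stage it in S3 for CloudFormation to pick up
zip create-account.zip create-account.py
aws s3 cp create-account.zip s3://hollow-acct/create-account.zip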
My function is Python 2.7 and can be found below.\n#Import modules import json, boto3, logging from botocore.vendored import requests #Define logging properties log = logging.getLogger() log.setLevel(logging.INFO) #Main Lambda function to be excecuted def lambda_handler(event, context): #Initialize the status of the function status=\u0026#34;SUCCESS\u0026#34; responseData = {} client = boto3.client(\u0026#39;organizations\u0026#39;) #Read and log the input values acctName = event[\u0026#39;ResourceProperties\u0026#39;][\u0026#39;AccountName\u0026#39;] ouName = event[\u0026#39;ResourceProperties\u0026#39;][\u0026#39;OUName\u0026#39;] emailAddress = event[\u0026#39;ResourceProperties\u0026#39;][\u0026#39;Email\u0026#39;] log.info(\u0026#34;Account name is: \u0026#34; + acctName) log.info(\u0026#34;Organizational Unit name is: \u0026#34; + ouName) log.info(\u0026#34;Email Address is: \u0026#34; + emailAddress) #create a new Organizational Unit orgResponse = client.create_organizational_unit( ParentId=\u0026#34;ou-3hvv-jqwq89r0\u0026#34;, #My Parent OU. Change for your environment Name=ouName ) log.info(orgResponse[\u0026#39;OrganizationalUnit\u0026#39;][\u0026#39;Id\u0026#39;]) OUID=str(orgResponse[\u0026#39;OrganizationalUnit\u0026#39;][\u0026#39;Id\u0026#39;]) #Create a new Account in the OU Just Created acctResponse = client.create_account( Email=emailAddress, AccountName=acctName ) #Check Account Status acctStatusID = acctResponse[\u0026#39;CreateAccountStatus\u0026#39;][\u0026#39;Id\u0026#39;] log.info(acctStatusID) while True: createStatus = client.describe_create_account_status( CreateAccountRequestId=acctStatusID ) log.info(createStatus[\u0026#39;CreateAccountStatus\u0026#39;][\u0026#39;State\u0026#39;]) if str(createStatus[\u0026#39;CreateAccountStatus\u0026#39;][\u0026#39;State\u0026#39;]) != \u0026#39;IN_PROGRESS\u0026#39;: newAccountId = str(createStatus[\u0026#39;CreateAccountStatus\u0026#39;][\u0026#39;AccountId\u0026#39;]) break #Move Account to new OU moveResponse = client.move_account( AccountId=newAccountId, SourceParentId=\u0026#39;r-3hvv\u0026#39;, #My root OU. 
Change for your environment DestinationParentId=OUID ) #Set Return Data responseData = {\u0026#34;Message\u0026#34; : newAccountId} #If you need to return data use this json object #return the response back to the S3 URL to notify CloudFormation about the code being run response=respond(event,context,status,responseData,None) #Function returns the response from the S3 URL return { \u0026#34;Response\u0026#34; :response } def respond(event, context, responseStatus, responseData, physicalResourceId): #Build response payload required by CloudFormation responseBody = {} responseBody[\u0026#39;Status\u0026#39;] = responseStatus responseBody[\u0026#39;Reason\u0026#39;] = \u0026#39;Details in: \u0026#39; + context.log_stream_name responseBody[\u0026#39;PhysicalResourceId\u0026#39;] = context.log_stream_name responseBody[\u0026#39;StackId\u0026#39;] = event[\u0026#39;StackId\u0026#39;] responseBody[\u0026#39;RequestId\u0026#39;] = event[\u0026#39;RequestId\u0026#39;] responseBody[\u0026#39;LogicalResourceId\u0026#39;] = event[\u0026#39;LogicalResourceId\u0026#39;] responseBody[\u0026#39;Data\u0026#39;] = responseData #Convert json object to string and log it json_responseBody = json.dumps(responseBody) log.info(\u0026#34;Response body: \u0026#34; + str(json_responseBody)) #Set response URL responseUrl = event[\u0026#39;ResponseURL\u0026#39;] #Set headers for preparation for a PUT headers = { \u0026#39;content-type\u0026#39; : \u0026#39;\u0026#39;, \u0026#39;content-length\u0026#39; : str(len(json_responseBody)) } #Return the response to the signed S3 URL try: response = requests.put(responseUrl, data=json_responseBody, headers=headers) log.info(\u0026#34;Status code: \u0026#34; + str(response.reason)) status=\u0026#34;SUCCESS\u0026#34; return status #Defind what happens if the PUT operation fails except Exception as e: log.error(\u0026#34;send(..) failed executing requests.put(..): \u0026#34; + str(e)) status=\u0026#34;FAILED\u0026#34; return status As before, lets break down a few of the relevant sections of the code so you can see whats happening. To begin, lets look at the main lambda_handler. First we\u0026rsquo;ll initialize some of our variables and set our boto3 client to organizations so that we can create our accounts. After this, we\u0026rsquo;re going to set some variables in our Lambda function that will be passed in from our CloudFormation template (shown later in this post). After we set our variables, we\u0026rsquo;ll log them so that we can see what CloudFormation actually passed to our function.\n#Main Lambda function to be excecuted def lambda_handler(event, context): #Initialize the status of the function status=\u0026#34;SUCCESS\u0026#34; responseData = {} client = boto3.client(\u0026#39;organizations\u0026#39;) #Read and log the input values acctName = event[\u0026#39;ResourceProperties\u0026#39;][\u0026#39;AccountName\u0026#39;] ouName = event[\u0026#39;ResourceProperties\u0026#39;][\u0026#39;OUName\u0026#39;] emailAddress = event[\u0026#39;ResourceProperties\u0026#39;][\u0026#39;Email\u0026#39;] log.info(\u0026#34;Account name is: \u0026#34; + acctName) log.info(\u0026#34;Organizational Unit name is: \u0026#34; + ouName) log.info(\u0026#34;Email Address is: \u0026#34; + emailAddress) Next, we\u0026rsquo;ll use boto3 to create an Organizational Unit on the fly. We\u0026rsquo;ll pass in the name of this OU from our CloudFormation template. 
To do this we\u0026rsquo;ll use the create_organizational_unit method and we\u0026rsquo;ll need to pass in the parent OU and the name of our new OU. When we\u0026rsquo;re done, we\u0026rsquo;ll log the ID of this OU and I\u0026rsquo;m setting a variable with the ID of this OU as well for later on in the function.\n#create a new Organizational Unit orgResponse = client.create_organizational_unit( ParentId=\u0026#34;ou-3hvv-jqwq89r0\u0026#34;, #My Parent OU. Change for your environment Name=ouName ) log.info(orgResponse[\u0026#39;OrganizationalUnit\u0026#39;][\u0026#39;Id\u0026#39;]) OUID=str(orgResponse[\u0026#39;OrganizationalUnit\u0026#39;][\u0026#39;Id\u0026#39;]) Now that we\u0026rsquo;ve created an OU, lets create the account. Again, we\u0026rsquo;ll use boto3 to call the create_account method. We\u0026rsquo;ll pass in an email address to be used for the new account and an account name. Again, when this is done, we\u0026rsquo;ll log the response which is the account status. After this, we\u0026rsquo;ll initiate a loop to check on the status of the account while it\u0026rsquo;s being created. Account creation isn\u0026rsquo;t an immediate thing, so its good to check on it until its either Successful or Failed. The loop checks the status, logs it and waits until its no longer IN_PROGRESS.\n#Create a new Account in the OU Just Created acctResponse = client.create_account( Email=emailAddress, AccountName=acctName ) #Check Account Status acctStatusID = acctResponse[\u0026#39;CreateAccountStatus\u0026#39;][\u0026#39;Id\u0026#39;] log.info(acctStatusID) while True: createStatus = client.describe_create_account_status( CreateAccountRequestId=acctStatusID ) log.info(createStatus[\u0026#39;CreateAccountStatus\u0026#39;][\u0026#39;State\u0026#39;]) if str(createStatus[\u0026#39;CreateAccountStatus\u0026#39;][\u0026#39;State\u0026#39;]) != \u0026#39;IN_PROGRESS\u0026#39;: newAccountId = str(createStatus[\u0026#39;CreateAccountStatus\u0026#39;][\u0026#39;AccountId\u0026#39;]) break With any luck, our account has been created and we\u0026rsquo;ve got one more thing to do. Lets move the new account, into the new OU we created. Again, we\u0026rsquo;ll use boto3, but this time with the move_account method. I\u0026rsquo;m passing in the new AccountId we stored from our new create_account method, and the OUID we stored from our create_organizational_unit. I\u0026rsquo;m also specifying my root OU which will be different in your case. Fill it in, or do a search to find it in the Lambda function.\n#Move Account to new OU moveResponse = client.move_account( AccountId=newAccountId, SourceParentId=\u0026#39;r-3hvv\u0026#39;, #My root OU. Change for your environment DestinationParentId=OUID ) The account stuff is done, now we\u0026rsquo;re just setting some return data to be sent back to CloudFormation. The info I want sent back to CFn as an output is the accountID of our new account we created. 
After which we\u0026rsquo;ll send the data back to our signed S3 URL as we explained in the previous post about Custom Resorces.\n#Set Return Data responseData = {\u0026#34;Message\u0026#34; : newAccountId} #If you need to return data use this json object #return the response back to the S3 URL to notify CloudFormation about the code being run response=respond(event,context,status,responseData,None) #Function returns the response from the S3 URL return { \u0026#34;Response\u0026#34; :response } Create the CloudFormation Template If you\u0026rsquo;re following from the previous post, the only changes to the CloudFormation template are the variables being passed back and forth. For that reason we won\u0026rsquo;t go into much detail here, but the full template I used is found below.\n--- AWSTemplateFormatVersion: \u0026#39;2010-09-09\u0026#39; Description: Account Creation Stack Parameters: ModuleName: #Name of the Lambda Module Description: The name of the Python file Type: String Default: \u0026#34;create-account\u0026#34; S3Bucket: #S3 bucket in which to retrieve the python script with the Lambda handler Description: The name of the bucket that contains your packaged source Type: String Default: \u0026#34;hollow-acct\u0026#34; S3Key: #Name of the zip file Description: The name of the ZIP package Type: String Default: \u0026#34;create-account.zip\u0026#34; AccountName: #Account Name Description: Account Name To Be Created Type: String Default: \u0026#34;HollowTest1\u0026#34; OUName: #Organizational Unit Name Description: Organizational Unit Name To Be Created Type: String Default: \u0026#34;HollowTest1\u0026#34; Email: #Email Address Description: Email Address used for the Account Type: String Resources: CreateAccount: #Custom Resource Type: Custom::CreateAccount Properties: ServiceToken: Fn::GetAtt: - TestFunction #Reference to Function to be run - Arn #ARN of the function to be run AccountName: Ref: AccountName OUName: Ref: OUName Email: Ref: Email TestFunction: #Lambda Function Type: AWS::Lambda::Function Properties: Code: S3Bucket: Ref: S3Bucket S3Key: Ref: S3Key Handler: Fn::Join: - \u0026#39;\u0026#39; - - Ref: ModuleName - \u0026#34;.lambda_handler\u0026#34; Role: Fn::GetAtt: - LambdaExecutionRole - Arn Runtime: python2.7 Timeout: \u0026#39;30\u0026#39; LambdaExecutionRole: #IAM Role for Custom Resource Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \u0026#39;2012-10-17\u0026#39; Statement: - Effect: Allow Principal: Service: - lambda.amazonaws.com Action: - sts:AssumeRole Path: \u0026#34;/\u0026#34; Policies: - PolicyName: root PolicyDocument: Version: \u0026#39;2012-10-17\u0026#39; Statement: - Effect: Allow Action: - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents Resource: arn:aws:logs:*:*:* - PolicyName: Acct PolicyDocument: Version: \u0026#39;2012-10-17\u0026#39; Statement: - Effect: Allow Action: - organizations:* Resource: \u0026#34;*\u0026#34; Outputs: #Return output from the Lambda Function Run Message: Description: Message returned from Lambda Value: Fn::GetAtt: - CreateAccount #Output from the Custom Resource - Message #Return property Results Now that the coding is done, we can deploy the CloudFormation template. I\u0026rsquo;ve chosen to do this through the command line but you could do it through the console as well. 
My command line execution is as follows:\naws cloudformation create-stack --stack-name theITHollowCreateAccount1 --template-body file://create-account-CFn.yml --capabilities CAPABILITY_IAM --parameters ParameterKey=AccountName,ParameterValue=theITHollowAcct1 ParameterKey=OUName,ParameterValue=theITHollowOU1 ParameterKey=Email,ParameterValue=aws-temp4@theithollow.com ParameterKey=ModuleName,ParameterValue=create-account ParameterKey=S3Bucket,ParameterValue=hollow-acct ParameterKey=S3Key,ParameterValue=create-account.zip After a minute, you\u0026rsquo;ll see that the CFn stack has been deployed successfully, and that the output for the stack is the account number for the new AWS account.\nIf we open up the AWS Organizations console, we\u0026rsquo;ll see that the new account was created and the account number matches our output from the screenshot above.\nAs we look through the organizational units, we\u0026rsquo;ll see that a new OU was created and that our new account lives within that OU.\nSummary I hope that this post has shown you what kind of cool stuff you can do with a CloudFormation custom resource. Now think how neat this might be to put in AWS Service Catalog to deploy new accounts on demand. I will admit that this method does have some drawbacks, such as not being able to delete the stack and have the account deleted, but it is what it is.\n","permalink":"https://theithollow.com/2018/09/10/create-aws-accounts-with-cloudformation/","summary":"\u003cp\u003eIn a \u003ca href=\"/2018/09/04/aws-custom-resources/\"\u003eprevious post\u003c/a\u003e, we covered how to use an AWS Custom Resource in a CloudFormation template to deploy a very basic Lambda function. To expand upon this ability, lets use this knowledge to deploy something more useful than a basic Lambda function. How about we use it to create an AWS account? To my knowledge, the only way to create a new AWS account is to use the CLI or manually through the console. How about we use a custom resource to deploy a new account for us in our AWS Organization? Once this ability is available in a CloudFormation template, we could even publish it in the AWS Service Catalog and give our users an account vending machine capability.\u003c/p\u003e","title":"Create AWS Accounts with CloudFormation"},{"content":"We love to use AWS CloudFormation to deploy our environments. Its like configuration management for our AWS infrastructure in the sense that we write a desired state as code and apply it to our environment. But sometimes, there are tasks that we want to complete that aren\u0026rsquo;t part of CloudFormation. For instance, what if we wanted to use CloudFormation to deploy a new account which needs to be done through the CLI, or if we need to return some information to our CloudFormation template before deploying it? Luckily for us we can use a Custom Resource to achieve our goals. This post shows how you can use CloudFormation with a Custom Resource to execute a very basic Lambda function as part of a deployment.\nSolution Overview To demonstrate our Custom Resource, we\u0026rsquo;ll need a Lambda function that we can call. CloudFormation will deploy this function from a Zip file and after deployed, will execute this function and return the outputs to our CloudFormation template. 
The diagram below demonstrates the process of retrieving this zip file form an existing S3 bucket, deploying it, executing it and having the Lambda function return data to CloudFormation.\nThe CloudFormation Template First, lets take a look at the CloudFormation template that we\u0026rsquo;ll be using to deploy our resources.\n--- AWSTemplateFormatVersion: \u0026#39;2010-09-09\u0026#39; Description: Example of a Lambda Custom Resource that returns a message Parameters: ModuleName: #Name of the Lambda Module Description: The name of the Python file Type: String Default: helloworld S3Bucket: #S3 bucket in which to retrieve the python script with the Lambda handler Description: The name of the bucket that contains your packaged source Type: String Default: hollow-lambda1 S3Key: #Name of the zip file Description: The name of the ZIP package Type: String Default: helloworld.zip Message: #Message input for you to enter Description: The message to display Type: String Default: Test Resources: HelloWorld: #Custom Resource Type: Custom::HelloWorld Properties: ServiceToken: Fn::GetAtt: - TestFunction #Reference to Function to be run - Arn #ARN of the function to be run Input1: Ref: Message TestFunction: #Lambda Function Type: AWS::Lambda::Function Properties: Code: S3Bucket: Ref: S3Bucket S3Key: Ref: S3Key Handler: Fn::Join: - \u0026#39;\u0026#39; - - Ref: ModuleName - \u0026#34;.lambda_handler\u0026#34; Role: Fn::GetAtt: - LambdaExecutionRole - Arn Runtime: python2.7 Timeout: \u0026#39;30\u0026#39; LambdaExecutionRole: #IAM Role for Custom Resource Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \u0026#39;2012-10-17\u0026#39; Statement: - Effect: Allow Principal: Service: - lambda.amazonaws.com Action: - sts:AssumeRole Path: \u0026#34;/\u0026#34; Policies: - PolicyName: root PolicyDocument: Version: \u0026#39;2012-10-17\u0026#39; Statement: - Effect: Allow Action: - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents Resource: arn:aws:logs:*:*:* Outputs: #Return output from the Lambda Function Run Message: Description: Message returned from Lambda Value: Fn::GetAtt: - HelloWorld #Output from the HelloWorld Custom Resource - Message #Return property In the parameters section you can see we\u0026rsquo;re looking for the S3 bucket with our module in it, the name of the module and a generic input for the CloudFormation template to pass to Lambda as a string.\nParameters: ModuleName: #Name of the Lambda Module Description: The name of the Python file Type: String Default: helloworld S3Bucket: #S3 bucket in which to retrieve the python script with the Lambda handler Description: The name of the bucket that contains your packaged source Type: String Default: hollow-lambda1 S3Key: #Name of the zip file Description: The name of the ZIP package Type: String Default: helloworld.zip Message: #Message input for you to enter Description: The message to display Type: String Default: Test In the resources section we have a HelloWorld object which is our custom resource of type Custom::DESCRIPTIONHERE. We need to pass a ServiceToken along, which tells the stack which Custom Resource to be executed. 
We\u0026rsquo;re also adding an input which will be passed to Lambda named \u0026ldquo;Input1\u0026rdquo; and we\u0026rsquo;ll reference the parameter seen earlier.\nHelloWorld: #Custom Resource Type: Custom::HelloWorld Properties: ServiceToken: Fn::GetAtt: - TestFunction #Reference to Function to be run - Arn #ARN of the function to be run Input1: Ref: Message Below this, is the Lambda function deployment. This piece of the resources section of the template shows where the Lambda module comes from, the runtime, timeout and which role will have permissions be used for it.\nTestFunction: #Lambda Function Type: AWS::Lambda::Function Properties: Code: S3Bucket: Ref: S3Bucket S3Key: Ref: S3Key Handler: Fn::Join: - \u0026#39;\u0026#39; - - Ref: ModuleName - \u0026#34;.lambda_handler\u0026#34; Role: Fn::GetAtt: - LambdaExecutionRole - Arn Runtime: python2.7 Timeout: \u0026#39;30\u0026#39; Next, there is a section for setting up permissions for the Lambda function to write to CloudWatch. Depending on your environment, you may need to provide access to other resources. For example if your function reads EC2 data, then you\u0026rsquo;d need to ensure it had the appropriate permissions to read those properties.\nLambdaExecutionRole: #IAM Role for Custom Resource Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \u0026#39;2012-10-17\u0026#39; Statement: - Effect: Allow Principal: Service: - lambda.amazonaws.com Action: - sts:AssumeRole Path: \u0026#34;/\u0026#34; Policies: - PolicyName: root PolicyDocument: Version: \u0026#39;2012-10-17\u0026#39; Statement: - Effect: Allow Action: - logs:CreateLogGroup - logs:CreateLogStream - logs:PutLogEvents Resource: arn:aws:logs:*:*:* We won\u0026rsquo;t look at the next piece until our Lambda function finishes running, but we\u0026rsquo;re going to get some return information from the function and print it as a CloudFormation output.\nOutputs: #Return output from the Lambda Function Run Message: Description: Message returned from Lambda Value: Fn::GetAtt: - HelloWorld #Output from the HelloWorld Custom Resource - Message #Return property Lambda Function Let\u0026rsquo;s look at the Lambda Function we\u0026rsquo;ll be using for this example. This is a python 2.7 function that takes a basic string input from CloudFormation, concatenates it with another string and returns it. 
Nothing too crazy here for the example, but there are some important pieces that must be in your Lambda function so that CloudFormation knows that the function is done running, and if it executed correctly.\n#Import modules import json, boto3, logging from botocore.vendored import requests #Define logging properties log = logging.getLogger() log.setLevel(logging.INFO) #Main Lambda function to be excecuted def lambda_handler(event, context): #Initialize the status of the function status=\u0026#34;SUCCESS\u0026#34; responseData = {} #Read and log the input value named \u0026#34;Input1\u0026#34; inputValue = event[\u0026#39;ResourceProperties\u0026#39;][\u0026#39;Input1\u0026#39;] log.info(\u0026#34;Input value is:\u0026#34; + inputValue) #transform the input into a new value as an exmaple operation data = inputValue + \u0026#34;Thanks to AWS Lambda\u0026#34; responseData = {\u0026#34;Message\u0026#34; : data} #If you need to return data use this json object #return the response back to the S3 URL to notify CloudFormation about the code being run response=respond(event,context,status,responseData,None) #Function returns the response from the S3 URL return { \u0026#34;Response\u0026#34; :response } def respond(event, context, responseStatus, responseData, physicalResourceId): #Build response payload required by CloudFormation responseBody = {} responseBody[\u0026#39;Status\u0026#39;] = responseStatus responseBody[\u0026#39;Reason\u0026#39;] = \u0026#39;Details in: \u0026#39; + context.log_stream_name responseBody[\u0026#39;PhysicalResourceId\u0026#39;] = context.log_stream_name responseBody[\u0026#39;StackId\u0026#39;] = event[\u0026#39;StackId\u0026#39;] responseBody[\u0026#39;RequestId\u0026#39;] = event[\u0026#39;RequestId\u0026#39;] responseBody[\u0026#39;LogicalResourceId\u0026#39;] = event[\u0026#39;LogicalResourceId\u0026#39;] responseBody[\u0026#39;Data\u0026#39;] = responseData #Convert json object to string and log it json_responseBody = json.dumps(responseBody) log.info(\u0026#34;Response body: \u0026#34; + str(json_responseBody)) #Set response URL responseUrl = event[\u0026#39;ResponseURL\u0026#39;] #Set headers for preparation for a PUT headers = { \u0026#39;content-type\u0026#39; : \u0026#39;\u0026#39;, \u0026#39;content-length\u0026#39; : str(len(json_responseBody)) } #Return the response to the signed S3 URL try: response = requests.put(responseUrl, data=json_responseBody, headers=headers) log.info(\u0026#34;Status code: \u0026#34; + str(response.reason)) status=\u0026#34;SUCCESS\u0026#34; return status #Defind what happens if the PUT operation fails except Exception as e: log.error(\u0026#34;send(..) failed executing requests.put(..): \u0026#34; + str(e)) status=\u0026#34;FAILED\u0026#34; return status Lets look at the main function that will be executed. First we\u0026rsquo;ll initialize some of our variables. Next, we want to retrieve our input parameter (named Input1 and passed from CloudFormation), and then log it. After this there is a simple operation to concatenate the input with another string just to do something simple in our function. The next step is to provide some return data that will end up being our CloudFormation output. This is a JSON object so if you don\u0026rsquo;t need to return any custom info to CloudFormation, use an empty JSON object {}.\nBelow this, we\u0026rsquo;re calling a respond function (also located in our Lambda script which will retrieve info to send back to CloudFormation about the state of the Lambda script run. 
After this we return our data from the function.\ndef lambda_handler(event, context): #Initialize the status of the function status=\u0026#34;SUCCESS\u0026#34; responseData = {} #Read and log the input value named \u0026#34;Input1\u0026#34; inputValue = event[\u0026#39;ResourceProperties\u0026#39;][\u0026#39;Input1\u0026#39;] log.info(\u0026#34;Input value is:\u0026#34; + inputValue) #transform the input into a new value as an exmaple operation data = inputValue + \u0026#34;Thanks to AWS Lambda\u0026#34; responseData = {\u0026#34;Message\u0026#34; : data} #If you need to return data use this json object #return the response back to the S3 URL to notify CloudFormation about the code being run response=respond(event,context,status,responseData,None) #Function returns the response from the S3 URL return { \u0026#34;Response\u0026#34; :response } Lets look closer at the respond function. When we\u0026rsquo;re executing this function we need to send certain items back to CloudFormation so that the stack knows if it worked or not. Specifically it must return a response of SUCCESS or FAILED to a pre-signed URL. There are a list of response objects, specifically:\nStatus (Required) Reason (Required if FAILED) PhysicalResourceId (Required) StackId (Required) RequestId (Required) LogicalResourceId (Required) NoEcho Data The first part of this function builds the JSON object so that we can send it back to CloudFormation. We are then converting it to a string and logging the data for later reference. We set our responseURL which is passed to us from CloudFormation in the event parameter. After that we set the headers and then we try to do a PUT REST call with our return data. To do all of this, we had to import certain modules for our function which are seen in the full script, but none of these need to be provided in your zip file. It should be noted that this can also be done if you add your Lambda function in-line within your CFn template. 
if you use that method, there is a \u0026ldquo;cfn-response\u0026rdquo; module that can be called which eliminates the need to use the requests module.\ndef respond(event, context, responseStatus, responseData, physicalResourceId): #Build response payload required by CloudFormation responseBody = {} responseBody[\u0026#39;Status\u0026#39;] = responseStatus responseBody[\u0026#39;Reason\u0026#39;] = \u0026#39;Details in: \u0026#39; + context.log_stream_name responseBody[\u0026#39;PhysicalResourceId\u0026#39;] = context.log_stream_name responseBody[\u0026#39;StackId\u0026#39;] = event[\u0026#39;StackId\u0026#39;] responseBody[\u0026#39;RequestId\u0026#39;] = event[\u0026#39;RequestId\u0026#39;] responseBody[\u0026#39;LogicalResourceId\u0026#39;] = event[\u0026#39;LogicalResourceId\u0026#39;] responseBody[\u0026#39;Data\u0026#39;] = responseData #Convert json object to string and log it json_responseBody = json.dumps(responseBody) log.info(\u0026#34;Response body: \u0026#34; + str(json_responseBody)) #Set response URL responseUrl = event[\u0026#39;ResponseURL\u0026#39;] #Set headers for preparation for a PUT headers = { \u0026#39;content-type\u0026#39; : \u0026#39;\u0026#39;, \u0026#39;content-length\u0026#39; : str(len(json_responseBody)) } #Return the response to the signed S3 URL try: response = requests.put(responseUrl, data=json_responseBody, headers=headers) log.info(\u0026#34;Status code: \u0026#34; + str(response.reason)) status=\u0026#34;SUCCESS\u0026#34; return status #Defind what happens if the PUT operation fails except Exception as e: log.error(\u0026#34;send(..) failed executing requests.put(..): \u0026#34; + str(e)) status=\u0026#34;FAILED\u0026#34; return status See It In Action Just so we can show some screenshots of the process, here is the input for my CloudFormation template as I\u0026rsquo;m deploying it through the AWS Console. See that I\u0026rsquo;ve got an input message of \u0026ldquo;Test Message\u0026rdquo; and I\u0026rsquo;m specifying information about my Lambda Function\u0026rsquo;s location.\nYou can also see my Lambda function neatly zipped up in my S3 bucket below.\nOnce the Lambda function has been deployed, we can see it in the Lambda Functions console.\nIf we look at the CloudWatch Logs for our function, we can see the data being returned to CloudFormation.\nLastly, we can see that in the CloudFormation template that we deployed, it has finished the creation and in the outputs tab, we can see the message that was returned to the stack. This output could be used for other stacks or purely informational.\nSummary Custom Resources might not be necessary very often, but they can let you do virtually anything you want within AWS. Maybe they execute a Lambda function to gather data, or maybe they trigger a Step Function that has tons of logic built into it to do something else magical. The world is your oyster now, what will you build with your new knowledge?\n","permalink":"https://theithollow.com/2018/09/04/aws-custom-resources/","summary":"\u003cp\u003eWe love to use AWS CloudFormation to deploy our environments. Its like configuration management for our AWS infrastructure in the sense that we write a desired state as code and apply it to our environment. But sometimes, there are tasks that we want to complete that aren\u0026rsquo;t part of CloudFormation. 
For instance, what if we wanted to use CloudFormation to deploy a new account which needs to be done through the CLI, or if we need to return some information to our CloudFormation template before deploying it? Luckily for us we can use a Custom Resource to achieve our goals. This post shows how you can use CloudFormation with a Custom Resource to execute a very basic Lambda function as part of a deployment.\u003c/p\u003e","title":"AWS Custom Resources"},{"content":"Some things change when you move to the cloud, but other things are very much the same. Like protecting your resources from outside threats. There are always no-gooders out there trying to steal data, or cause mayhem like in those Allstate commercials. Our first defense should be well-written applications, requiring authentication, etc., and with AWS we make sure we\u0026rsquo;re setting up security groups to limit our access to those resources. How about an extra level of protection from a Web Application Firewall? AWS WAF allows us to leverage some extra protections at the edge to protect us from those bad guys/girls.\nBackground on WAF The AWS Web Application Firewall (WAF) allows us to create custom rules to protect us from things like cross-site scripting, SQL injection, or just blocking traffic from certain geographies. If your site isn\u0026rsquo;t ready for GDPR, maybe you block Europe from accessing your site altogether. Of course WAF can also do things like block specific IP addresses that have been identified as bots, etc., but we expect all firewalls to be able to do this. WAF is billed at $5 per web ACL per month and another $1 per rule per ACL per month for the configuration. There are additional usage charges based on requests at $0.60 per million web requests.\nThe AWS WAF can be used with an AWS Application Load Balancer or a CloudFront distribution. If your application is hosted on-prem, you could still leverage the AWS WAF by integrating a CloudFront distribution with your application.\nThere are several parts to deploying a WAF for your application.\nConditions - You\u0026rsquo;ll build a condition to identify the type of traffic or web call that is being made. This could be a source IP address, a regular expression, a SQL filter, etc. The job here is to identify the types of requests that an action will be taken on. Rules - Rules will identify whether you plan to allow or block traffic that is identified by a condition, or a catch-all. For example, a rule might block IP addresses identified by Condition 1, block traffic from a specific geography identified by Condition 2, and allow any other traffic by default. Web ACL - A group of rules can be added to a Web ACL and the Web ACL is attached to a resource such as an Application Load Balancer or CloudFront distribution. Setup Through the Console The examples below will use a very basic website behind an AWS application load balancer through the AWS console. To begin, navigate to the AWS WAF and Shield services. A familiar getting started screen will show up where you can add your WAF by clicking the \u0026ldquo;Go to AWS WAF\u0026rdquo; button.\nWhen the WAF screen opens, click the \u0026ldquo;Configure web ACL\u0026rdquo; button, which will start the process of walking us through creating conditions and rules as well as the Web ACL.\nThe first screen gives you an idea of what will be created and how you might set it up. 
This screen is informational so read it and when you\u0026rsquo;re ready, click the \u0026ldquo;Next\u0026rdquo; button.\nLets create a Web ACL. I\u0026rsquo;ve named mine HollowACL and there is a CloudWatch metric that will be created as well that shows the statistics for this ACL in the CloudWatch console. Note: it may be useful to keep these names the same for tracking purposes.\nSelect the region that this will be available in. If you\u0026rsquo;re using CloudFront, the region should be \u0026ldquo;Global\u0026rdquo;, if you\u0026rsquo;re using an ALB, select the region your ALB is located. After you select the region, you should be able to select the ALB to associate the WAF ACL with and then click the \u0026ldquo;Next\u0026rdquo; button.\nNow it\u0026rsquo;s time to create the conditions. I\u0026rsquo;m keeping this simple and will geo-block requests coming from the United States for giggles and grins. Under the Geo match conditions type click the \u0026ldquo;Create condition\u0026rdquo; button to create a new condition. Depending on your own requirements, you may have to choose a different condition type which ultimately would ask for different things as part of the rule.\nGive the condition a name and again select a region. Since I selected a geo match condition I\u0026rsquo;ll need to identify which country to block. When done, be sure to click the \u0026ldquo;Add location\u0026rdquo; button to add it to the condition.\nNow that our condition(s) are created, lets move on to rules. Click the \u0026ldquo;Create rule\u0026rdquo; button.\nWhen you create the rule, give it a name and again a CloudWatch metric so we can review the activity later. Select either \u0026ldquo;regular\u0026rdquo; or \u0026ldquo;rate-based\u0026rdquo; rule depending on if this should always be active. Note: Rate-based rules would be good for brute force attacks where the first couple of times its allowed and then a block rule is triggered on too many attempts.\nUnder the conditions, select does or does not for a matching condition and then the type of condition (in this case its a geo rule) and then which condition of that type you\u0026rsquo;re matching. Add further conditions if needed.\nWe\u0026rsquo;re taken back to the web ACL screen where we will select whether traffic that matches that rule should be allowed, blocked or counted. Counted is used to monitor traffic, but not take actions on it. You should also select a default action of allow or deny. Click the \u0026ldquo;Review and create\u0026rdquo; button.\nReview the settings and then click the \u0026ldquo;Confirm and create\u0026rdquo; button.\nThe Results First things first, did it work? Lets try a request to access the web application from a US location (my desktop). We\u0026rsquo;ll notice that we get a 403 error meaning that we found a live service, but were denied access.\nIf we look back at the WAF Console, we can select our ACL and see a graph of the metrics we\u0026rsquo;re looking for. We can also see some samples of the requests that match the rule.\nBy looking at the CloudWatch portal, we can see even more details and we can create alarms (and subsequently take action on those alarms) if we see fit to do so.\nWAF Through Code As with most things AWS, you can deploy your WAF conditions, rules, and ACLs through CloudFormation. Below is an example of a simple block IP rule deployed through CloudFormation. The code below should get you started but only includes a sample load balancer, ACL, an IPSet, a rule and an association. 
You should be aware that if you\u0026rsquo;re working with rules for a load balancer they are denoted by a type of \u0026ldquo;AWS:: WAFRegional::SOMETHING\u0026rdquo; whereas WAF objects for CloudFront are just denoted by a type of \u0026ldquo;AWS:: WAF::SOMETHING\u0026rdquo;\n\u0026#34;Resources\u0026#34;: { \u0026#34;HollowWebLB1\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;AWS::ElasticLoadBalancingV2::LoadBalancer\u0026#34;, \u0026#34;Properties\u0026#34; : { \u0026#34;Name\u0026#34; : \u0026#34;HollowWebLB1\u0026#34;, \u0026#34;Scheme\u0026#34; : \u0026#34;internet-facing\u0026#34;, \u0026#34;SecurityGroups\u0026#34; : [{\u0026#34;Ref\u0026#34;: \u0026#34;InstanceSecurityGroups\u0026#34;}], \u0026#34;Subnets\u0026#34; : [{\u0026#34;Ref\u0026#34;:\u0026#34;Web1Subnet\u0026#34;}, {\u0026#34;Ref\u0026#34;:\u0026#34;Web2Subnet\u0026#34;}], \u0026#34;Type\u0026#34; : \u0026#34;application\u0026#34; } }, \u0026#34;HollowACL\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;AWS::WAFRegional::WebACL\u0026#34;, \u0026#34;Properties\u0026#34;: { \u0026#34;Name\u0026#34;: \u0026#34;HollowACL\u0026#34;, \u0026#34;DefaultAction\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;ALLOW\u0026#34; }, \u0026#34;MetricName\u0026#34; : \u0026#34;HollowACL\u0026#34;, \u0026#34;Rules\u0026#34;: [ { \u0026#34;Action\u0026#34; : { \u0026#34;Type\u0026#34; : \u0026#34;BLOCK\u0026#34; }, \u0026#34;Priority\u0026#34; : 1, \u0026#34;RuleId\u0026#34; : { \u0026#34;Ref\u0026#34;: \u0026#34;HollowRule\u0026#34; } } ] } }, \u0026#34;WAFBlacklistSet\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;AWS::WAFRegional::IPSet\u0026#34;, \u0026#34;Properties\u0026#34;: { \u0026#34;Name\u0026#34;: { \u0026#34;Fn::Join\u0026#34;: [\u0026#34; - \u0026#34;, [{ \u0026#34;Ref\u0026#34;: \u0026#34;AWS::StackName\u0026#34; }, \u0026#34;Blacklist Set\u0026#34;]] }, \u0026#34;IPSetDescriptors\u0026#34;: [ { \u0026#34;Type\u0026#34;: \u0026#34;IPV4\u0026#34;, \u0026#34;Value\u0026#34; : { \u0026#34;Ref\u0026#34; : \u0026#34;MyIPSetBlacklist\u0026#34; } } ] } }, \u0026#34;HollowRule\u0026#34;: { \u0026#34;Type\u0026#34; : \u0026#34;AWS::WAFRegional::Rule\u0026#34;, \u0026#34;Properties\u0026#34;: { \u0026#34;Name\u0026#34; : \u0026#34;HollowRule\u0026#34;, \u0026#34;MetricName\u0026#34; : \u0026#34;MyIPRule\u0026#34;, \u0026#34;Predicates\u0026#34; : [ { \u0026#34;DataId\u0026#34; : { \u0026#34;Ref\u0026#34; : \u0026#34;WAFBlacklistSet\u0026#34; }, \u0026#34;Negated\u0026#34; : false, \u0026#34;Type\u0026#34; : \u0026#34;IPMatch\u0026#34; } ] } }, \u0026#34;ACLAssociation\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;AWS::WAFRegional::WebACLAssociation\u0026#34;, \u0026#34;Properties\u0026#34;: { \u0026#34;ResourceArn\u0026#34;: {\u0026#34;Ref\u0026#34;: \u0026#34;HollowWebLB1\u0026#34;}, \u0026#34;WebACLId\u0026#34;: {\u0026#34;Ref\u0026#34; : \u0026#34;HollowACL\u0026#34;} } } } Summary The AWS WAF product should likely be part of your perimeter security strategy somehow. Applications build for the cloud should be behind a load balancer and in different AZs for availability purposes and if you\u0026rsquo;re using native AWS services that means an ALB probably. Why not add some additional protection by adding a WAF as well. If you aren\u0026rsquo;t sure about how to create the rules you need, checkout the marketplace where there are pre-defined rules for many applications. 
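If you deploy a template like the one above and want to confirm what it created without opening the console, the regional WAF API can list it back for you. A quick sketch, assuming your ALB-attached ACLs live in the waf-regional API as described above and your default region matches the load balancer:
# List the regional web ACLs (use the region where your ALB lives)
aws waf-regional list-web-acls --region us-east-1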
Try them yourself and see what you come up with.\n","permalink":"https://theithollow.com/2018/08/20/add-aws-web-application-firewall-to-protect-your-apps/","summary":"\u003cp\u003eSome things change when you move to the cloud, but other things are very much the same. Like protecting your resources from outside threats. There are always no-gooders out there trying to steal data, or cause mayhem like in those Allstate commercials. Our first defense should be well written applications, requiring authentication, etc and with AWS we make sure we\u0026rsquo;re setting up security groups to limit our access to those resources. How about an extra level of protection from a Web Application Firewall. AWS WAF allows us to leverage some extra protections at the edge to protect us from those bad guys/girls.\u003c/p\u003e","title":"Add AWS Web Application Firewall to Protect your Apps"},{"content":"Getting new code onto our servers can be done in a myriad of ways these days. Configuration management tools can pull down new code, pipelines can run scripts across our fleets, or we could run around with a USB stick for the rest of our lives. With container based apps, serverless functions, and immutable infrastructure, we\u0026rsquo;ve changed this conversation quite a bit as well. But what about a plain old server that needs a new version of code deployed on it? AWS CodeDeploy can help us to manage our software versions and rollbacks so that we have a consistent method to update our apps across multiple instances. This post will demonstrate how to get started with AWS CodeDeploy so that you can manage the deployment of new versions of your apps.\nSetup IAM Roles Before we start, I\u0026rsquo;ll assume that you\u0026rsquo;ve got a user account with administrator permissions so that you can deploy the necessary roles, servers and tools. After this, we need to start by setting up some permissions for CodeDeploy. First, we need to create a service role for CodeDeploy so that it can read tags applied to instances and take some actions for us. Go to the IAM console and click on the Roles tab. Then click \u0026ldquo;Create role\u0026rdquo;.\nChoose AWS service for the trusted entity and then choose CodeDeploy.\nAfter this, select the use case. For this post we\u0026rsquo;re deploying code on EC2 instances and not Lambda code, so select the \u0026ldquo;CodeDeploy\u0026rdquo; use case.\nOn the next screen choose the AWSCodeDeployRole policy.\nOn the last screen give it a descriptive name.\nNow that we have a role, we need to add a new policy. While still in the IAM console, choose the policies tab and then click the \u0026ldquo;Create policy\u0026rdquo; button.\nIn the create policy screen, choose the JSON tab and enter the JSON seen below. This policy allows the assumed role to read from all S3 buckets. We\u0026rsquo;ll be attaching this policy to an instance profile and eventually our servers.\nOn the last screen, enter a name for the policy and then click the \u0026ldquo;Create policy\u0026rdquo; button.\nLet\u0026rsquo;s create a second role now for this new policy.\nThis time select the \u0026ldquo;EC2\u0026rdquo; service so that our servers can access the S3 buckets.\nOn the attach permissions policies screen, select the policy we just created.\nOn the last screen give the role a name and click the \u0026ldquo;Create role\u0026rdquo; button.\nDeploy Application Servers Now that we\u0026rsquo;ve got that pesky permissions thing taken care of, it\u0026rsquo;s time to deploy our servers. 
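One quick note before we get to the servers: the S3 read policy from the IAM section above only appears as a screenshot, so here is a rough CLI equivalent for reference. The policy name and the broad s3:Get*/s3:List* actions are my own stand-ins rather than a copy of the original.
# Create an S3 read-only policy roughly equivalent to the one described above
aws iam create-policy \
  --policy-name CodeDeployS3ReadAccess \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:Get*","s3:List*"],"Resource":"*"}]}'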
You can deploy some EC2 instances any way you want, (I prefer CloudFormation personally) but for this post I\u0026rsquo;ll show you the important pieces in the AWS Console. Be sure to deploy your resources with the IAM role we created in the section prior to this. This instance profile gives the EC2 instance permissions to read from your S3 bucket where your code is stored.\nAs part of your server build, you\u0026rsquo;ll need to install the CodeDeploy agent. You can do this manually, or a better way might be to add the code below to the EC2 UserData field during deployment. NOTE: the [bucket-name] field comes from a pre-set list of buckets from AWS and is based on your region. See the list from here: https://docs.aws.amazon.com/codedeploy/latest/userguide/resource-kit.html#resource-kit-bucket-names\n#!/bin/bash yum -y update yum install -y ruby cd /home/ec2-user curl -O https://bucket-name.s3.amazonaws.com/latest/install chmod +x ./install ./install auto Also when you\u0026rsquo;re deploying your servers, you\u0026rsquo;ll want to have a tag that can be referenced by the CodeDeploy service. This tag will be useful to identify which servers should receive the updates that we\u0026rsquo;ll push later. For this example, I\u0026rsquo;m using a tag named \u0026ldquo;App\u0026rdquo; and a value of \u0026ldquo;HollowWeb\u0026rdquo;. I\u0026rsquo;m deploying a pair of servers in different AZs behind a load balancer to ensure that I\u0026rsquo;ve got excellent availability. Each of the servers will have this tag.\nOnce, the servers are deployed, you\u0026rsquo;ll want to deploy an app to make sure its all up and running correctly. NOTE: you could deploy the app for the first time through CodeDeploy if you\u0026rsquo;d like. I\u0026rsquo;m purposefully not doing this so that I can show how updates work and the first deployment isn\u0026rsquo;t as interesting so I\u0026rsquo;ve omitted it to keep this blog post to a reasonable length.\nYou can see my application is deployed by hitting the instance from a web browser. (Try not to be too impressed by version 0 here)\nYou can see from the EC2 console that I\u0026rsquo;ve created a target group for my load balancer and my two EC2 instances are associated with it and in a health state.\nCreate the App in CodeDeploy Now it\u0026rsquo;s finally time to get to the meat of this post and talk through CodeDeploy. The first thing we\u0026rsquo;ll do is create an application within the CodeDeploy console. When you first open the CodeDeploy console from the AWS portal, you\u0026rsquo;ll probably see the familiar getting started page. Click that get started button and let\u0026rsquo;s get down to business.\nYou can do a sample deployment if you want to, but that\u0026rsquo;d hide some of the good bits, so we\u0026rsquo;ll choose a custom deployment instead. Click the \u0026ldquo;Skip Walkthrough\u0026rdquo; button.\nGive your application a name that you\u0026rsquo;ll recognize. Then in the dropdown, select EC2/On-premises. This tells CodeDeploy that we\u0026rsquo;ll be updating servers, but we could also use this for Lambda functions if we wished. Then give the deployment group a name. This field will identify the group of servers that are part of the deployment.\nNext up, you\u0026rsquo;ll select your deployment type. I\u0026rsquo;ve chosen an in-place deployment meaning that my servers will stay put, but my code will be copied on top of the existing server. 
Blue/green deployments are also available and would redeploy new instances during the deployment.\nNext, we configure our environment. I\u0026rsquo;ve selected the Amazon EC2 instances tab and then entered that key/value pair from earlier in this post that identifies my apps. Remember the \u0026ldquo;App:HollowWeb\u0026rdquo; tag from earlier? Once you enter this, the console should show you the instances associated with this tag.\nI\u0026rsquo;ve also checked the box to \u0026ldquo;Enable load balancing.\u0026rdquo; This is an optional setting for In-Place upgrades but mandatory for Blue/Green deployments. With this checked, CodeDeploy will block traffic to the instances currently being deployed until they are done updating and then they\u0026rsquo;ll be re-added to the load balancer.\nNow you must select a deployment configuration. This tells CodeDeploy how to update your servers. Out of the box you can have it do:\nOne at a time Half at a time All at once You can also create your own configuration if you have custom requirements not met by the defaults. For this example, I\u0026rsquo;m doing one at a time. You\u0026rsquo;ll then need to select a service role that has access to the instances, which we created early on during this blog post. Click the \u0026ldquo;Create Application\u0026rdquo; button to move on.\nYou should get a nice green \u0026ldquo;Congratulations!\u0026rdquo; message when you\u0026rsquo;re done. This message is pretty helpful and shows you the next steps to pushing your application.\nPush your Code to S3 OK, now I\u0026rsquo;m going to push my code to S3 so that I can store it in a ready to go package. To do this, I\u0026rsquo;m opening up my development machine (my Mac laptop) and I\u0026rsquo;m updating my code. I\u0026rsquo;ve got a few changes to my website that I\u0026rsquo;m making including a new graphic and new index.html page. Also, within this repo, I\u0026rsquo;m going to create an appspec.yml file which is how we tell CodeDeploy what we want to do with our new code. Take a look at the repo with my files, and the appspec.yml file.\nOn my mac, I\u0026rsquo;ve got a directory with my appspec.yml in the root and two folders, content and scripts. I\u0026rsquo;ve placed my images and html files in the content directory, and I\u0026rsquo;ve put two bash scripts in the scripts directory. The scripts are very simple and tell the apache server to start or stop, depending on which script is called.\nNow take a look at the appspec.yml details. This file is broken down into sections. There is a \u0026ldquo;files\u0026rdquo; section that describes where the files should be placed on our web servers. For example, you an see that the content/image001.png file from my repo should be placed within the /var/www/html directory on the web server.\nversion: 0.0 os: linux files: - source: content/image001.png destination: /var/www/html/ - source: content/index.html destination: /var/www/html/ hooks: ApplicationStop: - location: scripts/stop_server.sh timeout: 30 runas: root ApplicationStart: - location: scripts/start_server.sh timeout: 30 runas: root Below this, you\u0026rsquo;ll see a \u0026ldquo;hooks\u0026rdquo; section. The hooks tell CodeDeploy what to do during each of the lifecycle events that occur during an update. There are a bunch of them as shown below.\nI don\u0026rsquo;t need to use each of the lifecycle events for this demonstration, so I\u0026rsquo;m only using ApplicationStop and ApplicationStart. 
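For completeness, the two hook scripts referenced here are about as simple as scripts get. A sketch of what start_server.sh might contain, assuming Apache is installed as the httpd service on Amazon Linux (stop_server.sh is identical with stop instead of start):
#!/bin/bash
# scripts/start_server.sh - start Apache again once the new content is in place
service httpd start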
In the appspec.yml file I\u0026rsquo;ve defined the user who should execute the scripts and the location of the script to run.\nTIP: You may find that the very first time you deploy your code, the ApplicationStop script won\u0026rsquo;t run. This is because the code has never been downloaded before so it can\u0026rsquo;t run yet. Subsequent runs use the previously downloaded script, so if you change this code, it may take one run before it actually works again.\nSince our new application looks tip top, it\u0026rsquo;s time to push it to our S3 bucket in AWS. We\u0026rsquo;ll run the command shown to us in the console earlier and specify the source location of our files and a destination for our archive file.\naws deploy push \\ --application-name HollowWebApp \\ --description \u0026#34;Version1\u0026#34; \\ --ignore-hidden-files \\ --s3-location s3://mybucketnamehere/AppDeploy1.zip \\ --source . My exact command is shown below, along with the return information. The information returned tells you how you can push the deployment to your servers, and I recommend using it to do exactly that. However, in this post we\u0026rsquo;ll push the code from the console so we can easily see what happens.\nOnce the code has been successfully pushed, we\u0026rsquo;ll do a quick check to show that the zip file is in our S3 bucket, and it is!\nDeploy the New Version As mentioned, you can now deploy your code to your servers from the command line response you got from pushing your code to S3. To make it more obvious what happened, let\u0026rsquo;s take a look at the CodeDeploy console instead. You\u0026rsquo;ll notice that if you open up your application, there is a \u0026ldquo;Revision\u0026rdquo; listed. As you push more versions to your S3 bucket, the list will grow here.\nWe\u0026rsquo;re ready to deploy, so click the arrow next to your revision to expand the properties. Click the \u0026ldquo;Deploy revision\u0026rdquo; button to kick things off.\nMost of the information should be filled out for you on the next page, but it does give you a nice opportunity to tweak something before the code gets pushed. For example, I selected the \u0026ldquo;Overwrite Files\u0026rdquo; option so that when I push out a new index.html, it will overwrite the existing file and not fail the deployment because of an error.\nAs your deployment kicks off, you can watch the progress in the console. To get more information, click the Deployment ID to dig deeper.\nWhen we drill down into the deployment, we can see that one of my servers is \u0026ldquo;In progress\u0026rdquo; while the other is pending. Since I\u0026rsquo;m doing one at a time, only one of the instances will update for now. To see even more information about this specific instance deploy, click the \u0026ldquo;View events\u0026rdquo; link.\nWhen we look at the events, we can see each of the lifecycle events that the deployment goes through. I\u0026rsquo;ve waited for a bit to show you that each event was successful.\nWhen we go back to the deployment screen, we see that one server is done, and the next server has started its progression.\nWhen both servers have completed, I check my app again, and I can see that a new version has been deployed. (A slightly better, yet still awful version)\n","permalink":"https://theithollow.com/2018/08/06/using-aws-codedeploy-to-push-new-versions-of-your-application/","summary":"\u003cp\u003eGetting new code onto our servers can be done in a myriad of ways these days. 
Configuration management tools can pull down new code, pipelines can run scripts across our fleets, or we could run around with a USB stick for the rest of our lives. With container based apps, serverless functions, and immutable infrastructure, we\u0026rsquo;ve changed this conversation quite a bit as well. But what about a plain old server that needs a new version of code deployed on it? AWS CodeDeploy can help us to manage our software versions and rollbacks so that we have a consistent method to update our apps across multiple instances. This post will demonstrate how to get started with AWS CodeDeploy so that you can manage the deployment of new versions of your apps.\u003c/p\u003e","title":"Using AWS CodeDeploy to Push New Versions of your Application"},{"content":"We love Kubernetes. It\u0026rsquo;s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.\nEKS Environment Setup To get started, we\u0026rsquo;ll need to deploy an IAM Role in our AWS account that has permissions to manage Kubernetes clusters on our behalf. Once that\u0026rsquo;s done, we\u0026rsquo;ll deploy a new VPC in our account to house our EKS cluster. To speed things up, I\u0026rsquo;ve created a CloudFormation template to deploy the IAM role for us, and to call the sample Amazon VPC template. You\u0026rsquo;ll need to fill in the parameters for your environment.\nNOTE: Be sure you\u0026rsquo;re in a region that supports EKS. Not all regions currently support EKS as of the time of this writing.\nAWSTemplateFormatVersion: 2010-09-09 Description: \u0026#39;EKS Setup - IAM Roles and Control Plane Cluster\u0026#39; Metadata: \u0026#34;AWS::CloudFormation::Interface\u0026#34;: ParameterGroups: - Label: default: VPC Parameters: - VPCCIDR - Label: default: Subnets Parameters: - Subnet01Block - Subnet02Block - Subnet03Block Parameters: VPCCIDR: Type: String Description: VPC CIDR Address AllowedPattern: \u0026#34;(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})/(\\\\d{1,2})\u0026#34; ConstraintDescription: \u0026#34;Must be a valid IP CIDR range of the form x.x.x.x/x.\u0026#34; Subnet01Block: Type: String Description: Subnet01 AllowedPattern: \u0026#34;(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})/(\\\\d{1,2})\u0026#34; ConstraintDescription: \u0026#34;Must be a valid IP CIDR range of the form x.x.x.x/x.\u0026#34; Subnet02Block: Type: String Description: Subnet02 AllowedPattern: \u0026#34;(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})/(\\\\d{1,2})\u0026#34; ConstraintDescription: \u0026#34;Must be a valid IP CIDR range of the form x.x.x.x/x.\u0026#34; Subnet03Block: Type: String Description: Subnet03 AllowedPattern: \u0026#34;(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})/(\\\\d{1,2})\u0026#34; ConstraintDescription: \u0026#34;Must be a valid IP CIDR range of the form x.x.x.x/x.\u0026#34; Resources: EKSRole: Type: \u0026#34;AWS::IAM::Role\u0026#34; Properties: AssumeRolePolicyDocument: Version: \u0026#34;2012-10-17\u0026#34; Statement: - Effect: \u0026#34;Allow\u0026#34; Principal: Service: - \u0026#34;eks.amazonaws.com\u0026#34; Action: - \u0026#34;sts:AssumeRole\u0026#34; ManagedPolicyArns: - 
arn:aws:iam::aws:policy/AmazonEKSClusterPolicy - arn:aws:iam::aws:policy/AmazonEKSServicePolicy Path: \u0026#34;/\u0026#34; RoleName: \u0026#34;EKSRole\u0026#34; EKSVPC: Type: \u0026#34;AWS::CloudFormation::Stack\u0026#34; Properties: Parameters: VpcBlock: !Ref VPCCIDR Subnet01Block: !Ref Subnet01Block Subnet02Block: !Ref Subnet02Block Subnet03Block: !Ref Subnet03Block TemplateURL: https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-vpc-sample.yaml TimeoutInMinutes: 10 Outputs: EKSRole: Value: !Ref EKSRole StackRef: Value: !Ref EKSVPC EKSSecurityGroup: Value: !GetAtt EKSVPC.Outputs.SecurityGroups EKSVPC: Value: !GetAtt EKSVPC.Outputs.VpcId EKSSubnets: Value: !GetAtt EKSVPC.Outputs.SubnetIds Use the template above and go to the AWS CloudFormation console within your account. Deploy the template and fill in your VPC addressing for a new VPC and subnets.\nAfter you\u0026rsquo;ve deployed the template, you\u0026rsquo;ll see two stacks that have been created. An IAM role and the EKS VPC/subnets.\nOnce the stack has been completed take note of the outputs which will be used for creating the cluster.\nCreate the Amazon EKS Control Cluster Now that we\u0026rsquo;ve got an environment to work with, its time to deploy the Amazon EKS control cluster. To do this go to the AWS Console and open the EKS Service.\nWhen you open the EKS console, you\u0026rsquo;ll notice that you don\u0026rsquo;t have any clusters created yet. We\u0026rsquo;re about to change that. Click the \u0026ldquo;Create Cluster\u0026rdquo; button.\nFill out the information about your new cluster. Give it a name and select the version. Next select the IAM Role, VPC, Subnets and Security Groups for your Kubernetes Control Plane cluster. This info can be found in the outputs from your CloudFormation Template used to create the environment.\nYou will see that your cluster is being created. This may take some time, so you can continue with this post to make sure you\u0026rsquo;ve installed some of the other tools that you\u0026rsquo;ll need to manage the cluster.\nEventually your cluster will be created and you\u0026rsquo;ll see a screen like this:\nSetup the Tools You\u0026rsquo;ll need a few client tools installed in order to manage the Kubernetes cluster on EKS. You\u0026rsquo;ll need to have the following tools installed:\nAWS CLI v1.15.32 or higher Kubectl aws-iam-authenticator The instructions below are to install the tools on a Mac OS client.\nAWS CLI - The easiest way to install the AWS CLI on a mac is to use homebrew. If you\u0026rsquo;ve already got homebrew installed on your Mac, then skip over this. Otherwise run the following from a terminal in order to install homebrew. /usr/bin/ruby -e \u0026#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)\u0026#34; Once homebrew is installed, you can use it to install the aws cli by running:\nbrew install awscli After the AWS CLI has been installed, you\u0026rsquo;ll need to configure it with your Access Keys, Secrete Keys, regions and outputs. You can start this process by running AWS Configure.\nkubectl - Installing the Amazon EKS-vended kubectl binary, download the kubectl executable for Mac through your terminal. 
curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/darwin/amd64/kubectl\nchmod +x ./kubectl\ncp ./kubectl $HOME/bin/kubectl \u0026amp;\u0026amp; export PATH=$HOME/bin:$PATH\necho \u0026#39;export PATH=$HOME/bin:$PATH\u0026#39; \u0026gt;\u0026gt; ~/.bash_profile\nWe should be able to check to make sure kubectl is working properly by checking the version from the terminal:\nkubectl version -o yaml\nThe result should look something like this:\nHeptio-authenticator-aws - The Heptio Authenticator is used to integrate your AWS IAM settings with your Kubernetes RBAC permissions. To install this, run the following from your terminal:\ncurl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/darwin/amd64/aws-iam-authenticator\nchmod +x ./aws-iam-authenticator \u0026amp;\u0026amp; export PATH=$HOME/bin:$PATH\necho \u0026#39;export PATH=$HOME/bin:$PATH\u0026#39; \u0026gt;\u0026gt; ~/.bash_profile\nConfigure kubectl for EKS Now that the tools are set up and the control cluster is deployed, we need to configure our kubeconfig file to use with EKS. To do this, the first thing to do is determine the cluster endpoint. You can do this by clicking on your cluster in the GUI and noting the server endpoint and the certificate information.\nNote: you could also find this through the aws cli by:\naws eks describe-cluster --name [clustername] --query cluster.endpoint\naws eks describe-cluster --name [clustername] --query cluster.certificateAuthority.data\nNext, create a new directory called .kube, if it doesn’t already exist. Once that’s done you’ll need to create a new file in that directory with the configuration info. After this, we can use the AWS cli to create a new kubeconfig file by running:\naws eks update-kubeconfig --name [cluster_name]\nAfter you\u0026rsquo;ve created the config file, you\u0026rsquo;ll want to add an environment variable so that kubectl will know where to find the cluster configuration. From your terminal, this can be done by running the command below. Substitute your own file paths for the config file that you created.\nexport KUBECONFIG=$KUBECONFIG:~/.kube/admin.conf\nIf things are working correctly, we can run kubectl config get-contexts so we can see the AWS authentication is working. I\u0026rsquo;ve also run kubectl get svc to show that we can read from the EKS cluster.\nDeploy EKS Worker Nodes Your control cluster is up and running, and we\u0026rsquo;ve got our clients connected through the Heptio-authenticator. Now it\u0026rsquo;s time to deploy some worker nodes for our containers to run on. To do this, go back to your CloudFormation console and deploy the following CFn template that is provided by AWS.\nhttps://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-nodegroup.yaml\nFill out the deployment information. You\u0026rsquo;ll need to enter a stack name, number of min/max nodes to scale within, an SSH Key to use, the VPC we deployed earlier, and the subnets. You will also need to enter a ClusterName which must be exactly the same as our control plane cluster we deployed earlier. Also, the NodeImageID must be one of the following, depending on your region:\nUS-East-1 (N. Virginia) – ami-0c24db5df6badc35a\nUS-West-2 (Oregon) – ami-0a2abab4107669c1b\nUS-East-2 (Ohio) – ami-0c2e8d28b1f854c68\nDeploy your CloudFormation template and wait for it to complete. 
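If you\u0026rsquo;d rather watch the stack from a terminal instead of the console, something like this works (the stack name is a placeholder for whatever you named yours):
# Wait for the worker node stack to finish, then print its outputs so we can grab the NodeInstanceRole value
aws cloudformation wait stack-create-complete --stack-name eks-worker-nodes
aws cloudformation describe-stacks --stack-name eks-worker-nodes --query \u0026#39;Stacks[0].Outputs\u0026#39; --output table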
Once the stack completes, you\u0026rsquo;ll need to look at the outputs to get the NodeInstanceRole.\nThe last step is to ensure that our nodes have permissions and can join the cluster. Use the file format below and save it as aws-auth-cm.yaml.\napiVersion: v1 kind: ConfigMap metadata: name: aws-auth namespace: kube-system data: mapRoles: | - rolearn: \u0026lt;ARN of instance role (not instance profile)\u0026gt; username: system:node:{{EC2PrivateDNSName}} groups: - system:bootstrappers - system:nodes Replace ONLY the \u0026lt;ARN of instance role (not instance profile)\u0026gt; section with the NodeInstanceRole we got from the outputs of our CloudFormation Stack. Save the file and then apply the configmap to your EKS cluster by running:\nkubectl apply -f aws-auth-cm.yaml After we run the command our cluster should be fully working. We can run \u0026ldquo;get nodes\u0026rdquo; to see the worker nodes listed in the cluster.\nNOTE: the status of the cluster will initially show \u0026ldquo;NotReady\u0026rdquo; re-running the command or using the \u0026ndash;watch switch will let you see when the nodes are fully provisioned.\nDeploy Your Apps Congratulations, you\u0026rsquo;ve build your Kubernetes cluster on Amazon EKS. Its time to deploy your apps which is outside of this blog post. If you want to try out an app to prove that its working, try one of the deployments from the kubernetes.io tutorials such as their guestbook app.\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml From here on out, it\u0026rsquo;s up to you. Start deploying your replication controllers, pods, services, etc as you\u0026rsquo;d like.\n","permalink":"https://theithollow.com/2018/07/31/how-to-setup-amazon-eks-with-mac-client/","summary":"\u003cp\u003eWe love Kubernetes. It\u0026rsquo;s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.\u003c/p\u003e","title":"How to Setup Amazon EKS with Mac Client"},{"content":"We love Kubernetes. It\u0026rsquo;s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.\nEKS Environment Setup To get started, we\u0026rsquo;ll need to deploy an IAM Role in our AWS account that has permissions to manage Kubernetes clusters on our behalf. Once that\u0026rsquo;s done, we\u0026rsquo;ll deploy a new VPC in our account to house our EKS cluster. To speed things up, I\u0026rsquo;ve created a CloudFormation template to deploy the IAM role for us, and to call the sample Amazon VPC template to deploy a VPC. You\u0026rsquo;ll need to fill in the parameters for your environment.\nNOTE: Be sure that you\u0026rsquo;re in a region that supports EKS. As of the time of this writing the US regions that can use EKS are us-west-2 (Oregon) and us-east-1 (N. 
Virginia).\nAWSTemplateFormatVersion: 2010-09-09 Description: \u0026#39;EKS Setup - IAM Roles and Control Plane Cluster\u0026#39; Metadata: \u0026#34;AWS::CloudFormation::Interface\u0026#34;: ParameterGroups: - Label: default: VPC Parameters: - VPCCIDR - Label: default: Subnets Parameters: - Subnet01Block - Subnet02Block - Subnet03Block Parameters: VPCCIDR: Type: String Description: VPC CIDR Address AllowedPattern: \u0026#34;(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})/(\\\\d{1,2})\u0026#34; ConstraintDescription: \u0026#34;Must be a valid IP CIDR range of the form x.x.x.x/x.\u0026#34; Subnet01Block: Type: String Description: Subnet01 AllowedPattern: \u0026#34;(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})/(\\\\d{1,2})\u0026#34; ConstraintDescription: \u0026#34;Must be a valid IP CIDR range of the form x.x.x.x/x.\u0026#34; Subnet02Block: Type: String Description: Subnet02 AllowedPattern: \u0026#34;(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})/(\\\\d{1,2})\u0026#34; ConstraintDescription: \u0026#34;Must be a valid IP CIDR range of the form x.x.x.x/x.\u0026#34; Subnet03Block: Type: String Description: Subnet03 AllowedPattern: \u0026#34;(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})\\\\.(\\\\d{1,3})/(\\\\d{1,2})\u0026#34; ConstraintDescription: \u0026#34;Must be a valid IP CIDR range of the form x.x.x.x/x.\u0026#34; Resources: EKSRole: Type: \u0026#34;AWS::IAM::Role\u0026#34; Properties: AssumeRolePolicyDocument: Version: \u0026#34;2012-10-17\u0026#34; Statement: - Effect: \u0026#34;Allow\u0026#34; Principal: Service: - \u0026#34;eks.amazonaws.com\u0026#34; Action: - \u0026#34;sts:AssumeRole\u0026#34; ManagedPolicyArns: - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy - arn:aws:iam::aws:policy/AmazonEKSServicePolicy Path: \u0026#34;/\u0026#34; RoleName: \u0026#34;EKSRole\u0026#34; EKSVPC: Type: \u0026#34;AWS::CloudFormation::Stack\u0026#34; Properties: Parameters: VpcBlock: !Ref VPCCIDR Subnet01Block: !Ref Subnet01Block Subnet02Block: !Ref Subnet02Block Subnet03Block: !Ref Subnet03Block TemplateURL: https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-vpc-sample.yaml TimeoutInMinutes: 10 Outputs: EKSRole: Value: !Ref EKSRole StackRef: Value: !Ref EKSVPC EKSSecurityGroup: Value: !GetAtt EKSVPC.Outputs.SecurityGroups EKSVPC: Value: !GetAtt EKSVPC.Outputs.VpcId EKSSubnets: Value: !GetAtt EKSVPC.Outputs.SubnetIds Use the template above and go to the AWS CloudFormation console within your account. Deploy the template and fill in your VPC addressing for a new VPC and subnets.\nAfter you\u0026rsquo;ve deployed the template, you\u0026rsquo;ll see two stacks that have been created. An IAM role and the EKS VPC/subnets.\nOnce the stack has been completed take note of the outputs which will be used for creating the cluster.\nCreate the Amazon EKS Control Cluster Now that we\u0026rsquo;ve got an environment to work with, its time to deploy the Amazon EKS control cluster. To do this go to the AWS Console and open the EKS Service.\nWhen you open the EKS console, you\u0026rsquo;ll notice that you don\u0026rsquo;t have any clusters created yet. We\u0026rsquo;re about to change that. Enter a cluster name and then click the \u0026ldquo;next step\u0026rdquo; button. Similarly you could go into the clusters menu and click Create Cluster if the splash screen doesn\u0026rsquo;t show the screenshot below.\nFill out the information about your new cluster. Give it a name and select the version. 
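As an aside, if you\u0026rsquo;d rather drive this step from the CLI than the console, the equivalent call looks roughly like the one below. The cluster name, account ID, subnet IDs, and security group are placeholders, and the role ARN comes from the EKSRole output of the CloudFormation stack from earlier.
# Create the EKS control plane from the CLI (values below are placeholders taken from the CloudFormation outputs)
aws eks create-cluster --name hollow-eks --role-arn arn:aws:iam::111122223333:role/EKSRole --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333,securityGroupIds=sg-dddd4444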
Next select the IAM Role, VPC, Subnets and Security Groups for your Kubernetes Control Plane cluster. This info can be found in the outputs from your CloudFormation Template used to create the environment.\nYou will see that your cluster is being created. This may take some time, so you can continue with this post to make sure you\u0026rsquo;ve installed some of the other tools that you\u0026rsquo;ll need to manage the cluster.\nEventually your cluster will be created and you\u0026rsquo;ll see a screen like this:\nSetup the Tools You\u0026rsquo;ll need a few client tools installed in order to manage the Kubernetes cluster on EKS. You\u0026rsquo;ll need to have the following tools installed:\nAWS CLI v1.15.32 or higher Kubectl aws-iam-authenticator The instructions below are to install the tools on a Windows client.\nAWS CLI - To install the AWS CLI, download and run the installer for your version of windows. 64-bit version , 32-bit version. Once you\u0026rsquo;ve completed running the installer, you\u0026rsquo;ll need to configure your client with the appropriate settings such as region, access_keys, secret_keys and an output format. This can be accomplished by opening up a cmd prompt and running aws configure. Enter access keys and secret keys with permissions to your AWS resources. kubectl - Installing the Amazon EKS-vended kubectl binary, download the kubectl executable for Windows from here: https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/windows/amd64/kubectl.exe and place it in your file path. To test that its working, run: kubectl version -o yaml The result should look something like this:\naws-iam-authenticator - The IAM authenticator is used to integrate your AWS IAM settings with your Kubernetes RBAC permissions. To install this, download the executable for Windows from here: https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/windows/amd64/aws-iam-authenticator.exe and place it in your file path. To test that its working, run: aws-iam-authenticator --help The result of that command should return info about your options.\nConfigure kubectl for EKS Now that the tools are setup and the control cluster is deployed, we need to configure our kubeconfig file to use with EKS. To do this, the first thing to do is determine the cluster endpoint. You can do this by clicking on your cluster in the GUI and noting the server endpoint and the certificate information.\nNote: you could also find this through the aws cli by:\naws eks describe-cluster --name [clustername] --query cluster.endpoint aws eks describe-cluster --name [clustername] --query cluster.certificateAuthority.data Next, create a new directory called .kube, if it doesn\u0026rsquo;t already exist. Once that\u0026rsquo;s done you\u0026rsquo;ll need to create a new file in that directory named \u0026ldquo;config-\u0026rdquo;[clustername]. After this, we can use the AWS cli to create a new kubeconfig file by running:\naws eks update-kubeconfig --name [cluster_name] If things are working correctly, we can run kubectl config get-contexts so we can see the AWS authentication is working. I\u0026rsquo;ve also run kubectl get svc to show that we can read from the EKS cluster.\nDeploy EKS Worker Nodes Our control cluster is up and running, and we\u0026rsquo;ve got our clients connected through the aws-iam-authenticator. Now its time to deploy some worker nodes for our containers to run on. 
To do this, go back to your CloudFormation console and deploy the following CFn template that is provided by AWS.\nhttps://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml Fill out the deployment information. You\u0026rsquo;ll need to enter a stack name, number of min/max nodes to scale within, an SSH Key to use, the VPC we deployed earlier, and the subnets. You will also need to enter a ClusterName which must be exactly the same as our control plane cluster we deployed earlier. Lastly, the NodeImageID must be one of the following, depending on your region:\nUS-East-1 (N. Virginia) - ami-0c24db5df6badc35a US-West-2 (Oregon) - ami-0a2abab4107669c1b US-East-2 (Ohio) - ami-0c2e8d28b1f854c68 Deploy your CloudFormation template and wait for it to complete. Once the stack completes, you\u0026rsquo;ll need to look at the outputs to get the NodeInstanceRole.\nThe last step is to ensure that our nodes have permissions and can join the cluster. Use the file format below and save it as aws-auth-cm.yaml.\napiVersion: v1 kind: ConfigMap metadata: name: aws-auth namespace: kube-system data: mapRoles: | - rolearn: \u0026lt;ARN of instance role (not instance profile)\u0026gt; username: system:node:{{EC2PrivateDNSName}} groups: - system:bootstrappers - system:nodes Replace ONLY the \u0026lt;ARN of instance role (not instance profile)\u0026gt; section with the NodeInstanceRole we got from the outputs of our CloudFormation stack. Save the file and then apply the configmap to your EKS cluster by running:\nkubectl apply -f aws-auth-cm.yaml After we run the command our cluster should be fully working. We can run \u0026ldquo;get nodes\u0026rdquo; to see the worker nodes listed in the cluster.\nNOTE: the status of the cluster will initially show \u0026ldquo;NotReady\u0026rdquo;. Re-running the command or using the \u0026ndash;watch switch will let you see when the nodes are fully provisioned.\nDeploy Your Apps Congratulations, you\u0026rsquo;ve build your Kubernetes cluster on Amazon EKS. Its time to deploy your apps which is outside of this blog post. If you want to try out an app to prove that its working, try one of the deployments from the kubernetes.io tutorials such as their guestbook app.\nkubectl apply -f https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml From here on out, it\u0026rsquo;s up to you. Start deploying your replication controllers, pods, services, etc as you\u0026rsquo;d like.\n","permalink":"https://theithollow.com/2018/07/30/how-to-setup-amazon-eks-with-windows-client/","summary":"\u003cp\u003eWe love Kubernetes. It\u0026rsquo;s becoming a critical platform for us to manage our containers, but deploying Kubernetes clusters is pretty tedious. Luckily for us, cloud providers such as AWS are helping to take care of these tedious tasks so we can focus on what is more important to us, like building apps. This post shows how you can go from a basic AWS account to a Kubernetes cluster for you to deploy your applications.\u003c/p\u003e","title":"How to Setup Amazon EKS with Windows Client"},{"content":"Amazon has announced a new service that will help customers manage their EBS volume snapshots in a very simple manner. The Data Lifecycle Manager service lets you setup a schedule to snapshot any of your EBS volumes during a specified time window.\nIn the past, AWS customers might need to come up with their own solution for snapshots or backups. 
Some apps moving to the cloud might not even need backups based on their deployment method and architectures. For everything else, we assume we\u0026rsquo;ll need to at least snapshot the EBS volumes that the EC2 instances are running on. Prior to the Data Lifecycle Manager, this could be accomplished through some fairly simple Lambda functions to snapshot volumes on a schedule. Now with the new service, there is a solution right in the EC2 console.\nUsing the Data Lifecycle Manager To begin using the new service, open the EC2 console in your AWS account. If this is the first time using it, you\u0026rsquo;ll click the \u0026ldquo;Create Snapshot Lifecycle Policy\u0026rdquo; button to get started.\nWe\u0026rsquo;ll create a new policy which defines what volumes should be snapshotted and when to take these snapshots. First, give the policy a description so you\u0026rsquo;ll be able to recognize it later. The next piece is to identify which volume should be snapshotted. This is done using a tag on the volume (not the EC2 instance it\u0026rsquo;s connected to). I\u0026rsquo;ve used a method that snapshots EBS volumes with a tag key of \u0026ldquo;snap\u0026rdquo; and a tag value of \u0026ldquo;true\u0026rdquo;.\nNext, we\u0026rsquo;ll need to define the schedule in which the volumes will be snapshotted. Give that schedule a name and then specify how often the snapshots will be taken. In this example, I\u0026rsquo;m taking a snapshot every 12 hours. You\u0026rsquo;ll also need to specify when the first snapshots should be initiated. Be sure to note that this time is UTC time, so do your conversions before you start with this process. After this, you\u0026rsquo;ll need to specify how many of the snapshots to keep. It\u0026rsquo;s a bad idea to start taking lots of snapshots and never deleting them, especially in the cloud where you can keep as many as you\u0026rsquo;d like if you can stomach the bill.\nNote: The snapshot start time is a general start time. The snapshots will be taken sometime within the hour you specify, but don\u0026rsquo;t expect that it will be immediately at this time.\nYou\u0026rsquo;ll also have the option to tag your snapshots. It probably makes sense to tag them somehow so that you know which ones might have been taken manually, and which were automated through the Data Lifecycle Manager. I\u0026rsquo;ve tagged mine with a key name LifeCycleManager and a value of true.\nLastly, you\u0026rsquo;ll need a role created that has permissions to create and delete these snapshots. Luckily, there is a \u0026ldquo;Default role\u0026rdquo; option in the console that will create this for you. Otherwise you can specify the role yourself.\nAfter you create the policy, you\u0026rsquo;ll see it listed in your console. It\u0026rsquo;s also worth noting that you could have multiple policies affecting the same volumes. For instance, if you wanted to take snapshots every 6 hours, you could create a pair of policies, since the highest snapshot frequency currently available is twelve hours.\nThe Results If you wait for a bit, your snapshots should be taken and you\u0026rsquo;ll notice that any of your EBS volumes that were properly tagged will be snapshotted. You can also see in the screenshot below that the snapshot has the tag that I specified along with a few others that identify which policy created the snapshot.\nSummary The Data Lifecycle Manager service from AWS might not seem like a big deal, but it\u0026rsquo;s a lot nicer than having to write your own Lambda functions to snapshot and delete the snapshots on a schedule. 
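For context, the do-it-yourself version was usually just a small scheduled function or cron job that found tagged volumes and snapshotted them. A rough CLI sketch of that idea, reusing the same snap=true tag from above (the Lambda versions did the same thing through the SDK):
# Snapshot every EBS volume tagged snap=true (the old roll-your-own approach)
for vol in $(aws ec2 describe-volumes --filters Name=tag:snap,Values=true --query \u0026#39;Volumes[].VolumeId\u0026#39; --output text); do
  aws ec2 create-snapshot --volume-id $vol --description \u0026#34;scheduled snapshot of $vol\u0026#34;
done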
Don\u0026rsquo;t worry though, you might still get some use out of your old Lambda code if you want to customize your snapshot methods or do something like create an AMI. If you\u0026rsquo;re just looking for the basics, try out the Data Lifecycle Manager. Right now you can test this out yourself in the N. Virginia, Oregon, and Ireleand regions through the AWS console or through the CLI. I expect this will be available in other regions and through CloudFormation shortly as well.\n","permalink":"https://theithollow.com/2018/07/23/easy-snapshot-automation-with-amazon-data-lifecycle-manager/","summary":"\u003cp\u003eAmazon has announced a new service that will help customers manage their EBS volume snapshots in a very simple manner. The Data Lifecycle Manager service lets you setup a schedule to snapshot any of your EBS volumes during a specified time window.\u003c/p\u003e\n\u003cp\u003eIn the past, AWS customers might need to come up with their own solution for snapshots or backups. Some apps moving to the cloud might not even need backups based on their deployment method and architectures. For everything else, we assume we\u0026rsquo;ll need to at least snapshot the EBS volumes that the EC2 instances are running on. Prior to the Data Lifecycle Manager, this could be accomplished through some fairly simple Lambda functions to snapshot volumes on a schedule. Now with the new service, there is a solution right in the EC2 console.\u003c/p\u003e","title":"Easy Snapshot Automation with Amazon Data Lifecycle Manager"},{"content":"A common question that comes up during AWS designs is, \u0026ldquo;Should I use a transit VPC?\u0026rdquo; The answer, like all good IT riddles is, \u0026ldquo;it depends.\u0026rdquo; There are a series of questions that you must ask yourself before deciding whether to use a Transit VPC or not. In this post, I\u0026rsquo;ll try to help formulate those questions so you can answer this question yourself.\nThe Basics Before we can ask those tough questions, we first should answer the question, \u0026ldquo;What is a Transit VPC?\u0026rdquo; Well, a transit VPC acts as an intermediary for routing between two places. Just like a transit network bridges traffic between two networks, a transit VPC ferries traffic between two VPCs or perhaps your data center.\nThere isn\u0026rsquo;t a product that you buy called a transit VPC, but rather a transit VPC is a reference architecture. Multiple products can be used to build this transit VPC, but the really good ones have a method to add some automation to the process. AWS\u0026rsquo;s website highlights Aviatrix and Cisco solutions, but I\u0026rsquo;ve also seen Palo Alto firewalls used as well. Really any virtual router should be able to be used with this process, so you pick your favorite solution.\nThe reference architecture uses a pair of virtual routers split between Availability zones. Routing between VPCs, etc would spin up a VPN tunnel to the transit routers so that routing can then be controlled through these routers installed on ec2 instances.\nWhy Might I want a Transit VPC? Now that we know what a Transit VPC is, what use cases might warrant me using a transit VPC?\nSimplify Networking If you\u0026rsquo;ve got a multi-account, multi-VPC strategy for your deployments, connecting all of those VPCs together can be a real mess. If you\u0026rsquo;re implementing peering connections for a full mesh, the formula to calculate that is: [ (n-1)*n]/2. Setting this up and managing it can be a real chore. 
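To put numbers on it: ten VPCs fully meshed works out to (10-1)*10/2 = 45 peering connections to create and keep track of, and the eleventh VPC you add brings ten more connections with it.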
Take a look at the below example to see how you can quickly get overwhelmed by the number of connections to maintain. Think how this changes every time you add a new VPC.\nNow we can avoid some of this complexity by moving to a hub and spoke model, where the Transit VPC is the hub. We still need to setup connections, but we can at least manage our connections to on-prem through a centralized location. Also, what would happen if we added a new VPC to the hub and spoke model? That\u0026rsquo;s right, one new connection back to transit instead of modifying all connections across every VPC.\nFunnel Traffic How do you want to manage your traffic out to the Internet, or in from the Internet? Do you allow traffic in and out of each of your VPCs? You can certainly do just that, but how comfortable are you with having multiple egress or ingress points to your environment? The diagram below shows a full mesh of VPCs again, this time adding a red line for an Internet connection and a connection back to your on-premises data center.\nNOTE: refusing to draw diagrams like the one below is another valid use case for moving to a hub/spoke model instead of mesh.\nMany times I hear the need to place restrictions on Internet ingress traffic such as packet inspection, IDS/IPS, etc. AWS provides a Web Application Firewall, which is nice, but some corporations will require something more than that. On the flip side, some companies require things like content filtering for all outbound Internet traffic. Does it make sense to deploy content filtering solutions in each of your VPCs, or should you centralize it in a single place, like a Transit VPC? The hub spoke model allows us to funnel all of our traffic through the Transit where a firewall or other device might be able to take action on the traffic. Then a single ingress/egress point can be managed in/out from the Internet and to the corporate data center.\nWhy Shouldn\u0026rsquo;t I use a Transit VPC? Costs A drawback to using a Transit VPC is costs. A Transit VPC will have a couple of different effects on your AWS cloud costs. The first is obviously the need to deploy a pair of EC2 instances in the transit VPC. You\u0026rsquo;ll be responsible for these two instances running as either On-Demand or Reserved Instances to save on costs. In addition to the EC2 costs, you might also need to purchase a license from Cisco, Aviatrix, etc., to run their software. These costs are pretty easy to calculate, once you size your instances appropriately.\nA more difficult cost consideration is around your network traffic. AWS charges you for any network traffic exiting (egress) your VPC. In the diagram below (left), you can see how this works with a single VPC directly accessing the Internet. On the right side, you can see what happens to egress costs when you have a transit VPC instead. With a transit VPC we\u0026rsquo;d get billed twice for the traffic because it exits two VPCs. Keep this in mind for traffic inter-VPC as well, between a Shared Services and Prod environment for example.\nOne other cost consideration is your VPN tunnels. AWS also charges for VPN tunnel connection hours. The transit VPC relies on VPN tunnels to spoke VPCs to provide the overlay networking. These VPN tunnels come with a small charge per month to have them up and running.\nIts Traditional Data Center Methodologies One of the bigger considerations to contemplate is how you want to operate your cloud. Some of the benefits of cloud are things like Infrastructure-as-Code (IAC). 
Developers may be looking to spin up their resources and have them access the Internet through their own means. Application code and infrastructure code is coupled together to provide a full solution all on their own. If you add a transit VPC to the mix to provide service chaining for example, the application developers may need to open a ticket to have a network engineer setup NAT on the transit routers. Or perhaps open up some ACLs within the router so that the app can talk to the outside. This is a common scenario in on-premises environments, so nothing new, but do we need to change how we think when moving to the cloud? This ideology is a conversation that should happen early on when developing a cloud strategy.\nSummary Transit VPCs are a pretty nifty solution to provide some controls to your AWS cloud. There are pros and cons for using a transit VPC but hopefully this post has shown you what sort of things should be discussed and considered before jumping in to your architecture designs. The table below should help formulate your decisions.\n[table id=9 /]\nA transit VPC should be considered early on so that a retrofit isn\u0026rsquo;t required for your cloud environment.\n","permalink":"https://theithollow.com/2018/07/16/should-i-use-a-transit-vpc-in-aws/","summary":"\u003cp\u003eA common question that comes up during AWS designs is, \u0026ldquo;Should I use a transit VPC?\u0026rdquo; The answer, like all good IT riddles is, \u0026ldquo;it depends.\u0026rdquo; There are a series of questions that you must ask yourself before deciding whether to use a Transit VPC or not. In this post, I\u0026rsquo;ll try to help formulate those questions so you can answer this question yourself.\u003c/p\u003e\n\u003ch1 id=\"the-basics\"\u003eThe Basics\u003c/h1\u003e\n\u003cp\u003eBefore we can ask those tough questions, we first should answer the question, \u0026ldquo;What is a Transit VPC?\u0026rdquo; Well, a transit VPC acts as an intermediary for routing between two places. Just like a transit network bridges traffic between two networks, a transit VPC ferries traffic between two VPCs or perhaps your data center.\u003c/p\u003e","title":"Should I use a Transit VPC in AWS?"},{"content":" There are a dozen new technologies being introduced every day that never amount to anything, while others move on to create completely new methodologies for how we interact with IT. Just like virtualization changed the way data centers operate, containers are changing how we interact with our applications and Kubernetes (K8s in short hand) seems to be a front runner in this space. However, with any new technology hitting the market, there is a bit of a lag before it takes off. People have to understand why it\u0026rsquo;s needed, who\u0026rsquo;s got the best solution, and how you can make it work with your own environment. Heptio is a startup company focusing on helping enterprises embrace Kubernetes through their open source tools and professional services. I\u0026rsquo;ve been hearing great things about Heptio, but when my good friend, Tim Carr, decided to go work for there, I decided that I better look into who they are, and figure out what he sees in their little startup.\nHeptio was co-founded by Craig McLuckie and Joe Beda who were founding engineers of the Kubernetes project for Google and probably understand Kubernetes better than most (I would assume). 
They\u0026rsquo;re taking their knowledge and building tools to help customers adopt Kubernetes in a stable, secure, reliable way across platforms.\nWhat Exactly Does Heptio Sell? Heptio currently sells services to customers who need help with their Kubernetes deployments. This might come in the form of consulting services, or their Heptio Kubernetes Subscription (HKS). In the grand scheme of things, Kubernetes is still fairly new. Companies have started developing code on the K8s platform and love how easy it is to run containers on it, but management is still difficult. There are a lot more questions that enterprises need to solve than just, \u0026ldquo;How do I spin up an application on Kubernetes?\u0026rdquo; For the enterprise, additional considerations need to be considered such as, How do we ensure our cluster is backed up to meet our RTO and RPO requirements? How do we ensure that we\u0026rsquo;ve got proper role based access controls in place and separation of duties with our cluster? How do we patch our K8s cluster, and get visibility into possible issues that arise?\nHeptio\u0026rsquo;s main product is their Heptio Kubernetes Subscription or HKS. You pick where you want your Kubernetes cluster to live, such as in AWS, GCE, Azure, on-prem, or whatever you\u0026rsquo;d like. Heptio will manage that cluster using many of the open source tools that they helped create. You make the choices that make sense to your organization for platform portability, and Heptio will make sure its backed up, conformant with CNCF standards, patched and updated.\nThe HKS solution also comes with advisory support so that you can ask questions as you are building your environment and of course, break-fix for when issues arise. If you\u0026rsquo;re new to managing an enterprise Kubernetes deployment, this is a great way to make sure you\u0026rsquo;ve got the basics covered.\nWhere have I heard of Heptio before? If you\u0026rsquo;ve worked with Kubernetes very much, you may have used some of their open sourced products, and there are a number of them.\nHeptio Ark - Ark is a tool to manage disaster recovery of K8s cluster resources. This tool creates a simple way to recover resources if something happens to your K8s clusters, or if you need a simple way to migrate your resources to a different cluster.\nHeptio Contour - Contour helps manage your Envoy (opensource project created and used by Lyft) load balancer for your K8s deployments. It helps deploy Envoy into your environment and keep it updated as new downstream containers change state.\nHeptio Gimbal - Gimbal is a neat tool that lets you route traffic to one or more of your K8s clusters from the internet, or to legacy systems such as an OpenStack cluster, or both.\nHeptio Sonobuoy - Sonobouy might be the most well known project. This tool makes sure that your K8s installation is conformant with the official Kubernetes specifications. The Cloud Native Computing Foundation (CNCF) has set standards for what each K8s deployment must provide to consumers. If two vendors both have Kubernetes distributions, you\u0026rsquo;d want to ensure that both of these allow the same set of minimum capabilities so that you can move your containers between the platforms. Sonobuoy is the defacto standard to report on these conformant metrics.\nksonnet - Ksonnet is a neat way to manage your k8s manifests. It provides a quick way to start with your templates by generating much of the code for you. 
It also has library so you an use tools like VSCode with auto-complete turned on. This is typically an easier way to manage code than YAML or JSON files.\nHeptio Authenticator for AWS - This was a project completed by both Heptio and AWS to allow Amazon\u0026rsquo;s IAM service to provide authentication to a Kubernetes cluster. If you\u0026rsquo;re an AWS customer running K8s, this is a big deal for you.\nWhat\u0026rsquo;s up with the Name? If you\u0026rsquo;re curious about the company name, you\u0026rsquo;re not alone. Geekwire was able to interview Mr. Beda in 2016 to find out that the name is in reference to the original Kubernetes build inside of Google before it was open sourced. Before Kubernetes was a thing, Google called it the Borg. When it was being named, it was pitched as 7 of 9 which is in reference to a character from Star Trek Voyager. \u0026ldquo;Hept\u0026rdquo; is a greek prefix for \u0026ldquo;seven\u0026rdquo; and thus Heptio is a nod to the original project that McLuckie and Beda helped to create during their time with Google.\nSummary In a nutshell, there seems to be a bunch of talent behind this little startup and there has been enough financial backing too, to make this company take off. Last September Heptio had a $25M Series B funding round and that\u0026rsquo;s nothing to shake a stick at. With their propensity to work with the open source communities to build new tools, and provide expert knowledge to companies on the Kubernetes deployments, there\u0026rsquo;s no telling how far this little start up could go. Good luck to them, and we\u0026rsquo;ll be watching to see where things lead.\n","permalink":"https://theithollow.com/2018/07/09/who-is-heptio/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2018/07/heptio-logo.jpeg\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2018/07/heptio-logo-300x171.jpeg\"\u003e\u003c/a\u003e There are a dozen new technologies being introduced every day that never amount to anything, while others move on to create completely new methodologies for how we interact with IT. Just like virtualization changed the way data centers operate, containers are changing how we interact with our applications and Kubernetes (K8s in short hand) seems to be a front runner in this space. However, with any new technology hitting the market, there is a bit of a lag before it takes off. People have to understand why it\u0026rsquo;s needed, who\u0026rsquo;s got the best solution, and how you can make it work with your own environment. Heptio is a startup company focusing on helping enterprises embrace Kubernetes through their open source tools and professional services. I\u0026rsquo;ve been hearing great things about Heptio, but when my good friend, \u003ca href=\"https://twitter.com/timmycarr\"\u003eTim Carr\u003c/a\u003e, decided to go work for there, I decided that I better look into who they are, and figure out what he sees in their little startup.\u003c/p\u003e","title":"Who is Heptio?"},{"content":"I took last week off from work to spend some time with my family and just relax. I\u0026rsquo;d never been to Disney World and have a six year old who is seriously into Star Wars, so this sounded like a great way to take a relaxing week off. During this vacation I found that it took several days before I even started to unwind. I ended the work week on a Friday and still felt the work stress through the weekend and into Monday. 
Maybe it\u0026rsquo;s a normal thing to still feel the stress through the weekend, but I had expected to feel an immediate release of tension when I was done with work on Friday when my vacation began. But all weekend I kept noticing that I couldn\u0026rsquo;t forget about work. In fact, I felt pretty sick one day and believe it was stress related. After a few days I started to pay attention to the activities of the day and didn\u0026rsquo;t pay as much attention, but it made me think that those two day weekends and how they certainly weren\u0026rsquo;t recharging me to be prepared for the next week of stress.\nZeigarnik Effect I did what any geek would do, and googled some stuff about why I felt like I did, and I found that there is a psychological phenomenon called the Zeigarnik Effect. This effect is a person\u0026rsquo;s ability to remember unfinished tasks much more easily than completed tasks. A Russian psychologist, Bluma Zeigarnik, started studying the phenomenon after seeing that waiters/waitresses could pretty accurately remember open orders, but couldn\u0026rsquo;t answer questions about orders that had been completed. The study concluded with Zeigarnik and her professor Kurt Lewin showing that unfinished tasks created a task-specific tension that allowed your brain to keep focusing on it and relieving that tension when it was complete.\nCoping with Zeigarnik For me, knowing why my brain was still clinging to work for a few days was helpful, since not knowing why I couldn\u0026rsquo;t de-stress was also stressing me out. Was I broken? Is there something wrong with me that I obsess over things from work even when I\u0026rsquo;m taking a break? I found myself actively trying to let go and relax, and the fact that I had to keep forcing myself to do this was also stressing me out. Knowing that there was a reason for my obsessiveness was really helpful. It wasn\u0026rsquo;t my fault exactly.\nNow that we know why we sometimes feel this way, maybe we can counteract the effect or hack our brains to short circuit it. It seems as though our brains can only hold so many tasks in our active memory at at time. So when you\u0026rsquo;re on vacation or at home on the weekend, try doing something that takes some focus. I\u0026rsquo;ve found that when I think I\u0026rsquo;m trying to relax, like watching the Cubs (the best baseball team on the planet) on TV I still feel stress, but if I am actually playing baseball or video games, I forget about work for a brief time. A focused activity might be the thing you need to stop thinking about all of those open tasks you\u0026rsquo;ve still got to get back to. During my vacation, I got busy doing Disney stuff, like meeting Kylo Ren, and I eventually forgot about work for a little while.\nWhat about when we\u0026rsquo;re not on vacation but need a break? Like after a long day of work? Try keeping a list of tasks in a journal or in Jira, Trello, Todoist, or whatever. The act of writing down your list of todos has always helped me feel less stressed. Perhaps the act of keeping all of my tasks straight is an unresolved task that my brain can\u0026rsquo;t stop thinking about. Write them down and that\u0026rsquo;s one task your brain doesn\u0026rsquo;t have to keep obsessing over. Also, if you break those bigger tasks into a bunch of smaller tasks, you\u0026rsquo;ll start crossing them off much faster and hopefully reliving some of your stress. 
Remember, your list of tasks will still be there for you tomorrow in your task list, so stop thinking about them for the evening.\nUse Zeigarnik to your Advantage As an alternative to letting go of your open tasks, you can use those open tasks to get them done. If you are a person who leaves things until the last minute before a deadline and want to break that routine, try starting the task right away, even if you don\u0026rsquo;t finish it. According to the Zeigarnik effect, you should keep thinking about it until you complete it so you\u0026rsquo;ll be incentivized to get it done and relieve that task tension.\nSchedule breaks in your studying. Remember, that if you\u0026rsquo;re in the middle of your studying and it isn\u0026rsquo;t finished, taking a break doesn\u0026rsquo;t mean that you\u0026rsquo;re not still thinking about stuff. Breaks have been shown to help the learning process and organize thoughts. Have you ever been working very hard on a something and finally had to stop, only to have the answer magically pop into your head while you\u0026rsquo;re doing something else? I know this happens to me all the time, so breaks don\u0026rsquo;t mean you\u0026rsquo;re weak, they\u0026rsquo;re good for you and the process.\nA Note to Employers/Managers I know that we can\u0026rsquo;t always help it, but you should know how this psychological phenomenon affects your employees. We learned from The Phoenix Project that work in progress (WIP) is bad for production lines, IT development, and flows in general. But, knowing about the Zeigarnik effect, we also know that having too much work in progress affects your employee\u0026rsquo;s stress levels. Difficult tasks, are one thing, but too many tasks at once may make your employees burn out or even quit to get rid of the task related tension. If you\u0026rsquo;re trying to retain employees and improve employee satisfaction, limit your work in progress and you should have faster turnover times and happier employees.\nSummary Decrease your stress levels by managing your open task items. Complete them quickly, write them down, cross them off, or eliminate them to begin with to reduce your task based tension. Remember to also add tasks for things you plan to study. Do you know, Cloud, DevOps, Product A, Virtualization, containers, etc? Pick what you plan to learn and add those to your task list, don\u0026rsquo;t leave them in your head.\nWhen you aren\u0026rsquo;t able to manage your own work in progress, try tricking your brain by engaging in activities that make you focus on another task for a while.\nI know I\u0026rsquo;m not the first person to write about this effect but really hope to help someone else within our field to cut back on the stress levels. This post from Eric Lee got my attention and I think there is plenty of stress in our industry and I hope this helps you to reduce yours. Thanks for reading.\n","permalink":"https://theithollow.com/2018/06/18/the-dark-side-of-stress/","summary":"\u003cp\u003eI took last week off from work to spend some time with my family and just relax. I\u0026rsquo;d never been to Disney World and have a six year old who is seriously into Star Wars, so this sounded like a great way to take a relaxing week off. During this vacation I found that it took several days before I even started to unwind. I ended the work week on a Friday and still felt the work stress through the weekend and into Monday. 
Maybe it\u0026rsquo;s a normal thing to still feel the stress through the weekend, but I had expected to feel an immediate release of tension when I was done with work on Friday when my vacation began. But all weekend I kept noticing that I couldn\u0026rsquo;t forget about work. In fact, I felt pretty sick one day and believe it was stress related. After a few days I started to pay attention to the activities of the day and didn\u0026rsquo;t pay as much attention, but it made me think that those two day weekends and how they certainly weren\u0026rsquo;t recharging me to be prepared for the next week of stress.\u003c/p\u003e","title":"The Dark Side of Stress"},{"content":"Passwords are a necessary evil to keep bandits from running away with your confidential data. We\u0026rsquo;ve come up with various strategies to manage these secrets, such as:\nUsing one password for all of your stuff so you don\u0026rsquo;t forget it.\nUse a password vault to store a unique password for each of your logins.\nUse a few passwords in a pattern you can remember.\nWrite down your password on a sticky note and attach it to your monitor.\nNow, not all of these practices are good but you get the idea.\nWhat do we do about passwords in an enterprise? We should be using unique passwords for every login, but also every service account. This usually leads to a password vault of some sort, but wouldn\u0026rsquo;t it be more secure if you generated a new password with a set lifetime and only generated it when needed? Hashicorp has a tool called \u0026quot;Vault\u0026quot; that lets us build these dynamic secrets at will so that we can use it with our applications or temporary user access. For this post, we\u0026rsquo;ll create dynamic logins to a mysql database so that a flask app will be able to use it for its database backend. In your lab, you could use this for anything that needed access to a mysql database including a user that just needs temporary access.\nVault Prerequisites Before we can get started with this, we\u0026rsquo;ve already deployed a few of the resources. First, I\u0026rsquo;ve deployed a Vault server and I\u0026rsquo;m using a Hashicorp Consul server as a backend for Vault. To be totally honest, I\u0026rsquo;ve deployed three Vault servers and have Consul installed on those same servers but your environment may vary depending on your availability and performance requirements. I\u0026rsquo;ve also unsealed Vault and logged in with a user with permissions. Next, I\u0026rsquo;ve deployed a mysql server with an admin user named \u0026ldquo;Vault\u0026rdquo;. Lastly, I\u0026rsquo;ve deployed a flask app on a server and connected it to the mysql server for its database instance. You can see the basic flask app below and that it accepted a login and a single \u0026ldquo;task\u0026rdquo; entry stored in the database.\nConfiguring Vault for Dynamic Secrets The boring infrastructure setup stuff is done and we\u0026rsquo;re ready to configure Vault to dynamically create mysql logins when we need them.\nThe first thing I\u0026rsquo;d want to do is to enable the database capabilities. I can do that by running the following command:\nvault secrets enable database\nIf you\u0026rsquo;ve got the console open, you\u0026rsquo;ll notice that you can see this in your web browser:\nNow that we\u0026rsquo;ve enabled our database secrets, we need to configure vault to talk to our mysql database. To do that we need to tell the database engine which plugin to use, and the connection information. 
Remember, I created a \u0026ldquo;vault\u0026rdquo; user on the mysql database already so that the Vault software could log in for us.\nTo set up the configuration we\u0026rsquo;ll run this from the Vault command line:\nvault write database/config/hollowdb \\ plugin_name=mysql-database-plugin \\ connection_url=\u0026#34;{{username}}:{{password}}@tcp(mysql.hollow.local:3306)/\u0026#34; \\ allowed_roles=\u0026#34;mysqlrole\u0026#34; \\ username=\u0026#34;vault\u0026#34; \\ password=\u0026#34;QAZxswedc\u0026#34;\nThe \u0026ldquo;write database/config/hollowdb\u0026rdquo; line is where we\u0026rsquo;ll store the config within the vault server. The name of my database is hollowdb so that\u0026rsquo;s where I\u0026rsquo;m storing it. What\u0026rsquo;s important is storing it within database/config. You\u0026rsquo;ll also notice that there is a connection url to the server/database and we\u0026rsquo;ve added a username and password to fill in there. Don\u0026rsquo;t worry, that password is garbage and has since been deleted. The allowed roles we\u0026rsquo;ll configure in a moment, for now just give it a name.\nNow, as you might have guessed, we\u0026rsquo;re going to configure the role. The role maps a name within Vault to a SQL statement to create the user within the mysql database. The code below is the role that I\u0026rsquo;ve created.\nvault write database/roles/mysqlrole \\ db_name=hollowdb \\ creation_statements=\u0026#34;CREATE USER \u0026#39;{{name}}\u0026#39;@\u0026#39;%\u0026#39; IDENTIFIED BY \u0026#39;{{password}}\u0026#39;;GRANT ALL PRIVILEGES ON hollowapp.* TO \u0026#39;{{name}}\u0026#39;@\u0026#39;%\u0026#39;;\u0026#34; \\ default_ttl=\u0026#34;1h\u0026#34; \\ max_ttl=\u0026#34;24h\u0026#34;\nIn this command, we create a new role that matches our earlier configuration to the database. Then we add a SQL statement that takes our username and [in this case] creates the user in the mysql database and grants all permissions on the hollowapp database. Also take note that we have a default time-to-live of 1 hour which can be extended to 24 hours. After this, Vault will revoke the credentials.\nAt this point we\u0026rsquo;re ready to see some of the magic happen.\nTesting our Dynamic Secrets So to test out what we\u0026rsquo;ve built, we\u0026rsquo;ll first take a look at the database users that are currently on my mysql database. I\u0026rsquo;ve got a few users that I\u0026rsquo;m using for my flask app, and some other admin type users in here.\nNow, we can tell Vault to give us a new login to our mysql database. This can be done from a vault client, or through the API of course. From the vault client, we would run:\nvault read database/creds/mysqlrole\nWe call the vault read command against database/creds/[role configured]. You can see that when we do that we\u0026rsquo;re returned some data that includes a username and a password along with some other useful info.\nWe could run the same command through the API which I\u0026rsquo;ve demonstrated through the curl command.\nWhen we look at the list of mysql users, we can see that a new user has been created that we can use for our application to login with.\nApplying this Capability OK, we\u0026rsquo;ve demonstrated that we can use Vault to create these temporary passwords for us, but how do we integrate it with something more useful? Let\u0026rsquo;s go back to the Flask app we discussed at the beginning of this post. We\u0026rsquo;ll leave that app alone, but this time, we\u0026rsquo;ll deploy a docker container and attach it to the same mysql database. 
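Since the container example below fetches its login from the Vault HTTP API rather than the vault CLI, it\u0026rsquo;s worth seeing that call on its own first. A rough sketch, using the same Vault address and role as above and assuming a valid token is stored in the VAULT_TOKEN variable:
# Ask the Vault HTTP API for a fresh mysql login (equivalent to vault read database/creds/mysqlrole)
curl --header \u0026#34;X-Vault-Token: $VAULT_TOKEN\u0026#34; http://hashi1.hollow.local:8200/v1/database/creds/mysqlrole | jq .data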
However, this time we\u0026rsquo;ll build the docker container to use one of our Vault-generated passwords.\nWhen I deploy the docker container, I\u0026rsquo;ll generate a new mysql login and pass it as an environment variable to the docker container to specify the Flask database connection.\nresponse=$(curl --header \u0026#34;X-Vault-Token:6244742c-0f04-YyYy-XxXx-cf0fe3d813c7\u0026#34; http://hashi1.hollow.local:8200/v1/database/creds/mysqlrole) export DBPASSWORD=$(echo $response | jq -r .data.password) export DBUSERNAME=$(echo $response | jq -r .data.username) docker run --name hollowapp -d -p 8000:5000 --rm \\ -e DATABASE_URL=mysql+pymysql://$DBUSERNAME:$DBPASSWORD@mysql.hollow.local/hollowapp \\ hollowapp:latest When the docker image comes up on my local machine, I\u0026rsquo;m able to log in and see the same task entry from the beginning of the post. This means that the docker image and our web server are both communicating with the same mysql database.\nWell, that\u0026rsquo;s neat! We should remember though that this container will only work for one hour, because that\u0026rsquo;s how long our credentials will be available. This might seem bad for a web server, but what if we\u0026rsquo;re dynamically spinning up web servers to handle a task and then terminating the container? Then these temporary credentials would be pretty great, right?\nAlso, in this example I\u0026rsquo;ve passed the credentials to the container through an environment variable. Perhaps it would be even more secure if the container itself obtained the credentials when it was started. Then the container would be the only place the credentials would have been stored in memory.\nSummary This is just the beginning for Vault. There are a lot of ways you could make this solution work for you, and I didn\u0026rsquo;t even discuss how Vault performs encryption or logs the sessions for an audit trail. How would you use Vault in your environment?\n","permalink":"https://theithollow.com/2018/06/04/use-hashicorps-vault-to-dynamically-create-mysql-credentials/","summary":"\u003cp\u003ePasswords are a necessary evil to keep bandits from running away with your confidential data. We\u0026rsquo;ve come up with various strategies to manage these secrets, such as:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eUsing one password for all of your stuff so you don\u0026rsquo;t forget it.\u003c/li\u003e\n\u003cli\u003eUse a password vault to store a unique password for each of your logins.\u003c/li\u003e\n\u003cli\u003eUse a few passwords in a pattern you can remember.\u003c/li\u003e\n\u003cli\u003eWrite down your password on a sticky note and attach it to your monitor.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eNow, not all of these practices are good but you get the idea.\u003c/p\u003e","title":"Use Hashicorp's Vault to Dynamically Create Mysql Credentials"},{"content":"Who we are The website Address is https://theithollow.com. Also accessed through www.theithollow.com and ithollow.com. This blog is for education purposes but as such access logs and subscription options may store your email address and username to provide notifications if you choose to provide it.\nWhat personal data we collect and why we collect it Comments When visitors leave comments on the site we collect the data shown in the comments form, and also the visitor’s IP address and browser user agent string to help spam detection.\nAn anonymized string created from your email address (also called a hash) may be provided to the Gravatar service to see if you are using it.
The Gravatar service privacy policy is available here: https://automattic.com/privacy/. After approval of your comment, your profile picture is visible to the public in the context of your comment.\nMedia If you upload images to the website, you should avoid uploading images with embedded location data (EXIF GPS) included. Visitors to the website can download and extract any location data from images on the website.\nCookies If you leave a comment on this site you may opt-in to saving your name, email address and website in cookies. These are for your convenience so that you do not have to fill in your details again when you leave another comment. These cookies will last for one year.\nIf you have an account and you log in to this site, we will set a temporary cookie to determine if your browser accepts cookies. This cookie contains no personal data and is discarded when you close your browser.\nWhen you log in, we will also set up several cookies to save your login information and your screen display choices. Login cookies last for two days, and screen options cookies last for a year. If you select \u0026ldquo;Remember Me\u0026rdquo;, your login will persist for two weeks. If you log out of your account, the login cookies will be removed.\nIf you edit or publish an article, an additional cookie will be saved in your browser. This cookie includes no personal data and simply indicates the post ID of the article you just edited. It expires after 1 day.\nEmbedded content from other websites Articles on this site may include embedded content (e.g. videos, images, articles, etc.). Embedded content from other websites behaves in the exact same way as if the visitor has visited the other website.\nThese websites may collect data about you, use cookies, embed additional third-party tracking, and monitor your interaction with that embedded content, including tracing your interaction with the embedded content if you have an account and are logged in to that website.\nAkismit We collect information about visitors who comment on Sites that use our Akismet anti-spam service. The information we collect depends on how the User sets up Akismet for the Site, but typically includes the commenter\u0026rsquo;s IP address, user agent, referrer, and Site URL (along with other information directly provided by the commenter such as their name, username, email address, and the comment itself).\nHow long we retain your data If you leave a comment, the comment and its metadata are retained indefinitely. This is so we can recognize and approve any follow-up comments automatically instead of holding them in a moderation queue.\nFor users that register on our website (if any), we also store the personal information they provide in their user profile. All users can see, edit, or delete their personal information at any time (except they cannot change their username). Website administrators can also see and edit that information.\nWhat rights you have over your data If you have an account on this site, or have left comments, you can request to receive an exported file of the personal data we hold about you, including any data you have provided to us. You can also request that we erase any personal data we hold about you. 
This does not include any data we are obliged to keep for administrative, legal, or security purposes.\nWhere we send your data Visitor comments may be checked through an automated spam detection service.\n","permalink":"https://theithollow.com/about/privacy-policy/","summary":"\u003ch2 id=\"who-we-are\"\u003eWho we are\u003c/h2\u003e\n\u003cp\u003eThe website Address is \u003ca href=\"https://theithollow.com\"\u003ehttps://theithollow.com\u003c/a\u003e. Also accessed through \u003ca href=\"https://www.theithollow.com\"\u003ewww.theithollow.com\u003c/a\u003e and ithollow.com. This blog is for education purposes but as such access logs and subscription options may store your email address and username to provide notifications if you choose to provide it.\u003c/p\u003e\n\u003ch2 id=\"what-personal-data-we-collect-and-why-we-collect-it\"\u003eWhat personal data we collect and why we collect it\u003c/h2\u003e\n\u003ch3 id=\"comments\"\u003eComments\u003c/h3\u003e\n\u003cp\u003eWhen visitors leave comments on the site we collect the data shown in the comments form, and also the visitor’s IP address and browser user agent string to help spam detection.\u003c/p\u003e","title":"Privacy Policy"},{"content":"Hashicorp\u0026rsquo;s Terraform product is very popular in describing your infrastructure as code. One thing that you need to consider when using Terraform is where you\u0026rsquo;ll store your state files and how they\u0026rsquo;ll be locked so that two team members or build servers aren\u0026rsquo;t stepping on each other. State can be stored in Terraform Enterprise (TFE) or with some cloud services such as S3. But if you want to store your state within your data center, perhaps you should check out Hashicorp\u0026rsquo;s Consul product.\nConsul might not have been specifically designed to house Terraform state files, but its built-in capabilities lend themselves well to doing just this. Hashicorp\u0026rsquo;s Consul product can be used as a service discovery product or key/value store. The product can also perform health checks on services, so the combination of these tools can be a great benefit to teams trying to build microservices architectures.\nSetup a Consul Cluster We don\u0026rsquo;t want to risk storing all of our Terraform state files in a single server, so we\u0026rsquo;ll deploy three Consul servers in a cluster. To do this, I\u0026rsquo;ve deployed three CentOS servers and opened the appropriate firewall ports (Yeah, I turned off the host firewall. It\u0026rsquo;s a lab.) Once the basic OS deployments are done we\u0026rsquo;ll need to download the latest version of Consul from Hashicorp\u0026rsquo;s website. I\u0026rsquo;ve copied the application over to the /opt directory on each of the three hosts and set the permissions so I could execute the application. Next we need to make sure the binary is in our PATH. On my CentOS machine I added /opt/ to my PATH. Remember to do this on each of the three servers.\nexport PATH=$PATH:/opt/ I\u0026rsquo;ve also added a second environment variable that enables the new Consul UI. This is optional, but I wanted to use the latest UI and Consul looks for the environment variable to enable this at this point. I expect this to change in the future.\nexport CONSUL_UI_BETA=true Lastly, I\u0026rsquo;ve created a new directory that will house my config data for Consul.\nsudo mkdir /etc/consul.d Now we get to the business of setting up the cluster.
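For convenience, here are those prep steps rolled into a single snippet you could paste onto each node. Treat it as a sketch: the Consul version and download URL below are examples only, so check Hashicorp's releases page for the current build.
# download and unpack Consul into /opt (version/URL are examples)
curl -O https://releases.hashicorp.com/consul/1.1.0/consul_1.1.0_linux_amd64.zip
sudo unzip consul_1.1.0_linux_amd64.zip -d /opt/
sudo chmod +x /opt/consul
# make the binary available and turn on the new UI
export PATH=$PATH:/opt/
export CONSUL_UI_BETA=true
# directory for Consul configuration files
sudo mkdir /etc/consul.d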
On the first node, we\u0026rsquo;ll run the following command:\nconsul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=agent-one -bind=10.10.50.121 -enable-script-checks=true -ui -client 0.0.0.0 -config-dir=/etc/consul.d Let\u0026rsquo;s explain some of the switches in that command.\n-server - This tells Consul that this node will be acting as a server and not a client.\n-bootstrap-expect=3 - This switch explains how many servers are expected to be part of the cluster.\n-data-dir=/tmp/consul - This switch explains where Consul will store its data.\n-node=agent-one - This switch identifies each of the nodes. This should be unique for each of the servers in the cluster.\n-bind=10.10.50.121 - The address that should be bound to for internal cluster communications. This will be unique on each node in your cluster.\n-enable-script-checks=true - We could omit this for this post, but status checks could be added later where this would be necessary.\n-ui - This enables the UI.\n-client 0.0.0.0 - The address to which Consul will bind client interfaces, including the HTTP and DNS servers.\n-config-dir - Determines where the config files will be located.\nWhen you run the command on the first node, you\u0026rsquo;ll start seeing log messages.\nRepeat the commands on the other servers that should be part of the cluster. Be mindful to change the options that should be unique to the node, such as bind and node. At this point the cluster should be up and running. To check this, open another terminal session and run consul members. You should see three members listed of type server.\nIf you used the -ui switch when you started up the nodes, you\u0026rsquo;ll also be able to navigate to http://node.domain.local:8500/ui and you\u0026rsquo;ll see the Consul UI. Notice that you\u0026rsquo;ll be directed to the Services page where you can see node health. Again we see three nodes as healthy.\nWhile we\u0026rsquo;re in the UI, take a second to click on the Key/Value tab. This is where Terraform will be storing its state files. Notice that at this point we don\u0026rsquo;t see anything listed, which makes sense because we haven\u0026rsquo;t created any pairs yet.\nTerraform Build This post won\u0026rsquo;t go into building your Terraform configurations, but there is an important first step to using Consul as a state store. To do this we create a backend.tf file for Terraform that defines Consul as our store. Create a file like the one below:\nterraform { backend \u0026#34;consul\u0026#34; { address = \u0026#34;consulnode.domain.local:8500\u0026#34; path = \u0026#34;tf/state\u0026#34; } } Be sure to update the address to your Consul cluster. Also, the path is where the state for your Terraform build will be stored. For this example I\u0026rsquo;ve used a single node for my address and my path is a generic tf/state path.\nOnce created, run the terraform init command to initialize the backend. When you\u0026rsquo;re done, go ahead and run your Terraform build as usual.\nOnce your build completes, you\u0026rsquo;ll notice in the Consul UI that there is a Key/Value listed now.\nIf we drill down into that value, you\u0026rsquo;ll see our JSON data for our state file.\nIf we need to programmatically access this state file, we can use the API to return the data.
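For example, Consul's KV HTTP API can return the stored state directly. A sketch of that call against the address and path used in the backend.tf above (the ?raw flag tells Consul to return just the value instead of the base64-encoded JSON wrapper):
# fetch the Terraform state straight out of Consul's key/value store
curl http://consulnode.domain.local:8500/v1/kv/tf/state?raw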
I\u0026rsquo;ve done this with a simple curl command.\nSummary Storing your Terraform state files in a local directory is good for a starting point, but once you start building lots of infrastructure or many teams are all working on the same infrastructure you need a better way to manage state. Consul provides a solution to take care of this for you as well as providing plenty of other useful capabilities. Check it out if you\u0026rsquo;re looking to extend your Terraform deployments beyond a simple deployment.\n","permalink":"https://theithollow.com/2018/05/21/using-hashicorp-consul-to-store-terraform-state/","summary":"\u003cp\u003eHashicorp\u0026rsquo;s Terraform product is very popular in describing your infrastructure as code. One thing that you need consider when using Terraform is where you\u0026rsquo;ll store your state files and how they\u0026rsquo;ll be locked so that two team members or build servers aren\u0026rsquo;t stepping on each other. State can be stored in \u003ca href=\"https://www.terraform.io/\"\u003eTerraform Enterprise (TFE)\u003c/a\u003e or with some cloud services such as S3. But if you want to store your state, within your data center, perhaps you should check out \u003ca href=\"https://www.hashicorp.com/\"\u003eHashicorp\u0026rsquo;s\u003c/a\u003e Consul product.\u003c/p\u003e","title":"Using Hashicorp Consul to Store Terraform State"},{"content":"If you\u0026rsquo;re interested in visualizing your data in easy to display graphs, Amazon QuickSight may be your solution. Obviously, Amazon has great capabilities with big data, but sometimes even if you have \u0026ldquo;little\u0026rdquo; data you just need a dashboard or way of displaying that content. This post shows an example of how you can display data to tell a compelling story. For the purposes of this blog post, we\u0026rsquo;ll try to determine why the Chicago Cubs are the Major League\u0026rsquo;s favorite baseball team.\nCreating an Amazon QuickSight Account Amazon\u0026rsquo;s QuickSight can be accessed through your existing AWS console, but when you sign up for an account you\u0026rsquo;ll notice that it redirects you to a new portal. Login to your AWS Console and look for QuickSight.\nYou\u0026rsquo;ll notice that QuickSight requires you to sign up for a QuickSight account. So this is a bit different from the other services that AWS provides.\nThis you\u0026rsquo;ll pay on a monthly basis when you create an account. This isn\u0026rsquo;t an on-demand type service where you pay for what you use. There are two options when you create an account, Standard and Enterprise and the details for those are found in the screenshot below. This blog post uses Standard cause I\u0026rsquo;m an Architect on a budget!\nOnce you pick your edition you\u0026rsquo;ll setup your QuickSight account information. Give it a name and notification address as well as selecting regions. You can also allow QuickSight to look at your data across RedShift, S3 etc so that you have datasets that can immediately start helping you.\nOnce you\u0026rsquo;ve got your account setup, you\u0026rsquo;re ready to start uploading data.\nThe Data Sets Now, before you can visualize anything, it has to be based on some data. Duh, right? Amazon will give you some datasets and analysis to use right out of the starting gate, so you can see what\u0026rsquo;s possible. 
To do anything really useful though, you\u0026rsquo;ll want to use your own data sets to do some analysis.\nQuickSight gives you a few options for data sources such as using social media or public data sets. The data sets portal shows you an example list of data sources that can immediately get you started. How cool is it that you can connect QuickSight to your Github repo to get some analytics about whats happening?\nFor the purposes of this post, I\u0026rsquo;ve decided to upload my own file, which I\u0026rsquo;ve downloaded from data.world. This file includes information about MLB baseball games from 2016. I\u0026rsquo;ve uploaded the CSV file through QuickSight\u0026rsquo;s interface but you can also upload TSV, JSON, or XLSX files as well as ELF/CLF for log files.\nOnce the data has been uploaded, you can do your fancy visualizations.\nVisualizing your Data In the QuickSight console, you can click the \u0026ldquo;New analysis\u0026rdquo; button to get started.\nThe first step to creating an analysis is to select the data set. This should be the data that you just uploaded or configured in the previous section.\nAfter the data is imported, you can select the \u0026ldquo;Create Analysis\u0026rdquo; button.\nOnce you\u0026rsquo;re in the analysis dashboard you\u0026rsquo;ll see that on the left hand side, you can drag and drop your fields, filter the fields and change the visualization types for your analysis. Adding fields to your analysis is as easy as dragging and dropping your fields onto the graph.\nNow you can carve up your data in any way that you see fit, but I chose to look at some interesting data to see how beloved my Cubbies really were. To start, I looked at the attendance for the away teams. My theory was, that home attendance would give you some great information about teams that people liked, but also had a problem where the size of the stadium factored in, and the social aspects of baseball that had nothing to do with the teams playing. Going to the ball park for a business event or something to that effect. The attendance for the away teams might be a better representation of who the fans wanted to see play. I\u0026rsquo;m sure that no one here doubts the results of that visualization.\nPartially so I could use another visual type, and partially to put a common misconception to bed, I looked at the wind direction for the Cubs home games. It\u0026rsquo;s often been said that the Chicago Cubs hitters have a huge advantage because of how the Chicago winds carry the baseball out of the park (for a home run) more than other teams. So if we look at the wind direction per game, you\u0026rsquo;ll see that most of the time the wind is blowing in from Right field, or moving from right to left, which means that many times it would be harder to hit a home run at Wrigley Field. If you\u0026rsquo;re a left handed pull hitter, you\u0026rsquo;ll likely have to hit into the wind most days. Maybe Wrigley isn\u0026rsquo;t a hitters park after all???? I\u0026rsquo;m just kidding, it is a home run park due to the power alleys but this graph still seemed fun.\nAlso, if you have a bunch of graphs that you want to display at once, you can add multiple visuals and then share that out with your team. Here I\u0026rsquo;ve added three visualizations.\nAfter which I can share them with whomever I\u0026rsquo;d like.\nCreating a Story One of the coolest things about QuickSight is the ability to tell a story. 
You can add multiple visualizations and have them played in a specific order so that they explain a story. As you see below I\u0026rsquo;ve taken three different visualizations and saved them as a story.\nIf I play them, they show up like a slide show where my reviewers just click \u0026ldquo;Next\u0026rdquo; and they go from one slide to another. If I\u0026rsquo;ve done a great job with this, my reviewer should notice that the Chicago Cubs are clearly the worlds most favorite Major League Baseball Team.\nSummary OK, the Cubs are great, but the real point of this post was to get you familiar with just a few of the things that you can do with AWS QuickSight. Being able to visualize your data sets quickly can be a huge boost to many organizations. Are you profitable? Are you reaching your social media audience? Whatever your needs, QuickSight can show you some quickly digestible information about your data. Set it up with your data sets once and check in often to see how things change, or build it once for a report and share it with your teams. What will you do with this service?\n","permalink":"https://theithollow.com/2018/05/14/visualizing-the-chicago-cubs-via-amazon-quicksight/","summary":"\u003cp\u003eIf you\u0026rsquo;re interested in visualizing your data in easy to display graphs, Amazon QuickSight may be your solution. Obviously, Amazon has great capabilities with big data, but sometimes even if you have \u0026ldquo;little\u0026rdquo; data you just need a dashboard or way of displaying that content. This post shows an example of how you can display data to tell a compelling story. For the purposes of this blog post, we\u0026rsquo;ll try to determine why the Chicago Cubs are the Major League\u0026rsquo;s favorite baseball team.\u003c/p\u003e","title":"Visualizing the Chicago Cubs via Amazon QuickSight"},{"content":"Identity and Access Management (IAM) can be a confusing topic for people that are new to Amazon Web Services. There are IAM Users that could be used for authentication or solutions considered part of the AWS Directory Services such as Microsoft AD, Simple AD, or AD Connector. If none of these sound appealing, there is always the option to use Federation with a SAML 2.0 solution like OKTA, PING, or Active Directory Federation Services (ADFS). If all of these option have given you a case of decision fatigue, then hopefully this post and the associate links will help you to decide how your environment should be setup.\nIAM Users Your first option is the easiest to configure but comes with a few risks. An IAM user is a username and password that is created in the AWS IAM Console. You can give IAM Users access to login to the console or through the API, but it also means that its a separate login from the account you probably use in your corporate environment. For example, you login to your laptop every morning with a corporate Active Directory login and then go to login to the AWS console with a completely different username or password. Maybe you\u0026rsquo;ve even decided to use the same username and password that is used with your corporate AD but they aren\u0026rsquo;t sync\u0026rsquo;d so you still need to manage them separately. While IAM Users are easy to setup, they provide a few challenges for enterprises who\u0026rsquo;d like to use a single login. 
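For a sense of why these separate accounts multiply so easily, creating an IAM User and its console password is only a couple of AWS CLI calls. A quick sketch, where the user name and password are placeholders:
# create a standalone IAM user and give it a console password
aws iam create-user --user-name demo.user
aws iam create-login-profile --user-name demo.user --password 'SomeTempPassword123!' --password-reset-required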
There are other solutions available to limit operational complexity and the number of logins managed meaning fewer attack vectors.\nDirectory Services AWS also provides a service called AWS Directory Service that provides several different options for authenticating both machines and users with your environment.\nSimple AD - Simple AD is an option that provides a subset of Microsoft Active Directory services and is based on Samba 4. This service deploys a pair of domain controllers, with DNS, in a VPC across a pair of subnets for availability. The solution allows you to use this new directory as a Kerberos authentication source, but be aware that this solution doesn\u0026rsquo;t allow you to create a trust relationship with your existing domain if you have one. Think of this if you plan to setup a new domain for your AWS servers to belong to, but will still be managed separate from your on-premises domain. Simple AD has two sizes where a small directory can handle around 500 users / 2000 objects and a large size can manage 5000 / 20,000 objects.\nMicrosoft AD - As you\u0026rsquo;d guess Microsoft AD provides a full blown Microsoft AD which is deployed in a similar fashion to Simple AD. A pair of Microsoft AD servers (2012 R2 as of now) are deployed across AZs to provide redundancy. Microsoft AD has an advantage over Simple AD where you can create a trust relationship with these new domain controllers to your existing Microsoft Active Directory environment. Be aware that your directory cannot be extended to this new Microsoft AD instance, a trust relationship can be created though. Microsoft AD also comes in two sizes where standard supports 5000 users / 30,000 objects and more than this would require the Enterprise option.\nAD Connector - If you have the need to extend your existing on-premises active directory then you could consider AD Connector. AD Connector doesn\u0026rsquo;t authenticate your users directly, but rather forwards the requests on to your on-prem AD instances. This requires network connectivity between your VPC and your on-prem domain controllers for this to work, and if you lose your connectivity, logins will fail to work.\nFederation If you want to user your existing Active Directory solution for a login method for the AWS console or CLI, then federation might be your best bet. With federation, you can continue to use your existing corporate logins to login to the AWS control plane. Be aware though, that AD Federation won\u0026rsquo;t do anything for your computer objects that need a domain to join when they are spun up. This just allows for console authentication only.\nBreakdown If you\u0026rsquo;re still unsure, perhaps this table will help illustrate the differences.\n[table id=8 /]\nResources The following resources may help you with some facets of the setup of the directory services, how federation can be used with or without role switching and general info.\nSetup of Simple AD Setup of Microsoft AD Setup of AD Connector Setup of ADFS with AWS AWS Federation with Role Switching ","permalink":"https://theithollow.com/2018/05/07/aws-iam-indecision/","summary":"\u003cp\u003eIdentity and Access Management (IAM) can be a confusing topic for people that are new to Amazon Web Services. There are IAM Users that could be used for authentication or solutions considered part of the AWS Directory Services such as Microsoft AD, Simple AD, or AD Connector. 
If none of these sound appealing, there is always the option to use Federation with a SAML 2.0 solution like OKTA, PING, or Active Directory Federation Services (ADFS). If all of these option have given you a case of decision fatigue, then hopefully this post and the associate links will help you to decide how your environment should be setup.\u003c/p\u003e","title":"AWS IAM Indecision"},{"content":"A pretty common question that comes up is how to manage multiple accounts within AWS from a user perspective. Multi-Account setups are common to provide control plane separation between Production, Development, Billing and Shared Services accounts but do you need to setup Federation with each of these accounts or create an IAM user in each one? That makes those accounts kind of cumbersome to manage and the more users we have the more chance one of them could get hacked.\nFirst, lets look at to different patterns that can be used to authenticate with multiple AWS Accounts. The first method we have either an IAM User (Username and Password stored in the AWS Account IAM Service) or a Federated User (Username and Password stored in a local Identity Provider) that can login to any of the accounts in the AWS environment. For this authentication pattern, Identity Federation would need to be setup for every account with the Identity Provider, or an IAM User would need to be created for each account which means many logins to keep track of and manage. The overall pattern would look something like this:\nIn the second method, we\u0026rsquo;re using a gateway account to handle all of our authentication into the AWS environment meaning that a single login is required. Federated users or IAM Users would login to this gateway account first and from there would use the Switch Role feature to assume a role in another account. This pattern would look similar to this:\nIf you prefer the first option, then you have what you need and just need to setup your authentication mechanisms with each account. If you prefer option two where you authenticate against a single gateway account and role switch to your desired destination, then we should look deeper about how that role switching takes place.\nRole Switching in the AWS Console To role switch in the AWS Web console, you would first login to your gateway account. This is usually a shared services or security related account where centralized management of users, groups and roles can take place. From there you\u0026rsquo;ll go to the login dropdown at the top of the console an select the option \u0026ldquo;Switch Role\u0026rdquo;. The Switch Role window will pop open and ask for an account number and a role to assume when you switch accounts. You can then give it a display name for the console and a color which I\u0026rsquo;ve found really valuable but your mileage may vary. When you\u0026rsquo;re done click switch role and you\u0026rsquo;ll be switched to your destination account. You can go back to your gateway account at any time by going back to the login dropdown and clicking the \u0026ldquo;back to [username]\u0026rdquo; and you\u0026rsquo;ll role switch back to the original login.\nOnce you\u0026rsquo;ve switched roles once, the browser will cache your last five roles that have been switched and from then on, you don\u0026rsquo;t need to re-enter your account number and role. 
If you navigate to the login dropdown and select one of your cached roles, you\u0026rsquo;ll be able to more quickly switch between accounts going forward, until you delete your browser cache or switch roles to more than five different accounts.\nSwitch Roles in the AWS CLI First, lets look at switching roles if we login to the AWS CLI as an IAM User. Once you setup your AWS CLI you\u0026rsquo;ll have your credentials stored in the .aws/credentials file which includes your access keys and secret keys to log you into your accounts. If you execute a command you\u0026rsquo;ll receive responses related to the default account that was setup.\nYou can also modify the .aws/config file to include any roles that you might want to role switch into. To do this, you would give the profile a name and then specify the role_arn of the role that you\u0026rsquo;d be switching into as well as the profile that would be allowed to switch from.\nWhen it\u0026rsquo;s all done, you can run the same commands you normally would, but specify the \u0026ndash;profile [profile name] command and the cli will run the command in the correct account. Below is an example of two identidal commands that are in different aws accounts specified by the profile switch.\nIf you\u0026rsquo;re company requires you to federate, this gets slightly more difficult because now you need to login to your federation server and receive a token, which is passed to AWS for authentication. There is a great tutorial on the AWS blog on how to use python to do this with ADFS 1.0, 2.0, and 3.0 found here: https://aws.amazon.com/blogs/security/how-to-implement-a-general-solution-for-federated-apicli-access-using-saml-2-0/ and if you need to do this, I urge you to read this thoroughly. When you\u0026rsquo;ve implemented the scripts, you\u0026rsquo;ll have a similar login process but the first federated login will update your .aws/credentials file to use your temporary token for login and then once thats complete, you can role switch like we did before.\nSetup Role Switching For cross account role switching to work properly, you must setup some configurations in both the source account (the account you\u0026rsquo;ll be logging into and switching from) and the destination account (the account that you\u0026rsquo;ll role switch into).\nFirst we\u0026rsquo;ll start with the destination account or the account that you\u0026rsquo;ll role switch into. The goal here is to create a role that other accounts have permissions to assume. In this example I\u0026rsquo;m creating a role named \u0026ldquo;Admins\u0026rdquo; and I\u0026rsquo;m going to allow my source account access to assume that role.\nOpen the IAM console and go to Roles. Click the \u0026ldquo;Create role\u0026rdquo; button.\nUnder the type of trusted identity select the \u0026ldquo;Another AWS account\u0026rdquo; option. From there you\u0026rsquo;ll need to enter in the source account number that will have access to assume this role in this destination account. You could also require MFA or an external ID but that is not covered in this blog post. External ID cannot currently be used through the console so be aware of that. Click the \u0026ldquo;Next:Permissions\u0026rdquo; button.\nOn the the attach permissions policies screen, select what permissions this role will have on the account. I\u0026rsquo;m assuming my administrators are using this account so I\u0026rsquo;ve given full access. Click the \u0026ldquo;Next:Review\u0026rdquo; button.\nOn the review screen, give the role a name. 
and complete the setup. Be sure to remember the name of this role as it will be needed in a future step.\nNext we\u0026rsquo;ll move to the source account or the gateway account that logins are funneled through. So login to the source account and go to the IAM console again. The main task here is that you must add a permission to the users who will have access to the destination accounts. This is done by creating a new policy allowing the assume role permission on the user, group or role that is being provided access.\nCreate a new policy and add the JSON code from the screenshot. The important part to edit here is the destination account name, and the role that was created.\nThe JSON starter is here so you can copy and paste to get started.\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: \u0026#34;sts:AssumeRole\u0026#34;, \u0026#34;Resource\u0026#34;: [ \u0026#34;arn:aws:iam::[YOURACCTNUMBER]:role/[YOURROLENAME]\u0026#34; ] } } The next step would be to attach this policy to a user (less preferred) or group (more preferred) that will have access to assume the role in the destination account. I prefer to create a group for each of my accounts and attach a policy specifically to that account as seen in the screenshots below.\nSummary There are several ways that your authentication mechanisms can be architected and you should consider the options from both a security and manageability perspective. Is it easier to manage multiple federated accounts or a single federated account that allows you to switch from another role? Is it more difficult to get new IAM roles created for new accounts or re-setup federation when you onboard a new account? Is it too much hassle to login and then role switch before doing work? These are all good questions that should be considered before building out your environment on AWS.\n","permalink":"https://theithollow.com/2018/04/30/manage-multiple-aws-accounts-with-role-switching/","summary":"\u003cp\u003eA pretty common question that comes up is how to manage multiple accounts within AWS from a user perspective. Multi-Account setups are common to provide control plane separation between Production, Development, Billing and Shared Services accounts but do you need to setup Federation with each of these accounts or create an IAM user in each one? That makes those accounts kind of cumbersome to manage and the more users we have the more chance one of them could get hacked.\u003c/p\u003e","title":"Manage Multiple AWS Accounts with Role Switching"},{"content":"Just because you\u0026rsquo;ve started moving workloads into the cloud, doesn\u0026rsquo;t mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services. But if you\u0026rsquo;re moving to AWS there are also some great services you can take advantage of, to provide similar functionality. This post focuses on AD Connector which makes a connection to your on-premises or EC2 installed domain controllers. AD Connector doesn\u0026rsquo;t run your Active Directory but rather uses your existing active directory intances within AWS. As such, in order to use AD Connector you would need to have a VPN connection or Direct Connect to provide connectivity back to your data center. Also, you\u0026rsquo;ll need to be prepared to have credentials to connect to the domain. 
Domain Admin credentials will work, but as usual you should use as few privileges as possible so delegate access to a user with the follow permissions:\nRead users and groups Create computer objects Join computers to the domain Deploy To deploy AD Connector within your existing AWS VPCs, go to the Directory Service from the services menu.\nWhen the Directory Service page opens up you\u0026rsquo;ll see several options available to you, but for this post, choose AD Connector.\nTo setup a new directory, first enter the AD DNS Name for the AD Domain you\u0026rsquo;ll be connecting with. You can optionally provide a NetBIOS name if necessary. Next, enter a username and a password for a user that has permissions that we discussed above. After this, you\u0026rsquo;ll need to specify the DNS address for your domain. This should be the IP Address of your DNS Servers which in my case are also my domain controllers. You\u0026rsquo;ll also need to decide which VPC your AD Connectors will live in, and which subnets. Rememeber that these subnets need to be able to communicate with your existing AD instances so if they are on-premises you\u0026rsquo;ll need a VPN or Direct Connect. If they live within your AWS environment, make sure that those subnets can communicate with the ones specified in this window.\nThe next screen shows you a review before you deploy. If it looks good, click the \u0026ldquo;Create AD Connector\u0026rdquo; button for the magic to happen in the background.\nYou should see a green status message stating that the magic is happening.\nIt will take a bit to deploy but when done you\u0026rsquo;ll see a new directory listed in your portal. Select the directory that was created and you\u0026rsquo;ll see some information needed for the rest of this post. Specifically, you\u0026rsquo;ll want to take note of the \u0026ldquo;On-premises DNS Address\u0026rdquo; listed in the details page for the following section.\nIf you were looking for how to do this through CloudFormation, then this post isn\u0026rsquo;t your friend. I also prefer to do everything through CloudFormation when possible, but found no documentation for completing this task through CFn. If you find the answers please post in the comments and I\u0026rsquo;ll update the post.\nModify DHCP Option Sets You\u0026rsquo;ve connected your domain controllers with AWS now, but your clients will need to be reconfigured to use these two domain controllers for their DNS resolution. To provide this for the entire VPC, we\u0026rsquo;ll want to create a new DHCP Option Set and assign it to the VPC that will use these Domain Controllers. Go to your VPC menu in the console and find the DHCP Options Set link. Create a new option set with the Domain name and DNS servers from your new on-premises AD servers that we just connected.\nOnce you\u0026rsquo;ve created the options set you\u0026rsquo;ll need to associate it with your VPC(s) so that new addresses are handed out with the appropriate settings. NOTE: You can only have one DHCP option set associated with at VPC at a time. To assign the new Option Set, select the VPC from the VPC menu and click the actions button, then select the Edit DHCP Options Set link. You\u0026rsquo;ll then have a drop down to select your preferred option set.\nConfigure Roles Before we start deploying member servers, we\u0026rsquo;ll need to create a role in the IAM console. This role will allow Simple Systems Manager (SSM or Systems Manager for short) the permission to join new EC2 instances to the new domain. 
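If you prefer to script your IAM changes, roughly the same role can be built from the CLI instead of the console steps that follow. This is only a sketch: the trust policy file referenced here is a placeholder document that allows ec2.amazonaws.com to call sts:AssumeRole, and the role name matches the EC2DomainJoin role used later in this post.
# create the role, attach the SSM managed policy, and expose it as an instance profile
aws iam create-role --role-name EC2DomainJoin --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name EC2DomainJoin --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
aws iam create-instance-profile --instance-profile-name EC2DomainJoin
aws iam add-role-to-instance-profile --instance-profile-name EC2DomainJoin --role-name EC2DomainJoin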
To create this role, go to the IAM console and click on Roles. Click the \u0026ldquo;Create role\u0026rdquo; button.\nWhen the create role window opens up select \u0026ldquo;AWS service\u0026rdquo; and then select EC2 under the service that will use the role. Click the \u0026ldquo;Next:permissions\u0026rdquo; button to continue.\nIn the permissions screen search for AmazonEC2RoleforSSM and select it. Click the \u0026ldquo;Next:Review\u0026rdquo; button.\nReview the screen and give the role a name before click the \u0026ldquo;Create role\u0026rdquo; button.\nAuto-Join to the Domain Now that your directory is setup, you can have new Windows only EC2 instances automatically join your domain when they are created. To do this it uses the EC2DomainJoin role we created earlier. To test this deploy a new EC2 instance into the VPC you used with the AD Connector. When you get to the \u0026ldquo;Configure Instance\u0026rdquo; stage of deployment, you\u0026rsquo;ll need to ensure that a few new settings are configured. Ensure that the \u0026ldquo;Domain join directory\u0026rdquo; is your new directory service and you assign the EC2DomainJoin role to the instance at creation.\nAfter your done deploying you should see your computer object in your on-premises Active Directory console.\nUse AD Connector to Authenticate to the AWS Management Console You can use the AD Connector to do more things in AWS such as use your on-premises domain to authenticate to the console. This limits the number of IAM users needed to be crated in the AWS console and hopefully helps to protect the environment even further.\nFirst, we create an endpoint so that the AWS services can access the directory. Enter a name for the endpoint and click the \u0026ldquo;Create Access URL\u0026rdquo;.\nClick \u0026ldquo;Continue\u0026rdquo; to proceed with creating an endpoint. Notice that you can\u0026rsquo;t change it later. Click Continue. There are other services integrated with AWS Directory Services but for this example, we\u0026rsquo;ll just use the Management Console. Navigate back to your directory service details and look towards the bottom of the screen under AWS apps \u0026amp; services. Click the AWS Management Console. When the new window opens click the \u0026ldquo;Enable Access\u0026rdquo; button.\nBefore the users and groups within AD can login to the console with their AD credentials, another Role needs to be created to provide access to the console. Go to the IAM console again and create another role. This time when you create a new role, choose the Directory Service as the service that will use the role.\nYou don\u0026rsquo;t need to assign any additional permissions (at this time) since we\u0026rsquo;re only demonstrating that this role can be used to authenticate. If you plan to use this role for users to have permissions to use anything in the console, those permissions need to be added. On the last step, give the role a name.\nOnce you\u0026rsquo;ve created the role, go back to your directory and click the Management Console Access link.\nFrom here you\u0026rsquo;ll see a section for Users and Groups to Roles. A single Role will be listed which is what was just built in the previous few steps. Click the role to assign users from your on-premises domain.\nIn the Add Users and Groups to Role window type a name. 
I chose the my own AD user account.\nWhen done you\u0026rsquo;ll see your user(s) added to the directory.\nNow, if you go to your endpoint URL (hint, the link is located next to the Management Console in your directory) you\u0026rsquo;ll be taken to a login page. Enter the Username and Password of the user that you added, and you\u0026rsquo;ve used your new Microsoft AD service and your directory store for the AWS Management Console.\nSummary Congratulations on setting up AD Connector. You can how use your existing Active Directory environment to login to the AWS Console, and automatically have new Windows instances joined to the domain for you.\n","permalink":"https://theithollow.com/2018/04/23/aws-directory-service-ad-connector/","summary":"\u003cp\u003eJust because you\u0026rsquo;ve started moving workloads into the cloud, doesn\u0026rsquo;t mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services. But if you\u0026rsquo;re moving to AWS there are also some great services you can take advantage of, to provide similar functionality. This post focuses on AD Connector which makes a connection to your on-premises or EC2 installed domain controllers. AD Connector doesn\u0026rsquo;t run your Active Directory but rather uses your existing active directory intances within AWS. As such, in order to use AD Connector you would need to have a VPN connection or Direct Connect to provide connectivity back to your data center. Also, you\u0026rsquo;ll need to be prepared to have credentials to connect to the domain. Domain Admin credentials will work, but as usual you should use as few privileges as possible so delegate access to a user with the follow permissions:\u003c/p\u003e","title":"AWS Directory Service - AD Connector"},{"content":"Just because you\u0026rsquo;ve started moving workloads into the cloud, doesn\u0026rsquo;t mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services. But if you\u0026rsquo;re moving to AWS, there are also some great services you can take advantage of to provide similar functionality. This post focuses on Simple AD is based on Samba4 and handles a subset of the features that the Microsoft AD type Directory Service provides. This service still allows you to use Kerberos authentication and manage users and computers as well as provide DNS services. One of the major differences between this service and Microsoft AD is that you can\u0026rsquo;t create a trust relationship with your existing domain, so if you need that functionality look at Microsoft AD instead. Simple AD gives you a great way to quickly stand up new domains and cut down on the things you need to manage such as OS patches, etc.\nDeploy To deploy Simple AD within your existing AWS VPCs, go to the Directory Service from the services menu.\nWhen the Directory Service page opens up you\u0026rsquo;ll see several options available to you, but here we\u0026rsquo;ll stick with Simple AD. Locate Simple AD and click the \u0026ldquo;Set up directory\u0026rdquo; link.\nFirst, enter a Directory DNS. This is a FQDN for your environment. I use \u0026ldquo;hollow.local\u0026rdquo; for my on-prem domain so I like to use something like sbx1.hollow.local for my sandbox cloud environment. You can optionally provide a NetBIOS name if necessary. Next, enter an administrator password. 
This will be your domain admin password and you\u0026rsquo;ll need this later to configure the infrastructure.\nNext select a size. Simple AD comes in two sizes and the main difference is the number of objects the directory can manage. Small can handle about 500 users or 2000 objects and Large supports up to 5000 users or 20,000 objects. If you need more than this, consider Microsoft AD instead of Simple AD.\nLastly, select the VPC that a pair of domain controllers will be deployed in, and then select which subnets they should live in. Private subnets make a good location for this as most people I know don\u0026rsquo;t allow access to their domain controllers from over the Internet. Click the \u0026ldquo;Next\u0026rdquo; button.\nThe next screen shows you a review before you deploy. If it looks good, click the \u0026ldquo;Create Simple AD\u0026rdquo; button for the magic to happen in the background.\nOnce done you\u0026rsquo;ll get a status message that the directory is being created.\nIf you aren\u0026rsquo;t all about deploying this through the console, Simple AD can be deployed through CloudFormation so you can have even more Infrastructure as Code (IaC). Here is a quick snippet for doing the steps above through a CloudFormation Template in JSON format.\n{ \u0026#34;AWSTemplateFormatVersion\u0026#34; : \u0026#34;2010-09-09\u0026#34;, \u0026#34;Description\u0026#34;: \u0026#34;Simple AD Service\u0026#34;, \u0026#34;Parameters\u0026#34; : { \u0026#34;SimpleADPW\u0026#34; : { \u0026#34;Type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;subnetID1\u0026#34;: { \u0026#34;Description\u0026#34;: \u0026#34;Subnet ID to provision instance in\u0026#34;, \u0026#34;Type\u0026#34;: \u0026#34;AWS::EC2::Subnet::Id\u0026#34;, \u0026#34;Default\u0026#34;: \u0026#34;\u0026#34; }, \u0026#34;subnetID2\u0026#34;: { \u0026#34;Description\u0026#34;: \u0026#34;Subnet ID to provision instance in\u0026#34;, \u0026#34;Type\u0026#34;: \u0026#34;AWS::EC2::Subnet::Id\u0026#34;, \u0026#34;Default\u0026#34;: \u0026#34;\u0026#34; }, \u0026#34;VPC\u0026#34;: { \u0026#34;Description\u0026#34;: \u0026#34;The VPC to deploy resources into\u0026#34;, \u0026#34;Type\u0026#34;: \u0026#34;AWS::EC2::VPC::Id\u0026#34;, \u0026#34;Default\u0026#34;: \u0026#34;\u0026#34; }, \u0026#34;DirectoryName\u0026#34; : { \u0026#34;Description\u0026#34; : \u0026#34;Unique Name for Directory\u0026#34;, \u0026#34;Type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;ADSize\u0026#34; : { \u0026#34;Description\u0026#34; : \u0026#34;AD Directory Size\u0026#34;, \u0026#34;Type\u0026#34; : \u0026#34;String\u0026#34;, \u0026#34;AllowedValues\u0026#34;: [ \u0026#34;Small\u0026#34;, \u0026#34;Large\u0026#34; ] } }, \u0026#34;Resources\u0026#34;: { \u0026#34;myDirectory\u0026#34; : { \u0026#34;Type\u0026#34; : \u0026#34;AWS::DirectoryService::SimpleAD\u0026#34;, \u0026#34;Properties\u0026#34; : { \u0026#34;Name\u0026#34; : { \u0026#34;Ref\u0026#34; : \u0026#34;DirectoryName\u0026#34;}, \u0026#34;Password\u0026#34; : { \u0026#34;Ref\u0026#34; : \u0026#34;SimpleADPW\u0026#34; }, \u0026#34;VpcSettings\u0026#34; : { \u0026#34;SubnetIds\u0026#34; : [ { \u0026#34;Ref\u0026#34; : \u0026#34;subnetID1\u0026#34; }, { \u0026#34;Ref\u0026#34; : \u0026#34;subnetID2\u0026#34; } ], \u0026#34;VpcId\u0026#34; : { \u0026#34;Ref\u0026#34; : \u0026#34;VPC\u0026#34; } }, \u0026#34;Size\u0026#34; : { \u0026#34;Ref\u0026#34; : \u0026#34;ADSize\u0026#34;} } } } } Whichever deployment method you choose, it will take a bit to deploy but when done you\u0026rsquo;ll see a new 
directory listed in your portal. Select the directory that was created and you\u0026rsquo;ll see some information needed for the rest of this post. Specifically, you\u0026rsquo;ll want to take note of the DNS Address listed in the details page for the following section.\nModify DHCP Option Sets You\u0026rsquo;ve deployed your domain controllers, but your clients will need to be reconfigured to use these two domain controllers for their DNS resolution. To provide this for the entire VPC, we\u0026rsquo;ll want to create a new DHCP Option Set and assign it to the VPC or VPCs that will use these Domain Controllers. Go to your VPC menu in the console and find the DHCP Options Set link. Create a new option set with the Domain name and DNS servers from your new Simple AD servers that we just created.\nOnce you\u0026rsquo;ve created the options set you\u0026rsquo;ll need to associate it with your VPC(s) so that new addresses are handed out with the appropriate settings. NOTE: You can only have one DHCP option set associated with at VPC at a time. To assign the new Option Set, select the VPC from the VPC mentu and click the actions button, then select the Edit DHCP Options Set link. You\u0026rsquo;ll then have a drop down to select your preferred option set.\nAs you can see my options set is applied to my VPC now.\nConfigure Roles Before we start deploying member servers, we\u0026rsquo;ll need to create a role in the IAM console. This role will allow Simple Systems Manager (SSM or Systems Manager for short) the permission to join new EC2 instances to the new domain. To create this role, go to the IAM console and click on Roles. Click the \u0026ldquo;Create role\u0026rdquo; button.\nWhen the create role window opens up select \u0026ldquo;AWS service\u0026rdquo; and then select EC2 under the service that will use the role. Click the \u0026ldquo;Next:permissions\u0026rdquo; button to continue.\nIn the permissions screen search for AmazonEC2RoleforSSM and select it. Click the \u0026ldquo;Next:Review\u0026rdquo; button.\nReview the screen and give the role a name before click the \u0026ldquo;Create role\u0026rdquo; button.\nConfigure Management Hosts You\u0026rsquo;re ready to go, but there isn\u0026rsquo;t an interface within the AWS console for you to create new users, groups etc like you normally would with Active Directory. This is a normal AD setup though so to manage our AD infrastructure we need to deploy a member server and then install our AD tools on it. So first, lets install a new member server that is joined to our new domain.\nDeploy a new EC2 instance with a Windows server 2016 operating system on it as you normally would. But take notice that the console has a pair of subtle changes that need to be set as we deploy. In step 3 - Configure Instance you\u0026rsquo;ll see that we need to select the \u0026ldquo;Domain join directory\u0026rdquo; setting which should show as our new domain. Also, in the IAM role we need to select the role we created in the previous section. This is critical so that the machine can be joined to the domain as its deployed. Finish deploying your server.\nOnce the server has been deployed, it will restart to join the domain so wait a bit before trying to login to it. When its finished being deployed, connect to the instance over Remote Desktop and login with a domain user account. Up to this point the only user that has been created is \u0026ldquo;administrator\u0026rdquo; and the password you specified. 
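Before installing anything, it's worth confirming the instance really did join the new domain. A quick sanity check from a command prompt on the instance might look like this, using the example domain name from earlier:
rem show the domain this server belongs to and locate a domain controller for it
systeminfo | findstr /B /C:"Domain"
nltest /dsgetdc:sbx1.hollow.local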
Login to the member server and install the Lightweight Directory Service tools from Server Manager.\nAfter the tools are installed, you\u0026rsquo;ll see your Active Directory tools like you\u0026rsquo;re accustomed to seeing. If you look in Active Directory Users and Computers (ADUC) you\u0026rsquo;ll notice some interesting things. Under the Domain Controllers Folder, two DCs will be listed in this folder for the Simple AD servers. These are the two DCs deployed for you through the AWS service.\nAlso, if you look in your Computers folder under aws, your member server will be listed.\nUse Simple AD to Authenticate to the AWS Management Console You can use Simple AD to do more things in AWS such as use your new domain to authenticate to the console. This limits the number of IAM users needed to be crated in the AWS console and hopefully helps to protect the environment even further.\nFirst, we create an endpoint so that the AWS services can access the new directory. Enter a name for the endpoint and click the \u0026ldquo;Create Access URL\u0026rdquo;.\nClick Continue to proceed with creating an endpoint. Note that you can\u0026rsquo;t change it later. Click Continue.\nThere are other services integrated with Simple AD but for this example, we\u0026rsquo;ll just use the Management Console. Navigate back to your directory service details and look towards the bottom of the screen under AWS apps \u0026amp; services. Click the AWS Management Console. When the new window opens click the \u0026ldquo;Enable Access\u0026rdquo; button.\nBefore the users and groups within AD can login to the console with their AD credentials, another Role needs to be created to provide access to the console. Go to the IAM console again and create another role. This time when you create a new role, choose the Directory Service as the service that will use the role.\nYou don\u0026rsquo;t need to assign any additional permissions (at this time) since we\u0026rsquo;re only demonstrating that this role can be used to authenticate. If you plan to use this role for users to have permissions to use anything in the console, those permissions need to be added. On the last step, give the role a name.\nOnce you\u0026rsquo;ve created the role, go back to your directory and click the Management Console Access link.\nFrom here you\u0026rsquo;ll see a section for Users and Groups to Roles. A single Role will be listed which is what was just built in the previous few steps. Click the role to assign users from your Simple AD domain.\nIn the Add Users and Groups to Role window type a name. I added a new AD user for my own account in this example.\nWhen done you\u0026rsquo;ll see your user(s) added to the directory.\nNow, if you go to your endpoint URL (hint, the link is located next to the Management Console in your directory) you\u0026rsquo;ll be taken to a login page. Enter the Username and Password of the user that you added, and you\u0026rsquo;ve used your new Microsoft AD service and your directory store for the AWS Management Console.\nSummary You should have a working Simple AD service up and running in your AWS account and can now manage users in much the same way you\u0026rsquo;ve always managed them in AD. Now that you\u0026rsquo;ve got your domain working correctly, you can go about building all those apps you\u0026rsquo;ve been dying to get to in your cloud. 
And now they\u0026rsquo;ll have an authentication method that is secure and familiar to you, but you won\u0026rsquo;t have to worry about those pesky servers being patched and managed. Happy coding!\n","permalink":"https://theithollow.com/2018/04/16/aws-directory-service-simple-ad/","summary":"\u003cp\u003eJust because you\u0026rsquo;ve started moving workloads into the cloud, doesn\u0026rsquo;t mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services. But if you\u0026rsquo;re moving to AWS, there are also some great services you can take advantage of to provide similar functionality. This post focuses on Simple AD, which is based on Samba4 and handles a subset of the features that the \u003ca href=\"/2018/04/09/aws-directory-service-microsoft-ad/\"\u003eMicrosoft AD\u003c/a\u003e type Directory Service provides. This service still allows you to use Kerberos authentication and manage users and computers as well as provide DNS services. One of the major differences between this service and Microsoft AD is that you can\u0026rsquo;t create a trust relationship with your existing domain, so if you need that functionality look at Microsoft AD instead. Simple AD gives you a great way to quickly stand up new domains and cut down on the things you need to manage such as OS patches, etc.\u003c/p\u003e","title":"AWS Directory Service - Simple AD"},{"content":"Just because you\u0026rsquo;ve started moving workloads into the cloud, doesn\u0026rsquo;t mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services. But if you\u0026rsquo;re moving to AWS, there are also some great services you can take advantage of to provide similar functionality. This post focuses on Microsoft AD, which is a Server 2012 R2 based domain that provides a pair of domain controllers across Availability Zones and also handles DNS. This service is the closest service to a full-blown Active Directory that you\u0026rsquo;d host on premises. You can even create a trust between the Microsoft AD deployed in AWS and your on-prem domain. You cannot extend your on-premises domain into Microsoft AD at the time of this writing though. If you wish to extend your existing domain, you should consider building your own DCs on EC2 instances so that you have full control over your options.\nMicrosoft AD gives you a great way to quickly stand up new domains and cut down on the things you need to manage, such as OS patches, but you will have to put up with some limitations for this ease of use. For example, right now it\u0026rsquo;s only available at the Server 2012 R2 domain functional level, so if you\u0026rsquo;re already at Server 2016 in your existing domain, you\u0026rsquo;ll have to use a downgraded version in AWS if you choose this solution.\nDeploy To deploy Microsoft AD within your existing AWS VPCs, go to the Directory Service from the services menu.\nWhen the Directory Service page opens up you\u0026rsquo;ll see several options available to you, but here we\u0026rsquo;ll stick with Microsoft AD. Locate Microsoft AD and click the \u0026ldquo;Set up directory\u0026rdquo; button.\nTo set up a new directory, first pick an edition. This decision comes down to how big your directory will be. 
If you need to support more than 5,000 employees or 30,000 managed objects, then you should pick Enterprise (and you\u0026rsquo;ll pay more for it) but otherwise Standard should be sufficient.\nAfter this, enter a Directory DNS. This is a FQDN for your environment. I use \u0026ldquo;hollow.local\u0026rdquo; for my on-prem domain so I like to use something like aws.hollow.local for my cloud environment. You can optionally provide a NetBIOS name if necessary.\nNext, enter an administrator password. This will be your domain admin password and you\u0026rsquo;ll need this later to configure the infrastructure.\nLastly, select the VPC that a pair of domain controllers will be deployed in, and then select which subnets they should live in. Private subnets make a good location for this as most people I know don\u0026rsquo;t allow access to their domain controllers from over the Internet. Click the \u0026ldquo;Next\u0026rdquo; button.\nThe next screen shows you a review before you deploy. If it looks good, click the \u0026ldquo;Create Microsoft AD\u0026rdquo; button for the magic to happen in the background.\nIf you aren\u0026rsquo;t all about deploying this through the console, Microsoft AD can be deployed through CloudFormation so you can have even more Infrastructure as Code (IaC). Here is a quick snippet for doing the steps above through a CloudFormation Template in JSON format.\n{ \u0026#34;AWSTemplateFormatVersion\u0026#34; : \u0026#34;2010-09-09\u0026#34;, \u0026#34;Description\u0026#34;: \u0026#34;Microsoft Directory Service\u0026#34;, \u0026#34;Parameters\u0026#34; : { \u0026#34;MicrosoftADPW\u0026#34; : { \u0026#34;Type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;subnetID1\u0026#34;: { \u0026#34;Description\u0026#34;: \u0026#34;Subnet ID to provision instance in\u0026#34;, \u0026#34;Type\u0026#34;: \u0026#34;AWS::EC2::Subnet::Id\u0026#34;, \u0026#34;Default\u0026#34;: \u0026#34;\u0026#34; }, \u0026#34;subnetID2\u0026#34;: { \u0026#34;Description\u0026#34;: \u0026#34;Subnet ID to provision instance in\u0026#34;, \u0026#34;Type\u0026#34;: \u0026#34;AWS::EC2::Subnet::Id\u0026#34;, \u0026#34;Default\u0026#34;: \u0026#34;\u0026#34; }, \u0026#34;VPC\u0026#34;: { \u0026#34;Description\u0026#34;: \u0026#34;The VPC to deploy resources into\u0026#34;, \u0026#34;Type\u0026#34;: \u0026#34;AWS::EC2::VPC::Id\u0026#34;, \u0026#34;Default\u0026#34;: \u0026#34;\u0026#34; }, \u0026#34;DirectoryName\u0026#34; : { \u0026#34;Description\u0026#34; : \u0026#34;Unique Name for Directory\u0026#34;, \u0026#34;Type\u0026#34;: \u0026#34;String\u0026#34; } }, \u0026#34;Resources\u0026#34;: { \u0026#34;myDirectory\u0026#34; : { \u0026#34;Type\u0026#34; : \u0026#34;AWS::DirectoryService::MicrosoftAD\u0026#34;, \u0026#34;Properties\u0026#34; : { \u0026#34;Name\u0026#34; : { \u0026#34;Ref\u0026#34; : \u0026#34;DirectoryName\u0026#34;}, \u0026#34;Password\u0026#34; : { \u0026#34;Ref\u0026#34; : \u0026#34;MicrosoftADPW\u0026#34; }, \u0026#34;VpcSettings\u0026#34; : { \u0026#34;SubnetIds\u0026#34; : [ { \u0026#34;Ref\u0026#34; : \u0026#34;subnetID1\u0026#34; }, { \u0026#34;Ref\u0026#34; : \u0026#34;subnetID2\u0026#34; } ], \u0026#34;VpcId\u0026#34; : { \u0026#34;Ref\u0026#34; : \u0026#34;VPC\u0026#34; } } } } } } Whichever deployment method you choose, it will take a bit to deploy but when done you\u0026rsquo;ll see a new directory listed in your portal. Select the directory that was created and you\u0026rsquo;ll see some information needed for the rest of this post. 
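If CloudFormation isn't your thing either, the same directory can be created with a few lines of boto3. This is only a sketch under the same assumptions as the template above: the directory name, password, VPC, and subnet IDs are placeholders, and in real life the password should come from a secrets store rather than a hard-coded string.

import boto3

ds = boto3.client('ds', region_name='us-east-1')  # assumed region

directory_id = ds.create_microsoft_ad(
    Name='aws.hollow.local',           # Directory DNS name
    ShortName='AWS',                   # optional NetBIOS name
    Password='SuperSecretPassword1!',  # the domain Admin password
    Edition='Standard',                # or 'Enterprise' for larger directories
    VpcSettings={
        'VpcId': 'vpc-0123456789abcdef0',
        'SubnetIds': ['subnet-aaaa1111', 'subnet-bbbb2222'],  # two subnets in different AZs
    },
)['DirectoryId']

# Once the directory finishes creating, this returns its DNS addresses
details = ds.describe_directories(DirectoryIds=[directory_id])
print(details['DirectoryDescriptions'][0]['DnsIpAddrs'])

The DnsIpAddrs field is the same DNS information shown on the directory's details page.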
Specifically, you\u0026rsquo;ll want to take note of the DNS Address listed on the details page for the following section.\nModify DHCP Option Sets You\u0026rsquo;ve deployed your domain controllers, but your clients will need to be reconfigured to use these two domain controllers for their DNS resolution. To provide this for the entire VPC, we\u0026rsquo;ll want to create a new DHCP Option Set and assign it to the VPC or VPCs that will use these Domain Controllers. Go to your VPC menu in the console and find the DHCP Options Sets link. Create a new option set with the Domain name and DNS servers from the new Microsoft AD servers that we just created.\nOnce you\u0026rsquo;ve created the option set you\u0026rsquo;ll need to associate it with your VPC(s) so that new addresses are handed out with the appropriate settings. NOTE: You can only have one DHCP option set associated with a VPC at a time. To assign the new Option Set, select the VPC from the VPC menu and click the Actions button, then select the Edit DHCP Options Set link. You\u0026rsquo;ll then have a drop down to select your preferred option set.\nAs you can see, my option set is applied to my VPC now.\nConfigure Roles Before we start deploying member servers, we\u0026rsquo;ll need to create a role in the IAM console. This role will allow Simple Systems Manager (SSM, or Systems Manager for short) the permission to join new EC2 instances to the new domain. To create this role, go to the IAM console and click on Roles. Click the \u0026ldquo;Create role\u0026rdquo; button.\nWhen the create role window opens up, select \u0026ldquo;AWS service\u0026rdquo; and then select EC2 as the service that will use the role. Click the \u0026ldquo;Next:Permissions\u0026rdquo; button to continue.\nIn the permissions screen, search for AmazonEC2RoleforSSM and select it. Click the \u0026ldquo;Next:Review\u0026rdquo; button.\nReview the screen and give the role a name before clicking the \u0026ldquo;Create role\u0026rdquo; button.\nConfigure Management Hosts You\u0026rsquo;re ready to go, but there isn\u0026rsquo;t an interface within the AWS console for you to create new users, groups, etc. like you normally would with Active Directory. This is a normal AD setup though, so to manage our AD infrastructure we need to deploy a member server and then install our AD tools on it. So first, let\u0026rsquo;s install a new member server that is joined to our new domain.\nDeploy a new EC2 instance with a Windows Server 2016 operating system on it as you normally would, but notice that the console has a pair of subtle changes that need to be set as we deploy. In Step 3 - Configure Instance, you\u0026rsquo;ll see that we need to select the \u0026ldquo;Domain join directory\u0026rdquo; setting, which should show our new domain. Also, for the IAM role we need to select the role we created in the previous section. This is critical so that the machine can be joined to the domain as it\u0026rsquo;s deployed. Finish deploying your server.\nOnce the server has been deployed, it will restart to join the domain, so wait a bit before trying to log in to it. When it\u0026rsquo;s finished being deployed, connect to the instance over Remote Desktop and log in with a domain user account. Up to this point the only user that has been created is \u0026ldquo;Admin\u0026rdquo; and the password you specified. Log in to the member server and install the domain controller tools. 
The command below should install everything you need if you run it from a PowerShell console.\nInstall-WindowsFeature -Name GPMC,RSAT-AD-PowerShell,RSAT-AD-AdminCenter,RSAT-ADDS-Tools,RSAT-DNS-Server After the tools are installed, you\u0026rsquo;ll see your Active Directory tools like you\u0026rsquo;re accustomed to seeing. If you look in Active Directory Users and Computers (ADUC) you\u0026rsquo;ll notice some interesting things. Under the Domain Controllers Folder, two DCs will be listed and are Global Catalog. These are the two DCs deployed for you through the AWS service.\nIf you look further, you\u0026rsquo;ll see an aws folder which as Computers and Users in it. You\u0026rsquo;ll see your Admin account listed here with a note to not delete it.\nAlso, if you look in your Computers folder under aws, your member server will be listed.\nGenerally, the default folder named \u0026ldquo;Computers\u0026rdquo; under the root is where member servers are listed, but this is not the case for the Microsoft AD service.\nUse Microsoft AD to Authenticate to the AWS Management Console You can use Microsoft AD to do more things in AWS such as use your new domain to authenticate to the console. This limits the number of IAM users needed to be crated in the AWS console and hopefully helps to protect the environment even further.\nFirst, we create an endpoint so that the AWS services can access the new directory. Enter a name for the endpoint and click the \u0026ldquo;Create Access URL\u0026rdquo;.\nClick Continue to proceed with creating an endpoint. Note that you can\u0026rsquo;t change it later. Click Continue.\nThere are other services integrated with Microsoft AD but for this example, we\u0026rsquo;ll just use the Management Console. Navigate back to your directory service details and look towards the bottom of the screen under AWS apps \u0026amp; services. Click the AWS Management Console. When the new window opens click the \u0026ldquo;Enable Access\u0026rdquo; button.\nBefore the users and groups within AD can login to the console with their AD credentials, another Role needs to be created to provide access to the console. Go to the IAM console again and create another role. This time when you create a new role, choose the Directory Service as the service that will use the role.\nYou don\u0026rsquo;t need to assign any additional permissions (at this time) since we\u0026rsquo;re only demonstrating that this role can be used to authenticate. If you plan to use this role for users to have permissions to use anything in the console, those permissions need to be added. On the last step, give the role a name.\nOnce you\u0026rsquo;ve created the role, go back to your directory and click the Management Console Access link.\nFrom here you\u0026rsquo;ll see a section for Users and Groups to Roles. A single Role will be listed which is what was just built in the previous few steps. Click the role to assign users from your Microsoft AD domain.\nIn the Add Users and Groups to Role window type a name. I chose the \u0026ldquo;Admin\u0026rdquo; account because I didn\u0026rsquo;t bother creating any new users. [This blog post is long enough already!]\nWhen done you\u0026rsquo;ll see your user(s) added to the directory.\nNow, if you go to your endpoint URL (hint, the link is located next to the Management Console in your directory) you\u0026rsquo;ll be taken to a login page. 
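A quick scripting footnote on two of the steps we just walked through, the access URL and the console role. This is only a sketch with placeholder names, and it assumes the ds.amazonaws.com trust principal that the console wires up for Directory Service roles; assigning AD users to the role is still easiest from the directory's Management Console Access page.

import json
import boto3

ds = boto3.client('ds', region_name='us-east-1')
iam = boto3.client('iam')

# The access URL (alias) for the directory - note that it cannot be changed later
ds.create_alias(DirectoryId='d-1234567890', Alias='hollow-demo')

# A role that Directory Service can hand to AD users for console access
trust_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'Service': 'ds.amazonaws.com'},
        'Action': 'sts:AssumeRole',
    }],
}
iam.create_role(RoleName='ADConsoleAccess', AssumeRolePolicyDocument=json.dumps(trust_policy))

# Optional: attach whatever permissions those users should actually have in the console
iam.attach_role_policy(RoleName='ADConsoleAccess', PolicyArn='arn:aws:iam::aws:policy/ReadOnlyAccess')

None of this changes the login flow itself.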
Enter the Username and Password of the user that you added, and you\u0026rsquo;ve used your new Microsoft AD service and your directory store for the AWS Management Console.\nSummary If you\u0026rsquo;re still reading this, I commend you but you should have a working Microsoft AD service up and running in your AWS account and can now manage users in much the same way you\u0026rsquo;ve always managed them in AD. Now that you\u0026rsquo;ve got your domain working correctly, you can go about building all those apps you\u0026rsquo;ve been dying to get to in your cloud. And now they\u0026rsquo;ll have an authentication method that is secure and familiar to you but won\u0026rsquo;t have to worry about those pesky servers being patched, and managed. Happy coding!\n","permalink":"https://theithollow.com/2018/04/09/aws-directory-service-microsoft-ad/","summary":"\u003cp\u003eJust because you\u0026rsquo;ve started moving workloads into the cloud, doesn\u0026rsquo;t mean you can forget about Microsoft Active Directory. Many customers simply stand up their own domain controllers on EC2 instances to provide domain services. But if you\u0026rsquo;re moving to AWS there are also some great services you can take advantage of, to provide similar functionality. This post focuses on Microsoft AD which is a Server 20012 R2 based domain that provides a pair of domain controllers across Availability Zones and also handles DNS. This service is the closest service to a full blow Active Directory that you\u0026rsquo;d host on premises. You can even create a trust between the Microsoft AD deployed in AWS and your on-prem domain. You cannot extend your on-premises domain into Microsoft AD at the time of this writing though. If you wish to extend your existing domain, you should consider building your own DCs on EC2 instances and then you have full control over your options.\u003c/p\u003e","title":"AWS Directory Service - Microsoft AD"},{"content":"Locking down an AWS environment isn\u0026rsquo;t really that if you know what threats you\u0026rsquo;re protecting against. You have services such as the Web Application Firewall, Security Groups, Network Access Control Lists, Bucket Policies and the list goes on. But many times you encounter threats from malicious attackers just trying to probe which vulnerabilities might exist in your cloud. AWS has built a service, called Amazon GuardDuty, to help monitor and protect your environment that is based on AWS machine learning tools and threat intelligence feeds. GuardDuty currently reads VPC Flow Logs (used for network traffic analysis) and CloudTrail Logs (used for control plane access analysis) along with DNS log data to protect an AWS environment. GuardDuty will use threat intelligence feeds to alert you when your workloads may be communicating with known to be malicious IP Addresses and can alert you when privileged escalation occurs as part of its machine learning about suspicious patterns.\nAt this point you\u0026rsquo;re probably thinking that Amazon GuardDuty sounds like a pretty useful tool but have two big questions: How much does it cost? How difficult is it to use?\nPricing Amazon GuardDuty is a fairly new service and that comes with some benefits. Primarily, this gives you a 30 day free trial on your usage of GuardDuty on a new account. This gives you an opportunity to kick the tires before you decide to start paying for it for real. 
One of the things I really like about this trial is that the console will show you exactly where you are during the trial period and how much it would cost if you weren\u0026rsquo;t in a trial period.\nOK, so what are the prices after the trial period ends? Well, just like most services it varies based on your region. If you\u0026rsquo;re in the us-east-1 region you can expect to pay about $1 per GB of VPC Flow Logs analyzed for the first 500 GB. After that the price falls to $0.50 per GB for the next 2,000 GB, and then it falls again to $0.25 per GB thereafter. Like most services, the prices drop based on volume. For CloudTrail analysis, expect to pay $4.00 per 1 million CloudTrail events analyzed.\nAs you can see the pricing is pretty reasonable. In a small environment or lab you\u0026rsquo;ll probably pay a couple bucks per month for the service, while a larger environment will obviously pay more, but it\u0026rsquo;s probably worth it for the added security it brings to a production workload.\nSetup GuardDuty\nOK, so you\u0026rsquo;re sold on how GuardDuty works and cool with the pricing. But machine learning and threat intelligence sound tricky to manage. How hard is it to set up? First, let\u0026rsquo;s take a look at setting this up through the console.\nGo to the Amazon GuardDuty service from your list and you\u0026rsquo;ll get the familiar \u0026ldquo;Get started\u0026rdquo; screen since you\u0026rsquo;ve never set it up before. Click the \u0026ldquo;Get started\u0026rdquo; button.\nOn the first screen, you can view the role permissions, but if you\u0026rsquo;re good with those click the \u0026ldquo;Enable GuardDuty\u0026rdquo; button at the bottom right hand corner of the screen.\nWell, that\u0026rsquo;s it. You\u0026rsquo;ve enabled GuardDuty and it can help protect you now. This does assume that you\u0026rsquo;ve set up VPC Flow Logs and CloudTrail logs for your environment already, but CloudTrail, for example, is now enabled by default. You did it!\nFrom here, you can look at any findings that GuardDuty came back with. Initially you won\u0026rsquo;t have any, but as time goes on GuardDuty will list potential issues in this screen so that you may take action on them.\nAdditional Configurations If you want to do some more optional configuration, you can look at the GuardDuty portal and configure some of your own things. First, if we look at the Lists menu within GuardDuty, you can add a trusted IP list or a threat list for unwanted traffic. Maybe you want to make sure to whitelist your corporate networks, or add known bot networks to a threat list for notification.\nIf you have a multi-account environment, you can set up GuardDuty to use Master and Member account relationships. For instance, maybe you have an AWS security account used for logging, patching, authorization and other security related solutions. Maybe this account gets set up as a Master GuardDuty account and your other accounts are Member accounts. This lets those Member accounts send their GuardDuty data to a single account where it can be analyzed across all your environments.\nIf you\u0026rsquo;re just dying to learn what kind of things might show up in your findings window, you can force sample findings to show up. 
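If you'd rather skip the console for this part too, enabling the detector and generating those sample findings is only a couple of boto3 calls. A minimal sketch, assuming a single region that doesn't already have a detector:

import boto3

gd = boto3.client('guardduty', region_name='us-east-1')  # GuardDuty is enabled per region

# The scripted equivalent of the "Enable GuardDuty" button
detector_id = gd.create_detector(Enable=True)['DetectorId']

# Force sample findings so the findings pane has something in it;
# leaving out FindingTypes should generate a sample of every supported type
gd.create_sample_findings(DetectorId=detector_id)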
This will put 1 entry for each type of finding in your findings pane with a [SAMPLE] tag so you know it isn\u0026rsquo;t real.\nOnce you\u0026rsquo;ve added the sample findings, you can click into each of them to get additional details such as the Portscan entry below.\nAlerting and Auto Remediation GuardDuty is pretty cool, but most people don\u0026rsquo;t want to continuously check on those findings. Luckily, CloudWatch Event rules can be integrated to take action based on a new finding. From the CloudWatch portal go to Events \u0026ndash;\u0026gt; Rules and add a new source of GuardDuty and an Event Type of GuardDuty Finding. From there you can specify a target such as an SNS topic for email alerts, or a Lambda Function so that you can have a script auto-remediate your environment based on a finding that occurs.\nSetup with CloudFormation The GuardDuty setup with CloudFormation is also really simple. Below is an example of setting up GuardDuty with a new account and it also creates an SNS Topic and a subscription to that topic so that new findings are automatically triggering email notifications.\n{ \u0026#34;AWSTemplateFormatVersion\u0026#34;: \u0026#34;2010-09-09\u0026#34;, \u0026#34;Description\u0026#34;: \u0026#34;GuardDuty with CloudWatch Event Rule and SNS Topic\u0026#34;, \u0026#34;Parameters\u0026#34;: { \u0026#34;Environment\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;String\u0026#34; }, \u0026#34;EmailAddress\u0026#34; : { \u0026#34;Type\u0026#34;: \u0026#34;String\u0026#34; } }, \u0026#34;Resources\u0026#34;: { \u0026#34;mydetector\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;AWS::GuardDuty::Detector\u0026#34;, \u0026#34;Properties\u0026#34;: { \u0026#34;Enable\u0026#34;: true } }, \u0026#34;GDSNSTopic\u0026#34;:{ \u0026#34;Type\u0026#34;:\u0026#34;AWS::SNS::Topic\u0026#34;, \u0026#34;Properties\u0026#34; : { \u0026#34;DisplayName\u0026#34; : {\u0026#34;Fn::Join\u0026#34;: [ \u0026#34;-\u0026#34;, [\u0026#34;GuardDuty\u0026#34;, { \u0026#34;Ref\u0026#34;: \u0026#34;Environment\u0026#34;}, \u0026#34;SNSTopic\u0026#34;]] }, \u0026#34;TopicName\u0026#34;: {\u0026#34;Fn::Join\u0026#34;: [ \u0026#34;-\u0026#34;, [\u0026#34;GuardDuty\u0026#34;, { \u0026#34;Ref\u0026#34;: \u0026#34;Environment\u0026#34;}, \u0026#34;SNSTopic\u0026#34;]] } } }, \u0026#34;GDCWEvent\u0026#34;: { \u0026#34;Type\u0026#34;: \u0026#34;AWS::Events::Rule\u0026#34;, \u0026#34;Properties\u0026#34;: { \u0026#34;Description\u0026#34;: \u0026#34;GuardDuty Event Rule\u0026#34;, \u0026#34;EventPattern\u0026#34;: { \u0026#34;source\u0026#34;: [ \u0026#34;aws.guardduty\u0026#34; ], \u0026#34;detail-type\u0026#34;: [ \u0026#34;GuardDuty Finding\u0026#34; ] }, \u0026#34;State\u0026#34;: \u0026#34;ENABLED\u0026#34;, \u0026#34;Targets\u0026#34;: [{ \u0026#34;Arn\u0026#34;: { \u0026#34;Ref\u0026#34; : \u0026#34;GDSNSTopic\u0026#34;}, \u0026#34;Id\u0026#34;: \u0026#34;TargetGuardDutySNSTopic\u0026#34; }] } }, \u0026#34;GuardDutySubscription\u0026#34; : { \u0026#34;Type\u0026#34; : \u0026#34;AWS::SNS::Subscription\u0026#34;, \u0026#34;Properties\u0026#34; : { \u0026#34;Endpoint\u0026#34; : { \u0026#34;Ref\u0026#34; : \u0026#34;EmailAddress\u0026#34; }, \u0026#34;Protocol\u0026#34; : \u0026#34;email\u0026#34;, \u0026#34;TopicArn\u0026#34; : {\u0026#34;Ref\u0026#34; : \u0026#34;GDSNSTopic\u0026#34;} } } } } Summary Amazon GuardDuty is a pretty useful service to help to automatically find potential security threats within your AWS accounts. 
The price isn\u0026rsquo;t unreasonable and the configuration is incredibly simple for a powerful tool such as machine learning. Try it out for yourself, at least for 30 days of a free trial.\n","permalink":"https://theithollow.com/2018/04/02/protect-your-aws-accounts-with-guardduty/","summary":"\u003cp\u003eLocking down an AWS environment isn\u0026rsquo;t really that if you know what threats you\u0026rsquo;re protecting against. You have services such as the Web Application Firewall, Security Groups, Network Access Control Lists, Bucket Policies and the list goes on. But many times you encounter threats from malicious attackers just trying to probe which vulnerabilities might exist in your cloud. AWS has built a service, called Amazon GuardDuty, to help monitor and protect your environment that is based on AWS machine learning tools and threat intelligence feeds. GuardDuty currently reads VPC Flow Logs (used for network traffic analysis) and CloudTrail Logs (used for control plane access analysis) along with DNS log data to protect an AWS environment. GuardDuty will use threat intelligence feeds to alert you when your workloads may be communicating with known to be malicious IP Addresses and can alert you when privileged escalation occurs as part of its machine learning about suspicious patterns.\u003c/p\u003e","title":"Protect Your AWS Accounts with GuardDuty"},{"content":"Information Technology is a very difficult field to keep up with. Not only does computing power increase year after year, making the number of things we can do with computers increase, but drastic transformations always plague this industry. Complete paradigm shifts are a major part of our recent past such as mainframes, to client/server, to virtualization to cloud computing. In addition to these changes there are also silos of technologies we might want to focus on such as database design, programming, infrastructure or cloud computing. Inside each of these categories there are different platforms to learn, such as if you are a programmer, do you know C++, Java, Python or Cobol?\nSo what is a technician to do? The important thing is to keep grinding through new technologies and never give up on learning new things. Your career depends on learning new things all the time.\nAn Example The main constraint we have with learning is usually time. Time is fixed and there is only so much of it in a day. You need to take care of personal things, work, fun, family and studying. So if your skill capacity is based off of your time, then the skill capacity is also fairly fixed. Everyone\u0026rsquo;s skill capacity will be different because some people have more study time, less distractions and learn at different rates but in any case your own skill capacity is fairly fixed. As you learn new things, it\u0026rsquo;s like turning on a faucet that fills up your skills but pay attention that some of these skills will not be needed down the road. When technology changes and winners/loser are decided, some of the skills that you have may no longer be useful. Like the HD-DVD and the Zunes of the world, not all your skills will be valuable. You can think of your skill reservoir as having a drain with no stopper on it that slowly leaks your skills out into the ground. You need to learn faster than the drain lets out or you\u0026rsquo;ll not have any usable skills in the industry. I\u0026rsquo;ve often heard this referred to as walking up a down escalator. 
If you don\u0026rsquo;t keep moving, you\u0026rsquo;ll be at the bottom of the stairs again.\nYou have plenty of choices here such as what to learn, and how much of it to learn. For example, you could pick an individual technology and learn it to an expert level. I use the CCIE and VCDX certifications as an example of this in the diagram below. You\u0026rsquo;ve filled up your skill capacity with either a Cisco Certified Internetworking Expert (CCIE) or a VMware Certified Design Expert (VCDX) and that might fill up most of your skill capacity. The drain is still dribbling out your skills as new versions and updates come out. You need to continually learn new things to keep up with that expert level of knowledge.\nYou can also learn several different things at a lesser level of detail. In the example below, instead of learning a technology really deeply, we\u0026rsquo;ve learned several technologies at differing levels. A Cisco Certified Networking Professional (CCNP), Microsoft Certified Solutions Administrator (MCSA) or a VMware Certified Advanced Professional (VCAP). The benefit to this choice is that employers might be looking for one but not all of these skills. This broad level of knowledge might make you more employable but you\u0026rsquo;re not an expert in any one technology. Again, the drain is still leaking these skills into the earth.\nI can\u0026rsquo;t tell you want to learn but one of your goals should be to maintain skills that employers need. If you\u0026rsquo;re not picking the right technologies it may be more difficult to find work. In the example below, you have some skills but an employer is looking for other skills. Maybe you should consider changing what things to learn to fill up your skill capacity?\nWhat Should I Learn? At this point you see the goal, but now you\u0026rsquo;re asking what you should be learning to stay employable. That\u0026rsquo;s a pretty hard thing to get right. You can see major major technologies being used right now such as VMware Virtualization, Microsoft Operating Systems, or Amazon Web Services. You can pick these technologies which are probably a good choice, or you can bet on some of the new fun stuff that hasn\u0026rsquo;t taken off (at a massive scale) quite as much yet such as Containers or DevOps pipelines. If you pick the new hotness you might have some very valuable skills but you\u0026rsquo;re also making a bet that these will be important skills to have. If you lose the bet, that drain at the bottom of your skills tank might be open at full blast for those skills if they aren\u0026rsquo;t embraced by the industry.\nMy advice to you is this though. Just keep filling the tank with SOMETHING! As long as you keep learning things, you can adjust what is in the tank as you see the industry change directions. My best advice is to pick technologies that you think seem fun or are passionate about. It won\u0026rsquo;t seem like work if you continually learn things that you\u0026rsquo;re interested in. Start there and just keep filling the tank.\n","permalink":"https://theithollow.com/2018/03/26/fill-skills-tank/","summary":"\u003cp\u003eInformation Technology is a very difficult field to keep up with. Not only does computing power increase year after year, making the number of things we can do with computers increase, but drastic transformations always plague this industry. Complete paradigm shifts are a major part of our recent past such as mainframes, to client/server, to virtualization to cloud computing. 
In addition to these changes there are also silos of technologies we might want to focus on such as database design, programming, infrastructure or cloud computing. Inside each of these categories there are different platforms to learn, such as if you are a programmer, do you know C++, Java, Python or Cobol?\u003c/p\u003e","title":"Fill Your Skills Tank"},{"content":"Age discrimination can be an issue in any industry, but this issue is something members of the information technology (IT) industry can specifically identify with. My goal for this post is just to shine some light on the topic and discuss whether or not there is an injustice happening in IT when you reach a certain age, or if there is some less heinous reason why we see so many younger people in tech. I want to make it crystal clear that this is just an off the cuff discussion and not based on any discrimination that I\u0026rsquo;ve been witness to from my employer or anywhere else. Ageism has been a bit of the elephant in the room where I don\u0026rsquo;t see many people discussing it publicly, but it\u0026rsquo;s in the back of people\u0026rsquo;s mind. It does seem that there are many more young people in the technology industry than older people, but this also may just be a perception and not reality.\nDiscrimination First of all, I want to define what I\u0026rsquo;m calling age discrimination. Like other types of discrimination its sometimes based on preconceived notions or stereotypes about a group of people such as if we said that everyone you\u0026rsquo;d meet in Mos Eisley spaceport are all scum and villains. It\u0026rsquo;s not true of everyone of course, because Obi Wan was in the spaceport and I doubt you\u0026rsquo;d consider him a villain.\nBut making a broad sweeping statement like that can be dangerous. What if we said, \u0026ldquo;Once you reach a certain age in the information technology field you\u0026rsquo;re not a desirable asset to companies any longer.\u0026rdquo; It\u0026rsquo;s not true because we probably all know people in the industry that are older than we are and are incredibly valuable resources, especially for their experience. But if we let the \u0026ldquo;older people aren\u0026rsquo;t good with technology\u0026rdquo; stereotype prevail then we\u0026rsquo;ve got some issues.\nEmployers who act upon a statement like this are likely breaking the law. Someone\u0026rsquo;s age shouldn\u0026rsquo;t be a good reason not to hire them and in the United States this practice can have legal consequences if it can be proven.\nMaybe it\u0026rsquo;s Something Else? Please understand that I do not condone the practice of discrimination, not only to age, but to race, sexual preference, gender, or political affiliation. But how about some less nefarious reasons why people of a certain age might be less hire-able in the technology sector.\nOlder People Make More Money In many industries, your experience is a valued asset. As you get older, and more experienced, you can earn more money because that experience is something companies find desirable. That experience can help teach other people what you know, steer clear of pitfalls you\u0026rsquo;ve seen in your past, and just have more knowledge about a subject. This experience might earn you more money, which is great, but to your employer, you\u0026rsquo;re now an expensive resource. If managers can figure out how to cut costs without losing anything, they\u0026rsquo;ll certainly have to consider it.\nIn the IT world, your experience might have a shelf life. 
Consider, for a second, that a new company is starting up with a focus on public cloud. The cloud really isn\u0026rsquo;t that old, so if someone with 25 years of IT experience and someone with 5 years of experience apply for the same cloud position, do they have a different amount of \u0026ldquo;cloud\u0026rdquo; experience\u0026quot;? My guess here is that only a few years out of that 25 would be cloud related, so the two employees would really have about the same amount of \u0026ldquo;cloud\u0026rdquo; related work experience. Unfortunately, the person with 25 years of experience probably thinks that they should be paid for all of their experience and not just the cloud related stuff. They might be right too since there are many skills that are useful even if it\u0026rsquo;s not specific to your primary role. Things like knowing the industry, relating cloud to other data center concepts, working with teams, etc are all useful skills, but will the employer see it that way? Or will the employer see two people with the same cloud experience and one of them is much less expensive than the other?\nHere is an example I just saw on twitter which illustrates this point pretty well.\nIs Tech is Just Better Suited for Younger People? My young son has never used a rotary phone or seen a compact disk. He has grown up his whole life with iPads, computers, and smart phones. He naturally has a mindset about technology whereas older people have had to learn each new technology as it came out. For my son, learning to use a touch screen, mobile device, voice activated devices, IoT device, etc is the same as learning how to use a fork. These things have been around his whole life and he\u0026rsquo;s grown up with them. Learning how to use them is second nature to him. In contrast, my own experiences might make me less suited to learn, or try to learn a new technology.\nTake this example for instance. I write a lot of designs for customers and make a drawings to illustrate concepts. I\u0026rsquo;ve done this for quite a while and Microsoft Visio has long been the standard for this type of thing. I\u0026rsquo;m pretty good at using Visio and I haven\u0026rsquo;t found many instances where Visio couldn\u0026rsquo;t do something that I needed, so there isn\u0026rsquo;t an incentive for me to learn a competing product. However, many of my colleagues have begun to use LucidChart and love it. Both of us can get our jobs done, but what if we were both up for the same job and the employer was looking for experience with documents based on LucidChart diagrams? Maybe LucidChart is the new hot thing and people are gravitating towards it, or maybe it\u0026rsquo;s a fad, or whatever. The point here is that I might be seen as an older person who isn\u0026rsquo;t good with new technology and might lose the job to someone younger. In this case, it doesn\u0026rsquo;t mean that I\u0026rsquo;m really not as good with new tech, but I\u0026rsquo;ve not put a focus on re-learning a tool when the tool I have works well.\nTake Aways I\u0026rsquo;ve been in the industry for fifteen years and think the experience that I have is truly valuable to me personally and to my employer. The truth of the matter is that I think it is difficult for people in the technology industry as they get older. I don\u0026rsquo;t think this is some nefarious plot by employers to get rid of people when they hit a certain age, but I do think that younger people have some advantages in this industry that are hard to deny. 
An environment where everything is disrupted so frequently makes peoples experience slightly less valuable and younger people are less expensive.\nWhat do we do as we get older? I don\u0026rsquo;t have all the answers here, but one thing seems important. Continuous education is critical. You can\u0026rsquo;t assume that since you have a lot of experience that you\u0026rsquo;ll always be needed. This industry, moves far too quickly to stop learning new things. You have to keep learning new technologies and staying on top of what\u0026rsquo;s changing in tech if you\u0026rsquo;re to stay relevant.\nI know that when I\u0026rsquo;ve heard people discuss this topic, I\u0026rsquo;ve immediately jumped to the conclusion that companies might not be discriminating against people based on age, but that companies did want to hire younger people. I\u0026rsquo;ve not considered it too heavily because I still feel like I\u0026rsquo;m pretty young and this doesn\u0026rsquo;t impact me yet. But as I started to really think about it, it doesn\u0026rsquo;t seem like age discrimination but rather a set of circumstances that make younger people easier to hire. I\u0026rsquo;m not too sure if I have a call to action here in this post, but if I had one it would be this: \u0026ldquo;Don\u0026rsquo;t assume the technology industry is discriminating against older people or only want to hire young talent.\u0026rdquo;\nI\u0026rsquo;m interested to hear your experiences and thoughts on the matter. Post your comments below.\n","permalink":"https://theithollow.com/2018/03/12/woke-age-discrimination/","summary":"\u003cp\u003eAge discrimination can be an issue in any industry, but this issue is something members of the information technology (IT) industry can specifically identify with. My goal for this post is just to shine some light on the topic and discuss whether or not there is an injustice happening in IT when you reach a certain age, or if there is some less heinous reason why we see so many younger people in tech. I want to make it crystal clear that this is just an off the cuff discussion and not based on any discrimination that I\u0026rsquo;ve been witness to from my employer or anywhere else. Ageism has been a bit of the elephant in the room where I don\u0026rsquo;t see many people discussing it publicly, but it\u0026rsquo;s in the back of people\u0026rsquo;s mind. It does seem that there are many more young people in the technology industry than older people, but this also may just be a perception and not reality.\u003c/p\u003e","title":"Woke to IT Age Discrimination"},{"content":"I\u0026rsquo;m a big advocate for building your cloud apps to take advantage of cloud features. This usually means re-architecting them so that things like AWS Availability Zones can be used seemlessly. But I also know that to get benefits of the cloud quickly, this can\u0026rsquo;t always happen. If you\u0026rsquo;re trying to reduce your data center footprint rapidly due to a building lease or hardware refresh cycle quickly approaching, then you probably need a migration tool to accomplish this task.\nRecently, I demoed CloudEndure to see how their solution works. This post will take you through the steps of migrating a Windows instance in my vCenter home lab, to my AWS environment. 
If you\u0026rsquo;d like to follow along in your own environment, request a free trial form CloudEndure here: https://appce.cloudendure.com/appce/?mod=signups\u0026amp;act=check\nPrepare your AWS Environment Before you can use a migration tool to move servers to the cloud, you\u0026rsquo;ll need to have the permissions to do so. To do this in AWS, you\u0026rsquo;ll need to setup a user account with programmatic access and a policy attached to that user that allows it to do the migrations. To do this from the AWS console, login to your AWS Account and go the the Identity and Access Management service (IAM).\nFirst, we\u0026rsquo;ll create a policy so click on the policies menu within the IAM service window. Click the \u0026ldquo;Create policy\u0026rdquo; button. Within this screen, you can select the services you want, or better yet, select the JSON tab and paste in the following json policy. NOTE: don\u0026rsquo;t worry if you don\u0026rsquo;t understand this, the CloudEndure portal shows you exactly what policy is needed.\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;ec2:*\u0026#34;, \u0026#34;elasticloadbalancing:*\u0026#34;, \u0026#34;cloudwatch:*\u0026#34;, \u0026#34;autoscaling:*\u0026#34;, \u0026#34;iam:GetUser\u0026#34;, \u0026#34;iam:PassRole\u0026#34;, \u0026#34;iam:ListInstanceProfiles\u0026#34;, \u0026#34;kms:ListKeys\u0026#34;, \u0026#34;mgh:*\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;*\u0026#34; } ] } Click the \u0026ldquo;Review Policy\u0026rdquo; button. On the next screen give the policy a name and then click the \u0026ldquo;Create policy\u0026rdquo; button.\nNow that the policy is created, it\u0026rsquo;s time to add an IAM user who has programmatic access to AWS. Go to the IAM console again and this time select \u0026ldquo;Users\u0026rdquo; from the menu. Click the \u0026ldquo;Add user\u0026rdquo; button. Give the user a name and select the \u0026ldquo;Programmatic access\u0026rdquo; check box to allow access to the API. Next, click the \u0026ldquo;Next: Permissions\u0026rdquo; button.\nAt this point it\u0026rsquo;s a good idea to add users to a group and then assign the permissions to the group instead of the user, but that choice is up to you. I clicked the \u0026ldquo;Create Group\u0026rdquo; button to add the user to create a group where I added a name. The important piece is to then select the policy you created earlier to assign it to the user or group.\nYou\u0026rsquo;ll see the permissions added.\nClick through the wizard to review and complete the permissions piece of the install. Click the \u0026ldquo;Create user\u0026rdquo; button.\nOn the last screen, be sure to copy the Access key ID and Secret access key. This is the only opportunity to get the secret access key. This also provides access to the environment, so protect it. Don\u0026rsquo;t post it on your blog or something foolish. :)\nSetup CloudEndure When you activate your CloudEndure license you\u0026rsquo;ll get a SaaS portal to log into. The first thing you\u0026rsquo;ll do is to create a new project. Do this by clicking the plus symbol.\nGive the project a name and then select which cloud in which you\u0026rsquo;ll be migrating. Click \u0026ldquo;Create Project\u0026rdquo;\nOnce you create the project you\u0026rsquo;ll see a message warning you that it\u0026rsquo;s not been setup yet. 
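Side note: the IAM policy, user, and access keys from the preparation step can also be created with a short script instead of console clicks. A rough boto3 sketch with placeholder names, attaching the policy straight to the user for brevity rather than through a group; treat the returned secret key just as carefully as before.

import json
import boto3

iam = boto3.client('iam')

# The same policy document CloudEndure shows in its setup wizard
policy_doc = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['ec2:*', 'elasticloadbalancing:*', 'cloudwatch:*', 'autoscaling:*',
                   'iam:GetUser', 'iam:PassRole', 'iam:ListInstanceProfiles',
                   'kms:ListKeys', 'mgh:*'],
        'Resource': '*',
    }],
}
policy_arn = iam.create_policy(
    PolicyName='CloudEndureMigration',
    PolicyDocument=json.dumps(policy_doc),
)['Policy']['Arn']

iam.create_user(UserName='cloudendure')
iam.attach_user_policy(UserName='cloudendure', PolicyArn=policy_arn)

# Programmatic access keys - the secret is only returned this one time
keys = iam.create_access_key(UserName='cloudendure')['AccessKey']
print(keys['AccessKeyId'])  # store the SecretAccessKey in a password manager, not a blog post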
Click \u0026ldquo;Continue\u0026rdquo; to get started with the project setup.\nThe first thing you\u0026rsquo;ll need to do in the setup window is add the AWS access keys which you copied from the AWS setup section above. Paste them in here and click Save. Also notice that if you weren\u0026rsquo;t sure about how to set up the AWS permissions, there are links to the policies so you can easily retrieve them from the wizard.\nNext, you\u0026rsquo;ll need to map your replication to your environment. For instance, I set my target to be the us-east-1 region of my AWS account. I also selected a subnet and a security group to apply to the instance when it fails over. You can do replication through VPN or Direct Connect, but if you\u0026rsquo;re OK with it for a test, you can use the public internet like I did. Proxy servers can also be used if you have that requirement. Once you\u0026rsquo;re done filling in your AWS mappings, click the \u0026ldquo;Save Replication Settings\u0026rdquo; button.\nYou\u0026rsquo;ll see a message saying that your project is set up but you still need to install the agent on your machine. If you click the \u0026ldquo;Show Me How\u0026rdquo; button, you\u0026rsquo;ll get some very straightforward examples of how to complete that process.\nNote here the Agent Installation token and your installers for both Windows and Linux.\nReplication Our project is set up, and AWS is ready for us to move servers. But we need to install an agent on the machine before we start replication. This is nice because it means it will work on physical or virtual machines migrating to the cloud. For this example, I\u0026rsquo;ve deployed a Windows Server 2016 VM in my vSphere lab. I logged into my VM and copied the installer provided in the \u0026ldquo;Show Me How\u0026rdquo; wizard.\nRun the installer on your machine and you\u0026rsquo;ll be prompted with some questions. First, you\u0026rsquo;ll need to provide the CloudEndure username or token we saw before. Next, you\u0026rsquo;ll need to pick which disks should be replicated. That\u0026rsquo;s it!\nIf you go back to the CloudEndure SaaS portal you\u0026rsquo;ll see your machine is now listed and that it\u0026rsquo;s replicating.\nOn the main dashboard, you\u0026rsquo;ll also notice the initial sync is starting on your machine, and you can get a glimpse of any other issues in your environment. Now you wait until the replication is complete. Depending on your WAN bandwidth, this could take some time. NOTE: At this point an EC2 instance is stood up in your AWS account that will serve as a replication endpoint. Be aware of this because you will need to pay for this instance as part of your normal cloud costs.\nMigrate Once your machine has replicated its data, you should see a machine in the \u0026ldquo;Require Attention\u0026rdquo; state on the dashboard. Don\u0026rsquo;t worry, this is a good thing. Before you do an actual failover, you should test it to make sure it\u0026rsquo;ll work the way you want it to before doing the real migration.\nBefore you migrate, it might be worth looking at the Blueprint. This is where you can specify the size of the instance to use, the networks it\u0026rsquo;ll be deployed on, whether it has a public IP, and so forth. You can update these settings at any time, as long as you do it BEFORE you do a migration of course.\nWhen you\u0026rsquo;re all set, it\u0026rsquo;s time to run a test failover. Go to your machines tab and click the machine you want to migrate. 
Click the \u0026ldquo;Launch Target Instance\u0026rdquo; button and then select \u0026ldquo;Test\u0026rdquo;. You\u0026rsquo;ll get a warning message about the process where you can click \u0026ldquo;Next\u0026rdquo;.\nSelect the point in time for your test and then select the \u0026ldquo;Continue with Test\u0026rdquo; button.\nYou can look at the job progress as it finishes replication and converts the data into your virtual machine. This process spins up a second EC2 instance temporarily to do the conversion.\nWhen the test is finished, you can look in your EC2 console and see 3 instances related to the process.\nReplication Instance which should still be running because it\u0026rsquo;s still handling the replication of your on prem instance. The converter which will terminate as the job finishes The migrated instance itself. If you look back at the CloudEndure Portal, you\u0026rsquo;ll notice you have one machine ready now. It\u0026rsquo;s been tested and ready to go. During this test process, the machine is still running on-prem and undisturbed.\nNow, you can destroy your test EC2 instance once you\u0026rsquo;re done with all of your testing. You\u0026rsquo;re now still replicating and ready for a failover. When you\u0026rsquo;re ready to do it, go to machines again and select the machine to failover. Select the \u0026ldquo;Launch Target Instance\u0026rdquo; button and this time select Recovery.\nRepeat the same process you did for the test recovery and wait for it to complete.\nSummary Again, I\u0026rsquo;m not a big fan of migrating workloads to the cloud, simply because the architecture is different in a cloud environment. Servers built for on-premises environments might not be able to take full advantage of the cloud so it\u0026rsquo;s worth re-architecting them when possible. However, I was pretty pleased with how easy it was to setup CloudEndure\u0026rsquo;s agents and the screens really walk you through what is needed to get things up and running. It\u0026rsquo;s also nice that CloudEndure can migrate to multiple cloud providers such as Azure, AWS, GCP and even other on-premises clouds. There are a lot of things I didn\u0026rsquo;t cover about the product in this post like failing back, etc but if you\u0026rsquo;re thinking of migrating workloads, it\u0026rsquo;s worth taking a look at CloudEndure for your project.\n","permalink":"https://theithollow.com/2018/03/05/migration-cloud-cloudendure/","summary":"\u003cp\u003eI\u0026rsquo;m a big advocate for building your cloud apps to take advantage of cloud features. This usually means re-architecting them so that things like AWS Availability Zones can be used seemlessly. But I also know that to get benefits of the cloud quickly, this can\u0026rsquo;t always happen. If you\u0026rsquo;re trying to reduce your data center footprint rapidly due to a building lease or hardware refresh cycle quickly approaching, then you probably need a migration tool to accomplish this task.\u003c/p\u003e","title":"Migration to the Cloud with CloudEndure"},{"content":"Reserved Instances are often used to reduce the price of Amazon EC2 instance on-demand pricing. If you\u0026rsquo;re not familiar with Reserved Instances, then you\u0026rsquo;re missing out. Reserved Instances, or RIs, are a billing construct used in conjunction with Amazon EC2 instances (virtual machines). The default usage on the AWS platform is the on-demand pricing in which you get billed by the hour or second with no commitments. 
Basically, when you decide to terminate an instance you stop paying for it.\nReserved Instances come into play when you have EC2 instances that will be running all the time and you can\u0026rsquo;t terminate them each night to save money. Reserved Instances let you pay a reduced price per hour for your instances, but you commit to paying for an entire year or for three years depending on the type of RI you purchase. These price decreases can be pretty significant (up to 75%), but remember that you\u0026rsquo;re committing to a fixed term, so if you don\u0026rsquo;t need those instances any longer, you\u0026rsquo;re still paying for the RI.\nBe Aware of How RIs Work There are a few things that you need to know about RIs before you consider using them. Reserved Instances are purchased with a list of attributes: class, instance type, platform, scope, tenancy and term.\nClass - Convertible or Standard\nInstance type - The instance family and instance size, such as an m4.large.\nPlatform - The operating system\nScope - RIs can be purchased for a region or for a specific Availability Zone.\nTenancy - Shared tenancy (default) or instances running on dedicated hardware\nWhy do we care about these attributes? Because you can\u0026rsquo;t transfer a Reserved Instance from one instance to another with differing attributes. A Reserved Instance will be applied to any instance that matches all of the attributes. If you have a single RI that matches two different instances, it will be applied to one of them. If you terminate one of those instances, the RI will be applied to the second instance automatically. However, if you were to terminate an instance to change its size, the attributes will no longer match and the RI won\u0026rsquo;t be applied unless there is another instance with the same attributes. (There\u0026rsquo;s a short scripted example of this matching a little further down.) There is some help here though: if you purchase a Convertible RI, you can modify the RI attributes through the API or console, but note that Convertible RIs provide a smaller discount in exchange for this flexibility.\nRIs also provide you with insurance on capacity. With On-Demand instances there is no guarantee that capacity will exist within a Region or AZ. For example, if an entire region went down and you needed to spin up your instances in another region, you may be fighting with other customers for the limited capacity in that region. If you purchased an RI in that region, you are guaranteed to have resources to spin up your instances.\nConsiderations Now that we understand the basics behind RIs, let\u0026rsquo;s explore some considerations that you should take into account before deciding to purchase an AWS Reserved Instance.\nFind the Steady State First and foremost, if you\u0026rsquo;re just starting to migrate servers to AWS or deploy new workloads, don\u0026rsquo;t purchase an RI right away. You\u0026rsquo;ll likely find out that the instance size you picked needs to be scaled either up or down. While this is easy to do within AWS, an RI is tied to an instance type, so changing the family or size will have pricing consequences for you. So ensure you\u0026rsquo;ve run your workloads for a while and have a good understanding of the capacity they need at steady state before purchasing your RIs.\nUse Reserved Instance Sharing in Multi-Account Environments If you have multiple accounts set up with a root level master billing account, consider whether or not to purchase the RIs at the root account or one of the subordinate accounts. 
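To make the attribute-matching discussion above a little more concrete, here's a rough boto3 sketch of shopping for an offering that lines up with a running instance's attributes. The instance type, platform, and term below are placeholders, and the purchase call is left commented out because it commits real money.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Look for one-year, convertible offerings matching the instance's attributes
# (omitting AvailabilityZone gives the regional scope)
offerings = ec2.describe_reserved_instances_offerings(
    InstanceType='m4.large',        # must match the family AND size you actually run
    ProductDescription='Windows',   # the platform attribute
    OfferingClass='convertible',    # 'standard' discounts more but cannot be modified later
    OfferingType='No Upfront',
    MinDuration=31536000,           # one year, expressed in seconds
    MaxDuration=31536000,
)['ReservedInstancesOfferings']

for offering in offerings:
    print(offering['ReservedInstancesOfferingId'], offering['FixedPrice'], offering['UsagePrice'])

# Purchasing looks like this - uncomment only when you mean it:
# ec2.purchase_reserved_instances_offering(
#     ReservedInstancesOfferingId=offerings[0]['ReservedInstancesOfferingId'],
#     InstanceCount=1,
# )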
If you purchase the RI at the root account level the RI can be shared across any of the other accounts within the organization. This is incredibly useful for ensuring that the pool of resources is very large so RIs can be allocated to any instance with attributes that match across any account. However, in some situations those individual accounts are owned by their own business unit and have their own budgets. In those cases you may want to turn off Reserved Instance sharing so you can specify which account gets the RI, and the benefits. Be aware here though, that if that account doesn\u0026rsquo;t use the RI, the other accounts can\u0026rsquo;t use it either.\nDetermine the Criticality of your Workload If the workload you\u0026rsquo;re running in AWS is mission critical you may want to consider an RI to ensure that no matter what, AWS will have resources available for you to spin that instance up if an AZ fails, or a region fails. Having snapshots, backups, AMIs etc replicated to other regions and infrastructure as code readily available to re-deploy applications is great. But don\u0026rsquo;t forget that AWS has finite resources as well, even if you don\u0026rsquo;t see the cloud this way. Don\u0026rsquo;t let an \u0026ldquo;Insufficient Capacity\u0026rdquo; error prevent your perfectly architected business continuity strategy from working correctly.\nDon\u0026rsquo;t Purchase an RI for Every Instance The public cloud provides a lot of options for elasticity. With the use of auto-scaling groups and load balancers we can automatically scale out our applications based on demand and then reduce them later when the demand subsides. When purchasing an RI consider buying for the steady state and use On-Demand (or spot pricing) for any of the additional capacity that might only be temporary.\nUse One-Year RIs Reserved Instances come in one year or three year options. Three year RIs give you pretty good cost reduction, but consider that AWS is always releasing new instance sizes and families. You might want to take advantage of these but if you have a three year RI this might be more difficult. Consider using the one-year RIs to give you the flexibility to re-evaluate your sizing each year.\nIf you Goof Up So what do you do if you accidentally purchased an RI for the wrong type or something. Well, If you contact AWS Support they might let you return it or swap it out for the right type but you can\u0026rsquo;t count on that. From what I\u0026rsquo;ve been told this is usually accepted one time but any future exceptions you should expect to own. There is an RI marketplace where you can sell your RIs if you no longer need them. Think of this like the craigslist of AWS RIs where you can buy and sell RIs based on your situation.\n","permalink":"https://theithollow.com/2018/02/19/aws-reserved-instance-considerations/","summary":"\u003cp\u003eReserved Instances are often used to reduce the price of Amazon EC2 instance on-demand pricing. If you\u0026rsquo;re not familiar with Reserved Instances, then you\u0026rsquo;re missing out. Reserved Instances, or RIs, are a billing construct used in conjunction with Amazon EC2 instances (virtual machines). The default usage on the AWS platform is the on-demand pricing in which you get billed by the hour or second with no commitments. 
Basically, when you decide to terminate an instance you stop paying for it.\u003c/p\u003e","title":"AWS Reserved Instance Considerations"},{"content":"Multi-Factor Authentication or MFA, is a common security precaution used to prevent someone from gaining access to an account even if an attacker has your username and password. With MFA you must also have a device that generates a time based one time password (TOTP) in addition to the standard username/password combination. The extra time it might take to login is well worth the advantages that MFA provides. Having your AWS account hijacked could be a real headache.\nSetting up MFA Setting it up is pretty easy. First login to your AWS account. I\u0026rsquo;m security my root user account so I\u0026rsquo;m going to the IAM console and looking at the dashboard. As you can see I have a very unsightly orange/yellow exclamation mark in my security status dashboard. If you\u0026rsquo;re like me, we can\u0026rsquo;t have any of those hanging around. Click the dropdown and then select the \u0026ldquo;Manage MFA\u0026rdquo; button.\nA dialogue window will pop up asking what kind of device to use. I\u0026rsquo;m using my smartphone which isn\u0026rsquo;t quite as secure as a hardware MFA device, but still pretty good. Click the \u0026ldquo;Next Step\u0026rdquo; button.\nThe next screen is just letting you know that you\u0026rsquo;ll need to install an AWS MFA compatible application on your phone. I\u0026rsquo;m using OKTA Verify which works fine, but there are others you can download such as Google Authenticator or Authy. For more information on this see: https://aws.amazon.com/iam/details/mfa/.\nOnce you\u0026rsquo;ve installed an MFA app on your phone click the \u0026ldquo;Next Step\u0026rdquo; button.\nOn the next screen you\u0026rsquo;ll be given a QR code to scan with your MFA application that was just installed on your device. Scan the QR code that pops up on the screen. After this, you\u0026rsquo;ll need to put in the next two codes that your MFA app provides so that it sync\u0026rsquo;s with AWS. When you\u0026rsquo;re done with this step, click the \u0026ldquo;Activate Virtual MFA\u0026rdquo; button.\nNOTE: You should screenshot the QR code and store it in a SAFE PLACE! If you lose your phone or something or need to reset your MFA, you\u0026rsquo;ll need to re-scan this QR code. You can\u0026rsquo;t retrieve this QR code again from the AWS console so it\u0026rsquo;s up to you to store it in a safe place. I store mine within my 1Password vault along with my AWS credentials.\nIf all went well you\u0026rsquo;ll get a success message.\nIf we look at the IAM dashboard again, we\u0026rsquo;ll see our green checkmarks and we\u0026rsquo;re good to go.\n\u0026ldquo;If the light is green, the trap is clean\u0026rdquo; -Ray Stanz\nResult When we go to login to the AWS account the next time, we\u0026rsquo;ll be prompted to enter in a one time password after entering the username and password.\n","permalink":"https://theithollow.com/2018/02/12/setup-mfa-aws-root-accounts/","summary":"\u003cp\u003eMulti-Factor Authentication or MFA, is a common security precaution used to prevent someone from gaining access to an account even if an attacker has your username and password. With MFA you must also have a device that generates a time based one time password (TOTP) in addition to the standard username/password combination. The extra time it might take to login is well worth the advantages that MFA provides. 
Having your AWS account hijacked could be a real headache.\u003c/p\u003e","title":"Setup MFA for AWS Root Accounts"},{"content":"There is news in the backup world today. Rubrik has acquired startup company Datos IO.\nWho is Datos IO? Datos IO was founded in 2014 and focuses on copy data management of distributed scale out databases purpose built for the cloud. The reason Datos IO is different from the common backup solutions we\u0026rsquo;re accustomed to seeing (Commvault, DataDomain, etc) is that they are building a solution from the ground up that tackles the problems of geo-dispersed scale out database which are becoming commonplace in the cloud world. Think about databases that scale multiple continents, and multiple clouds even.\nAccording to Datos IO\u0026rsquo;s own website:\nDatos IO provides the industry’s first cloud-scale, application-centric, data management platform enabling organizations to protect, mobilize, and monetize all their application data across private cloud, hybrid cloud, and public cloud environments.\nDatos IO has several products including their RecoverX product which can leverage datacenter aware backups of distributed MongoDB databases that are shared across continents. The RecoverX product can be used to backup a sharded database from within a single geographic location which can be a real struggle especially with General Data Protection Regulations (GDPR).\nIn late November of 2017, Datos IO was added to CRN\u0026rsquo;s 2017 list of emerging vendors in the storage category for their RecoverX product. Datos IO also received bronze in the Enterprise Software category of the best in biz awards.\nWhat Does Rubrik Want With Another Backup Company? Rubrik has had a bit of a meteoric rise since it was (also) founded in 2014. While Datos IO was still in it\u0026rsquo;s seed funding round according to Crunchbase, Rubrik has had several rounds of funding. Rubrik\u0026rsquo;s cash rich bank accounts, might have been burning a hole in their pockets but my guess is that CEO Bipul Sinha has a very strategic vision for how this company can help the Rubrik brand, in the cloud marketplace.\nThe backup solution that Datos IO provides seems like a pretty natural fit with the existing Rubrik products. Both companies were founded in 2014 and looking to revolutionize the way traditional backups were completed. Both companies use native cloud storage for archival or backup. Both companies have cloud based backup solutions that require no hardware.\nThe acquisition of Datos IO should extend Rubrik’s platform to include protecting mission-critical cloud applications and NoSQL databases. Rubrik is likely trying to bolster their existing product with proven technology for backing up geo-distributed databases, giving them a good one-two punch in the cloud backup market. Time will tell how this acquisition will pan out for the two companies.\n","permalink":"https://theithollow.com/2018/02/06/rubrik-acquires-datos-io/","summary":"\u003cp\u003eThere is news in the backup world today. Rubrik has acquired startup company Datos IO.\u003c/p\u003e\n\u003ch1 id=\"who-is-datos-io\"\u003eWho is Datos IO?\u003c/h1\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2018/02/datosio1.png\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2018/02/datosio1-300x73.png\"\u003e\u003c/a\u003e Datos IO was founded in 2014 and focuses on copy data management of distributed scale out databases purpose built for the cloud. 
The reason Datos IO is different from the common backup solutions we\u0026rsquo;re accustomed to seeing (Commvault, DataDomain, etc) is that they are building a solution from the ground up that tackles the problems of geo-dispersed scale out database which are becoming commonplace in the cloud world. Think about databases that scale multiple continents, and multiple clouds even.\u003c/p\u003e","title":"Rubrik Acquires Datos IO"},{"content":"AWS Organizations is a way for you to organize your accounts and have a hierarchy not only for bills to roll up to a single paying account, but also to set up a way to add new accounts programmatically.\nFor the purposes of this discussion, take a look at my AWS lab account structure.\nFrom the AWS Organizations Console we can see the account structure as well.\nI need to create a new account in a new OU under my master billing account. This can be accomplished through the console, but it can also be done through the AWS CLI, which is what I\u0026rsquo;ll do here. NOTE: This can be done through the API as well which can be really useful for automating the building of new accounts.\nPermissions Prep Before we start issuing commands there are some prerequisites that need to be met. To begin, we\u0026rsquo;ll need to have a login that has permissions for Organizations:CreateAccount. Since I\u0026rsquo;ll be doing additional work such as moving accounts around and creating OUs, I\u0026rsquo;ve created an AWS policy for OrganizationalAdmins and given my user full permissions on the Organizations service.\nI also want to mention that our CLI connection must be made to the root account within AWS Organizations.\nCreate A New Account Now that we\u0026rsquo;ve got our permissions taken care of, open up a terminal and connect to the Master Billing Account as the user who has permissions to create accounts and modify organizations.\nFrom here we\u0026rsquo;re going to run our AWS CLI command to create a new account.\naws organizations create-account --email user@domain.com --account-name [name] Here is a screenshot of what happened when I created my account.\nThis command starts the account creation and, as you can see, some return data comes back showing the status as \u0026ldquo;IN_PROGRESS\u0026rdquo;.\nIf we want to check the status of the account creation we can run the following command and insert the requestID which was returned by the create-account command.\naws organizations describe-create-account-status --create-account-request-id [requestid] Here is my screenshot of the describe command. We can see that when I checked it again, the status was \u0026ldquo;SUCCEEDED\u0026rdquo;.\nAt this point the account has been created. You should get an email to the address specified with further instructions. The account should also be built with a role named \u0026ldquo;OrganizationAccountAccessRole\u0026rdquo;, which will allow you to do a role switch into that account from the root account.\nCreate a New OU Now that the account has been created we need to create a new OU. To do this from the AWS CLI we\u0026rsquo;ll use the \u0026ldquo;create-organizational-unit\u0026rdquo; command, but first we need to find out the ID of the root.\nTo find the root ID, run the following:\naws organizations list-roots Once you run that command it should return a list of your organization\u0026rsquo;s roots. 
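If you are scripting this end to end, the root ID can be pulled straight out of that response. Here is a rough sketch; the query path follows the list-roots response shape and the example ID is made up.\n# Grab the ID of the first (and usually only) root in the organization.\nROOT_ID=$(aws organizations list-roots --query 'Roots[0].Id' --output text)\necho $ROOT_ID # prints something like r-ab12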
In my case there is only one root and we\u0026rsquo;re looking for an ID that begins with \u0026ldquo;r\u0026rdquo; and has at least 4 characters after it.\nWe will now create the new OU by providing it a new name and also passing in the root as the parent.\naws organizations create-organizational-unit --parent-id [parentID] --name [Name] Here is a screenshot of my commands for both listing the roots and adding a new OU under the root.\nMove Account into New OU Now that the account and OU are created, we can move the account into the appropriate OU. To do this we\u0026rsquo;ll use the \u0026ldquo;move-account\u0026rdquo; command. You\u0026rsquo;ll need to pass in the account ID, the parent ID (the root we found earlier beginning with \u0026ldquo;r\u0026rdquo;), and the OU ID, which was returned in the create-organizational-unit command and begins with \u0026ldquo;ou\u0026rdquo;.\naws organizations move-account --account-id [accountID] --source-parent-id [rootID] --destination-parent-id [OU ID] Here is a screenshot of my commands in the CLI.\nResults Now if we look back in our AWS Console we\u0026rsquo;ll see our new account created and listed under the appropriate OU, just as we were hoping for.\n","permalink":"https://theithollow.com/2018/02/05/add-new-aws-account-existing-organization-cli/","summary":"\u003cp\u003eAWS Organizations is a way for you to organize your accounts and have a hierarchy not only for bills to roll up to a single paying account, but also to set up a way to add new accounts programmatically.\u003c/p\u003e\n\u003cp\u003eFor the purposes of this discussion, take a look at my AWS lab account structure.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2018/02/AWS-AcctSetup0.png\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2018/02/AWS-AcctSetup0.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eFrom the AWS Organizations Console we can see the account structure as well.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2018/02/AWS-AcctSetup1-mask.png\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2018/02/AWS-AcctSetup1-mask.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eI need to create a new account in a new OU under my master billing account. This can be accomplished through the console, but it can also be done through the AWS CLI, which is what I\u0026rsquo;ll do here. NOTE: This can be done through the API as well which can be really useful for automating the building of new accounts.\u003c/p\u003e","title":"Add a New AWS Account to an Existing Organization from the CLI"},{"content":"In a previous post, we looked at how to use change sets with CloudFormation. This post covers how to use change sets with a nested CloudFormation Stack.\nIf you\u0026rsquo;re not familiar with nested CloudFormation stacks, it is just what it sounds like. A root stack or top level stack will call subordinate or child stacks as part of the deployment. These nested stacks could be deployed as a standalone stack or they can be tied together by using the AWS::CloudFormation::Stack resource type. Nested stacks can be used to deploy entire environments from the individual stacks below it. In fact a root stack may not deploy any resources at all other than what comes from the nested stacks. 
An example of a commons stacking method might be to have a top level stack that deploys a VPC, while a nested stack is responsible for deploying subnets within that stack. You could keep chaining this together to deploy EC2 instances, S3 buckets or whatever you\u0026rsquo;d like.\nDeploying a Change Set for a Nested Stack As you can see from the first screenshot below, I have a nested stack that deployed a subnets stack and the root stack is my VPC-Deploy stack. For this example, lets assume I need to modify my subnets. Pretend for a second, I goofed up the IP addressing. Now from our previous post, you might just go and assign a change set to the subnets stack, but since it is part of a nested stack we want to make sure not to break that chain. Remember if we later need to update the VPC stack, we\u0026rsquo;ll want to make sure that we don\u0026rsquo;t break the nested stack as well.\nHere\u0026rsquo;s the problem. If we attempt to deploy a change set on our VPC-Deploy stack, one of the changes listed will be to subnets, but you won\u0026rsquo;t see what those changes are. To test and make sure that only the changes you want to be made are staged, let\u0026rsquo;s perform the first parts of deploying a changes set on our subnets stack, but we WILL NOT execute it.\nSo as we\u0026rsquo;ve done previously, create a new change set for our nested stack.\nNOTE: When you attempt to deploy a change set on a nested stack a warning message will pop up reminding you that this could cause your stack to be out of sync since it\u0026rsquo;s a nested stack. Continue through this process, but remember not to deploy the change.\nSelect your template.\nAdd your details.\nAdd your tags and set the permissions appropriately.\nReview and click \u0026ldquo;Create Change Set\u0026rdquo; button.\nWhen we review the changes we can see that I\u0026rsquo;ll be replacing two subnets and two route tables.\nAt this point we know that the changes that we had wanted to make are reflected correctly in CloudFormation. If we saw changes in the previous screen that we didn\u0026rsquo;t want to make, we would know it at this point.\nReally Deploy a Change Set at the Root Stack Now that we\u0026rsquo;ve tested that, let\u0026rsquo;s actually set the change set on our root stack. Select it from the list and go through the same process with our root template. Remember that the child template is probably the only thing that changed, so the root template should be the same. Once we go through all of the screens, notice what the stack update looks like. Here we see a single change, not on the subnets and the routing tables themselves, but rather the subnets CloudFormation stack. This is the reason we pushed the change set to the child stack first, so we could see what those changes would be. The root stack update makes this visibility difficult.\nDeploy the change set on the root stack and the nested stacks should be updated accordingly.\nSummary Using change sets on a nested CloudFormation stack isn\u0026rsquo;t much different from using them on a stand alone stack, but in order to get the same visibility, testing them out but not deploying them, on the nested stack is an easy way to achieve this. 
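For reference, the same stage-but-do-not-execute flow can be scripted as well. A rough sketch with the AWS CLI follows; the stack names, change set names, and template files are hypothetical and should be replaced with your own.\n# Stage a change set on the child stack purely to review it. Do NOT execute this one.\naws cloudformation create-change-set --stack-name subnets --change-set-name subnet-review --template-body file://subnets.yaml\naws cloudformation describe-change-set --stack-name subnets --change-set-name subnet-review\n# After reviewing, clean up the child change set, then stage and execute the change on the root stack.\naws cloudformation delete-change-set --stack-name subnets --change-set-name subnet-review\naws cloudformation create-change-set --stack-name VPC-Deploy --change-set-name vpc-update --template-body file://vpc-deploy.yaml\naws cloudformation execute-change-set --stack-name VPC-Deploy --change-set-name vpc-update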
Just be careful not to deploy them so that the root and child stacks don\u0026rsquo;t get out of sync.\n","permalink":"https://theithollow.com/2018/01/29/using-change-sets-nested-cloudformation-stacks/","summary":"\u003cp\u003eIn a \u003ca href=\"/2018/01/22/introduction-aws-cloudformation-change-sets/\"\u003eprevious post\u003c/a\u003e, we looked at how to use change sets with CloudFormation. This post covers how to use change sets with a nested CloudFormation Stack.\u003c/p\u003e\n\u003cp\u003eIf you\u0026rsquo;re not familiar with nested CloudFormation stacks, it is just what it sounds like. A root stack or top level stack will call subordinate or child stacks as part of the deployment. These nested stacks could be deployed as a standalone stack or they can be tied together by using the AWS::CloudFormation::Stack resource type. Nested stacks can be used to deploy entire environments from the individual stacks below it. In fact a root stack may not deploy any resources at all other than what comes from the nested stacks. An example of a commons stacking method might be to have a top level stack that deploys a VPC, while a nested stack is responsible for deploying subnets within that stack. You could keep chaining this together to deploy EC2 instances, S3 buckets or whatever you\u0026rsquo;d like.\u003c/p\u003e","title":"Using Change Sets with Nested CloudFormation Stacks"},{"content":"If you\u0026rsquo;ve done any work in Amazon Web Services you probably know the importance of CloudFormation (CFn) as part of your Infrastructure as Code (IaC) strategy. CloudFormation provides a JSON or YAML formatted document which describes the AWS infrastructure that you want to deploy. If you need to re-deploy the same infrastructure across production and development environments, this is pretty easy since the configuration is in a template stored in your source control.\nNow that we are deploying our infrastructure from CFn templates, we have to consider what we do when a small part of that infrastructure needs a change. Perhaps we can redeploy the entire environment, but this might not be feasible in all cases. Also, if we\u0026rsquo;re making a small change, it might take a while to redeploy everything when we really only need to tweak the settings a little.\nChange Sets Thankfully, AWS has \u0026ldquo;Change Sets\u0026rdquo; which allows us to modify an existing CloudFormation Stack with a new template. If you\u0026rsquo;re not familiar with a stack, think of this as the deployed object that comes from a CloudFormation Template. For example, if you had a template that deployed four EC2 instances, when you deploy the template, it will create a stack that represents the four servers as part of a deployment. If you delete the stack, you remove all the resources that it described.\nChange Sets are created by building a new CloudFormation Template (or modifying the original) and creating a change on the original stack. You can then view the changes that will be deployed before you decide to execute the change.\nCreating a Change Set Let\u0026rsquo;s take a look at the process from the console. First we need to have a CloudFormation stack that we want to modify. In the example below, I\u0026rsquo;m going to modify a CFn stack that deployed a Lambda function and an IAM policy document. Assume that I forgot to add a permission to the policy and I want to fix that without re-deploying my whole CFn template.\nSelect the CloudFormation Stack that you want to modify.\nthen click the Actions drop down. 
Select \u0026ldquo;Create Change Set for Current Stack\u0026rdquo; from the list.\nFrom this point forward, it should look a lot like a normal CloudFormation deployment if you do it from the console. The wizard that opens will ask for the template to use as the change set. Select your new CloudFormation Template. NOTE: You can use the same template that was used to deploy the resources if you want, which should re-deploy the exact same settings unless you\u0026rsquo;ve modified the template. In this case I\u0026rsquo;ve selected a newly updated template with my new IAM policy permissions.\nOn the next screen enter a change set name (Instead of a Stack name like a standard CFn deployment) and a description. Also, if you\u0026rsquo;re CFn template has input parameters, enter those here as well.\nThe next screen will ask for additional options such as adding any tags and specifying a role with permissions to deploy the CFn template resources.\nOn the last screen, you have an opportunity to review your settings before clicking the \u0026ldquo;Create Change Set\u0026rdquo; button.\nReview the Change Set At this point, nothing in your environment has changed yet. You created a change set, but that doesn\u0026rsquo;t deploy your code, it just stages it for the upcoming deployment. If we look at our CloudFormation Stacks again, we can select the stack we created the change set for and click the \u0026ldquo;Change Sets\u0026rdquo; tab. We\u0026rsquo;ll notice that our Change Set is listed under this tab. Click the name of the change set to open up the change set window.\nIf we look at the change set, we can see under the changes section what will actually be modified. In my case the Change Set will modify my Execution Role and my Lambda Function. Also, under the \u0026ldquo;Replacement\u0026rdquo; field, you\u0026rsquo;ll see False, meaning that the object doesn\u0026rsquo;t need to be replaced, it can just be edited in place by CloudFormation. Pretty neat huh? Now we can stage any changes we need for the environment ahead of time and assess the impact right from this screen. Pretty handy for System Administrators who want to get as much work done as possible before a change window starts. This is also great for figuring out what components might need a change request opened in your change management system.\nExecute the Change Now, from the change set screen, press the \u0026ldquo;Execute\u0026rdquo; button to push the code changes. If you watch your CloudFormation Stacks, you\u0026rsquo;ll notice your stack start to update.\nIn a moment, you\u0026rsquo;ll see the stack has been updated successfully and if we look in the change sets tab again we\u0026rsquo;ll notice that a change set has been applied to our stack. Summary Change is going to happen so any Infrastructure as Code initiative needs to have a plan to handle it when those changes arise. Can you re-deploy? Should you update it manually? There are reasons that you wouldn\u0026rsquo;t want to do either of those things. Change Sets allow you to still manage your environment through CloudFormation, but make changes if they need to occur.\n","permalink":"https://theithollow.com/2018/01/22/introduction-aws-cloudformation-change-sets/","summary":"\u003cp\u003eIf you\u0026rsquo;ve done any work in Amazon Web Services you probably know the importance of CloudFormation (CFn) as part of your Infrastructure as Code (IaC) strategy. CloudFormation provides a JSON or YAML formatted document which describes the AWS infrastructure that you want to deploy. 
If you need to re-deploy the same infrastructure across production and development environments, this is pretty easy since the configuration is in a template stored in your source control.\u003c/p\u003e","title":"An Introduction to AWS CloudFormation Change Sets"},{"content":"If you\u0026rsquo;ve been in technology for a while, you\u0026rsquo;ve probably had to go through a hardware refresh cycle at some point. These cycles usually meant taking existing hardware, doing some capacity planning exercises and setting out to buy new hardware that is supported by the vendors. This process was usually lengthy and made CIOs break into a cold sweat just thinking about paying for more hardware, that\u0026rsquo;s probably just meant to keep the lights on. Whenever I first learned of a hardware refresh cycle, my first thoughts were \u0026ldquo;Boy, this sounds expensive!\u0026rdquo;\nI\u0026rsquo;m sure that hardware vendors loved to hear about hardware refresh cycles. Sales teams knew they would probably make a new sale and better yet, the customer probably called them up to do it. Very little work on their side,but things are much different now due to so many customers moving to the cloud where the hardware is not the customer\u0026rsquo;s concern any longer.\nCloud Refresh Cycles So hardware refresh cycles are dead now, right? I mean, you\u0026rsquo;re in the cloud now, and no longer need to worry about this kind of nonsense. Well, sorry to burst your bubble here, but they aren\u0026rsquo;t quite dead just because you\u0026rsquo;re in the cloud. Now, while capacity planning becomes much easier, and calculating budgets for new hardware is much simpler, you still need to review your infrastructure and make sure you\u0026rsquo;re up to date, but now for an entirely different reason. Saving Money!!!\nOh! I have your attention now huh? I thought so. In the cloud world, upgrades can save you money. Take a look at a few examples here just from the AWS platform.\nI used the simple monthly calculator provided by AWS and looked up the price for a Linux instance in US-East-1 for m3.large, m4.large, and m5.large. These are similar sized instances but three different generations of hardware. M3 instances being the oldest of the three and M5 the newest of the three. What you\u0026rsquo;ll find is that the size of the instances is virtually the same, but the hardware that comprises the instances is faster. That makes pretty good sense, but also notice that the prices of those instances gets cheaper as it gets newer.\n[table id=6 /]\nSo let me reiterate that point one more time. If you upgrade from older, slower hardware to the newer faster hardware, you\u0026rsquo;ll not only be gaining performance, but you\u0026rsquo;ll be saving money while you do it. Pretty neat huh?\nLet\u0026rsquo;s look at one more example. Here we show the price of S3 standard storage vs S3 Reduced Redundancy storage. Now this isn\u0026rsquo;t a generation thing, but Amazon isn\u0026rsquo;t putting a lot of effort into the Reduced Redundancy Storage option and would prefer to spend time on the standard S3 storage that is used for most workloads. Notice here that as time as gone on, Reduced Redundancy Storage is actually more expensive than the standard storage even though it provides 7 Nines less durability than standard.\n[table id=7 /]\nAction Plan Now that you know this, it is a good idea to have an action plan and pay attention to new releases that come out in the cloud. 
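The mechanics of moving an existing instance to a newer family are small compared to the potential savings. A rough sketch with the AWS CLI is below, using a hypothetical instance ID; the instance needs to be EBS-backed and stopped for the type change, and newer families like m5 expect ENA-capable AMIs, so test on something non-critical first.\n# Stop the instance and wait until it is fully stopped.\naws ec2 stop-instances --instance-ids i-0123456789abcdef0\naws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0\n# Change the instance to the newer generation, then bring it back up.\naws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type Value=m5.large\naws ec2 start-instances --instance-ids i-0123456789abcdef0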
Sure you can keep running your workloads without messing with them until the cloud provider discontinues your generation of hardware. But you could take advantage of these new performance enhancements from better hardware while saving money along the way. Set a schedule to review your infrastructure and determine if and when you need to upgrade your own cloud infrastructure. Maybe this is a good time to leverage a partner to help education you on the changes that are constantly happening in the cloud.\nHow much money are you wasting?\n","permalink":"https://theithollow.com/2018/01/16/cloud-world-cheaper-upgrade/","summary":"\u003cp\u003eIf you\u0026rsquo;ve been in technology for a while, you\u0026rsquo;ve probably had to go through a hardware refresh cycle at some point. These cycles usually meant taking existing hardware, doing some capacity planning exercises and setting out to buy new hardware that is supported by the vendors. This process was usually lengthy and made CIOs break into a cold sweat just thinking about paying for more hardware, that\u0026rsquo;s probably just meant to keep the lights on. Whenever I first learned of a hardware refresh cycle, my first thoughts were \u0026ldquo;Boy, this sounds expensive!\u0026rdquo;\u003c/p\u003e","title":"In the Cloud World, It's Cheaper to Upgrade"},{"content":"Over recent years, Infrastructure as Code (IaC) has become sort of a utopian goal of many organizations looking to modernize their infrastructure. The benefits to IaC have been covered many times so I won\u0026rsquo;t go into too much detail, but the highlights include:\nReproducibility of an environment Reduction in deployment time Linking infrastructure deployments with application deployments Source control for infrastructure items Reduction of misconfiguration The reasoning behind storing all of your infrastructure as code is valid and a worthy goal. The agility, stability, and deployment speeds achieved through IaC can prove to have substantial benefits to the business as a whole.\nIaC is a Commitment to the Process Now for the bad news. If you\u0026rsquo;re going to set out on a path towards infrastructure as code, you can\u0026rsquo;t waiver on when you\u0026rsquo;ll be using it. You must commit to the use of IaC for your infrastructure tasks. Let me give you an example so that you understand why it\u0026rsquo;s important to always use IaC if the environment was created this way.\nAssume you\u0026rsquo;ve built out a great Amazon Web Services environment through the use of CloudFormation (CFn) templates. These templates are either JSON or YAML documents that exist as a desired state for your deployment. Your environment includes, VPCs, subnets, encryption keys, logging standards, monitoring configurations, and several workloads. All of these items have been deployed through a CFn template and exist in your environment today. Now suppose that the application owners have identified a problem and need to make a change. Perhaps they found out that they need to have access to a server located in an ancillary data center some place to make their app work properly. Now this is an emergency request because [reasons] (There are always a list of reasons why these things are emergencies, even when they aren\u0026rsquo;t). So you deploy a VPN tunnel through the AWS console and get the application working again.\nEverything in the environment is running as expected now, but there is a problem. 
We didn\u0026rsquo;t commit to IaC so our environment isn\u0026rsquo;t completely defined by code. You might think, big deal, but what if we need to re-deploy the environment? Assume that later we need to re-deploy the environment for a lab or development environment. The new environment won\u0026rsquo;t have the VPN tunnel created earlier because it was done manually. So this would be a manual configuration change that would need to be made again if it wasn\u0026rsquo;t added to your code. What\u0026rsquo;s worse is if you apply a change set later on to modify your configuration, you might find that it deletes your VPN tunnels because they weren\u0026rsquo;t defined in the code. Note: this is unlikely to affect the VPN tunnel in this situation, but in other cases, it\u0026rsquo;s very likely to undo your manual configurations.\nObservations Over the past year I\u0026rsquo;ve seen companies set forth on new cloud initiatives with a desire to do the right things, like employing infrastructure as code for their deployments. But if this is the goal, team members should embrace the process of putting any configuration changes into the code base. This is a tough pill to swallow for some organizations because, just like in the situation described above, it may take longer to make a small configuration change to fix or improve a system. The benefit of having the changes committed to code is that the system can be re-deployed from scratch very easily and version controlled if necessary.\nSuggestions First of all, decide on whether or not infrastructure as code is a worthy objective. Maybe your organization just isn\u0026rsquo;t ready to tackle this. Maybe you have a small infrastructure footprint or don\u0026rsquo;t deploy workloads very often, or maybe you just don\u0026rsquo;t have the coding skillsets yet. Maybe IaC just isn\u0026rsquo;t a good fit for you at this time.\nSecond, if you decide to go forward, commit to doing it for all aspects of that environment. Switching wholeheartedly to IaC for everything is a pretty tough thing to do all at once, but you might have a cloud project coming up where you decide everything in that small environment will be IaC. That way you can slowly but wholeheartedly commit to the concept for a smaller subset of the environment. Once you\u0026rsquo;re comfortable with that environment, the coding experience you\u0026rsquo;ve gained will help with future environments and should ease the way for them.\nGood luck with your coding! Thanks for reading.\n","permalink":"https://theithollow.com/2018/01/08/commit-infrastructure-code/","summary":"\u003cp\u003eOver recent years, Infrastructure as Code (IaC) has become sort of a utopian goal of many organizations looking to modernize their infrastructure. The benefits to IaC have been covered many times so I won\u0026rsquo;t go into too much detail, but the highlights include:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eReproducibility of an environment\u003c/li\u003e\n\u003cli\u003eReduction in deployment time\u003c/li\u003e\n\u003cli\u003eLinking infrastructure deployments with application deployments\u003c/li\u003e\n\u003cli\u003eSource control for infrastructure items\u003c/li\u003e\n\u003cli\u003eReduction of misconfiguration\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eThe reasoning behind storing all of your infrastructure as code is valid and a worthy goal. 
The agility, stability, and deployment speeds achieved through IaC can prove to have substantial benefits to the business as a whole.\u003c/p\u003e","title":"Commit to Infrastructure As Code"},{"content":"It\u0026rsquo;s the beginning of a whole new year. Hopefully you\u0026rsquo;ve gotten some time off recently to recharge your batteries a bit, before heading back to the grind. While you\u0026rsquo;re getting back into the ol\u0026rsquo; routine, maybe this is a good time to consider whether or not that routine is still worthwhile?\nAre you Happy With Your Job? I t\u0026rsquo;s easy to get into a funk where you roll out of bed each day to do the same task or face the same challenges over and over again. Maybe there are things in your day to day grind that you hate, but do them anyway, because it\u0026rsquo;s part of your job. No big deal, everyone has these sorts of chores. I\u0026rsquo;m sure that nobody loves every single part of their job. But if you\u0026rsquo;ve gotten a break from work and you can\u0026rsquo;t bear to think about going back to that routine, maybe that should tell you something about your job. Are you really happy doing what you\u0026rsquo;re doing, or are you doing it because it\u0026rsquo;s a steady paycheck? Are you doing it because it\u0026rsquo;s what you know, and change is hard? Are you doing it because you feel like you have no other choice?\nThere are a ton of reasons to stay with a company or stay in a position, but really at the most basic level, ask yourself this: \u0026ldquo;Am I happy doing what I\u0026rsquo;m doing?\u0026rdquo; If the answer is \u0026ldquo;Yes\u0026rdquo;, then take note of this answer and remember this moment, for when you have those awful days at work when you\u0026rsquo;re ready to flip over a table and walk out the door.\nYou\u0026rsquo;re happy with your job, you just had a bad day, now let\u0026rsquo;s move on.\nNow if your answer was \u0026ldquo;No\u0026rdquo;, then start asking yourself why you\u0026rsquo;re not happy with your job and what could make your work life more enjoyable. Are you away from home too much? Are you too busy to do your job well? Are you not happy with management? Are you grumpy because of a position you\u0026rsquo;re in and not the company itself? These questions might help you figure out what actions to take to make 2018 a better year for you in your job role. Maybe you can have a discussion about a career path with your manager? Maybe you can talk to your boss about a role change so you can be home more? Maybe that even means a pay decrease, but you\u0026rsquo;d be in a happier place overall?\nWhat Else Aren\u0026rsquo;t You Happy With? Do you feel like others are passing you buy? Have you been slacking too much and not working on certifications needed to move you into a happier place? What things did you neglect to do last year that you can resolve to do this year? Now, when you\u0026rsquo;ve got that list, put a quick task list or a plan together to figure out how you want to achieve them. And don\u0026rsquo;t worry if you don\u0026rsquo;t achieve all of them either. Give yourself a stretch goal that might be virtually unobtainable this year. You know its a long shot to hit so if you don\u0026rsquo;t make the goal, it shouldn\u0026rsquo;t crush your spirit, but you should make a plan to get there regardless. The journey towards reaching that goal will likely give you plenty of value in other things.\nFor example, say you set a stretch goal of getting a VCDX this year. 
Maybe you won\u0026rsquo;t get it, but along the way, you figure out how to be confident in front of a room full of people, or you figure out how to build better designs because of what you\u0026rsquo;ve learned. There are tons of benefits in trying things, even if you haven\u0026rsquo;t completed them.\nRemember, It\u0026rsquo;s OK to Fail We\u0026rsquo;re not successful all of the time. It\u0026rsquo;s important to realize that just because you didn\u0026rsquo;t reach your goal doesn\u0026rsquo;t mean something awful is about to happen to you, or that you are a failure. Some of the most important things that I\u0026rsquo;ve learned came from failing to achieve a goal. Usually this comes with a certain amount of aggravation and reflection but ultimately ends up to be a good experience in the end. Sometimes there are things outside of our control that get in the way of our goals. Maybe a sick family member, maybe you realized that time spent towards training for a certification might be time away from family, and that is the thing that makes you most happy. Just remember that it\u0026rsquo;s OK to fail and it\u0026rsquo;s OK to admit that you\u0026rsquo;ve failed. Don\u0026rsquo;t take it from me:\nPass on what you have learned. Strength, mastery. But weakness, folly, failure also. Yes, failure most of all. The greatest teacher, failure is. Luke, we are what they grow beyond. That is the true burden of all masters. - Yoda\nSummary I know what you\u0026rsquo;re thinking, \u0026ldquo;I can make life decisions any time of the year and a new year has no impact on my ability to do that.\u0026rdquo; Well, if that\u0026rsquo;s what you\u0026rsquo;re thinking, then you\u0026rsquo;re right, but for many of us, the end of the year comes with some time away from our daily routines. Out of ear shot from the corporate world for even a little bit where you can see the forest for the trees. You can come up for air for a second and take a look around, and maybe you\u0026rsquo;ll see what things are really important to you.\nTake a second to figure out what makes you happy, and what doesn\u0026rsquo;t. When you figure that out, then you can make your plan on what changes you want to make this year. Maybe they are just small tweaks, or maybe you\u0026rsquo;re looking for a new career. Either way, it\u0026rsquo;s nice to be able to identify a path forward even if there is a chance of failure along the way. Good luck on your 2018 adventure.\n","permalink":"https://theithollow.com/2018/01/01/new-opportunities-2018/","summary":"\u003cp\u003eIt\u0026rsquo;s the beginning of a whole new year. Hopefully you\u0026rsquo;ve gotten some time off recently to recharge your batteries a bit, before heading back to the grind. While you\u0026rsquo;re getting back into the ol\u0026rsquo; routine, maybe this is a good time to consider whether or not that routine is still worthwhile?\u003c/p\u003e\n\u003ch1 id=\"are-you-happy-with-your-job\"\u003eAre you Happy With Your Job?\u003c/h1\u003e\n\u003cp\u003eI \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2017/12/groundhog-day-driving.jpg\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2017/12/groundhog-day-driving-300x206.jpg\"\u003e\u003c/a\u003e t\u0026rsquo;s easy to get into a funk where you roll out of bed each day to do the same task or face the same challenges over and over again. Maybe there are things in your day to day grind that you hate, but do them anyway, because it\u0026rsquo;s part of your job. 
No big deal, everyone has these sorts of chores. I\u0026rsquo;m sure that nobody loves every single part of their job. But if you\u0026rsquo;ve gotten a break from work and you can\u0026rsquo;t bear to think about going back to that routine, maybe that should tell you something about your job. Are you really happy doing what you\u0026rsquo;re doing, or are you doing it because it\u0026rsquo;s a steady paycheck? Are you doing it because it\u0026rsquo;s what you know, and change is hard? Are you doing it because you feel like you have no other choice?\u003c/p\u003e","title":"New Opportunities in 2018"},{"content":"With all of the services that Amazon has to offer, it can sometimes be difficult to manage your cloud environment. Face it, you need to manage multiple regions, users, storage buckets, accounts, instances and the list just keeps going on. Well the fact that the environment can be so vast might make it difficult to notice if something nefarious is going on in your cloud. Think of it this way, if a new EC2 instance was deployed in one of your most used regions, you might see it and wonder what it was, but if that instance (or 50 instances) was deployed in a region that you never login to, would you notice that?\nTo mitigate against issues like this we use the AWS CloudTrail service which can log any console or API request and store those logs in S3. It can also push these logs to Amazon CloudWatch Logs which allows us to do some filtering on those logs for specific events.\nThis post assumes that you\u0026rsquo;ve already setup CloudTrail to push new log entries to CloudWatch Logs. Once that\u0026rsquo;s setup we\u0026rsquo;re going to go through an example to alert us whenever a new IAM user account is created by someone other than our administrator.\nCreate a Metric Filter on the CloudTrail Logs Login to the AWS console and navigate to the CloudWatch Service. Once you\u0026rsquo;re in the CloudWatch console go to Logs in the menu and then highlight the CloudTrail log group. After that you can click the \u0026ldquo;Create Metric Filter\u0026rdquo; button.\nIn the \u0026ldquo;Filter Pattern\u0026rdquo; box we\u0026rsquo;ll select a pattern that we\u0026rsquo;re looking for. In my case I want to filter out any events where a new user account is created and the user who did it is not \u0026ldquo;ithollow\u0026rdquo;. To do that we need to use the Filter and Pattern Syntax found below.\n{($.eventName = \u0026#34;CreateUser\u0026#34;) \u0026amp;\u0026amp; ($.userIdentity.userName != \u0026#34;ithollow\u0026#34;)} You can test the results of your filter pattern agains some of your existing logs to see what is returned. In my case I got no results because I don\u0026rsquo;t have any events like that yet in my logs. When you\u0026rsquo;re ready click the \u0026ldquo;Assign Metric\u0026rdquo; button.\nNow you can leave the filter name as is, or use your own custom naming. Under the Metric Details a namespace will be added for use in the event that multiple logs have filters on them. And you can give the metric a name there as well. I\u0026rsquo;ve left the rest of the values as defaults. Click the \u0026ldquo;Create Filter\u0026rdquo; button.\nYou should be taken back to the CloudWatch Console and see that a new filter has been created.\nCreate an Alarm Now that we\u0026rsquo;ve created a way to filter our logs. Lets add an alarm to notify us when these events have occurred. On the logs screen from above, click the \u0026ldquo;Create Alarm\u0026rdquo; link next to your filter. 
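Side note: if you would rather script this than click through the console, the filter and alarm map to two CLI calls. A rough sketch is below; the log group name, metric namespace, and SNS topic ARN are placeholders for your own values.\n# Create the metric filter on the CloudTrail log group.\naws logs put-metric-filter --log-group-name CloudTrail/DefaultLogGroup --filter-name StrangeUserAccounts --filter-pattern '{($.eventName = "CreateUser") && ($.userIdentity.userName != "ithollow")}' --metric-transformations metricName=StrangeUserAccounts,metricNamespace=CloudTrailMetrics,metricValue=1\n# Alarm when the metric fires at least once in a five minute period and notify an SNS topic.\naws cloudwatch put-metric-alarm --alarm-name StrangeUserAccounts --metric-name StrangeUserAccounts --namespace CloudTrailMetrics --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --treat-missing-data notBreaching --alarm-actions arn:aws:sns:us-east-1:123456789012:NotifyMe\nBack in the console, the remaining steps look like this.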
Give the alarm a name and description for easy identification later. Then set the threshold values. I\u0026rsquo;ve said, anytime this event happens more than or equal to 1 time for a single period, trigger the alarm. I also changed the setting to treat missing data as good, otherwise I will have an alarm with \u0026ldquo;insufficient data\u0026rdquo; in it all the time until one of these weird accounts shows up. So, no news is good news, in my scenario.\nLastly, under the actions section, I\u0026rsquo;ve selected my \u0026ldquo;NotifyMe\u0026rdquo; SNS topic so that it will email me when this happens.\nTesting Now that our alarms are created and metric filters configured, lets test it. I logged into the AWS account with a user that had Admin permissions that wasn\u0026rsquo;t me and created a new user. Shortly after creating the user the CloudWatch console showed an alarm and the \u0026ldquo;StrangeUserAccounts\u0026rdquo; alarm went off.\nMy SNS notification came through email and you can see that email in the screenshot below with the details.\nSummary This was a pretty basic example, but using CloudWatch Logs with metric filters and alarms can really help you keep you a close eye on your environment. Think of all the ways you can use CloudWatch Logs to send alerts about things in your environment that you care about.\n","permalink":"https://theithollow.com/2017/12/11/use-amazon-cloudwatch-logs-metric-filters-send-alerts/","summary":"\u003cp\u003eWith all of the services that Amazon has to offer, it can sometimes be difficult to manage your cloud environment. Face it, you need to manage multiple regions, users, storage buckets, accounts, instances and the list just keeps going on. Well the fact that the environment can be so vast might make it difficult to notice if something nefarious is going on in your cloud. Think of it this way, if a new EC2 instance was deployed in one of your most used regions, you might see it and wonder what it was, but if that instance (or 50 instances) was deployed in a region that you never login to, would you notice that?\u003c/p\u003e","title":"Use Amazon CloudWatch Logs Metric Filters to Send Alerts"},{"content":"Today at AWS re:INVENT, Amazon had several new product announcements which is not uncommon for the company but one in-particular raised several eyebrows. Amazon has been working very hard to make machine learning much easier for people to use. Typically, understanding machine learning has taken great expertise and a relatively small number of people even attempted to learn these concepts just because of the complexity. That is all changing thanks to some of Amazon\u0026rsquo;s more recently announced services such as Amazon Sage Maker.\nThe biggest announcement (in my mind) was the AWS DeepLens product, which is a deep learning camera that can be programmed out of the box in less than 10 minutes!\nI\u0026rsquo;m sure the 10 minute statement is for some basic programming and to do complex stuff it will require more time, but this is a considerable jump in technology, and with that, comes some concerns. To understand more about the possibilities, AWS CEO Andy Jassy, used two examples of things that could be done with the new camera:\nThe camera sees a license plate enter your driveway and uses the API to programmatically open your garage door (assuming you have a smart home device to do that). The camera will trigger an alert that your dog sat on the couch. Very neat huh? 
With this $249.00 camera you can open up a whole new world of possibilities that we couldn\u0026rsquo;t programmatically access (very easily) in the consumer market. What would you use the camera for?\nThe Good The camera will allow us to programmatically take actions on everyday events that it witnesses. These capabilities could really enhance our lives in a number of ways that I\u0026rsquo;m sure I\u0026rsquo;m just scratching the surface of in this post. Think about security concerns that this camera might help to alleviate? The U.S. has a bit of a gun problem and mass shooting seem to be on the rise. School shootings are of particular concern for me, having a school aged child and being husband to a teacher. Schools could have these cameras scattered throughout the school and at the entrances. If the camera notices a gun, alerts can be triggered to put the school on lock-down, alert authorities or other safety precautions.\nPolice brutality has been a problem in the U.S., as well. Police offers have been wearing body cameras to document incidents. What if that camera could alert others that a gun has been drawn or fired, by either the police officer or a criminal?\nThe Bad Unfortunately, with these great new capabilities, they may go too far and likely will. Where will these cameras infringe upon our own personal privacy? Will these cameras be used to prevent vandalism of public restrooms? No, don\u0026rsquo;t be silly, we won\u0026rsquo;t go so far as to put them IN the restrooms, but the camera might be watching the outside of that restroom and logging who goes in and when. Would you like someone knowing how many times you used a restroom in a day and when?\nWe can imagine these cameras in department stores, watching our every move, storing a profile for our faces and logging our reactions to certain products. Do we really want these relatively inexpensive cameras tracking all of this data about our behavior?\nThe places that we\u0026rsquo;ll feel safe from prying eyes just got really small all of a sudden.\nThe Both Good and Bad The biggest concern in my mind is that there are applications for this that have good purposes but also bad consequences. How about this example. Sexual predators in the U.S. have to abide by Megan\u0026rsquo;s Law which states requires them to do paperwork to register and have their neighbors notified. Regardless of what we think of that law, consider what might happen with the DeepLens camera. Could sexual predators be required to wear one of these? The camera could send alerts based on any contact they had with someone under-aged or that met a specific profile.\nPerhaps that\u0026rsquo;s good for public safety. Perhaps it\u0026rsquo;s a total infringement on personal rights to privacy. I can\u0026rsquo;t answer this question and I don\u0026rsquo;t have the answers. But I\u0026rsquo;m also concerned that the people who make the laws and run the government won\u0026rsquo;t know how to regulate these things. These are hard questions to answer.\nMy Take From an engineering perspective, this might be the neatest technological thing I\u0026rsquo;ve seen in quite a while. I already have ideas for updating my smart home and to watch my cat so he stays out of trouble (don\u0026rsquo;t ask). But from a sociological perspective this is a truly terrifying thing. 
Smart devices that can make decisions based on what they are seeing seems very much like skynet from those terminator movies with Arnold Schwarzenegger.\nI don\u0026rsquo;t want to say that we shouldn\u0026rsquo;t make these devices. You cannot roll back progress either. Someone was able to create a product that people will want. We can\u0026rsquo;t say that we won\u0026rsquo;t use it because of some risks. The best correlation that I can make with this new technology that may have great consequences is that of a nuclear bomb. The engineering was amazing to create such a powerful weapon and it was cool. The consequences of doing that though were grave to many people and still haunt us today. Will this new technology forever change how we appreciate privacy or miss it?\nI\u0026rsquo;ll leave you with a famous quote from the father of the atom bomb, Robert Oppenheimer.\nNow, I am become death, the destroyer of worlds.\nWhat do you think? Leave your comments at the bottom of the post.\n","permalink":"https://theithollow.com/2017/11/29/aws-deeplens-nuclear-weapon-privacy/","summary":"\u003cp\u003eToday at AWS re:INVENT, Amazon had several new product announcements which is not uncommon for the company but one in-particular raised several eyebrows. Amazon has been working very hard to make machine learning much easier for people to use. Typically, understanding machine learning has taken great expertise and a relatively small number of people even attempted to learn these concepts just because of the complexity. That is all changing thanks to some of Amazon\u0026rsquo;s more recently announced services such as \u003ca href=\"https://aws.amazon.com/sagemaker/\"\u003eAmazon Sage Maker\u003c/a\u003e.\u003c/p\u003e","title":"AWS DeepLens - The Nuclear Weapon of Privacy"},{"content":"If you\u0026rsquo;re an Amazon Web Services customer and you\u0026rsquo;re not using the built in AWS config rules, you should be. AWS Config is a service that shows you the configuration changes that have happened on your AWS accounts. Whether that\u0026rsquo;s changes to your user accounts, changes to networks, modifications to S3 buckets or plenty of other configurations. AWS Config will keep this audit log of your changes in a specified S3 bucket which could be used for all sorts of other solutions such as updating your ServiceNow configuration management database. See this post from ServiceNow on some details of the solution.\nOK, well maybe going that far is too much for you to begin with, but AWS Config also provides some very useful and basic rules that can alert you when something in the environment isn\u0026rsquo;t the way you want it. Amazon has provided some of the most common best practices checks to help you get started.\nAdd AWS Managed Config Rules Login to your AWS Console and go to the AWS Config service. Click the Dashboard and we\u0026rsquo;ll see some information about some of the services in your account. On the right hand side you\u0026rsquo;ll see the \u0026ldquo;Add rule\u0026rdquo; button unless you\u0026rsquo;ve set some up in the past, in which you can then click Rules in the menu to get to the same screen. Click the \u0026ldquo;Add rule\u0026rdquo; button.\nOn the Add rule screen, browse through the list of managed rules that AWS has provided for us. When you see a rule that you\u0026rsquo;d like to use select it from the list.\nThe first rule that I\u0026rsquo;ve chosen to add is the CloudTrail-enabled rule. 
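As a quick aside, the same managed rule can also be put in place from the CLI rather than the console. A rough sketch follows; the rule name is arbitrary, and the source identifier refers to the AWS-managed check.\n# Add the AWS managed cloudtrail-enabled check on a daily schedule.\naws configservice put-config-rule --config-rule '{"ConfigRuleName": "cloudtrail-enabled", "Source": {"Owner": "AWS", "SourceIdentifier": "CLOUD_TRAIL_ENABLED"}, "MaximumExecutionFrequency": "TwentyFour_Hours"}'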
CloudTrail is a service used to audit all API and Console changes so that we can go back at any time and see what\u0026rsquo;s happened to our account. As a best practice it should always be enabled and I want to make sure that this is the case with my account. So once we add the rule, we\u0026rsquo;ll need to enter in some information for the rule to be used successfully. Depending on the rule that you\u0026rsquo;ve selected, the information needed to configure it will be different. In my case I need to add a frequency for the rule to run, which could either be a schedule, or on a configuration change. Since this is a lab account, a schedule will work for me. After that I need to enter in some optional information about my CloudTrail configuration.\nOnce you\u0026rsquo;re done modifying, click save to add the rule. The rules screen will show up with a list of your rules and will show Evaluating until the checks are all done.\nAt this point, feel free to add any other rules that you might find useful to protect your account from unwanted changes such as publicly accessible S3 buckets, Security Groups that are open to the Internet and a whole list of others that require almost no work on your part to get setup. As you can see from my list, I have a writeable S3 bucket that is publicly accessible. Probably not a good thing in many cases, but in my case it\u0026rsquo;s just fine. In my lab, I\u0026rsquo;m also making sure that root accounts have multi-factor authentication enabled and my tenancy model is shared.\nRules Dashboard Once you\u0026rsquo;ve setup all your rules and they\u0026rsquo;ve had a chance to go through your account to evaluate the settings, you\u0026rsquo;ll have a nice dashboard with some basic information about your account. This allows you to check some very basic settings at a glance.\n","permalink":"https://theithollow.com/2017/11/27/use-aws-config-managed-rules-protect-accounts/","summary":"\u003cp\u003eIf you\u0026rsquo;re an Amazon Web Services customer and you\u0026rsquo;re not using the built in AWS config rules, you should be. AWS Config is a service that shows you the configuration changes that have happened on your AWS accounts. Whether that\u0026rsquo;s changes to your user accounts, changes to networks, modifications to S3 buckets or plenty of other configurations. AWS Config will keep this audit log of your changes in a specified S3 bucket which could be used for all sorts of other solutions such as updating your ServiceNow configuration management database. See \u003ca href=\"http://www.servicenow.com/solutions/technology-solutions/lifecycle-management/cloud-lifecycle.html\"\u003ethis post from ServiceNow\u003c/a\u003e on some details of the solution.\u003c/p\u003e","title":"Use AWS Config Managed Rules to Protect Your Accounts"},{"content":"Sometimes it\u0026rsquo;s just not desirable to have your Amazon EC2 instances deployed all willy-nilly across the AWS infrastructure. Sure it\u0026rsquo;s nice not having to manage the underlying infrastructure but in some cases you actually need to be able to manage the hosts themselves. One example is when you have licensing that is \u0026ldquo;old-fashioned\u0026rdquo; and uses physical core counts. With the default tenancy model, host core counts just don\u0026rsquo;t make sense, so what can we do?\nEnter dedicated hosts. A dedicated host is an AWS physical server and hypervisor where you get to manage what goes on it. 
It\u0026rsquo;s dedicated in the sense that other customers can\u0026rsquo;t share that piece of hardware with you (however, note that the EBS volumes you use are still on shared infrastructure) so it\u0026rsquo;s yours to do as you please, well sort of.\nBefore We Start Before we get started with using dedicated hosts, there are a few things you should be aware of in order to use the hosts efficiently.\nOne host = one instance size - When you get a dedicated host from AWS, you must pick the size of the instances that will live on it, for example an M4.large instance type. This means that if you want to have M4.large and m4.xLarge instances, you\u0026rsquo;ll need two separate hosts. Also, depending on the size you select, a certain number of instances can be launched on that host before it\u0026rsquo;s out of space. For sizing please see: https://aws.amazon.com/ec2/dedicated-hosts/pricing/ Only certain OS\u0026rsquo;s can be used - RHEL, SUSE Linux and Windows AMIs provided by AWS cannot be used with dedicated hosts. You\u0026rsquo;ll need to use your own AMIs, or the Amazon Linux AMIs. I believe this is due to licensing issues with those operating systems, but in any case be aware of this limitation. Instances must be Launched in a VPC - No big deal here, but you an have multiple instances on a single host and those instance can belong to different VPCs, but they must belong to a VPC. Autoscaling, RDS, and instance recovery are a no-go - Some features we\u0026rsquo;re accustomed to aren\u0026rsquo;t available with Dedicated Hosts You Pay for the Hosts - Maybe this is self-explanatory but when you order a dedicated host, you pay for that host and not the instances that reside on it. Use the host however you want, but you pay for the whole host. A host with a single instance on it might not be a good use of cash, but a fully loaded host might save you money over buying individual instances with default tenancy. Reserved instances are available for these hosts just like they are for EC2 instances which can save you money. The best savings dedicated hosts can get you though is usually related to the licensing of the instances where core counts matter. Using Dedicated Hosts To use dedicated hosts go to the Amazon EC2 console and select \u0026ldquo;Dedicated Hosts\u0026rdquo; in the menu. If this is the first time you\u0026rsquo;re using dedicated hosts, you\u0026rsquo;ll see a familiar getting started page. Click the \u0026ldquo;Allocate a Host\u0026rdquo; button.\nBefore AWS can give you a host to use, you must select an Instance type, Availability Zone and a quantity to order. I selected m4.large instances so those are the only sized instances I can use on this host. In addition, you must select whether or not you want untargeted instances to be allowed to be placed on this host. Basically this means that if you don\u0026rsquo;t specify a host, AWS might assign new instances to this host. For our purposes we\u0026rsquo;ll select no on the \u0026ldquo;Allow instance auto-placement\u0026rdquo; option. Click the \u0026ldquo;Allocate host\u0026rdquo; button.\nWhen you\u0026rsquo;re done you should see a success message. Note: you might get an error saying you\u0026rsquo;ve hit a resource limit. I needed to request more resources from AWS before I was able to successfully complete this process.\nAs soon as you click the \u0026ldquo;View hosts\u0026rdquo; button you should see a host ready to go. 
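If you prefer the command line, the allocation can be scripted too. A rough equivalent of what we just did in the console might look like this (the instance type, Availability Zone, and quantity below are just examples):\naws ec2 allocate-hosts --instance-type m4.large --availability-zone us-east-1a --quantity 1 --auto-placement off\naws ec2 describe-hosts\nThe describe-hosts call returns the same availability and capacity details that the console shows.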
You\u0026rsquo;ll notice that the host is available and has a utilization of 0/XX where XX is the total number of slots available for that instance type. If you refer to the table that was linked above, you\u0026rsquo;ll notice that 22 m4.large instances can be used. I now have 22 slots available for my to launch m4.large instances.\nNow we can go about launching some new instances. Just go to the EC2 console and launch a new ec2 instance like you\u0026rsquo;ve probably done plenty of times. On the AMI screen, be sure to select an approved AMI such as the Amazon Linux AMI or your custom AMIs with BYOD licensing.\nOn the instance type screen be sure to select the instance type that matches your dedicated host type.\nOn the instance details screen we need to make some changes. Select the normal stuff including which VPC, number of instances etc. But you must make sure to select a subnet in the same AZ as where your dedicated host lives. It\u0026rsquo;s likely that you\u0026rsquo;ll have dedicated hosts in multiple AZs anyway, but pay attention to that. The big piece is to ensure that under Tenancy that \u0026ldquo;Dedicated Host\u0026rdquo; is selected. Once you do this, you\u0026rsquo;ll have two new drop downs in which selections need to be made.\nHost: You can select Auto-Placement which (surprisingly) places your instances on any dedicated hosts that match the configuration or, select the individual host where the instance should be placed.\nAffinity: Affinity lets you pin the instance to the host. If you set this to host then the instance will be stuck with this host after an instance reboot. If you leave this setting off, then a restart could cause the instance to start on another host with the same configurations.\nProceed through the rest of the normal EC2 launch settings. When you\u0026rsquo;re done you can look at your dedicated hosts and see that the utilization has changed. You can also see the instances listed in the bottom tab.\nIf you look in your EC2 instances console, you\u0026rsquo;ll also see the instance running there. So you can manage these instances within either screen.\nNow if you\u0026rsquo;ve deleted all of the instances on a host, you can select the host and click the Actions drop down and then select \u0026ldquo;Release Hosts\u0026rdquo; to give the host back to AWS for the next customer.\nSummary Dedicated hosts have a purpose in the cloud. While specifying a host and managing the minute details about deployment might not seem like the tenets of cloud like \u0026ldquo;resource pooling\u0026rdquo;, it is a pretty valuable option for licensing reasons or for compliance purposes. Don\u0026rsquo;t rule them out in your environment.\n","permalink":"https://theithollow.com/2017/11/13/aws-dedicated-hosts/","summary":"\u003cp\u003eSometimes it\u0026rsquo;s just not desirable to have your Amazon EC2 instances deployed all willy-nilly across the AWS infrastructure. Sure it\u0026rsquo;s nice not having to manage the underlying infrastructure but in some cases you actually need to be able to manage the hosts themselves. One example is when you have licensing that is \u0026ldquo;old-fashioned\u0026rdquo; and uses physical core counts. With the default tenancy model, host core counts just don\u0026rsquo;t make sense, so what can we do?\u003c/p\u003e","title":"AWS Dedicated Hosts"},{"content":"Amazon Web Services has some great tools to help you operate your EC2 instances with their Simple Systems Manager services. 
These services include ensuring patches are deployed within maintenance windows specified by you, automation routines that are used to ensure state, and run commands on a fleet of servers through the AWS console. These tools are great, but wouldn\u0026rsquo;t it be even better if I could use these tools to manage my VMware virtual machines too? Well, you\u0026rsquo;re in luck, because EC2 SSM can do just that and better yet, the service itself is free! Now, if you\u0026rsquo;ve followed along with the \u0026quot;AWS EC2 Simple Systems Manager Reference\u0026quot; guide you\u0026rsquo;ve probably already seen the goodies that we\u0026rsquo;ve got available, so this post is used to show you how you can use these same tools on your vSphere, Hyper-V or other on-premises platforms.\nBefore we begin, we\u0026rsquo;ll need to be logged into the AWS console with someone who has permissions to the SSM service and IAM roles. This post assumes that you have administrative access to the console before beginning.\nCreate an Instance Profile The first thing that we want to do is to ensure that our non-EC2 instances have the appropriate rights within AWS. To do this we create an IAM service role that the SSM service is trusted to assume on our behalf. To begin, log in to the AWS Console and open the IAM service. From there go down to \u0026ldquo;Roles\u0026rdquo; in the menu and click the \u0026ldquo;Create role\u0026rdquo; button.\nIn the create role wizard, select the \u0026ldquo;AWS service\u0026rdquo; trusted entity and then click on EC2.\nThen select \u0026ldquo;EC2 Role for Simple Systems Manager\u0026rdquo; as your use case. Click the \u0026ldquo;Next: Permissions\u0026rdquo; button.\nOn the permissions page, make sure that the AmazonEC2RoleforSSM policy is listed and then go ahead and click the \u0026ldquo;Next: Review\u0026rdquo; button.\nOn the review screen, give the role a name and a description. Then click the \u0026ldquo;Create Role\u0026rdquo; button.\nManaged Instance Activation Now that the AWS role has been created, we can focus on how we can get our VMs connected with the AWS SSM service. To do this, we need to create a Managed Instance Activation. Go to the EC2 console and click on \u0026ldquo;Activations\u0026rdquo; under the \u0026ldquo;Systems Manager Shared Resources\u0026rdquo; group in the navigation pane.\nYou\u0026rsquo;ll probably be greeted with a familiar welcome page since you haven\u0026rsquo;t done this before. Click the \u0026ldquo;Create an Activation\u0026rdquo; button to get started with this step.\nOn the first screen, give the activation a description and an instance count. The instance count makes sure that only a certain number of instances can be registered with this activation. After that, select the \u0026ldquo;Create a system default command execution role that has the required permissions\u0026rdquo; option so that the automation services can act on your behalf within AWS. You may also enter a date when the activation will expire and a default instance name, which is how the machines will show up in the EC2 console.\nIf you set an instance limit or an expiry date, be aware that you can\u0026rsquo;t edit those later. You can create new activations but you can\u0026rsquo;t edit the existing ones after they\u0026rsquo;re created. When you\u0026rsquo;re done, click the \u0026ldquo;Create Activation\u0026rdquo; button. You should get a successful activation link that has some important information on it. 
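As an aside, the same activation can be created from the AWS CLI if you prefer. A rough sketch is below; the instance name, role name, and registration limit are placeholders, and the role should be whatever you named the role created earlier.\naws ssm create-activation --default-instance-name HollowVM --iam-role SSMServiceRole --registration-limit 5 --region us-east-1\nThe command returns the same Activation ID and Activation Code pair that the console shows on the next screen.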
This screen gives you an Activation Code and an Activation ID which you won\u0026rsquo;t be able to retrieve again, so store them someplace safe. These keys are similar to secret and access keys or usernames and passwords. (Don\u0026rsquo;t worry, these codes are no longer usable, so don\u0026rsquo;t bother trying.) Configure your Managed Instance Now that we\u0026rsquo;ve got the AWS Console work done, we need to make sure to install the SSM agent on our guests. The instructions to do this differ based on the operating system so I\u0026rsquo;ll just point you to this link. In my lab, I\u0026rsquo;ve gone through the steps to install the SSM Agent on a CentOS virtual machine within my vSphere environment. To do that I ran the following code:\nmkdir /tmp/ssm\nsudo curl https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm -o /tmp/ssm/amazon-ssm-agent.rpm\nsudo yum install -y /tmp/ssm/amazon-ssm-agent.rpm\nsudo systemctl stop amazon-ssm-agent\nThat makes sure that my guest has the agent installed, but not running. Before I start the service on my guest I also need to register the node with AWS so that it checks in when the agent starts up. To do that I run the following code on my guest. You\u0026rsquo;ll notice two values labeled \u0026ldquo;code\u0026rdquo; and \u0026ldquo;id\u0026rdquo; which come from the create activation screen above. Remember the Activation Code and Activation ID? Place those in the code snippet below and run it.\nsudo amazon-ssm-agent -register -code \u0026#34;code\u0026#34; -id \u0026#34;id\u0026#34; -region \u0026#34;region\u0026#34;\nOnce you\u0026rsquo;re done you can start the service. Here is the code to do that on CentOS 7:\nsudo systemctl start amazon-ssm-agent\nCheck the Simple Systems Manager Console It might take a minute or two to register, but you can go to the \u0026ldquo;Managed Instances\u0026rdquo; screen within the AWS EC2 console navigation pane and you should see a new managed instance. Note that the name matches what we placed in the Activation screen. You can also see that the IP Address is listed and this is the local IP address from my vSphere environment.\nTest SSM Now to run a test, let\u0026rsquo;s go to the Run Command service listed in the EC2 Console navigation pane. I\u0026rsquo;ve run a few commands here as tests, but my main test installs Apache and deploys my basic website, whose files are stored in S3.\nAfter running the following command through the EC2 SSM console you can see that my on-premises instance now has an Apache site available and that the IP Address matches the one that shows up in the managed instances console.\n#!/bin/bash\nyum install httpd -y\nyum update -y\naws s3 cp s3://ithollow-webbucket/image001.png /var/www/html/image001.png\naws s3 cp s3://ithollow-webbucket/index.html /var/www/html/temp1.html\nservice httpd start\nchkconfig httpd on\nSummary So now that you\u0026rsquo;ve set up all the hard stuff, maybe you can use EC2 SSM to manage other automation routines, or start using patch compliance instead of manually updating servers each month. This might be a nice solution instead of using SCCM or WSUS, or you might just need it for a small office deployment. How will you use EC2 Simple Systems Manager with your environment?\n","permalink":"https://theithollow.com/2017/11/06/manage-vsphere-virtual-machines-aws-ssm/","summary":"\u003cp\u003eAmazon Web Services has some great tools to help you operate your EC2 instances with their Simple Systems Manager services. 
These services include ensuring \u003ca href=\"/2017/07/24/patch-compliance-ec2-systems-manager/\"\u003epatches are deployed\u003c/a\u003e within maintenance windows specified by you, \u003ca href=\"/2017/09/26/aws-ec2-systems-manager-state-manager/\"\u003eautomation routines\u003c/a\u003e that are used to ensure state and \u003ca href=\"/2017/07/17/run-commands-ec2-systems-manager/\"\u003erun commands\u003c/a\u003e on a fleet of servers through the AWS console. These tools are great but wouldn\u0026rsquo;t be be even better if I could use these tools to manage my VMware virtual machines too? Well, you\u0026rsquo;re in luck, because EC2 SSM can do just that and better yet, the service itself is free! Now, if you\u0026rsquo;ve followed along with the \u0026quot; \u003ca href=\"/2017/10/02/aws-ec2-simple-systems-manager-reference/\"\u003eAWS EC2 Simple Systems Manager Reference\u003c/a\u003e\u0026quot; guide you\u0026rsquo;ve probably already seen the goodies that we\u0026rsquo;ve got available, so this post is used to show you how you can use these same tools on your vSphere, Hyper-V or other on-premises platforms.\u003c/p\u003e","title":"Manage vSphere Virtual Machines through AWS SSM"},{"content":"VMware has been busy over the last year trying to re-invent themselves with more focus on cloud. With that they\u0026rsquo;ve added some new SaaS products that can be used to help manage your cloud environments and provide some additional governance IT departments. Cloud makes things very simple to deploy and often eliminates the resource request phases that usually slow down provisioning. But once you start using the cloud, you can pretty quickly lose track of the resources that you\u0026rsquo;ve deployed, and now are paying for on a monthly basis, so it\u0026rsquo;s important to have good visibility and management of those resources.\nOne of VMware\u0026rsquo;s new offerings is called VMware Discovery. This tool can be used to help, you guessed it, discover the inventory across your public clouds and within vCenter. The front page will give you a dashboard of your cloud environments and your vCenters and from there you can drill down into more specifics. This dashboard is handy so you can see some of the more common cloud resources in use across your environment. Managers would be able to see the number of VMs running in an enviornment, their virtual disks and any networks in use. A dashboard like this may be very handy for someone without a lot of technical expertise in these cloud platforms, but are still responsible for controlling costs.\nThe tool is dead simple to setup. After you sign up for an account from: https://cloud.vmware.com/discovery/request-access you can go add your cloud accounts for discovery to go out and report on your resources. Right now you have the options of AWS and Azure but if the solution takes off, you can be sure they\u0026rsquo;ll add other cloud vendors as well. Under \u0026ldquo;Cloud Accounts\u0026rdquo; you can click the \u0026ldquo;ADD NEW\u0026rdquo; button to go through the process of providing access keys and account information so the tool has permissions to discover the environment.\nOnce the tool runs discovery, which didn\u0026rsquo;t take long in my case, you can list all of the resources in use under the \u0026ldquo;Resources\u0026rdquo; tab in the navigation pane. I was a bit disappointed that the only value I could sort on was the Name field, but I\u0026rsquo;m sure that\u0026rsquo;ll be fixed soon. 
Being able to see tags here is awesome since I can then check those tags across multiple clouds to ensure consistency. It would be amazing to sort on these tags though in the future.\nThe tool\u0026rsquo;s best feature, in my opinion, is that you can create your own resource groups. So I can go to Resource Groups in the navigation bar and create a group of my choosing. I can group based on Account, Category, Cloud, Name, Public IP, Region, Resource Type, and Tags. As a quick example I created two groups, one for AWS in the US-East-1 region and one for Azure in the US-East Region. This lets me quickly look at resources with specific attributes which can be really handy.\nSummary The VMware Discovery tool is a nice way to very quickly get a look at your resources across multiple clouds but at this point doesn\u0026rsquo;t have a lot of functionality. It\u0026rsquo;s pretty new so that\u0026rsquo;s to be expected, but if you want to try it out for yourself, the product is free in the United State (sorry rest of the world) until November 31st of 2017. Try it out for yourself and make your own decisions. For more information check out the cloud blog from VMware. The link to signup is right here: https://cloud.vmware.com/discovery\n","permalink":"https://theithollow.com/2017/10/30/vmware-discovery/","summary":"\u003cp\u003eVMware has been busy over the last year trying to re-invent themselves with more focus on cloud. With that they\u0026rsquo;ve added some new SaaS products that can be used to help manage your cloud environments and provide some additional governance IT departments. Cloud makes things very simple to deploy and often eliminates the resource request phases that usually slow down provisioning. But once you start using the cloud, you can pretty quickly lose track of the resources that you\u0026rsquo;ve deployed, and now are paying for on a monthly basis, so it\u0026rsquo;s important to have good visibility and management of those resources.\u003c/p\u003e","title":"VMware Discovery"},{"content":"Sometimes, you just need to change the data center where you\u0026rsquo;re running your virtual machines. You could be doing this for disaster recovery reasons, network latency reasons, or just because you\u0026rsquo;re shutting down a region. In an on-prem environment, you might move workloads to a different data center by vMotion, VMware Site Recovery Manager, Zerto, Recoverpoint for VMs, Veeam, or one of the other great tools for a virtualized environment. But how about if that VM is running in an AWS region and you want to move it to another region?\nThis post aims to explain how you can move a running EC2 instance within one region, to another region. I\u0026rsquo;ll take an EC2 instance running my (incredibly sophisticated) Apache website from the US-East-1 (Northern Virginia) region over to US-East-2 (Ohio) region.\nSnapshot the EC2 Instance Before we begin, lets make sure my web server is running in an optimal state. I\u0026rsquo;ll go to the EC2 console and find the public address for my EC2 instance and browse to it through my web browser. Yep, my EC2 instance is providing my web page which lists the internal IP Address and my logo. This internal IP Address is automatically added to the web page through the user data parameters provided at boot up.\nNext we want to take a snapshot of the instance from the EC2 console. 
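If you\u0026rsquo;d rather script these first couple of steps, the snapshot and the cross-region copy look roughly like this with the AWS CLI (the volume and snapshot IDs below are placeholders, and you can add --encrypted and --kms-key-id to the copy if you want it encrypted as described below):\naws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description \u0026#34;web server root volume\u0026#34;\naws ec2 copy-snapshot --region us-east-2 --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0 --description \u0026#34;copy for the Ohio region\u0026#34;\nThe console walkthrough below accomplishes the same thing.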
Go to your EC2 console and navigate to the \u0026ldquo;Snapshots\u0026rdquo; item in the left hand menu.\nClick \u0026ldquo;Create Snapshot\u0026rdquo; to open up the snapshot window. From there select the volume that\u0026rsquo;s attached to your EC2 instance (this post assumes only a root volume) and then name your snapshot. You should also add a good description here for reference purposes later on down the road. When you\u0026rsquo;re done, click the \u0026ldquo;Create\u0026rdquo; button.\nNOTE: notice the Encrypted option is set to No.\nReplicate the Snapshot to the New Region Once your snapshot has been created, select it from the menu and click the \u0026ldquo;Actions\u0026rdquo; button and then select \u0026ldquo;Copy\u0026rdquo; from the drop down menu.\nHere we select the region where we want the new instance to be created. We can modify the description and then notice that we have an option to encrypt the volume as part of the copy command. If we encrypt the volume we\u0026rsquo;ll need to select the key from the AWS KMS service that would be used to encrypt the volume. When you\u0026rsquo;ve set your configurations click the \u0026ldquo;Copy\u0026rdquo; button.\nYou should receive a success message that the operation has started. Click the \u0026ldquo;Close\u0026rdquo; button.\nThe New Region Now, if we change to the destination region in our AWS console, we can look for any snapshots that exist in that region. Depending on how long you wait before switching regions, you may see the snapshot that we\u0026rsquo;re in the process of copying over is in a pending state. Wait until the snapshot has finished copying to the new location before proceeding.\nOnce the snapshot is ready, select it and then click the \u0026ldquo;Actions\u0026rdquo; button and then the \u0026ldquo;Create Image\u0026rdquo; item in the drop down menu. This process takes the snapshot and creates an Amazon Machine Image (AMI - Pronounced: Ah-mee) which can be used to spin up new EC2 instances. Provide a name, the architecture, a description, the virtualization type and any size differences that you might want to make here. You\u0026rsquo;ll notice that the root volume here will be encrypted since we encrypted our snapshot when we copied it to the secondary region. When you\u0026rsquo;re done, click the \u0026ldquo;Create\u0026rdquo; button.\nIf all goes well, you should get a success message that your snapshot was created correctly. Click the \u0026ldquo;Close\u0026rdquo; button.\nNow, if you navigate to the AMIs section of the EC2 console, you should see a new AMI with the name that you gave it in the previous step. From here, you can launch a new EC2 instance using this image and place it in whichever Availability Zone, subnet and security group that makes sense in your new region.\nThe Results After deploying my new AMI in the Ohio region, I test my application by finding the IP Address from the EC2 console and browsing to it with my web browser. Notice that the metadata in the website still shows my previous IP Address that we saw in the first screenshot in this post. Thats because the image is identical to the snapshot we made from the US-East-1 instance. When i started up this snapshotted AMI I didn\u0026rsquo;t provide any user data to change this so it used what it already had which is left over from the previous region. 
In your case, this probably won\u0026rsquo;t matter, but I did it to show that it\u0026rsquo;s an exact copy of the instance from US-East-1.\nSummary So moving an instance from one region to another isn\u0026rsquo;t quite as neat as using long-distance vMotion but it does the job adequately enough. Remember that in the cloud world we\u0026rsquo;re likely more interested in re-creating an instance quickly through automation, than moving a live workload like we used to do. This process keeps the primary instance running the entire time while using a replicated snapshot to run a new instance in another region. Whether you need to power down the original instance or not, is something you\u0026rsquo;ll have to decide for your use case. Thanks for reading.\n","permalink":"https://theithollow.com/2017/10/23/move-ec2-instance-another-region/","summary":"\u003cp\u003eSometimes, you just need to change the data center where you\u0026rsquo;re running your virtual machines. You could be doing this for disaster recovery reasons, network latency reasons, or just because you\u0026rsquo;re shutting down a region. In an on-prem environment, you might move workloads to a different data center by vMotion, VMware Site Recovery Manager, Zerto, Recoverpoint for VMs, Veeam, or one of the other great tools for a virtualized environment. But how about if that VM is running in an AWS region and you want to move it to another region?\u003c/p\u003e","title":"Move an EC2 Instance to Another Region"},{"content":"When it comes to deploying EC2 instances within Amazon Web Services VPCs, you may find yourself confused when presented with those tenancy options. This post aims to describe the different options that you have with AWS tenancy and how they might be used.\nFirst and foremost, what do we mean by tenancy? Well, tenancy determines who is the owner of a resource. It might be easiest to think of tenancy in terms of housing. For instance if you have a house then you could consider it a dedicated tenant since only one family presumably lives there. However, if you have an apartment building, there is a good chance that several families have rooms in a single building which would be more like a shared tenancy model.\nAWS provides a few options for tenancy including dedicated or the default type of shared. These models work in a very similar fashion to the housing example above. Shared tenancy means that multiple EC2 instances from different customers may reside on the same piece of physical hardware. The dedicated model means that your EC2 instances will only run on hardware with other instances that you\u0026rsquo;ve deployed, no other customers will use the same piece of hardware as you.\nShared Tenancy - Default The default tenancy model is the one most commonly used with AWS. Multiple customers will share the same pieces of hardware even though they don\u0026rsquo;t interact with each other. Remember that underneath the covers in AWS, there is a physical host with a hypervisor running on it to handle the virtualization of CPU, Memory, Storage etc. Customers will choose to deploy a new EC2 instance and AWS fits that instance onto the appropriate physical host and isolate it from other customers even if they\u0026rsquo;re sharing the same physical resources. This is generally the option that you will want to use unless you have regulatory compliance or licensing restrictions causing you to pick a dedicated model. 
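Under the covers, tenancy is just a property you set when launching an instance (or as a default on the VPC). For example, a dedicated-tenancy launch from the AWS CLI looks roughly like this, where every ID is a placeholder and leaving off the --placement option gives you the default shared tenancy:\naws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type m4.large --subnet-id subnet-0123456789abcdef0 --placement Tenancy=dedicated\nKeep that in mind as we compare the models below.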
The shared tenancy model is also the cheapest option for running your EC2 instances.\nDedicated Tenancy As mentioned previously, dedicated tenancy ensures that your EC2 instances are run on hardware specific to your account but comes at a price. AWS usually focuses on driving down costs to operate their data centers and providing you your own isolated hosts to use makes that difficult. The result is that different charges need to be added to make it worthwhile to offer to their customers. Now, you might be asking why you\u0026rsquo;d want to use a dedicated tenancy model when there are pricing complications associated with them. In some cases due to licensing restrictions some software isn\u0026rsquo;t allowed to be run on a shared tenancy model. For instance if you\u0026rsquo;re trying to use Bring Your Own License (BYOL) to AWS, some licenses are based on the Socket model where the number of hosts sockets are used for licensing. In other circumstances, regulatory compliance may dictate that you can\u0026rsquo;t use the shared model. HIPAA up until earlier this year required dedicated tenancy to ensure data confidentiality. This restriction has since been removed.\nThere are two different options for dedicated tenancy with AWS: Dedicated Hosts and Dedicated Instances.\nDedicated Hosts With a dedicated host, you purchase an entire physical host from AWS and that host is billed to you on an hourly basis just like EC2 instances are billed. Once you\u0026rsquo;ve purchased that host, you\u0026rsquo;re allowed to spin up as many EC2 instances as that host will allow for no additional charges. This might seem a lot like how you would manage an on-premises solution like vSphere. You buy and license the host and then you can move your instances on it until it\u0026rsquo;s full. Dedicated hosts have a few considerations that you should be aware of to ensure the proper usage and cost reductions.\nYou may not mix EC2 instance types on the same dedicated host - If you purchase a dedicated instance you must decide what type of instance that you will be placing on it. For example you would purchase an m4.large host meaning that you could put as many m4.large instances on that host that you want up to the maximum (22 as of this writing) but you are not allowed to add m3.large or m4.xlarge for example. If you want to add m4.xlarge instances on dedicated hosts then you must purchase another dedicated host.\nYou are responsible for waste - As with the previous example, you\u0026rsquo;ll be paying for the entirety of the dedicated host. It does not make sense to purchase a dedicated host to run a single VM and leave 21 additional slots available that are unused. You\u0026rsquo;ll pay for the whole host so if you aren\u0026rsquo;t fully utilizing them, you\u0026rsquo;re wasting money.\nEach host type has different maximums - Since each EC2 instance type has a different amount of CPU, Memory and storage, the dedicated hosts will offer different maximums by instance type. The current breakdown for m4 and m3 dedicated hosts are shown below but be aware that they are subject to change. Please check the latest AWS documentation for up to the minute changes.\n[table id=5 /]\nPrimary Use Cases - In addition to the compliance purposes, dedicated hosts are used for licensing purposes when the license model requires you to use sockets or cores. 
Specifically this option is most often used with Microsoft BYOL situations where the customer doesn\u0026rsquo;t have Software Assurance or the product doesn\u0026rsquo;t have license mobility. Since you can control the placement of workloads and ensure the socket count is being properly maintained, this option can be used. Also, if you fill the host to capacity, this option may be cheaper than the shared tenancy model but you must manage that capacity wisely.\nDedicated Instances With a dedicated instance, you\u0026rsquo;re still receiving the benefits of having separated hosts from the rest of the AWS customers but you are not paying for the entire host all at once. You do not need to worry about the capacity of the hosts but you\u0026rsquo;re being charged a higher rate for the instances. This type of instance model is similar to the default model where you don\u0026rsquo;t worry about where the instances are, but it does ensure they\u0026rsquo;re kept separate. In addition to the higher rate that you\u0026rsquo;re charged for dedicated instances, you\u0026rsquo;re also charged a $2 per hour charge per region where dedicated instances are being used. You should be aware though, that even though the instances are on dedicated hardware, if they are using Elastic Block Storage (EBS) devices, they will be on shared hardware. The dedicated instance tenancy doesn\u0026rsquo;t include their virtual disks unless you choose instance storage.\nThe following table shows the price differences between some m3 and m4 instances using the default tenancy or the dedicated instance tenancy in AWS.\n[table id=4 /]\nDedicated Instances Use Cases - Dedicated instances might be used if you have compliance reasons that require that hosts are not shared between customers, but you don\u0026rsquo;t want to manage all of the hosts. This model can be used with BYOL options for anything licensed by the user such as Windows Desktops operating systems or MSDN as examples. The use of dedicated instances does not require License Mobility or Software Assurance. The biggest note though is that Microsoft Server licenses do not support the BYOL model here. You must purchase the licenses with the instance if you plan to use them here.\nSummary The figure below should provide a good overview of the main differences between the tenancy models. Shared will include your instances with other customers while the dedicated model ensures that only your instances will run on those hosts. Dedicated hosts are entire hosts of different sizes that are available for you to fill up as you need while dedicated instances are similar to the shared model without other customers.\n","permalink":"https://theithollow.com/2017/10/16/understanding-aws-tenancy/","summary":"\u003cp\u003eWhen it comes to deploying EC2 instances within Amazon Web Services VPCs, you may find yourself confused when presented with those tenancy options. This post aims to describe the different options that you have with AWS tenancy and how they might be used.\u003c/p\u003e\n\u003cp\u003eFirst and foremost, what do we mean by tenancy? Well, tenancy determines who is the owner of a resource. It might be easiest to think of tenancy in terms of housing. For instance if you have a house then you could consider it a dedicated tenant since only one family presumably lives there. 
However, if you have an apartment building, there is a good chance that several families have rooms in a single building which would be more like a shared tenancy model.\u003c/p\u003e","title":"Understanding AWS Tenancy"},{"content":"Geeks and sports just don\u0026rsquo;t mix. Well, thats not really true, but seems to be the stereotype that I\u0026rsquo;m accustomed to hearing. If you\u0026rsquo;re good with computers, or like science, then you probably don\u0026rsquo;t get, or don\u0026rsquo;t like sports. But here\u0026rsquo;s another crass generalization that I\u0026rsquo;ll make with absolutely no statistics to back it up: Baseball should be the sport that geeks gravitate towards.\nIt\u0026rsquo;s a Giant Algorithm One of the knocks I hear about baseball is that the game is just too slow. It is in fact a slower paced game than basketball, hockey, soccer or really any sport that uses a game clock. But that\u0026rsquo;s what geeks should love about the game. It\u0026rsquo;s a game of anticipation for what\u0026rsquo;s going to happen next. But the list of things that will happen during any play is pretty small and they\u0026rsquo;re all based on \u0026ldquo;IF / THEN\u0026rdquo; rules just like in computer science. Let me give you an example.\nInputs: (Inning: Top of the 5th, Score: 2-1 in your favor, Location: Home Team, Situation: {Runner on 1st, 1 out, 3 hitter up})\nSo we\u0026rsquo;ve got a set of inputs (In real life there is more to consider but this should suffice for an example) to our algorithm. Each of the position players would have a list of \u0026ldquo;IF / THEN\u0026rdquo; rules that they\u0026rsquo;d need to be aware of before any play happens. Let\u0026rsquo;s look at what some of them would look like. The diagram below shows what two fielders would need to consider before a ball is ever thrown to the hitter.\nNow once the ball is hit and you know what to do, it comes down to athletic ability to catch, throw, run, etc but the game itself can be thrown into a giant case statement or select statement as you can see from the diagram above.\nLook at the Data Mining OK, so if the baseball algorithm above doesn\u0026rsquo;t convince you, how about a look at the crazy amount of data that is stored, captured and analyzed for baseball games.\nBox Scores = Audit Summary At the end of every baseball game there is a box score. The box score totals up everything that happened for each of the players who played during that game. The image below is an example of what a box score would look like. These box scores take into account the number of hits per at-bat calculated as a batting average, totals up strike outs, home runs, runs batted in and several other common statistics. These stats are aggregated every day and a player will have a running batting average and total home runs for the year for instance. These box scores serve as a log of the important statistics from each game sort of like an aggregated audit log would show you what had happened to a computer system over a certain time period.\nThe statistics don\u0026rsquo;t stop here though. Baseball in particular has gone nuts with the statistical information collected. How does a pitcher do against a certain team? How about against left handed hitters? How about left handed hitters during the month of June? How about left handed hitters on a team in the month of June with less than two outs and a runner in scoring position? 
Think of all the data that is generated throughout a season.\nTeams have even started looking for important metrics that don\u0026rsquo;t show up in a box score known as \u0026ldquo;sabermetrics\u0026rdquo;. These metrics look at the metrics most important to win a game such as on-base percentage and wins above replacement.\nPhysics data Over the past few years baseball had a need (really baseball fans just love stats and want more) to collect even more data including physics of what\u0026rsquo;s happening. Baseball is using technology to calculate the trajectory of a baseball so they can call balls and strikes with a computer. Once they got that technology pretty well set, they started offering viewers with statistics such as pitch rotation (how fast a ball spins - usually this explains how much a curveball will break), exit velocity (how fast a ball leaves a hitter\u0026rsquo;s bat), and launch angle (shows at what angle a ball leaves a hitter\u0026rsquo;s bat).\nManagers Just Turn the Nerd Nobs As technologists, we like to tweak things to see if we can squeeze out just a little bit more performance out of our systems. This is the exact same job as a baseball manager. They can take the monitoring data that baseball collects and see which players might have the best advantage against a particular team or opposing pitcher that day. Baseball is all about averages in the sense that a .300 hitter is considered very good, but also means that they fail to get a hit 70% of the time. But one player might hit slightly better against a different type of pitcher and managers will play those odds to try to find a favorable advantage.\nThis has gotten really incredible over the past decade. Managers have \u0026ldquo;spray charts\u0026rdquo; (a chart that shows where hitters most often hit the ball) and will reposition their fielders in non-standard baseball positions.\nFor instance the shortstop might play to the right of second base if a hitter usually hits the ball to that side of the field. It leaves a big empty spot on the left side of the field, but managers are willing to take that chance based on the data.\nSummary All of these decisions that you have time to consider during a slow moving baseball game is what makes the game so great. Fans at home get to second guess these decisions and make their own predictions about what might happen. Not only that but the data available about players lets you argue about which players are better and who\u0026rsquo;d perform the best under certain situations. If you\u0026rsquo;re a technologist that uses math, statistics, and algorithms on a daily basis, how do you not love a game like this?\n","permalink":"https://theithollow.com/2017/10/09/baseball-sport-geeks/","summary":"\u003cp\u003eGeeks and sports just don\u0026rsquo;t mix. Well, thats not really true, but seems to be the stereotype that I\u0026rsquo;m accustomed to hearing. If you\u0026rsquo;re good with computers, or like science, then you probably don\u0026rsquo;t get, or don\u0026rsquo;t like sports. But here\u0026rsquo;s another crass generalization that I\u0026rsquo;ll make with absolutely no statistics to back it up: Baseball should be the sport that geeks gravitate towards.\u003c/p\u003e\n\u003ch1 id=\"its-a-giant-algorithm\"\u003eIt\u0026rsquo;s a Giant Algorithm\u003c/h1\u003e\n\u003cp\u003eOne of the knocks I hear about baseball is that the game is just too slow. It is in fact a slower paced game than basketball, hockey, soccer or really any sport that uses a game clock. 
But that\u0026rsquo;s what geeks should love about the game. It\u0026rsquo;s a game of anticipation for what\u0026rsquo;s going to happen next. But the list of things that will happen during any play is pretty small and they\u0026rsquo;re all based on \u0026ldquo;IF / THEN\u0026rdquo; rules just like in computer science. Let me give you an example.\u003c/p\u003e","title":"Baseball: The Sport for Geeks"},{"content":"Please use this post as a landing page to get you started with using the EC2 Simple Systems Manager services from Amazon Web Services. Simple Systems Manager or (SSM) is a set of services used to manage EC2 instances as well as on-premises machines (known as managed instances) with the SSM agent installed on them. You can use these services to maintain state, run ad-hoc commands, and configure patch compliance among other things.\nThe following posts should get you started using many of the SSM tools. Follow the posts in order from top to bottom since some of them build on each other.\nGetting Started with EC2 Systems Manager EC2 Systems Manager Run Command Patch Compliance with EC2 Systems Manager EC2 Systems Manager Parameter Store EC2 Systems Manager Documents Using EC2 Systems Manager - State Manager Manage vSphere Instances with EC2 SSM EC2 Systems Manager Session Manager Official Links EC2 SSM API Reference Systems Manager User Guide Systems Manager AWS CLI Reference PowerShell Command Line Reference\nOther Useful AWS SSM Videos ","permalink":"https://theithollow.com/2017/10/02/aws-ec2-simple-systems-manager-reference/","summary":"\u003cp\u003ePlease use this post as a landing page to get you started with using the EC2 Simple Systems Manager services from Amazon Web Services. Simple Systems Manager or (SSM) is a set of services used to manage EC2 instances as well as on-premises machines (known as managed instances) with the SSM agent installed on them. You can use these services to maintain state, run ad-hoc commands, and configure patch compliance among other things.\u003c/p\u003e","title":"AWS EC2 Simple Systems Manager Reference"},{"content":"Sometimes you need to ensure that things are always a certain way when you deploy AWS EC2 instances. This could be things like making sure your servers are always joined to a domain when being deployed, or making sure you run an Ansible playbook every hour. The point of the AWS EC2 SSM State Manager service is to define a consistent state for your EC2 instances.\nThis post will use a fictional use case where I have a an EC2 instance or instances that are checking every thirty minutes to see if they should use a new image for their Apache website. The instance will check against the EC2 Simple Systems Manager Parameter Store, which we\u0026rsquo;ve discussed in a previous post, and will download the image from the S3 location retrieved from that parameter.\nGet Started with State Manager To get started with the State Manager process click \u0026ldquo;State Manager\u0026rdquo; in the EC2 dashboard on the left hand side under the \u0026ldquo;System Manager Services\u0026rdquo; heading. You should see a familiar getting started screen if you\u0026rsquo;ve never used it before like the one below.\nNow if you haven\u0026rsquo;t been following along with the series, the first thing you\u0026rsquo;d want to do is to create an association document. This is an SSM document like the one we created in a previous post that copied our image from S3 to our Apache server. 
The link to how we setup that document can be found here: /2017/09/18/aws-ec2-simple-systems-manager-documents/\nIf you haven\u0026rsquo;t created a document before you\u0026rsquo;ll want to create that association document first, but since we\u0026rsquo;ve done that already during our previous post, we next need to create our association. Click the \u0026ldquo;Create an association\u0026rdquo; button to get started. The first thing you\u0026rsquo;ll need to enter is an association name. Be sure to make it something descriptive so you can understand later what the association is used for. The next thing you\u0026rsquo;ll do is select the association document. This will be the document that you created during the previous post on SSM Documents. I\u0026rsquo;ve selected the one named itHollowApache-Web1 from that post. If you have multiple revisions of that document you can specify which version should be run when the state manager executes. I\u0026rsquo;ve left the default for this example.\nAs you scroll down you\u0026rsquo;ll need to select the targets. Since I\u0026rsquo;m only using a single instance, i\u0026rsquo;m just manually selecting it here. To make this more useful for a dynamic environment you can specify a tag so that any new instances with that tag will automatically be associated with the state we\u0026rsquo;re creating. Probably a better choice for most environments, but this is an example.\nNext, we\u0026rsquo;ll specify a schedule. You can use a cron or rate scheduler to determine how often this job will run. I\u0026rsquo;ve selected every 30 minutes. So in effect, I\u0026rsquo;ll be copying an image from Amazon S3 every thirty minutes to my Apache server.\nNext, if your SSM Document had any parameters, you can specify them here as well. My parameter is just used for show so I\u0026rsquo;m leaving it blank. If you remember from the previous post I\u0026rsquo;m grabbing a parameter from the parameter store by running the SSM get-parameters command from within my EC2 instance. Lastly, you can choose to write the state association details to an S3 bucket, which I won\u0026rsquo;t for this example.\nThe Result What happens in the end is that I have my EC2 instance running Apache with a default web page on the left. With the quick change to the parameter in the parameter store, I can change the default image on my Apache instance(s) and they\u0026rsquo;ll automatically update their configurations within 30 minutes throughout my entire fleet of Apache instances.\nThe result is shown below where the site on the left would be image1 and the site on the right is image2. All updated by changing the parameter in the parameter store within SSM.\nSummary I\u0026rsquo;m sure that you can come up with tons of great ways to use this service and this post should serve as a very basic example. I\u0026rsquo;d think that this could be incredibly useful when leveraged with things like a configuration management tool like Ansible, or if you\u0026rsquo;d rather, create your own configuration management like scripts to keep your fleet from having too much drift. Good luck and happy coding!\n","permalink":"https://theithollow.com/2017/09/26/aws-ec2-systems-manager-state-manager/","summary":"\u003cp\u003eSometimes you need to ensure that things are always a certain way when you deploy AWS EC2 instances. This could be things like making sure your servers are always joined to a domain when being deployed, or making sure you run an Ansible playbook every hour. 
The point of the AWS EC2 SSM State Manager service is to define a consistent state for your EC2 instances.\u003c/p\u003e\n\u003cp\u003eThis post will use a fictional use case where I have a an EC2 instance or instances that are checking every thirty minutes to see if they should use a new image for their Apache website. The instance will check against the EC2 Simple Systems Manager Parameter Store, which we\u0026rsquo;ve discussed in a \u003ca href=\"/2017/09/11/ec2-systems-manager-parameter-store/\"\u003eprevious post\u003c/a\u003e, and will download the image from the S3 location retrieved from that parameter.\u003c/p\u003e","title":"AWS EC2 Systems Manager - State Manager"},{"content":"Amazon Web Services uses Systems Manager Documents to define actions that should be taken on your instances. This could be a wide variety of actions including updating the operating system, copying files such as logs to another destination or re-configuring your applications. These documents are written in Javascript Object Notation (JSON) and are stored within AWS for use with theother Simple Systems Manager (SSM) services such as the Automation Service or Run command.\nAmazon has created some SSM documents that you can use to get started operating your cloud such as running shell scripts or Powershell scripts. These documents are also used for things like patch compliance which we\u0026rsquo;ve covered before in another post.\nLet\u0026rsquo;s assume that we want to create our own documents. We may very well never need to do this, since the Powershell and shell scripts documents will let us do about anything we want, but in this example I\u0026rsquo;m going to create my own shellscript document. In this document I will embed my commands so that I don\u0026rsquo;t need to use the document with a shell parameter anymore. This new document that I\u0026rsquo;ll create will be self contained and will have a single purpose that will be repeated each time it\u0026rsquo;s executed.\nSpecifically I want to download an image from an S3 bucket to use for my web page for an apache server through the AWS Run Command.\nThe SSM documents have a few parts that can be edited for your own purposes. Each section should be reviewed.\nschemaVersion - This is the version of the SSM document you\u0026rsquo;d be using. Over time the documents may change so the version will be used to ensure it will work after new versions are released. Description - This is a useful section to remind people what the document is used for. Parameters - You can add parameters to these documents so the same document can be used with different inputs. mainSteps - This will be the actions that will be taken when the document is used. precondition - This is an option in version 2.2 or higher where you can specify both Windows and Linux commands. The precondition would be used to ensure that the proper commands were running on the proper operating system. I.E. Powershell runs on Windows and Bash runs on Linux. Note: This post assumes that the documents are version 2.X. 
These documents may change based on the versioning.\nThe example below is a custom document that I created for testing.\n{ \u0026#34;schemaVersion\u0026#34;:\u0026#34;2.2\u0026#34;, \u0026#34;description\u0026#34;:\u0026#34;Hollow World Web Document\u0026#34;, \u0026#34;parameters\u0026#34;:{\u0026#34;location\u0026#34;:{ \u0026#34;description\u0026#34;: \u0026#34;Image Location\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;default\u0026#34;:\u0026#34;\u0026#34; } }, \u0026#34;mainSteps\u0026#34;:[ { \u0026#34;action\u0026#34;:\u0026#34;aws:runShellScript\u0026#34;, \u0026#34;name\u0026#34;:\u0026#34;Configure_Apache\u0026#34;, \u0026#34;precondition\u0026#34;:{ \u0026#34;StringEquals\u0026#34;:[ \u0026#34;platformType\u0026#34;, \u0026#34;Linux\u0026#34; ] }, \u0026#34;inputs\u0026#34;:{ \u0026#34;runCommand\u0026#34;:[ \u0026#34;temp=$(aws --region=us-east-1 ssm get-parameters --names /hollowweb/image --query Parameters[0].Value)\u0026#34;, \u0026#34;image=$(echo $temp | sed \u0026#39;s/\\\u0026#34;//g\u0026#39;)\u0026#34;, \u0026#34;aws s3 cp $image /var/www/html/image001.png\u0026#34;, \u0026#34;service httpd restart\u0026#34; ] } } ] } I don\u0026rsquo;t want to focus too much on the schemaVersion or description which seem pretty straight forward.\nParameters Below that the document has a parameters section. For this particular example, I\u0026rsquo;m not using the parameters but we could use them to ask for input so I added a section just for reference. If this was really needed, we\u0026rsquo;d be asking for a location parameter of type string and the default value would be blank.\n\u0026#34;parameters\u0026#34;:{\u0026#34;location\u0026#34;:{ \u0026#34;description\u0026#34;: \u0026#34;Image Location\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;default\u0026#34;:\u0026#34;\u0026#34; } }, mainSteps The main steps is where the magic happens. You\u0026rsquo;ll notice that we have an action of type aws:runShellScript. This is used to tell SSM which type of commands we\u0026rsquo;d be running such as Powershell or Shell scripts. The next section is just the name of the commands we\u0026rsquo;re using and is like a description for you to reference. After that is the precondition where we ensure that the commands only run if the platform type is Linux. Since this is a shell script we don\u0026rsquo;t want to try to run it on Windows. We could add another precondition with different commands for Windows machines.\nThen under inputs: we\u0026rsquo;ll have the type of command it would be used for. In this case we\u0026rsquo;re using the runCommand and then we must list our commands in this section. 
You may notice that I\u0026rsquo;m running some commands that then get a parameter from the AWS SSM Parameter Store as we explained in the previous post on Parameter Store.\n\u0026#34;mainSteps\u0026#34;:[ { \u0026#34;action\u0026#34;:\u0026#34;aws:runShellScript\u0026#34;, \u0026#34;name\u0026#34;:\u0026#34;Configure_Apache\u0026#34;, \u0026#34;precondition\u0026#34;:{ \u0026#34;StringEquals\u0026#34;:[ \u0026#34;platformType\u0026#34;, \u0026#34;Linux\u0026#34; ] }, \u0026#34;inputs\u0026#34;:{ \u0026#34;runCommand\u0026#34;:[ \u0026#34;temp=$(aws --region=us-east-1 ssm get-parameters --names /hollowweb/image --query Parameters[0].Value)\u0026#34;, \u0026#34;image=$(echo $temp | sed \u0026#39;s/\\\u0026#34;//g\u0026#39;)\u0026#34;, \u0026#34;aws s3 cp $image /var/www/html/image001.png\u0026#34;, \u0026#34;service httpd restart\u0026#34; ] } } ] Add the Document to Systems Manager To create the document we\u0026rsquo;ll go to the EC2 Systems Console and find the \u0026ldquo;Documents\u0026rdquo; section under \u0026ldquo;Systems Manager Shared Resources\u0026rdquo;. Then click the \u0026ldquo;Create Document\u0026rdquo; button.\nAfter that give the document a name and paste in your json that we created earlier. When you\u0026rsquo;re done click the Create Document button.\nNOTE: As usual you can use the AWS cli to do the same thing. Specifically you\u0026rsquo;d use the aws sssm create-document command.\nSummary When you\u0026rsquo;re all done you\u0026rsquo;ll have an SSM document that can be used for future use with other Systems Manager Services.\n","permalink":"https://theithollow.com/2017/09/18/aws-ec2-simple-systems-manager-documents/","summary":"\u003cp\u003eAmazon Web Services uses Systems Manager Documents to define actions that should be taken on your instances. This could be a wide variety of actions including updating the operating system, copying files such as logs to another destination or re-configuring your applications. These documents are written in Javascript Object Notation (JSON) and are stored within AWS for use with theother Simple Systems Manager (SSM) services such as the Automation Service or Run command.\u003c/p\u003e","title":"AWS EC2 Simple Systems Manager Documents"},{"content":"Generally speaking, when you deploy infrastructure through code, or run deployment scripts you\u0026rsquo;ll need to have a certain amount of configuration data. Much of your code will have install routines but what about the configuration information that is specific to your environment? Things such as license keys, service accounts, passwords, or connection strings are commonly needed when connecting multiple services together. So how do you code that exactly? Do you pass the strings in at runtime as a parameter and then hope to remember those each time you execute code? Do you bake those strings into the code and then realize that you\u0026rsquo;ve got sensitive information stored in your deployment scripts?\nAmazon Web Service has a solution that can be used with several of the other EC2 Systems Manager services, called \u0026ldquo;Parameter Store\u0026rdquo;. If you\u0026rsquo;re an active reader you\u0026rsquo;ve probably already figured out what parameter store does, but if not, here goes. Parameter store lets you store these sensitive strings in a centralized location for your code to reference. This way, your sensitive data can be referenced and stored in a central location and your code references that location. 
Now you can ensure that the code itself doesn\u0026rsquo;t require hard coding these values in, and also gives you flexibility to update parameters in a single place for use throughout your environment. Think about what would happen if you needed to change a password when that password might be hard coded into dozens of scripts scattered through your code repo.\nGetting Started with EC2 Systems Manager Parameter Store Using Parameter store is very simple to get set up. How you use it could range from very simple to very complex but in this example we\u0026rsquo;ll show a quick way to use the store and your imagination will have to provide the best way to use it for your organization.\nFrom within the Amazon EC2 console look for the Parameter Store hyperlink. If its the first time you\u0026rsquo;ve used parameter store, you\u0026rsquo;ll see a familiar screen from AWS that gives you some information about the service and a \u0026ldquo;Get Started Now\u0026rdquo; button. Click that button and let\u0026rsquo;s get started.\nNow we can create our first parameter. In this case we\u0026rsquo;ll make believe that this is a service account password for use in my lab. We\u0026rsquo;ll first give it a name which I\u0026rsquo;ve called /hollowlab/Example. If you\u0026rsquo;re wondering about the slashes in that name, they\u0026rsquo;re used as hierarchies. If you have to manage a giant list of parameters all in a flat list it might be too cumbersome to sort through. A better way might be to organize these into hierarchies (think of a folder structure) so you can group parameters. Maybe you\u0026rsquo;d do this by department, division or application version or environment? Again, the complexities we\u0026rsquo;ll leave up to you. For now I\u0026rsquo;ve got a root hierarchy of hollowlab and a parameter named \u0026ldquo;Example\u0026rdquo;.\nAfter creating the name give it a description. This probably doesn\u0026rsquo;t need much explanation but here\u0026rsquo;s some advice. GIVE IT A GOOD DESCRIPTION! You\u0026rsquo;ll undoubtedly need to go back and figure out what these parameters are used for. A good description might save your bacon.\nAfter this, select the parameter type. I\u0026rsquo;ve selected a basic string here, but if you\u0026rsquo;ve got a sensitive password, it might be a better idea to use a \u0026ldquo;Secure String\u0026rdquo; to obfuscate the actual value from your users. After all the password should be secret right?\nLastly, enter in the value(s) of the parameter and click the \u0026ldquo;Create Parameter\u0026rdquo; button.\nNow you\u0026rsquo;ve got a parameter stored in the service and are ready to either create additional parameters or to start using that parameter in your code.\nHow to Access Your Parameters There are several ways in which to access the parameter that we\u0026rsquo;ve just created. You can use the other EC2 Systems Manager services such as Run Command, State Manager and Automation, or other services such as AWS Lambda or AWS EC2 Container service. In this example we\u0026rsquo;ll just use a familiar service such as Run Command to see if we can access that parameter successfully.\nOpen up the Run Command service from the EC2 console and create a new command to execute. I\u0026rsquo;m running my command on a Linux host deployed in EC2 with the EC2SystemsManager role as described in this post. 
Since it\u0026rsquo;s a Linux machine I\u0026rsquo;ll execute a shell script, but you could also do this from a PowerShell script if your partial to Windows for your Operating System.\nThe next step is to select which instances we\u0026rsquo;ll be executing our command on. I\u0026rsquo;ve selected my Linux instance with the SSM Agent and role installed. After that comes the critical piece, the command. In the commands box, we\u0026rsquo;ll enter \u0026ldquo;echo {{ssm:/hierarchy/ParameterName}}\u0026rdquo; which will simply print out what the parameter value is. In my example I\u0026rsquo;ve used \u0026ldquo;echo {{ssm:/hollowlab/Example}}. Now clearly this is a silly exercise because all it does is print it to a screen, but should give you the idea about how it can be leveraged for those really important scripts that you\u0026rsquo;re dreaming up as you read this post.\nWhen you\u0026rsquo;re ready run the command.\nYou\u0026rsquo;ll see your command run in the \u0026ldquo;Run Command\u0026rdquo; console within EC2 and then you\u0026rsquo;ll see a link that shows \u0026ldquo;View Output\u0026rdquo;. If you click that link we should see what happened when the command ran in our Linux instance.\nAnd as we\u0026rsquo;d hoped, the fictitious password was output to the screen.\nSummary EC2 Systems Manager Parameter Store may serve as a critical piece to your provisioning or operational process for managing your infrastructure as code. It can serve as a central repository with KMS encrypted values for you to store critical configuration information for your environment so that you don\u0026rsquo;t need to pass that information out to people who are trying to write deployment scripts. I hope to hear in the comments how you\u0026rsquo;ve decided to use Parameter Store and we\u0026rsquo;ll use it ourselves in future posts about EC2 Systems Manger. Thanks for reading and happy coding!\n","permalink":"https://theithollow.com/2017/09/11/ec2-systems-manager-parameter-store/","summary":"\u003cp\u003eGenerally speaking, when you deploy infrastructure through code, or run deployment scripts you\u0026rsquo;ll need to have a certain amount of configuration data. Much of your code will have install routines but what about the configuration information that is specific to your environment? Things such as license keys, service accounts, passwords, or connection strings are commonly needed when connecting multiple services together. So how do you code that exactly? Do you pass the strings in at runtime as a parameter and then hope to remember those each time you execute code? Do you bake those strings into the code and then realize that you\u0026rsquo;ve got sensitive information stored in your deployment scripts?\u003c/p\u003e","title":"EC2 Systems Manager Parameter Store"},{"content":"We focus a lot of time talking about public cloud and provisioning. Infrastructure as code has changed the way in which we can deploy our workloads and how our teams are structured. We\u0026rsquo;re even allowing other teams to deploy their own workloads through our cloud management portals. But some things haven\u0026rsquo;t changed all that much.\nWhen I mention ServiceNow the first things that come to your mind are probably \u0026ldquo;Change Ticketing\u0026rdquo;, \u0026ldquo;CMDB\u0026rdquo;, or \u0026ldquo;Asset Management\u0026rdquo;. While ServiceNow certainly does all of those things, the real purpose of ServiceNow is to streamline operations. 
Many people who work in the enterprise probably think of ServiceNow as something that just gets in their way. No one wants to stop what they\u0026rsquo;re doing to enter a change ticket, wait for an approval or update a configuration item once deploying new servers, it\u0026rsquo;s a pain. But ServiceNow really is meant to speed up the operations process.\nThink of it this way. Without a central management point operations just becomes a tangled web of phone calls, emails, instant messages, meetings and relative confusion. In a big company, if you needed to find out how to request a server, a laptop, a phone, or just need to get information sent up the chain, it may be really confusing. How do you know who to call for a request? How do you know that the request is being worked on or didn\u0026rsquo;t get lost in someone\u0026rsquo;s email?\nServiceNow allows you to streamline these confusing processes by using automated workflows to push requests to the right place and ensure it\u0026rsquo;s completed correctly. Think of the ServiceNow platform as your proxy to get work done. Everyone can make requests for services thorough the portal and find out what work they need to do through the same portal. It is the hub for not only ITSM but business needs in general.\nServiceNow presented during Cloud Field Day 2 out in Silicon Valley and explained how they were helping companies with these problems. Their platform allows people who aren\u0026rsquo;t even coders to build workflows and applications to help drive business goals.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Cloud Field Day 2.\nWhile the Information Technology department may commonly be the primary users of ServiceNow, once the platform is put into use, other teams may see great value from the platform. Think of the following scenario. Your IT department has just started using ServiceNow to manage ticketing, assets and the cloud management modules to automate their server provisioning. The IT team has seen great gains in the efficiencies from these workflows and decide that they might want to talk to the human resources department about their needs. They build a simple workflow to manage the on-boarding of new hires where the HR person enters in some information about a new hire and the following routines kick off in the platform:\nNew User is added to Active Directory with their contact info and managers updated A laptop and/or a virtual desktop are ordered or provisioned A company wide email goes out introducing the new hire Payroll gets an email about the new hire\u0026rsquo;s compensation plan The benefits package for the new hire is sent to his/her email Tons of other things might happen in that workflow, but it\u0026rsquo;s a lot better than the HR person calling IT, opening tickets, emailing payroll, gathering benefit info etc.\nThe HR team may spread the news about your easy to use process and the efficiencies they\u0026rsquo;ve achieved. From there, other business groups may be clamoring for some of the automated gains your team has been able to provide. ServiceNow has become much more than an ITSM platform, but rather a one-stop-shop for getting work done.\nSummary ServiceNow is more than just an ITSM tool these days. It\u0026rsquo;s a business tool. When asked during the CFD2 session about who ServiceNow\u0026rsquo;s competitors really are, the answer wasn\u0026rsquo;t another company like BMC or IBM. 
The answer was \u0026ldquo;unstructured workflows.\u0026rdquo; If your company is going to embrace automation, then spending money on a platform such as ServiceNow might be well worth the investment.\n","permalink":"https://theithollow.com/2017/09/05/servicenow-streamlines-operations/","summary":"\u003cp\u003eWe focus a lot of time talking about public cloud and provisioning. Infrastructure as code has changed the way in which we can deploy our workloads and how our teams are structured. We\u0026rsquo;re even allowing other teams to deploy their own workloads through our cloud management portals. But some things haven\u0026rsquo;t changed all that much.\u003c/p\u003e\n\u003cp\u003eWhen I mention ServiceNow the first things that come to your mind are probably \u0026ldquo;Change Ticketing\u0026rdquo;, \u0026ldquo;CMDB\u0026rdquo;, or \u0026ldquo;Asset Management\u0026rdquo;. While ServiceNow certainly does all of those things, the real purpose of ServiceNow is to streamline operations. Many people who work in the enterprise probably think of ServiceNow as something that just gets in their way. No one wants to stop what they\u0026rsquo;re doing to enter a change ticket, wait for an approval or update a configuration item once deploying new servers, it\u0026rsquo;s a pain. But ServiceNow really is meant to speed up the operations process.\u003c/p\u003e","title":"ServiceNow Streamlines Operations"},{"content":"Recently, I was fortunate enough to attend Cloud Field Day 2, out in Silicon Valley. Cloud Field Day 2 brought a group of industry thought leaders together to speak with companies about their cloud products and stories. I was a little surprised to hear a recurring theme from some of the product vendors, which was: customers being so worried about being trapped by a public cloud vendor.\nIs It True? Based on my cloud consulting job, I can say that yes, many times customers are a bit worried about being locked in by a public cloud vendor. But most times this isn\u0026rsquo;t a crippling fear of being locked in, just a concern that they\u0026rsquo;d like to mitigate against if possible. But it\u0026rsquo;s like most things in the industry: you pick a valued partner and move forward with a strategy that makes sense for the business based on the information you know right now and a bet against the future. When virtualization was a new thing, I don\u0026rsquo;t recall many conversations about making sure that both vSphere and Hyper-V were in use in the data center so that lock-in could be prevented. We picked the partner that we saw had the most promise, capabilities, and price and built our solutions on top of those technologies. It\u0026rsquo;s still like that today, where you\u0026rsquo;ll pick a hardware vendor and attempt to avoid having multiple vendors because it increases the complexity of your services. You wouldn\u0026rsquo;t want to hire more people so that you can support two platforms; you\u0026rsquo;d want to hire the right employees to operate your corporate vision.\nNow, with all of that being said, public clouds might be slightly different because the prices can change quickly and a new bill is received every month. Also, moving off of a service means migrating data and instances much like a data center migration would. So I understand the fear of picking a vendor and pushing forward, but there are benefits of this strategy as well.\nWhat Do I Miss Out On By Not Locking In? 
Consider the following things that you might miss out on by not locking in with a single cloud vendor.\nPrice - Many times you can get price drops by adding volume. If you\u0026rsquo;ve got half of your workloads in a single public cloud vendor and half in another, you\u0026rsquo;ve got lower volume and may miss out on pricing discounts. Capabilities - Some public cloud vendors have capabilities that others don\u0026rsquo;t. If you have decided that all of your workloads must be able to live in any cloud, you must only use the lowest common denominator of services. This means you miss out on some of the best stuff public clouds can offer. Operational Simplicity - How do I deploy my workloads across two different cloud providers? Do I need two different processes coupled with two different sets of administrators who are skilled in such tasks? Do I need to have a single cloud management portal such as VMware vRealize Automation, RightScale or ServiceNow to sit in front of my clouds and deploy workloads for me, and if I do that, am I locking in with that vendor? Wouldn\u0026rsquo;t it be simpler to do your research and pick a public cloud vendor that you believe will be the best fit for your organization and then spend your time optimizing your environment to work really well with it?\nWhat Did Cloud Field Day 2 Showcase? During the CFD2 sessions three different vendors pitched the delegates on the value of not-locking in with a single public cloud vendor.\nScality - Scality was promoting the idea that companies really like and want to use object storage based on Amazon\u0026rsquo;s S3 API which seems to be the defacto standard. But, without a real standard and with the concerns that some companies might want to keep their data on-prem or in Azure, they\u0026rsquo;ve released their \u0026ldquo;Zenko\u0026rdquo; product. Zenko is a solution that provides the S3 interface and lets you put your own storage solutions behind it, giving you the portability of your own S3 object storage.\nPlatform 9 - Platform 9 was pitching the idea that customers love AWS Lambda but are too afraid of vendor lock-in to really use it. So, they\u0026rsquo;re introducing a service called \u0026ldquo;Fission\u0026rdquo; that will spin up a container on-prem to execute your Lambda-like code. This isn\u0026rsquo;t the first time Platform 9 has suggested this type of model. Their other product leverages the Openstack API to be the front end for multiple clouds.\nNirmata - Nirmata had a similar pitch which was to have a single platform to orchestrate containers across multiple clouds. This one is a little different in my opinion since orchestrating containers in multiple locations is a great way to provide availability and portability. But again, this is used when Kubernetes could be used on other cloud providers instead of locking in with a product.\nWhat Do You Think? Mutli-cloud and hybrid cloud is a subject that is often debated but are the trade offs worth the extra complexity? Are you trading a public cloud vendor lock-in for a product vendor lock-in? What is the right method of handling these decisions? I\u0026rsquo;d love to hear your feedback in the comments.\n","permalink":"https://theithollow.com/2017/08/22/really-concerned-public-cloud-vendor-lock/","summary":"\u003cp\u003eRecently, I was fortunate enough to attend \u003ca href=\"http://techfieldday.com/event/cfd2/\"\u003eCloud Field Day 2\u003c/a\u003e, out in Silicon Valley. 
Cloud Field Day 2 brought a group of industry thought leaders together to speak with companies about their cloud products and stories. I was a little surprised to hear a reoccurring theme from some of the product vendors, which was: customers being so worried about being trapped by a public cloud vendor.\u003c/p\u003e\n\u003ch1 id=\"is-it-true\"\u003eIs It True?\u003c/h1\u003e\n\u003cp\u003eBased on my cloud consulting job, I can say that yes, many times customers are a bit worried about being locked in by a public cloud vendor. But most times this isn\u0026rsquo;t a crippling fear of being locked in, just a concern that they\u0026rsquo;d like to mitigate against if possible. But it\u0026rsquo;s like most things in the industry, you pick a valued partner and move forward with a strategy that makes sense for the business based on the information you know right now and a bet against the future. When virtualization was a new thing, I don\u0026rsquo;t recall that many conversations about making sure that both vSphere and Hyper-V were both in use in the data center so that lock-in could be prevented. We picked the partner that we saw had the most promise, capabilities, and price and built our solutions on top of those technologies. It\u0026rsquo;s still like that today, where you\u0026rsquo;ll pick a hardware vendor and attempt to prevent having multiple vendors because it increases the complexity of your services. You wouldn\u0026rsquo;t want to hire more people so that you can support two platforms, you\u0026rsquo;d want to hire the right employees to operate your corporate vision.\u003c/p\u003e","title":"Are We Really Concerned with Public Cloud Vendor Lock-in?"},{"content":"It is a pretty fair assumption that the Netapp company that you\u0026rsquo;re currently familiar with will be a much different company within the next five years. I say this because there isn\u0026rsquo;t much of a choice for anything else.\nWhere is Netapp? When I say Netapp, my guess is the first thing that you think about is a good ole\u0026rsquo; storage array that’s been sitting in a data center. Netapp has been around for a pretty long time, and pre-dates virtualization. The storage array has had a pretty good run in the data center and provides all the capabilities that enterprises have been looking for in a storage array. The write anywhere file layout (WAFL) introduced a very performant file system and RAID DP (Dual Parity) are part of the legacy of Netapp. Unfortunately, the legacy of Netapp has started to make them feel like a \u0026ldquo;legacy\u0026rdquo; company over the past few years.\nJust like many storage companies, Netapp is trying to stay relevant in a new cloud world. With public cloud services from Amazon Web Services and Microsoft Azure providing on-demand infrastructure resources, those storage arrays are becoming a thing of the past. How can a storage array company continue to stay afloat in a world where customers don\u0026rsquo;t want to have hardware to manage?\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Cloud Field Day 2.\nWhere is Netapp Headed? Admitting that you have a problem is the first step in the right direction. Some companies seems to stick their head in the sand and assume that there will always be company with an on-premises data center in which they can sell storage to. So what does Netapp do now that they admit they\u0026rsquo;re in a bad place?\nOne of the first steps that Netapp did was to hire Anthony Lye to run their new Cloud Business Unit. 
Mr. Lye has previously served as Senior Vice President for Oracle, Senior Director Major Accounts at Remedy Corporation and Product Marketing Manager for Tivoli among his other endeavors. His focus is on building up this new Cloud Business Unit so that they are more agile and focusing on data management strategies.\nMr Lye has his work cut out for him at Netapp, but the process has already started. Netapp\u0026rsquo;s old, yearly product releases are becoming much faster so that they can be relevant in a cloud world. The Cloud BU has released a few products already and is now focusing on data management. Products such as Cloud Sync, AltaVault, NPS are already rolling out for use within AWS. Questions like, \u0026ldquo;How do I move data around between clouds?\u0026rdquo; and \u0026ldquo;What kind of analytics can I get about our cloud data?\u0026rdquo; are topics that Netapp is trying to tackle these days.\nNetapp started with the easy stuff like adding their Data OnTap storage array operating system to a virtual cloud appliance and adding it to the AWS marketplace. That seems like a pretty expected move, their other moves will take a little more work.\nWill the New Strategy Work? Re-inventing a company is incredibly difficult but reinventing a public company with billions in yearly revenue for an aging technology is even harder. Netapp has stock holders and they aren\u0026rsquo;t going to settle for Netapp abandoning their bread and butter solution to be a cloud company. They can\u0026rsquo;t give up on their existing customer base and have to continue to offer their products and services to them while also diving into the new cloud endeavors.\nMy take is that this is a bit of a long shot but as Dave Hitz, Founder of Netapp, reminded the delegates from Cloud Field Day 2, Netapp has been through transformational change in the past and came out doing very well on the other side. When virtualization changed the way data centers operated, Netapp jumped in with both feet and provided a solid solution for use with vSphere. Mr. Hitz made sure to convey that he knows the company has work to do, but they are up to the challenge.\nA company primarily focused on hardware will need to revamp not only it\u0026rsquo;s product line, but also the way their sales teams are compensated and their go to market strategy. Marketing will also have their work cutout for them to make sure that when people think of Netapp, they don\u0026rsquo;t think of a box sitting in a data center and they start thinking of a data management company.\nWill they re-invent themselves or will they eek out their existence as a data center storage company? Who knows, but this much is pretty evident: Netapp will be a different company within the next five years.\n","permalink":"https://theithollow.com/2017/08/15/netapp-at-a-crossroads/","summary":"\u003cp\u003eIt is a pretty fair assumption that the Netapp company that you\u0026rsquo;re currently familiar with will be a much different company within the next five years. 
I say this because there isn\u0026rsquo;t much of a choice for anything else.\u003c/p\u003e\n\u003ch1 id=\"where-is-netapp\"\u003e\u003cstrong\u003eWhere is Netapp?\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2017/07/IMG_2583.jpg\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2017/07/IMG_2583-300x300.jpg\"\u003e\u003c/a\u003e When I say \u003ca href=\"http://netapp.com\"\u003eNetapp\u003c/a\u003e, my guess is the first thing that you think about is a good ole\u0026rsquo; storage array that’s been sitting in a data center. Netapp has been around for a pretty long time, and pre-dates virtualization. The storage array has had a pretty good run in the data center and provides all the capabilities that enterprises have been looking for in a storage array. The write anywhere  file layout (WAFL) introduced a very performant file system and RAID DP (Dual Parity) are part of the legacy of Netapp. Unfortunately, the legacy of Netapp has started to make them feel like a \u0026ldquo;legacy\u0026rdquo; company over the past few years.\u003c/p\u003e","title":"NetApp at a Crossroads"},{"content":"In today\u0026rsquo;s world, if you can get an Internet connection, you can go anywhere and connect to any service that is publicly available. No restrictions are imposed and you can use the entire amount of bandwidth you purchased from your Internet service provider. This is the world under Net Neutrality. To illustrate this point further take the following example.\nIf you purchase a 25Mbps circuit from Comcast or AT\u0026amp;T, you can use all of that bandwidth, assuming the service on the other end is also providing 25Mbps or better.\nThe diagram below shows an incredibly simple diagram of how a request over the Internet works. On the left, a requestor such as a person surfing the web is trying to download content from a public server on the right. Each of them have different amounts of bandwidth so the slowest connection in the chain determines the download speed.\nThis is how the Internet currently works in its most simplistic form under Net Neutrality. But what if we get rid of Net Neutrality like some people are lobbying for, including the head of the FCC? I\u0026rsquo;m glad you asked. If the Internet is no longer neutral, the people who provide Internet access such as Verizon, could artificially limit traffic between two points. In our very basic example, we can see that the requester and public servers still have the same amount of bandwidth but the service provider can limit the bandwidth to a smaller amount for about any reason they choose.\nWhy would a telecom service want to artificially limit the speeds to certain site? Well, for one thing, they might be competitors. What if, hypothetically of course, your ISP was AT\u0026amp;T and they just limited your connections to one of their competitors like T-mobile. It might give you the impression that the T-mobile experience just wasn\u0026rsquo;t that good which might prevent you from trying out their services.\nIn another example, lets use public cloud as an example. Assume your ISP wants AWS and Azure to pay them money just to provide them unrestricted access to their services. One of the cloud vendors might refuse to pay that fee and the other might gladly pay them to keep them from preventing performance bottlenecks.\nIn this example, assume that AWS didn\u0026rsquo;t pay the ISP\u0026rsquo;s fees while Azure did. 
( Again, all of this is hypothetical and I\u0026rsquo;m not saying that this is what will happen, but it could without Net Neutrality)\nThe result, in our example, is that AWS is restricted to 5Mbps of bandwidth while Azure gets to use all 40Mbps of the bandwidth they\u0026rsquo;ve purchased. How does this affect the requester of those services? The requester in the last example will think that Azure performs better than AWS. Even if the requester knows the real reason why performance isn\u0026rsquo;t as good, they need to be able to access their cloud services quickly so they would be forced to use Azure to provide the experience that they need.\nNow that our requester has selected Azure due to their higher performance, they\u0026rsquo;ll find that the prices Azure is charging may be significantly higher than AWS. This is because Azure is paying the Internet Service Provider a premium to keep them from throttling their Internet bandwidth. That cost has to be recouped so they have to pass that cost on to the consumer. In the end, the cloud providers provide either higher priced services or worse performing services and the customers are the ones who really suffer.\nWill this Happen? This is very speculative at this point. Net Neutrality is still in place (for now) and all Internet traffic is open. Even if Net Neutrality goes away, it doesn\u0026rsquo;t mean that Internet Service Providers will start threatening providers just to make a buck \u0026hellip; but they could. With the vast amount of resources being pushed into public clouds, they would be prime targets for this sort of extortion. If I\u0026rsquo;ve thought of this, then you can bet the big ISPs will at least consider it, even if they ultimately decide to keep things fair.\nIt should be pointed out that the chairmen of the Federal Communications Commission, the US Government arm responsible for regulating the Internet, believes that the current Internet regulations are too burdensome and stifle innovation. The fix, according to Ajit Pai, is to remove the Internet regulations by scrapping Net Neutrality. Source\nMy Take There is no way to tell if Net Neutrality will be removed, and even if it is, will the Internet Service Providers exploit it to make a profit. My position is this though, the Internet should be open and available to anyone who can get connected to it. Artificially controlling what traffic is fast, slow, or doesn\u0026rsquo;t work shouldn\u0026rsquo;t be in the hands of the companies providing access to it. It wouldn\u0026rsquo;t make sense to buy a phone line but not be allowed to call my grandparents because the phone carrier didn\u0026rsquo;t want me to do so, and the same should go for the Internet.\nMaybe Net Neutrality does stifle the economy, that is probably a bigger question than I can answer, but it seems to me that all of those companies that depend on the Internet, especially cloud service providers, would feel significant financial pressure in a world without Net Neutrality. Wouldn\u0026rsquo;t that also stifle the economy?\nWhat do you think? I\u0026rsquo;d love to hear your position in the comments.\n","permalink":"https://theithollow.com/2017/08/07/will-killing-net-neutrality-end-public-cloud/","summary":"\u003cp\u003eIn today\u0026rsquo;s world, if you can get an Internet connection, you can go anywhere and connect to any service that is publicly available. No restrictions are imposed and you can use the entire amount of bandwidth you purchased from your Internet service provider. 
This is the world under Net Neutrality. To illustrate this point further take the following example.\u003c/p\u003e\n\u003cp\u003eIf you purchase a 25Mbps circuit from Comcast or AT\u0026amp;T, you can use all of that bandwidth, assuming the service on the other end is also providing 25Mbps or better.\u003c/p\u003e","title":"Will Killing Net Neutrality End the Public Cloud?"},{"content":"Startup Funding - An Example with Rubrik Is Rubrik Really a Cloud Solution??? Rubrik\u0026rsquo;s Doing All the Boring Enterprise Backup Stuff Your Rubrik for the Cloud Rubrik Announces Firefly Rubrik Gets Serious about Security Building Rubrik vRO Workflow\nGetting Started with vRealize Orchestrator and Rubrik\u0026rsquo;s REST API\nvRealize Orchestrator REST Hosts and Operations for Rubrik\nRubrik API Logins through vRealize Orchestrator\nGet Rubrik VM through vRealize Orchestrator\nAssign a VM to a Rubrik slaDomain\nvRO Packages for Rubrik A New Standard for Backups - Rubrik\n","permalink":"https://theithollow.com/rubrik-posts/","summary":"\u003cp\u003e\u003cstrong\u003e\u003ca href=\"http://gestaltit.com/tech-talks/rubrik/eric_shanks/startup-funding-example-rubrik/\"\u003eStartup Funding - An Example with Rubrik\u003c/a\u003e\u003c/strong\u003e \u003cstrong\u003e\u003ca href=\"http://gestaltit.com/tech-talks/rubrik/eric_shanks/rubrik-really-cloud-solution/\"\u003eIs Rubrik Really a Cloud Solution???\u003c/a\u003e\u003c/strong\u003e \u003cstrong\u003e\u003ca href=\"http://gestaltit.com/tech-talks/rubrik/eric_shanks/rubriks-boring-enterprise-backup-stuff/\"\u003eRubrik\u0026rsquo;s Doing All the Boring Enterprise Backup Stuff\u003c/a\u003e\u003c/strong\u003e \u003cstrong\u003e\u003ca href=\"/2017/04/25/your-rubrik-for-the-cloud/\"\u003eYour Rubrik for the Cloud\u003c/a\u003e\u003c/strong\u003e \u003cstrong\u003e\u003ca href=\"/2016/08/16/rubrik-announces-firefly/\"\u003eRubrik Announces Firefly\u003c/a\u003e\u003c/strong\u003e \u003cstrong\u003e\u003ca href=\"/2016/04/26/rubrik-gets-serious-security/\"\u003eRubrik Gets Serious about Security\u003c/a\u003e\u003c/strong\u003e \u003cstrong\u003eBuilding Rubrik vRO Workflow\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003e\u003ca href=\"/2015/08/25/getting-started-with-vrealize-orchestrator-and-rubriks-rest-api-2/\"\u003eGetting Started with vRealize Orchestrator and Rubrik\u0026rsquo;s REST API\u003c/a\u003e\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003e\u003ca href=\"/2015/08/27/vrealize-orchestrator-rest-hosts-and-operations-for-rubrik/\"\u003evRealize Orchestrator REST Hosts and Operations for Rubrik\u003c/a\u003e\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003e\u003ca href=\"/2015/09/08/rubrik-api-logins-through-vrealize-orchestrator/\"\u003eRubrik API Logins through vRealize Orchestrator\u003c/a\u003e\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003e\u003ca href=\"/2015/09/10/get-rubrik-vm-through-vrealize-orchestrator/\"\u003eGet Rubrik VM through vRealize Orchestrator\u003c/a\u003e\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003e\u003ca href=\"/2015/09/14/assign-a-vm-to-a-rubrik-sladomain/\"\u003eAssign a VM to a Rubrik slaDomain\u003c/a\u003e\u003c/strong\u003e\u003c/p\u003e","title":"Rubrik Posts"},{"content":"\nHPE recently announced that they were getting deeper into the cloud game bin introducing their Nimble Cloud Volumes (NCV) solution. 
Now while this sounds a lot like a storage array function, it\u0026rsquo;s really its own separate cloud that is focused only on storage. The idea behind it is that storage in both AWS and Azure isn\u0026rsquo;t great for enterprises and they want a better option to connect to their EC2 instances or Azure VMs.\nThe solution provides a new storage data center that is in close proximity (Close proximity currently equates to single digit millisecond latency over their iSCSI connection) to both Amazon and Azure data centers in the US East and US West regions. HPE will allow you to create a volume based on a Nimble storage array of up to 16TB and 50,000 IOPS and bill you accordingly for the usage. The volumes are billed on a monthly, pay-as-you-go basis so you can get rid of your volumes any time you wish. These volumes can then be attached to your cloud instances through a secured iSCSI connection.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Cloud Field Day 2. In addition, HPE provided a gift to all delegates but no expectations were given about the content of this blog post or other social media.\nShould I Be Using Cloud Volumes OK, HPE is making an investment into a storage cloud which is probably pretty expensive endeavor. Those are really HPE\u0026rsquo;s problems to deal with, we need to be asking ourselves whether or not we should be taking advantage of these new options that HPE is providing as a service.\nPros: Familiarity to Data Center Administrators - Creating a simple volume and attaching it to a virtual machine makes us think about the ways we\u0026rsquo;ve managed vSphere instances for many years. This is similar to creating a LUN and connecting it to a server like we did for years before cloud was a thing. Data Migration - If you\u0026rsquo;ve got Nimble storage arrays on premises you can easily move your data into the Nimble Cloud Volumes through native replication and you can move your data between the two NCV regions. The fact that the storage is close to Azure and AWS means that you can use both clouds independently and store the data they generate in a third cloud. Copy Data Management - Enterprise storage services are readily available since it\u0026rsquo;s on an enterprise array. If we want to take very fast snapshots, deduplicate data, mount copies etc. we can do all of that neat stuff. These storage arrays can do it a bit better than the cloud vendors. Cons: iSCSI egress - Even assuming you\u0026rsquo;re perfectly happy with iSCSI, cloud vendors charge for egress traffic. It\u0026rsquo;s one thing to pay for outgoing network traffic, but now all of the storage traffic will also be considered egress traffic that your cloud vendor will charge you for. HPE admitted that customers should expect a 20% tax on there NCVs to account for that traffic. It\u0026rsquo;s Not Cloud - Maybe my biggest concern with this is that it goes against many of the tenets of cloud. Building volumes and attaching them to servers is something we\u0026rsquo;ve tried to get away from for the public cloud. Data management is a bit more tricky in the public cloud, but this seems much more like a legacy way of managing data and kicks the can down the road for the way we would re-architect our apps. No APIs yet - As of right now, there are no APIs to create and attach the new volumes. 
These volumes have to be manually created through the NCV interface and then someone would need to login to the instances and add the iscsi volume through a script provided by the NCV interface. HPE did say the APIs will be avialable in the next few months, but no APIs screams \u0026quot; NOT CLOUD!\u0026quot; when I hear it. Poor Availability Options - As of right now Nimble Cloud Volumes only has a single data center in each of the regions. A regional issue such as fibre cut, power outage, Sharknado or whatever could potentially take out your storage array. On top of that you\u0026rsquo;ve be tied to the US-East-1 or US-West-2 regions to take advantage of the low latency of the cloud volumes. My Take It\u0026rsquo;s really interesting that HPE is making this play. Enterprise customers have really struggled to get into the cloud and lift and shift is pretty common if they want to quickly get into the cloud. Application refactoring typically takes a lot of time unless the company was built with a cloud native mentality. I think HPE might make some money in short run with this, but it won\u0026rsquo;t be a major use case for a long time. Eventually enterprises will change their applications to take advantage of the cloud native features. HPE has plenty of capital to throw at the problem but this seems like an expensive bet on a storage cloud for now. Time will tell as always.\n","permalink":"https://theithollow.com/2017/08/01/hpe-built-another-cloud-storage-time/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2017/07/CloudVolumes1.png\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2017/07/CloudVolumes1.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eHPE \u003ca href=\"https://www.nimblestorage.com/blog/nimble-cloud-volumes-an-industry-first/\"\u003erecently announced\u003c/a\u003e that they were getting deeper into the cloud game bin introducing their Nimble Cloud Volumes (NCV) solution. Now while this sounds a lot like a storage array function, it\u0026rsquo;s really its own separate cloud that is focused only on storage. The idea behind it is that storage in both AWS and Azure isn\u0026rsquo;t great for enterprises and they want a better option to connect to their EC2 instances or Azure VMs.\u003c/p\u003e","title":"HPE Built Another Cloud - Storage This Time"},{"content":"Once a year the Discovery channel has a Shark-a-palooza around all things sharks. Silly contests like having Olympic swimmers race CGI sharks, Dirty Jobs hosts doing gross stuff with sharks, people busting shark myths\u0026hellip; you get the picture. It\u0026rsquo;s one of my favorite weeks because you can learn stuff about cool animals and there is something to watch on TV during the summer.\nBut this year I gave up much of my Shark Week for a good cause. The folks over at GestaltIT were having their Cloud Field Day 2 in Silicon Valley and they were gracious enough to invite me to join them as a delegate.\nIf you\u0026rsquo;re not familiar with the tech field days, it\u0026rsquo;s run by Steven Foskett and his great crew where they get 12 delegates together, with several product vendors, and they discuss solutions. During the day a product vendor will give give presentations about their value propositions and deep dive on their technologies. 
The delegates are there to ask questions about how things work, why customers should care, and represent viewers at home who are watching the live streams or recordings from previous field days. It\u0026rsquo;s three packed days full of these sessions and it can be pretty exhausting to talk that much technology. It\u0026rsquo;s drinking from a fire hose of information for three days and then writing some blog posts and live tweeting around it as well.\nBut getting the opportunity to meet with product teams is not the best part of Tech Field Day. It\u0026rsquo;s all the things that are done outside of the event that the real treasure. Between sessions the delegates are shuttled to a different office in a stretch limo and at night are able to relax and talk with each other about their own experiences. These interactions are the best part of these events. You chat with fellow delegates, who are all brilliant, and come from varying different places in their careers with different skill sets and from different parts of the globe. So many different perspectives on how things should work and how they\u0026rsquo;ve approached IT challenges in the past makes the event an amazing experience. Stephen Foskett has said before that Tech Field Day can\u0026rsquo;t be done over the Internet and has to be done in person and he\u0026rsquo;s absolutely nailed it there. Bringing these different personalities together is what makes it work and the value you gain from relating to one another is critical to the events. And it\u0026rsquo;s the thing you take away from it as a delegate.\nI\u0026rsquo;ve done several field days now and each time I meet new friends and keep in touch with many of them later on. The personal connections you gain are worth missing a week of shark television, and if it weren\u0026rsquo;t for missing my family so much, I\u0026rsquo;d want to do it as much as I could.\nTo the Tech Field Day team, thank you for asking me to represent the viewers as a delegate and thank you for the opportunity to make these new connections.\n","permalink":"https://theithollow.com/2017/07/31/whats-worth-interrupting-shark-week-cfd2/","summary":"\u003cp\u003eOnce a year the Discovery channel has a Shark-a-palooza around all things sharks. Silly contests like having Olympic swimmers race CGI sharks, Dirty Jobs hosts doing gross stuff with sharks, people busting shark myths\u0026hellip; you get the picture. It\u0026rsquo;s one of my favorite weeks because you can learn stuff about cool animals and there is something to watch on TV during the summer.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2017/07/Shark-Week.jpg\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2017/07/Shark-Week.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eBut this year I gave up much of my Shark Week for a good cause. The folks over at \u003ca href=\"http://gestaltit.com\"\u003eGestaltIT\u003c/a\u003e were having their \u003ca href=\"http://techfieldday.com/event/cfd2/\"\u003eCloud Field Day 2\u003c/a\u003e in Silicon Valley and they were gracious enough to invite me to join them as a delegate.\u003c/p\u003e","title":"What's Worth Interrupting Shark week? CFD2"},{"content":" I had high expectations for the sessions being presented during Cloud Field Day 2 hosted by GestaltIT in Silicon Valley during the week of June 26th-28th. The first of the sessions presented was from a company that I hadn\u0026rsquo;t heard of before called Nirmata. 
I had no idea what the company did, but after the session I found out the name is an Indo-Aryan word meaning Architect or Director which makes a lot of sense considering what they do.\nWhat Does Nirmata Do? Nirmata is an orchestrator of orchestrators. Its value proposition is to make managing and deploying container applications on Kubernetes clusters easier. \u0026ldquo;Why do we need this\u0026rdquo; you might ask? Well, Kubernetes does a good job of managing individual environments but struggles if you need to manage more than one cluster at a time. This becomes much more difficult if we\u0026rsquo;re managing Kubernetes clusters in more than one place, like different clouds. Nirmata lets us manage multiple Kubernetes clusters in different cloud providers with a single orchestration layer.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Cloud Field Day 2. These expenses were paid with no expectations about the coverage through this blog or on social media and did not affect the content of this post.\nWhy do I Care? First of all, Kubernetes has plenty of daunting concepts that you\u0026rsquo;d have to learn to really manage containers correctly. If we\u0026rsquo;re focused on building our applications, we don\u0026rsquo;t really want to spend too much time learning how to manage the containers across hosts, let alone managing multiple clusters across clouds.\nNirmata lets us do this management from an easy to use interface. From a SaaS portal, we can define our container hosts, group them by instance types, cluster different instances together by policy and manage multiple registries. This makes it very easy for us to manage our container clusters from one interface without really even having to know much about Kubernetes (Obviously some experience would help). Once you start using Nirmata, we can deploy new containers in one cloud for testing, or move it between clouds or integrate it with existing applications in other clouds. We no longer have to make specific settings in one Kubernetes cluster and then different ones in other clusters. We can now orchestrate these containers in cluster between clouds. We can use our CI/CD tools to integrate with the Nirmata API to deploy our applications on any cloud that we want and tie them together or do whatever we need without having to manage individual silos of Kubernetes clusters.\nIf this isn\u0026rsquo;t enough, we can have Nirmata manage the resources in these clusters to scale as needed, or perform actions on our containers. We can now see all of those siloed Kubernetes clusters through a single portal which gives us a much better way to manage them at scale.\nThere are a variety of use cases that you could dream up, but assume you wanted your containerized applications to use the lowest cost possible but provide a really high availability. We could setup a few hosts on-premises in our vSphere environment for highly available always on hosts, while using spot instances in AWS to provide our additional scaling, and all of it can be orchestrated through Nirmata. Now, we\u0026rsquo;ve got stable vSphere hosts and not so reliable, but cheap, spot instances in AWS for our application to live on. We might even be able to take actions on our containers based on traffic patterns. If we get lots of traffic in one geography, we could spin up new hosts closer to the requests and deploy our containers on those hosts. Orchestration will really let us dream up tons of scenarios.\nIs it Hard to Setup? Not really! 
Once we sign up for an account we want to setup some container hosts, add some policies and then we\u0026rsquo;re off and running with our containers.\nThe initial wizard will get you started where you setup some hosts and policies and you\u0026rsquo;re ready to rock. I\u0026rsquo;m not a docker expert but even I was able to get it running within a few minutes with some EC2 instances. I used the Nirmata console to deploy a couple of basic containers in my environment, and then logged into one of my instances to see what happened. I have my two test containers along with some Nirmata containers that are used to manage the environment.\nThe Nirmata console shows my environments, containers, hosts, and resource utilization and I can manage everything else from here or through their API.\nPricing It doesn\u0026rsquo;t matter how good the solution is if it\u0026rsquo;s priced too high to be useful, so how is Nirmata\u0026rsquo;s solution priced in the market place? First of all, they have a free trial which you can go sign up for and it gives you a few users, a single public cloud provider and up to 20 GB of managed memory. Check it out for yourself and kick the tires on the platform. https://www.nirmata.io/security/signup.html\nIf you\u0026rsquo;re ready to buy you can move up to the professional version with unlimited memory and additional users, as well as support and multiple cloud providers as possibilities.\nEnterprise class is also available but would require you to speak with a representative and I\u0026rsquo;d assume that this pricing will vary depending on size, but this gets you all the bells and whistles including private cloud, single sign on, auditing and all the goodies.\nWhat\u0026rsquo;s Next? That\u0026rsquo;s entirely up to you. Check out the free trial for yourself and decide whether or not these capabilities are something that might be useful. Orchestration can be a really powerful tool for application availability as well as providing the development agility that you need to provision containers at scale. I\u0026rsquo;d love to hear what you think in the comments.\n","permalink":"https://theithollow.com/2017/07/27/orchestrating-containers-nirmata/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2017/07/logo-white-200x43.png\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2017/07/logo-white-200x43-150x32.png\"\u003e\u003c/a\u003e I had high expectations for the sessions being presented during \u003ca href=\"http://techfieldday.com/event/cfd2/\"\u003eCloud Field Day 2\u003c/a\u003e hosted by GestaltIT in Silicon Valley during the week of June 26th-28th. The first of the sessions presented was from a company that I hadn\u0026rsquo;t heard of before called \u003ca href=\"http://nirmata.io\"\u003eNirmata\u003c/a\u003e. I had no idea what the company did, but after the session I found out the name is an Indo-Aryan word meaning Architect or Director which makes a lot of sense considering what they do.\u003c/p\u003e","title":"Orchestrating Containers with Nirmata"},{"content":"Tech Field Day will be presenting Cloud Field Day 2 on July 26th through the 28th in Silicon Valley. If you have the time, please join in on the fun and watch the live stream right here.\nThe schedule will consist of nine great companies all explaining the ins and outs of their solutions and it\u0026rsquo;ll get real geeky. The schedule is found below and all times are Pacific US. 
So be sure to do the conversions.\n[table id=1 /]\nIf you\u0026rsquo;re looking for ways to interact, feel free to tweet one of the delegates. They will likely ask your questions to the presenters so you can play along at home.\n[table id=2 /]\n","permalink":"https://theithollow.com/2017/07/25/welcome-cloud-field-day-2/","summary":"\u003cp\u003eTech Field Day will be presenting Cloud Field Day 2 on July 26th through the 28th in Silicon Valley. If you have the time, please join in on the fun and watch the live stream right here.\u003c/p\u003e\n\u003cp\u003eThe schedule will consist of nine great companies all explaining the ins and outs of their solutions and it\u0026rsquo;ll get real geeky. The schedule is found below and all times are Pacific US. So be sure to do the conversions.\u003c/p\u003e","title":"Welcome to Cloud Field Day 2"},{"content":"Deploying security patches to servers is almost as much fun as managing backup jobs. But everyone has to do it, including companies that have moved their infrastructure to AWS. As we\u0026rsquo;ve learned with previous posts, Amazon EC2 Systems Manager allows us to use some native AWS tools for management of our EC2 instances, and patch management is no exception.\nEC2 Systems Manager allows you to do patch compliance where you can set a baseline and then based on a defined maintenance window a scheduled scan and deployment can be initiated on those EC2 instances. This assumes that you\u0026rsquo;ve already installed the SSM Agent and setup the basic IAM permissions for the instances to communicate with the Systems Manager service. The details can be found in the previous post.\nCreate A Baseline To get started with patch compliance we\u0026rsquo;ll want to configure a baseline. Open your EC2 dashboard and scroll down to find the \u0026ldquo;Patch Baseline\u0026rdquo; service on the left hand side. You\u0026rsquo;ll notice that a few patch baselines will be created by Amazon to use right away but we\u0026rsquo;ll create our own so that we understand what they\u0026rsquo;re doing. You\u0026rsquo;ll notice that those baselines are listed at \u0026ldquo;Default Baselines\u0026rdquo; meaning that if a patch baseline is not assigned to an EC2 instance, the default baseline would be used.\nNote: Three of the baselines specifically state that they\u0026rsquo;re for Amazon, Ubuntu or RHEL. The non-descriptive baseline is a Windows baseline.\nClick the \u0026ldquo;Create Patch Baseline\u0026rdquo; button.\nGive the Baseline a name and a description that will be easily identifiable by your administrative team. Then select the Operating System that the patches will be used for. I\u0026rsquo;ve chosen Windows for this post.\nNext we want to go through the approval rules. This step requires us to pick operating system(s) that we\u0026rsquo;ll want to scan and/or patch.\nMove on to the Classification of the update such as critical, security, rollups etc.\nThen select the severity such as Critical or Important. The last section allows you to place a delay in so that your patches aren\u0026rsquo;t deployed immediately after the release date. The default is 0 days to ensure that patches get applied as soon as possible. Beneath this there is an option to exclude certain patches as well so they aren\u0026rsquo;t included.\nAdd a Patch Group Once your patch baseline is created, select the baseline and select the \u0026ldquo;Modify Patch Groups\u0026rdquo; option.\nAdd at least 1 Patch Group. 
This is really just a tag that you\u0026rsquo;re placing on the baseline, but it will be important later.\nNext, go to your EC2 instances. Select one or more of your instances and add a tag. In the screenshot below I already have a \u0026ldquo;Name\u0026rdquo; tag used to identify this Windows Server instance. I want to add another tag named \u0026ldquo;Patch Group\u0026rdquo;. This is important! Amazon usually lets you create any key value pairs that you want, but the \u0026ldquo;Patch Group\u0026rdquo; tag is used specifically for patch management. The value of the tag should match the baseline patch group you entered earlier.\nCreate an IAM Role In order to run patches on registered instances a new role will need to be created. Open the Identity and Access Manager (IAM) service in AWS and create a new role.\nSelect the AmazonSSMMaintenanceWindowRole\u0026quot; policy from the list. Type in SSM to filter the list if that helps you out.\nGive the role a name and description.\nOnce you\u0026rsquo;ve created the role, find the role in the roles page and click trust relationships tab. Click \u0026ldquo;Edit Trust Relationships\u0026rdquo;. You\u0026rsquo;ll need to add a comma to the end of the ec2.amazonaws.com line and then add a new line with:\n\u0026ldquo;Service\u0026rdquo;:\u0026ldquo;ssm.amazonaws.com\u0026rdquo; to the existing policy as shown below. The click update trust policy.\nOne other piece to mention is that if the user you\u0026rsquo;re logged in as isn\u0026rsquo;t an administrator, you\u0026rsquo;ll need permissions to assign the role we created previously to a maintenance window. Essentially you have to have permissions to add permissions. So if you\u0026rsquo;re not an administrator, you\u0026rsquo;ll need to add the PassRole policy to the user account. This post assumes the user logged in is a super user.\nCreate a Maintenance Window As we know, our systems can\u0026rsquo;t usually be shut down whenever we want them to be. We have to define proper outage windows. We can define these in the Maintenance Windows section of the EC2 console under the \u0026ldquo;System Manager Shared Resources\u0026rdquo; group.\nOn the opening screen select the \u0026ldquo;Create a Maintenance Window\u0026rdquo; button.\nGive the window a name that will identify it. After that, you\u0026rsquo;ll want to build a schedule for this maintenance window to run. I\u0026rsquo;m being pretty lazy here and selected a window of every 30 minutes. I imagine you\u0026rsquo;ll have a much smaller window available to you. You can also set a duration for the window to last and when to stop maintenance routines before the window closes. This ensures that maintenance tasks don\u0026rsquo;t run over your window by accident.\nOnce you\u0026rsquo;ve created your maintenance window, select the object and select the \u0026ldquo;Register targets\u0026rdquo; action from the drop down.\nEnter an owner for record keeping purposes and then you can either specify the instances associated with this maintenance window, or add them based on tags. Since I\u0026rsquo;ve already added a tag for my Patch Group, I\u0026rsquo;m going to associate all instance with my patch group with this maintenance window. You can slice and dice these however you need to.\nNext, click the action window again and select the \u0026ldquo;Register task\u0026rdquo; option. Here you\u0026rsquo;ll see a familiar screen if you went through the Run Command post on this site. 
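If you would rather script this step, the same registration can be done with the AWS CLI using the aws ssm register-task-with-maintenance-window command. A rough sketch, assuming a made-up window ID and the window target and role created above: aws ssm register-task-with-maintenance-window --window-id mw-0123456789abcdef0 --targets Key=WindowTargetIds,Values=YOUR-WINDOW-TARGET-ID --task-arn AWS-ApplyPatchBaseline --task-type RUN_COMMAND --service-role-arn YOUR-MAINTENANCE-WINDOW-ROLE-ARN --max-concurrency 1 --max-errors 1 with the Scan or Install choice passed in as a task parameter. The console steps that follow accomplish the same thing. 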
Select the \u0026ldquo;AWS-ApplyPatchBaseline\u0026rdquo; document.\nIn the registered targets section, you should see the maintenance window target has already been selected.\nBelow this, select the operation, which will be either Scan or Install. For this post, I\u0026rsquo;m only doing scans, but most likely you\u0026rsquo;ll want to do both: one window for scanning, which can run on a regular basis, and an install window, perhaps monthly, where the systems will be rebooted to apply patches. You will need to assign the Maintenance Window role with permissions to execute these commands, which we created earlier. Then select how many instances to run the commands on at once and when to stop if errors occur.\nMonitor Patch Compliance You\u0026rsquo;re set up at this point to scan your instances for missing patches. If you go to the Patch Compliance service from the EC2 console, you\u0026rsquo;ll start getting info about your instances once you\u0026rsquo;ve gone through at least 1 maintenance window. The scan task we\u0026rsquo;ve created in this post will have run and you can see if you have out-of-date instances. In my case I have only 1 instance and it\u0026rsquo;s missing patches.\nIf you do nothing else, you\u0026rsquo;ll only be able to see instances that are missing patches. But you can create another maintenance window with a task to install (instead of scan) patches for you. This will require a reboot of the instance so plan your install window accordingly. When you\u0026rsquo;re done, you\u0026rsquo;ll see that the instances are up to date and your next scan window will show everything in the green again.\nAlso, for any maintenance window you can look at the history to see if the tasks ran successfully or not. This is helpful at first to ensure that your permissions and roles were created correctly.\nSummary Patch management and compliance is a pain, but a necessary part of most environments. EC2 Systems Manager Patch Compliance can help teams manage their patches without the need for a more robust tool like Microsoft\u0026rsquo;s System Center Configuration Manager or a WSUS server in the cloud. Stay tuned for more EC2 Systems Manager posts.\n","permalink":"https://theithollow.com/2017/07/24/patch-compliance-ec2-systems-manager/","summary":"\u003cp\u003eDeploying security patches to servers is almost as much fun as managing backup jobs. But everyone has to do it, including companies that have moved their infrastructure to AWS. As we\u0026rsquo;ve learned with previous posts, Amazon EC2 Systems Manager allows us to use some native AWS tools for management of our EC2 instances, and patch management is no exception.\u003c/p\u003e\n\u003cp\u003eEC2 Systems Manager allows you to do patch compliance where you can set a baseline and then based on a defined maintenance window a scheduled scan and deployment can be initiated on those EC2 instances. This assumes that you\u0026rsquo;ve already installed the SSM Agent and setup the basic IAM permissions for the instances to communicate with the Systems Manager service. The details can be found in the previous post.\u003c/p\u003e","title":"Patch Compliance with EC2 Systems Manager"},{"content":"In a previous post we covered the different capabilities and basic setup of EC2 Systems Manager, including the IAM roles that needed to be created and the installation of the SSM Agent.
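Before the console walkthrough, here is a rough sketch of the two pieces this example builds up to: the handful of shell commands that install Apache and pull down a web page, and the equivalent aws ssm send-command call the console can generate for you. The web file URL and the comment are placeholders, not the exact values from this post:

#!/bin/bash
# Commands sent through the AWS-RunShellScript document (Amazon Linux)
yum install -y httpd
curl -o /var/www/html/index.html https://example.com/my-basic-page.html   # placeholder source for the web files
service httpd start
chkconfig httpd on

# Roughly the same thing from the AWS CLI:
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Name,Values=EC2SSM-Linux1" \
  --parameters 'commands=["yum install -y httpd","curl -o /var/www/html/index.html https://example.com/my-basic-page.html","service httpd start"]' \
  --comment "Install Apache and basic web files"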
In this post we\u0026rsquo;ll focus on running some commands through the EC2 Systems Manager Console.\nWe\u0026rsquo;ve already got an Amazon Linux instance deployed within our VPC. I\u0026rsquo;ve placed this instance in a public facing subnet and it is a member of a security group that allows HTTP traffic over port 80.\nLet\u0026rsquo;s deploy Apache and setup a very basic web server on this instance, but assume that we don\u0026rsquo;t have direct access to the instance. We can do this through EC2 Systems Manager and the Run Command capability. Here we can see my Linux Instance that I\u0026rsquo;ve tagged EC2SSM-Linux1, the security group is named WebServers, and we have a public IP Address. Click the \u0026ldquo;Run Command\u0026rdquo; link in the left hand corner of the screen.\nOn the Run a command screen, you can see a list of different types of command documents that can be used. Notice that they show you which platform they can be run on where some are Windows only, some are Linux only and some run on both. This screen will let you run docker commands, deploy packages, install updates and run scripts like we\u0026rsquo;ll do. I picked the AWS-RunShellScript for my machine.\nNext, we need to specify which instance this command will run on. This is really nice because you can select an instance manually or by tags. You can see already how easy it is to manage multiple machines in this manner. We could deploy Apache on 50 servers all with a \u0026ldquo;Web\u0026rdquo; tag if we wanted to. For the purposes of this example though, a single instance will do so I\u0026rsquo;ll select my EC2-SSM-Linux1 instance. Note that you can also specify how many instances to run this command on at a time or a percentage and you can specify a number of errors to stop the process after.\nIn the command window I\u0026rsquo;ve entered some commands that installs Apache and downloads my very basic web files before starting the apache service. Lastly I added a comment for the command so I can identity it in the Run Command console.\nOnce you get through the sections above, you\u0026rsquo;ll have an advanced options section which would allow you to grab the commands to use as part of the awscli tool. This is a really nice way for you to populate your git repo with some infrastructure as code stuff.\nAfter the command is executed you\u0026rsquo;ll see it listed in the Run Command console.\nIf you\u0026rsquo;ve been following along since the initial setup post, you\u0026rsquo;ll also remember that I setup CloudWatch Events for the Run Command that pushed the info to an SNS topic that notifies me via email. I got an email with the details in it as well. This was an optional setup task.\nIf we check the public IP Address of our instance, we will hopefully see our web page displayed, all without logging into our instance.\nSummary This was a pretty basic example of how you can use the Run Command capability within the EC2 Systems Manager service but should give you a ton of ideas about how you can use it to manage your environment.\n","permalink":"https://theithollow.com/2017/07/17/run-commands-ec2-systems-manager/","summary":"\u003cp\u003eIn a previous post we covered the different capabilities and basic setup of EC2 Systems Manager, including the IAM roles that needed to be created and the installation of the SSM Agent. 
In this post we\u0026rsquo;ll focus on running some commands through the EC2 Systems Manager Console.\u003c/p\u003e\n\u003cp\u003eWe\u0026rsquo;ve already got an Amazon Linux instance deployed within our VPC. I\u0026rsquo;ve placed this instance in a public facing subnet and it is a member of a security group that allows HTTP traffic over port 80.\u003c/p\u003e","title":"Run Commands through EC2 Systems Manager"},{"content":"We love Amazon EC2 instances because of how easy they are to deploy and we have a huge catalog of templates (AMIs) to choose from which really speeds up our provisioning. But once those instances are up and running it would be really nice to have some methods of managing those instances. Luckily, Amazon has developed several capabilities to help manage Amazon EC2 instances after they\u0026rsquo;ve been deployed. These capabilities are used to execute scripts, manage patches and kick off automation routines within an EC2 instance, directly from the AWS console.\nAWS EC2 Systems Manager Capabilities The following non-exhaustive list of capabilities are currently available through Amazon EC2 Systems Manager:\nRun Command - Allows you to execute commands directly on an EC2 Systems Manager enabled instance. State Manager - Allows you to specify a desired state for an EC2 Systems Manager enabled instance. Automations - Allows for the automation of deployment tasks. Patch Management- The ability to monitor and manage the deployment of patches. Inventory - Is a way to collect software inventory from your managed instances. Parameter Store - Centralizes configuration data such as passwords. Setting up Managed Instances SSM Agent The first thing to know is that not all instances deployed through AWS are considered managed instances. To be used with the EC2 Systems Manager capabilities, instances must have an SSM Agent installed on them. Out of the box, a Windows Server 2003-2012 R2 AMI published in November 2016 or later will automatically have the SSM Agent installed on them. If you have an EC2 instance deployed through an Amazon AMI published before this date you should update the instance with the latest agent or redeploy from a more recent AMI. For Linux instances, the SSM Agent is not installed by default. You must install the SSM Agent on a Linux instance before any of the cool toys can be used with them. Luckily for us this is pretty simple for deploying new instances.\nDuring the deployment of a Linux instance, under the Advanced Details, some user data can be added so that the agent is downloaded and installed upon initial provisioning. The example below is the user data that I used to deploy a CentOS instance on Amazon EC2. The commands for your favorite flavor of Linux can be found on the Amazon documentation page: http://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html#sysman-install-ssm-agent\nThe SSM Agent will need to be able to communicate back to the EC2 Systems Manager service which means that it requires outbound Internet access. The agent will check in with the service so outbound access is required while inbound access is not.\nSupported Operating Systems Not all operating systems are supported with Amazon EC2 Systems Manager capabilities. 
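As a quick aside, circling back to the user data mentioned a few paragraphs up: for an RPM-based distribution like the CentOS instance used there, the snippet is roughly the following. This is a sketch; the download URL changes over time and per region, so take the exact lines from the SSM Agent install documentation linked above.

#!/bin/bash
cd /tmp
# Install the SSM Agent package and make sure the service is running
yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
systemctl enable amazon-ssm-agent
systemctl start amazon-ssm-agent

Now, back to operating system support.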
Obviously these will change so I recommend checking the latest documentation, but the current operating systems supported include:\nWindows\nWindows Server 2003 (including R2) Windows Server 2008 (including R2) Windows Server 2012 (including R2) Windows Server 2016 Linux\n64-bit and 32-bit Amazon Linux 2014.09, 2014.03 or later Ubuntu Server 16.04 LTS, 14.04 LTS, or 12.04 LTS Red Hat Enterprise Linux (RHEL) 6.5 or later CentOS 6.3 or later 64-bit Only Amazon Linux 2015.09, 2015.03 or later Red Hat Enterprise Linux (RHEL) 7.x or later CentOS 7.1 or later SUSE Linux Enterprise Server (SLES) 12 or higher Region Limitations As with most Amazon Web Services solutions, they are not available across all regions. EC2 Systems Manager is no exception. For a full list of regions please visit: http://docs.aws.amazon.com/general/latest/gr/rande.html#ssm_region\nSetting Up Permissions Before we start using the services we\u0026rsquo;ll want to have our permissions and roles ready to go. First, your login must have permissions to access the Amazon Systems Manager console. If you\u0026rsquo;re an administrator for the VPC in which the EC2 instances will live, then you already have sufficient permissions. If not, you\u0026rsquo;ll want to assign your user account the AmazonSSMFullAccess policy to the account.\nNow that we\u0026rsquo;ve confirmed that the user account will have the correct permissions to take actions on the instances, we also need to assign some permissions to the EC2 instances that will be managed. We\u0026rsquo;ll create a role and assign it to our instance so that it has the correct permissions to interact with the EC2 Systems Manager Service.\nTo do that we\u0026rsquo;ll create a new role through the IAM console. Create a new role and select the Amazon EC2 service role.\nFrom there type SSM in the policy type to filter down the list and then select the AmazonEC2RoleforSSM in the list.\nGive the role a name and pay attention to what it is. We\u0026rsquo;ll use the name again later when spinning up our instances. Click Create Role.\nSetup CloudWatch Events If we want to log our commands executed through Systems Manager, we can use CloudWatch Events as well. This is an optional setup task but useful for enterprise deployments. To get started go to the CloudWatch service and create a new rule under Events. In the service name select the EC2 Simple Systems Manager (SSM) and then under type select which event type you want to alert on. I selected just Run Command but you can pick whatever you\u0026rsquo;d like. In the Targets section I have selected a Simple Notification Services topic named \u0026ldquo;NotifyMe\u0026rdquo; which I setup previously.\nDeploy Some EC2 Instances We\u0026rsquo;ve got things ready to go but we\u0026rsquo;ll need some test instances to manage. Go to the EC2 instances page and select a supported server type from the list. I\u0026rsquo;ll choose the Amazon Linux AMI for this example. We\u0026rsquo;ll want to do the standard things for deploying an EC2 instance such as selecting a family and a type. The important steps for working with EC2 Systems Manager are configuring the instance details. On the \u0026ldquo;Configure Instance\u0026rdquo; page change the IAM role to the role that we created in the section above. 
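As a side note, the same role and instance profile can be created from the CLI if you prefer to script your setup. A rough sketch, assuming a trust policy file that allows ec2.amazonaws.com to assume the role; double-check the managed policy ARN in the IAM console before relying on it:

# ec2-trust.json contains the standard EC2 trust policy ("Service": "ec2.amazonaws.com", Action sts:AssumeRole)
aws iam create-role --role-name EC2SystemsManagerRole --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name EC2SystemsManagerRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
aws iam create-instance-profile --instance-profile-name EC2SystemsManagerRole
aws iam add-role-to-instance-profile --instance-profile-name EC2SystemsManagerRole --role-name EC2SystemsManagerRole

Whichever way you create it, that is the role to pick on this page.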
In my case it\u0026rsquo;s EC2SystemsManagerRole.\nThen further down enter our user data to perform the installation of the SSM Agent during provisioning.\nComplete the EC2 deployment with your own settings for networks, tagging, storage and all the other properties that make sense in your environment.\nSummary We haven\u0026rsquo;t gotten to see the power of the EC2 Systems Manager services yet, but we have a good foundation ready to go and in the next posts we can start playing with the services. To read more about the real power of EC2 Systems Manager, check out the links below.\nAdditional Posts on Systems Manager: Run Command\nPatch Compliance\n","permalink":"https://theithollow.com/2017/07/10/amazon-ec2-systems-manager-services/","summary":"\u003cp\u003eWe love Amazon EC2 instances because of how easy they are to deploy and we have a huge catalog of templates (AMIs) to choose from which really speeds up our provisioning. But once those instances are up and running it would be really nice to have some methods of managing those instances. Luckily, Amazon has developed several capabilities to help manage Amazon EC2 instances after they\u0026rsquo;ve been deployed. These capabilities are used to execute scripts, manage patches and kick off automation routines within an EC2 instance, directly from the AWS console.\u003c/p\u003e","title":"Amazon EC2 Systems Manager Services"},{"content":"AWS is taking the virtualization world by storm. Workloads that used to get spun up on vSphere are now being deployed in AWS in many cases. But what if you\u0026rsquo;ve got workloads in vSphere that need to be moved? Sure, it probably makes sense to build new servers in AWS and decommission the old ones but sometimes it\u0026rsquo;s OK to lift and shift. Amazon has a service that can help with this process called the AWS Server Migration Service.\nIAM Users, Roles, and Service Accounts We\u0026rsquo;re going to need some accounts created to interact with both vSphere and AWS to do our replication and eventually failover. First, go to the IAM portal within AWS to setup some permissions for your SMS connector service. First, create a new user like \u0026ldquo;sms-account\u0026rdquo;. Make sure that the user has programmatic access since this account will be used by the connector to make API calls to the AWS services.\nOn the permissions screen, select the ServerMigrationConnector permissions. If you\u0026rsquo;re looking for these permissions search for servermigration in the filter box.\nReview your settings and be sure to note down your access key ID and Secret access key. You\u0026rsquo;ll need these later and can\u0026rsquo;t retrieve the secret access key again so pay attention!\nNext move down to the IAM Roles and create a new role.\nUnder the role type, find and select the \u0026ldquo;AWS Server Migration Service\u0026rdquo;.\nAttach the \u0026ldquo;ServerMigrationServiceRole\u0026rdquo; policy to this role.\nGive the role a name of \u0026ldquo;sms\u0026rdquo;. NOTE: You can use any role name you want here, but if it isn\u0026rsquo;t \u0026ldquo;sms\u0026rdquo; then you need to specify this name when you create a replication job. 
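For anyone scripting the account setup, that role can also be sketched from the CLI; the trust policy file and the managed policy ARN below are assumptions to verify against your own IAM console:

# sms-trust.json allows the Server Migration Service to assume the role
# ("Service": "sms.amazonaws.com", Action sts:AssumeRole)
aws iam create-role --role-name sms --assume-role-policy-document file://sms-trust.json
aws iam attach-role-policy --role-name sms --policy-arn arn:aws:iam::aws:policy/service-role/ServerMigrationServiceRole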
Making this sms makes things simpler.\nOnce you\u0026rsquo;ve setup the AWS Permissions, make sure you\u0026rsquo;ve got an account with administrative permissions in vSphere so you can manage the replication and export from the on-premises side.\nDeploy the SMS Connector Now that you\u0026rsquo;ve got permissions all laid out, you\u0026rsquo;ll want to go to your AWS account and find the AWS Server Migration Service. If you haven\u0026rsquo;t used it before you\u0026rsquo;ll click that \u0026ldquo;Get Started\u0026rdquo; button on the welcome page.\nFrom there, you\u0026rsquo;ll get some instructions on deploying an OVA file into your vSphere infrastructure. Download the OVA and install it into your vSphere environment.\nOnce your OVA has been deployed in vSphere, you\u0026rsquo;ll want to open a web browser and navigate to the IP address of the AWS-SMS-Connector VM that was deployed with the OVA. You\u0026rsquo;ll see a page like this one where you can click the \u0026ldquo;Get started now\u0026rdquo; button.\nAfter you read through the license agreement you\u0026rsquo;ll need to set a password for the management console.\nAfter the password is set, you\u0026rsquo;ll get a screen explaining how you can set some information for the connector appliance. Things like setting a static IP Address etc.\nIf you want to set a static IP you\u0026rsquo;ll need to open the VMware console for the machine to get a screen like the one below. You won\u0026rsquo;t be able to SSH into the machine so the VMware console is critical for this step.\nBack to the connector web page. Select whether or not to upload logs and to auto-upgrade the appliance. These are totally up to you.\nOn the step 5 screen you\u0026rsquo;ll need to select a region and then enter the access key and secret key for the IAM user we created earlier. You still have those credentials right?\nLastly, enter the vCenter service account with admin permissions in vSphere so that the connector can snapshot, create OVFs etc.\nA new screen will be opened to show you the status.\nAdd a Replication Job Now that we\u0026rsquo;ve got that silly setup stuff out of the way, we can login to the AWS portal and go to the Server Migration Service again. Look under the \u0026ldquo;Connectors\u0026rdquo; tab which should show a healthy connector communicating with the AWS SMS service. Click on \u0026ldquo;Import server catalog\u0026rdquo; to retreive the list of virtual machines available for replication.\nGo to the \u0026ldquo;Replication jobs\u0026rdquo; tab and then click \u0026ldquo;Create replication Job\u0026rdquo;. You\u0026rsquo;ll need to select a server from the list that was just imported.\nOnce you\u0026rsquo;ve selected the server(s) to replicate, the second step is to determine the license type. I\u0026rsquo;m providing my own so I selected the BYOL type and then clicked next.\nOn the next screen we need to select the job type. You can do a migration right from here, but I want to replicate on a schedule. The lowest setting that can be selected is every 12 hours so keep this in mind. You can delay the first replication or start it immediately and then must select an IAM service role. Remember if you used the \u0026ldquo;sms\u0026rdquo; role you don\u0026rsquo;t need to do anything here. I did add a description though. The last option is whether or not to delete the old AMIs when a new replication run is done.\nReview the3 settings and get started.\nWhat Happens During this Replication Phase? Several things are happening during this time. 
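You can keep an eye on it from the CLI, too. A quick sketch, where the job ID is a placeholder returned by the first call:

aws sms get-replication-jobs
aws sms get-replication-runs --replication-job-id sms-job-0123456789abcdef

Under the covers, the flow works like this.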
Every twelve hours a snapshot is taken on the vSphere virtual machine. Once the snapshot is taken, an OVF is created and then uploaded to Amazon S3. You can see that in the vSphere console, an export is taking place.\nWe can see that in S3, I\u0026rsquo;ve got a new bucket used by the sms role to store this OVF. Once the OVF has been uploaded, a new AMI is created based on this upload. You can see in your list of AMIs that a new AMI exists named \u0026ldquo;Created by (sms-job) and then a jobID.\nAt this point the replication is done. The entire process follows the flow shown below.\nIt\u0026rsquo;s important to note what happens after the first replication as well. This process is incremental because the previous snapshot is not deleted until after a new replication process is complete. You can see that I\u0026rsquo;ve run another replication job and I\u0026rsquo;ve got two snapshots on my migration instance. Once the full replication has completed, the older, un-needed snapshots are deleted, leaving only the most recent snapshot to prevent a long snapshot chain in our vSphere environment.\nIf we look in our list of AMIs, we\u0026rsquo;ll have an AMI for each of the replication runs for this machine unless you selected the option to delete AMIs automatically. If you did that, only the most recent AMI would be available, and would be the only point in time you can recover from.\nMigrate It\u0026rsquo;s come time to actually migrate our instance to AWS. Remember that replication only happens every twelve hours, so you may have old data ready to migrate. You can power off your VM and then run an on-demand replication first to move things over, but this process does take quite a bit of time, so don\u0026rsquo;t think it\u0026rsquo;ll be a quick task before migration, it\u0026rsquo;s slower than you might think.\nGo into your replication jobs and pick an AMI from the run history tab. If you\u0026rsquo;re deleting old AMIs, only one will be available. If you aren\u0026rsquo;t, you can select any replication job from the list. Select the \u0026ldquo;Launch instance\u0026rdquo; button.\nThe next screen should look very familiar to you if you\u0026rsquo;ve deployed EC2 instances from the AWS console in the past. Select the size instance, storage, tags, subnets, VPCs etc just like you\u0026rsquo;d normally do.\nWhen you\u0026rsquo;re all done, your server has been successfully migrated to Amazon Web Services and you can decommission your old vSphere vm at your convenience.\nSummary This is a pretty easy to use tool for moving workloads to AWS and the best part is that it\u0026rsquo;s free. I don\u0026rsquo;t know that I can suggest using it for mass migrations or for production workloads though, because of how it replicates data to Amazon. Snapping a VM, creating an OVF and converting it every twelve hours leaves a lot of time that changes may occur to your vSphere VM. If you\u0026rsquo;ve got a long change window where you can do some powered off on-demand replications then this should work pretty well, but i don\u0026rsquo;t see this happen much for enterprise environments. You may be looking for some third party tools to help out with this that can constantly stream those changes to AWS which would lower the time of the outage window. 
Hopefully this post helps you understand the capabilities and drawbacks of the AWS SMS solution.\n","permalink":"https://theithollow.com/2017/06/26/migrate-vsphere-vms-amazon-aws-server-migration-service/","summary":"\u003cp\u003eAWS is taking the virtualization world by storm. Workloads that used to get spun up on vSphere are now being deployed in AWS in many cases. But what if you\u0026rsquo;ve got workloads in vSphere that need to be moved? Sure, it probably makes sense to build new servers in AWS and decommission the old ones but sometimes it\u0026rsquo;s OK to lift and shift. Amazon has a service that can help with this process called the AWS Server Migration Service.\u003c/p\u003e","title":"Migrate vSphere VMs to Amazon with AWS Server Migration Service"},{"content":"Blogging has been a labor of love for me for a little over five years now. I started a blog to get my own ideas written down, to document my own experiences with technology and to try to give back to an industry who had helped me improve my own skills. But blogging hasn\u0026rsquo;t been an easy thing all of the time and often comes with challenges. If you\u0026rsquo;re new to blogging and thinking about getting started, this post may help you figure out how you want to blog and if you\u0026rsquo;re an experienced blogger, I expect that you can relate to this post.\nThe Rewards Before talking about some challenges, I wanted to reiterate that blogging does come with some perks.\nAdded Skills - There are soft skills that blogging has helped me with such as vocabulary, grammar and getting used to explaining things through pictures. Maybe for you grammar police out there, it still looks like I can\u0026rsquo;t write a coherent sentence, but imagine how bad of a writer I\u0026rsquo;d be if I didn\u0026rsquo;t write as often. My day job benefits from these skills since I write designs and documentation for customers and must articulate points in much the same way. In addition to this, I\u0026rsquo;m diving into technologies more deeply than I might normally, because I\u0026rsquo;m expecting to have to explain it in a post later to someone else. Being able to explain how to, or how should you, do something takes a deeper level of knowledge so this is a benefit. Networking - I\u0026rsquo;ve met a lot of great people because of my blogging. Being introduced to the Tech Field Day team has opened many doors, and just being involved in social media has been amazing. If it weren\u0026rsquo;t for my blog, I don\u0026rsquo;t know how involved I would have gotten in social media, but now that I am, I touch base with a lot of fellow bloggers, or people I\u0026rsquo;ve met at conferences. Blogging can open some doors for you if you\u0026rsquo;re looking for work and this point is not lost on me. Community Awards - Vendor programs such as VMware vExperts, Cisco Champions or Microsoft MVPs, is a nice perk if you routinely cover solutions in those ecosystems. While this isn\u0026rsquo;t a reason to blog, it is a nice thing to be acknowledged from a vendor and get access to licensing, or new products ahead of the rest of the pack. Balancing the Blogging Difficulties It\u0026rsquo;s not all sunshine and cake by the ocean though. There are plenty of reasons that people don\u0026rsquo;t blog.\nTime Away From Family - I have a great family that knows the importance of blogging for an outlet and as a benefit to my career. But writing a blog, researching technologies, and going to community events can put a real strain on relationships as well. 
It\u0026rsquo;s one thing to have a demanding job, but an entirely other thing to spend additional time writing a blog. I think I\u0026rsquo;ve failed in regards to this balance in the past by spending too much time blogging and not enough time ensuring that my personal responsibilities are met. Fear of Criticism - If you want to blog, you have to write things. I know this is a very profound statement, but you have to lay out what knowledge you have on the webpage for everyone to see. This can be a very intimidating prospect for new bloggers. What if I write something that\u0026rsquo;s inaccurate? What if I write something and people realize how little I know? What if people make mean comments about a post I worked hard writing? Well, all of these things can happen. You have to find that right balance about what you want to write about and what level of criticism you\u0026rsquo;re willing to take. I will tell you that the criticism can be very humbling at times, but if used correctly can also make you better at your job. Employer Conflicts - This will certainly be different for every employer. Does your company have a blog that they expect you to write for? If they do, how do you know what content goes on your blog vs the companies blog? If you\u0026rsquo;re writing up some cool new code or a runbook, what content is yours to use and what could be considered intellectual property by your employer. These are sometimes tough decisions that need to be dealt with. If you\u0026rsquo;ve already got an established blog, I\u0026rsquo;d recommend having some discussions with your employer when you interview for a new position to get this straightened out going in. Why do it? For me, its a way to document my training activity and give me a reference point for where I\u0026rsquo;m at technically. I can\u0026rsquo;t list all of the times I\u0026rsquo;ve gone to do something and found my own blog as a great technical resource to remind me how things were done and why I did them. Maybe this is a selfish reason but its one of the big ones. The other reason blogging is important to me is out of a sense of duty. When I was a Systems Administrator, I spent a lot of time on blogs figuring out how to do stuff for work. They were lighthearted, more direct, versions of technical documentation where the operator would tell you what really worked vs what should have worked. Being able to provide this type of resource to other Systems Administrators seems like something I owe to the community.\nThank You During my career, I\u0026rsquo;ve had different job roles ranging from level 1 technical support up to being a Senior Solutions Architect for a consulting company. My focus has changed and so have the types of blogs I\u0026rsquo;ve read. When I was starting with vSphere I read yellow-bricks.com and frankdenneman.nl. Later I found Wahlnetwork.com and virtuallyghetto.com to be the most useful to me. And now Keith Townsend\u0026rsquo;s linkedin posts or CTOAdvisor posts are the most relevant to me personally. I\u0026rsquo;d like to thank these bloggers and more for taking the risks and balancing the difficulties for the betterment of the community, regardless of the reasons they decided to start blogging. Thank you.\nIf you have bloggers that you appreciate, you still have an opportunity to go vote for them at vsphere-land.com. 
Go show them your appreciation.\n","permalink":"https://theithollow.com/2017/06/20/blogging-balance/","summary":"\u003cp\u003eBlogging has been a labor of love for me for a little over five years now. I started a blog to get my own ideas written down, to document my own experiences with technology and to try to give back to an industry who had helped me improve my own skills. But blogging hasn\u0026rsquo;t been an easy thing all of the time and often comes with challenges. If you\u0026rsquo;re new to blogging and thinking about getting started, this post may help you figure out how you want to blog and if you\u0026rsquo;re an experienced blogger, I expect that you can relate to this post.\u003c/p\u003e","title":"Blogging Balance"},{"content":"Amazon\u0026rsquo;s S3 is a cost effective way to store file but many organizations are used to mapping NFS shares to machines for file storage purposes. Amazon Storage Gateways are a good way to cache or store files on an NFS mount and then back them up to an S3 bucket. This post goes through the setup of an AWS Storage Gateway in an EC2 instance for caching files and storing them in an S3 bucket. This same solution (and a similar but different process) can be used to mount block devices through iSCSI or setup a Tape Gateway for backup products.\nPrerequisites We\u0026rsquo;ll need some basics up and running to follow along with this post. We\u0026rsquo;ll need a VPC in an AWS Account for starters. Once this is ready to go we\u0026rsquo;ll want to setup an S3 bucket so that we have a location to dump our files. I\u0026rsquo;ve created a bucket named hollows3gatewaybucket1 for this post. I also created a folder in the bucket just to have some content in it.\nNext up, we want to create a security group that will be used to filter the traffic to our storage gateway. I\u0026rsquo;ll be using two rules. The first is to allow HTTP traffic to the storage gateway. This port is used to activate the gateway. The second rule is to allow NFS access to the gateway and I\u0026rsquo;ll restrict that access to only machines from within my VPC.\nDeploy the Storage Gateway Now we\u0026rsquo;re ready to get started with the storage gateway. Go to the AWS console and open the Storage Gateway service. Since this is the first time we\u0026rsquo;ve used it, click the \u0026ldquo;Get started\u0026rdquo; button.\nOn the wizard screen, select the gateway type. I\u0026rsquo;ve selected \u0026ldquo;File gateway\u0026rdquo; but you can also do Volume gateway for block devices, and Tape gateway for a tape library. Click Next.\nOn the host platform select where you\u0026rsquo;ll place the storage gateway. You can download an OVA for VMware, or spin up an EC2 instance. For this post I used the EC2 instance. Click the \u0026ldquo;Launch instance\u0026rdquo; button to take you to that wizard.\nNow we walk through setting up an EC2 instance. The recommended starting size is m4.large but you can select what makes sense for your environment. Click Next.\nOn the next screen make sure you give the storage gateway a subnet and IP Address that is reachable. I\u0026rsquo;ve given mine a public subnet with a public IP Address but you probably wouldn\u0026rsquo;t want to do this for a production system for security reasons. When you\u0026rsquo;re done, click next.\nOn the storage page, add a new EBS volume for your caching. I used a 100 GB drive but this all depends on how much data you want to cache for your S3 bucket. 
Obviously, the larger the drive, the more data you can cache for the S3 bucket, but that comes at the cost of EBS storage.\nNext, you can add any tags to your instance.\nOn the security group screen select an existing security group and pick the one we created earlier. You can review our rules again below.\nReview your settings and launch the instance.\nSelect a keypair in case you need to SSH into the box later. We can\u0026rsquo;t really SSH in at this point because we didn\u0026rsquo;t open the SSH port on the security group, but maybe we\u0026rsquo;d change that later for troubleshooting purposes.\nIn a moment you\u0026rsquo;ll see your new instance starting up in the EC2 console.\nOnce the EC2 instance has had time to spin up, we can go back to our storage gateway wizard where we jumped out of to create our EC2 instance. Enter in the IP Address of the new EC2 instance. I used the public IP Address but you could use a private IP if you have connectivity from your workstation to the storage gateway. Clicking the \u0026ldquo;Connect to gateway\u0026rdquo; button just redirects your browser to activate the gateway.\nOnce you connect to the gateway, set the timezone and give the gateway a name. Then click \u0026ldquo;Activate gateway\u0026rdquo;.\nOnce you do that the gateway will be active and you need to set your cache. After a moment your spare disk should be recognized and you can allocate it to the \u0026ldquo;Cache\u0026rdquo;. Click the Save and Continue button.\nConfigure Shares Now that gateway is deployed and activated, we\u0026rsquo;ll be taken to our list of storage gateways. You should see a status of running. Click the \u0026ldquo;Create file share\u0026rdquo; to add an NFS share.\nThe gateway should be set already, but if you have multiple gateways, select the correct one. Then enter in the name of the S3 bucket we created at the beginning of this post. Don\u0026rsquo;t make the mistake of thinking that this wizard will create an S3 bucket for you. It won\u0026rsquo;t. Then select the storage class for new objects and you can leave the \u0026ldquo;Create a new IAM role\u0026rdquo; selected so that the permissions are automatically set correctly. Click next.\nReview the settings and click the \u0026ldquo;Create file share\u0026rdquo; button.\nIt may take a few minutes but the share status should be \u0026ldquo;Available\u0026rdquo; pretty soon. Once it is, select it and at the bottom of the screen the instructions for mounting the NFS share will be displayed for you.\nOn your client machine, mount the NFS share and you should be able to put your own files there after that.\nSummary The Amazon Storage Gateway is a nice utility to allow your users to leverage S3 through iSCSI, Tape or NFS mount points. It can become particularly useful when storing data close to other on-premises clients but having the data backed by S3 or a good way to migrate your data to S3 without having to think much about it.\n","permalink":"https://theithollow.com/2017/06/13/setup-amazon-storage-gateway/","summary":"\u003cp\u003eAmazon\u0026rsquo;s S3 is a cost effective way to store file but many organizations are used to mapping NFS shares to machines for file storage purposes. Amazon Storage Gateways are a good way to cache or store files on an NFS mount and then back them up to an S3 bucket. This post goes through the setup of an AWS Storage Gateway in an EC2 instance for caching files and storing them in an S3 bucket. 
This same solution (and a similar but different process) can be used to mount block devices through iSCSI or setup a Tape Gateway for backup products.\u003c/p\u003e","title":"Setup Amazon Storage Gateway"},{"content":"Preventing blueprint sprawl should be a consideration if you\u0026rsquo;re building out a new cloud through vRealize Automation. Too many blueprints and your users will be confused by the offerings and the more blueprints, the more maintenance needed to manage them. We\u0026rsquo;ve had custom methods for managing sprawl up until vRA 7.3 was released. Now we have some slick new methods right out of the box to cut down on the number of blueprints in use. These new out of the box configurations are called Component Profiles.\nCreate a Component Profile To get started, go into your vRA 7.3 or later version of vRA and navigate to the Administration tab and then click to expand the property dictionary tab. From there you\u0026rsquo;ll see the new \u0026ldquo;Component Profiles\u0026rdquo; option. You\u0026rsquo;ll notice that you can\u0026rsquo;t create new ones, but you can edit the existing ones. There are two by default. Let\u0026rsquo;s start with the \u0026ldquo;Size\u0026rdquo; component profile. Click Size.\nThere isn\u0026rsquo;t much to do on the General tab so move over the the \u0026ldquo;Value Sets\u0026rdquo; tab to get started. Here we\u0026rsquo;ll create three different Value Sets to add to our component profile. I\u0026rsquo;ve added a Small, Medium, and Large so that users can select a VM size during deployment. To create a new one click the green plus sign.\nGive your Value Set a name and a description. These will be important for your users to select the right thing so be descriptive. Make sure the status is set to active. Then set your CPU, Memory and Storage values that you wish to use. Once done, click finish and repeat this for any other values you might want to present to users.\nNow, that we\u0026rsquo;ve configured the \u0026ldquo;Size\u0026rdquo; component profile, lets move over to the \u0026ldquo;Image\u0026rdquo; component profile. This feature lets us specify different images for use with a blueprint. In the example here we\u0026rsquo;ll use a generic Linux blueprint and then let users pick which flavor of linux they\u0026rsquo;ll use during the request. Go into the Image value sets and add one for each image type you want to present. In my example I\u0026rsquo;ve got a Ubuntu Image that I\u0026rsquo;ll be cloning from a vSphere template and using a customization specification. These settings are no different from the \u0026ldquo;Build Information\u0026rdquo; tab on a blueprint. Finish your value set and create any other Linux Blueprints you might want to offer to your users.\nCreate the Blueprint Now we need to go into the Design tab and create a new blueprint. Use your normal blueprint setup like always, but once done, go to the \u0026ldquo;Profiles\u0026rdquo; tab. Click the green plus sign next to add and select both the Image and Size profiles. (Assuming you want to offer a choice for both sizing and image types of course). Now click the Size Value profile and click the \u0026ldquo;Edit Value Sets\u0026rdquo; option.\nSelect the values that you\u0026rsquo;d like to offer in this blueprint and specify which will be the default value. 
Repeat this process for the Image profile.\nWhen you\u0026rsquo;re all done, add the blueprint to the catalog as we always do, and entitle it to your users.\nRequest Your Server Now go into your catalog and make a new request for the blueprint. On the vSphere machine, you\u0026rsquo;ll see there are drop downs for Image and Size which can be shown to your users at request time.\nThis was all possible prior to version 7.3 of vRealize Automation, but this new out of the box feature makes it much easier and provides faster time to value.\n","permalink":"https://theithollow.com/2017/06/06/vra-7-3-component-profiles/","summary":"\u003cp\u003ePreventing blueprint sprawl should be a consideration if you\u0026rsquo;re building out a new cloud through vRealize Automation. Too many blueprints and your users will be confused by the offerings and the more blueprints, the more maintenance needed to manage them. We\u0026rsquo;ve had custom methods for managing sprawl up until vRA 7.3 was released. Now we have some slick new methods right out of the box to cut down on the number of blueprints in use. These new out of the box configurations are called Component Profiles.\u003c/p\u003e","title":"vRA 7.3 Component Profiles"},{"content":"vRealize Automation version 7.3 dropped a few weeks ago and you\u0026rsquo;re really excited about the new improvements that have been made with the platform. Release Notes for version 7.3 You\u0026rsquo;ve gone through the upgrade process which is constantly improving I might add but once you log in you find out that your endpoints that you spent so much time building are now missing. Kind of like the ones in my screenshot below.\nThe important thing to do at this point is not to panic. VMware has a very simple knowledge base article that you can follow to fix all your endpoints. I found that my fabric groups, reservations and compute resources (all of which depend on the underlying endpoint) were still in tact. Luckily VMware has put out a KB article about the issue and the fix is very simple. Remember again not to panic.\nLog into the Windows console of your IaaS Server with the model manager service running on it. Open a command prompt as an administrator. Change to the \u0026ldquo;C:\\Program FIles (x86)\\VMware\\vCAC\\Server\\Model Manager Data\\Cafe\u0026rdquo; directory. From there we\u0026rsquo;ll use the vcac-config tool to register the endpoints by running \u0026ldquo;vcac-config.exe RegisterCatalogTypesAsync -v\u0026rdquo;.\nOnce your endpoints have been registered, you can upgrade them by using the same tool. \u0026ldquo;vcac-config.exe UpgradeEndpoints -v\u0026rdquo;. You can see from the screenshot below how quick and easy the fix really is. Your endpoints obviously might look different than mine but it should be the same process.\nOnce you\u0026rsquo;re done, refresh your endpoints page and your endpoints should show up again.\nSide Note:\nIf you\u0026rsquo;re using NSX with version 7.3 remember that it\u0026rsquo;s now not found in the vSphere endpoints but rather has it\u0026rsquo;s own endpoint type now. If you drill into the NSX endpoint you\u0026rsquo;ll see that it has a new \u0026ldquo;Association\u0026rdquo; with an existing vSphere endpoint. 
This isn\u0026rsquo;t a bug just something new that you might notice when you check on your endpoints.\n","permalink":"https://theithollow.com/2017/05/30/vra-7-3-endpoints-missing/","summary":"\u003cp\u003evRealize Automation version 7.3 dropped a few weeks ago and you\u0026rsquo;re really excited about the new improvements that have been made with the platform. \u003ca href=\"http://pubs.vmware.com/Release_Notes/en/vra/73/vrealize-automation-73-release-notes.html\"\u003eRelease Notes for version 7.3\u003c/a\u003e You\u0026rsquo;ve gone through the upgrade process which is constantly improving I might add but once you log in you find out that your endpoints that you spent so much time building are now missing. Kind of like the ones in my screenshot below.\u003c/p\u003e","title":"vRA 7.3 Endpoints Missing"},{"content":"vRA is great at deploying servers in an automated fashion, but to really use the built in functionality for an organization some additional information should be requested to properly place the workloads in the environment. This post covers how to ask users for the correct information to properly determine the placement location of new server workloads.\nCluster Placement The first placement decision that needs to be made is which cluster the workload should be placed on. This can be done with reservations and reservation policies but often comes with some blueprint sprawl. We\u0026rsquo;d like to be able to ask the requester which environment the workload should be placed on. To specify a cluster (which could include a cluster on a different vCenter or datacenter) we\u0026rsquo;ll modify an xml document stored in the IaaS Server(s) which will describe our datacenters. In my example I\u0026rsquo;ve got two clusters in a single vCenter named \u0026ldquo;Management\u0026rdquo; and \u0026ldquo;Workload\u0026rdquo;. My clusters are shown below.\nBefore you begin the next section stop the VMware vCloud Automation Center Service on the IaaS Server when there are no deployments being executed.\nGo to the C:\\program files (x86)\\VMware\\vCAC\\Server\\Website\\XMLData folder and modify the \u0026ldquo;DataCenterLocations.xml\u0026rdquo; document stored there. Change the name and descriptions to match your naming. Save the file and when you\u0026rsquo;re done start the VMware vCloud Automation Center Service up again.\nOnce you\u0026rsquo;ve modified your XML file, you\u0026rsquo;ll want to go and tag your compute resources with the information in that xml file. Go to the Compute Resource for that cluster and change the \u0026ldquo;Location:\u0026rdquo; field to match to the correct tag. The example below shows my \u0026ldquo;Management\u0026rdquo; cluster being tagged with the \u0026ldquo;Management\u0026rdquo; location.\nOnce you\u0026rsquo;ve tagged your resources you can modify your blueprints to show the location on request if you\u0026rsquo;d like. We\u0026rsquo;ll use a different method but if this is all the further you\u0026rsquo;d like to go, you can do this here.\nIf you select that check box you\u0026rsquo;ll see the following on the request form. Again, this is an option but not used for the rest of this post.\nDynamic Requests for Network and Storage Locations Create vRealize Orchestrator Actions So we can now pick a cluster which is a great first step, but we\u0026rsquo;ll also want to pick the network and datastore to place the machine. 
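For reference, after those edits my DataCenterLocations.xml ends up looking roughly like the sketch below; mirror the element names already present in your copy of the file and swap in your own cluster names and descriptions:

<?xml version="1.0" encoding="utf-8"?>
<!-- Each Data entry becomes a selectable location in vRA -->
<CustomDataType>
  <Data Name="Management" Description="Management cluster" />
  <Data Name="Workload" Description="Workload cluster" />
</CustomDataType>

That covers the cluster side; next up is the network and datastore selection.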
This isn\u0026rsquo;t that difficult, we can use custom properties to do this, but what if the network or datastore selection depends upon the cluster we\u0026rsquo;ve selected previously? Well, we can use vRO actions to modify the list of resources that can be used.\nTo make this selection we need to create a new action in vRealize Orchestrator. Login to your orchestrator instance and create a new module to house some placement actions.\nInside the new module create a new action that will help filter based on a location input. The action (a rough sketch is included a little further down) takes a Location variable (which will come from vRA) and, based on that information, returns an Array/string with the appropriate portgroups. The script uses an array, and each cluster has its own portgroup list specified and returned. If no location has been selected yet, a string is returned asking for a location to be specified.\nThe same sort of script can be used in another action to specify the name of the datastore to be selected.\nIt should be noted that the script I\u0026rsquo;ve used is very basic and hard codes the datastore names and portgroup names for the clusters into the script. You can get fancier and dynamically grab this information if you\u0026rsquo;d like and return that data as well. This is meant as a quick way to specify information.\nCreate Property Definitions Now that we\u0026rsquo;ve got some scripts that will filter our portgroups and datastores, we need some property definitions in vRA. The first will be the cluster selection. Create a new vRA Property Definition named \u0026ldquo;Vrm.DataCenter.Location\u0026rdquo; and give it a label like Cluster. The display order should be 1, or at least very low, since it should be selected first. The Data type is \u0026ldquo;String\u0026rdquo; and it probably should be displayed as a \u0026ldquo;Dropdown\u0026rdquo; list. I\u0026rsquo;ve specified a static list for the values and updated the list with a name and a value for each of the clusters. The \u0026ldquo;values\u0026rdquo; listed in this list will be passed to vRO as the inputs for the scripts created earlier, so name them appropriately.\nNow we will create two additional property definitions for the portgroup and the datastore selections. The portgroup definition should be named \u0026ldquo;VirtualMachine.Network0.Name\u0026rdquo; and given your own label. Data type is string again and displayed as a dropdown list. The important part is to select \u0026ldquo;External values\u0026rdquo; and then click the button to use a script action. Navigate to the vRO script you created earlier in this post and then click edit in the input parameters. Bind the input parameter to the Location property that you specified in the last step. Then click OK.\nRepeat the process for datastores by specifying the \u0026ldquo;VirtualMachine.Disk0.Storage\u0026rdquo; property for the name and the script action should match your datastore action in vRO. It can also be bound to the location property previously created.\nCreate a Property Group Now that we\u0026rsquo;ve got three different property definitions we can group them together into a property group to easily assign them to our blueprints later on.
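Before building the property group, here is the rough sketch mentioned earlier of what the portgroup action might look like. It assumes an action with a single string input named location and a return type of Array/string; the portgroup names are placeholders, and the datastore action follows the same pattern with datastore names instead:

// vRO action body (JavaScript). Input: location (string), passed in from the Vrm.DataCenter.Location property.
var portgroups = new Array();
switch (location) {
    case "Management":
        portgroups.push("Mgmt-VM-Network");          // placeholder portgroup names
        break;
    case "Workload":
        portgroups.push("Workload-VM-Network-A");
        portgroups.push("Workload-VM-Network-B");
        break;
    default:
        portgroups.push("Please select a cluster location first");
}
return portgroups;

With the actions in place, on to the property group.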
Create a new property group with your own naming methodology and add the following custom properties:\nVirtualMachine.Disk0.Storage VirtualMachine.Network0.Name Vrm.DataCenterLocation.Location Each of the properties should have the \u0026ldquo;Show in Request\u0026rdquo; option selected and the \u0026ldquo;Vrm.DataCenter.Location\u0026rdquo; property should have the values of your Clusters listed as a comma separated list.\nWhen you\u0026rsquo;re done, you can go to one of your blueprints and add the property group to it\u0026rsquo;s list of properties.\nResults Now if we go to request our blueprint we\u0026rsquo;ll be prompted for several different options. Cluster which will have our statically assigned options listed in the drop down. But then we\u0026rsquo;ll have PortGroup and Datastore which will also have dropdowns but their values will depend on the selection made in the cluster field. For example, here you can see that I\u0026rsquo;ve selected the \u0026ldquo;Management\u0026rdquo; cluster and my datastore has a single option listed.\nWhile if I select the \u0026ldquo;Workload\u0026rdquo; cluster I have different options listed for the datastore.\nSummary Placement decisions are a big part of providing flexibility for your automated deployments. Without them you\u0026rsquo;ll have to manage extra blueprints, reservations and policies to ensure that workloads are placed in the right spot and this can be an administrative nightmare. Hopefully this post shows you some easy ways to manage placement decisions for your environment.\n","permalink":"https://theithollow.com/2017/05/22/vra-placement-decisions-dynamic-form/","summary":"\u003cp\u003evRA is great at deploying servers in an automated fashion, but to really use the built in functionality for an organization some additional information should be requested to properly place the workloads in the environment. This post covers how to ask users for the correct information to properly determine the placement location of new server workloads.\u003c/p\u003e\n\u003ch1 id=\"cluster-placement\"\u003eCluster Placement\u003c/h1\u003e\n\u003cp\u003eThe first placement decision that needs to be made is which cluster the workload should be placed on. This can be done with reservations and reservation policies but often comes with some blueprint sprawl. We\u0026rsquo;d like to be able to ask the requester which environment the workload should be placed on. To specify a cluster (which could include a cluster on a different vCenter or datacenter) we\u0026rsquo;ll modify an xml document stored in the IaaS Server(s) which will describe our datacenters. In my example I\u0026rsquo;ve got two clusters in a single vCenter named \u0026ldquo;Management\u0026rdquo; and \u0026ldquo;Workload\u0026rdquo;. My clusters are shown below.\u003c/p\u003e","title":"vRA Placement Decisions with a Dynamic Form"},{"content":"It\u0026rsquo;s a pretty common design request these days to have a single authentication source. I mean, do you really want to have to manage a bunch of different logins instead of having to remember one? Also, five different accounts give attackers five different avenues to try to exploit. So many times we use our existing Active Directory infrastructure as our single source of authentication. Amazon Web Services (AWS) needs a way for people to login and will allow you to use your own Active Directory credentials through Security Assertion Markup Language (SAML). 
This post will walk you through the setup of Active Directory Federation Services (ADFS) on Windows Server 2016 and configuring it to be your credentials for AWS.\nFor this post we\u0026rsquo;ll take a few AD security groups with a prefix of \u0026ldquo;theITHollow-\u0026rdquo; and allow the users within it to authenticate with AWS Roles similarly named.\nHere are the AD Groups I\u0026rsquo;ll be working with for this post:\nAnd here are the associate AWS Roles that will map to them:\nHow Does SAML Work? To use the process explained in this blog post several things will happen. The user who logs in will navigate to the ADFS Portal which will authenticate agains local Active Directory. The Authentication will be sent back to the user\u0026rsquo;s browser which will then POST the token to the AWS portal and then the browser will be redirected to the AWS Console if authenticated correctly. The diagram below shows how the process would work.\nInstall ADFS The first thing we need to do is install Active Directory Federation Services on your Windows Server 2016 server. This procedure requires an SSL Certificate for you to upload to the server during the configuration so you may want to have one of these ready. I purchased a certificate from a public CA prior to these steps.\nTo install ADFS open the Server Manager and Add a new server role. Select the Active Directory Federation Service Role.\nNo other features need to be added through this process so you can mosey on through the wizard until you get to the end of the installation.\nBasic ADFS Configuration Steps Now that ADFS has been installed we\u0026rsquo;re prompted to complete the configuration through the wizard. Select the option to \u0026ldquo;Create the first federation server in a federation server farm\u0026rdquo; option. Of course this assumes that you don\u0026rsquo;t already have ADFS running already in your environment. Click next.\nSelect an account to use to connect to your Active Directory Domain Services. Typically you\u0026rsquo;d want to use a service account to bind with AD. Click next again.\nOn the Service Properties tab, you\u0026rsquo;ll need to import your ADFS Certificate. This is the SSL Certificate I alluded to earlier in this post. Then give the federation service a display name. This name will show up on your ADFS Login portal.\nYou\u0026rsquo;ll need to then specify another service account that will run the ADFS services on your server. Click Next.\nThen you\u0026rsquo;ll need to create a database for storing configuration data. You can create a new on using the Windows Internal Database or specify your own SQL server. I went the easy route mainly so I could snapshot my ADFS server and be sure that the configuration information comes along with the snapshot.\nReview the configuration options and click next to start Pre-requisite checks.\nIf all of the prerequisite checks complete successfully, click the configure button to begin the configuration.\nOnce the configuration is done, you\u0026rsquo;ll notice some results. The main one is that you need to restart the server, but you may also find that the port is using 49443 if you didn\u0026rsquo;t add the certauth.servername Subject Alternate Name (SAN) as part of your certificate. Thats perfectly fine, it just means that a standard SSL Port of 443 won\u0026rsquo;t be used. 
Click Close and be sure to restart your ADFS server.\nSetup AWS for SAML Authentication While we\u0026rsquo;re waiting for that server to reboot, we want to go to our AWS portal with some local login credentials. (I know, local credentials are so lame but we need them for now until we get our SAML authentication working. Just be patient.) Go to the IAM Service and click on the \u0026ldquo;Identity Providers\u0026rdquo; link. There you\u0026rsquo;ll have the opportunity to add a new SAML provider by clicking the \u0026ldquo;Create Provider\u0026rdquo; button. Click that and on the first screen select the provider type of SAML. Give the provider a name and then you\u0026rsquo;ll need to upload your metadata document. This is a document on your ADFS server which explains how the federation should work. You can get this by going to https://[your-server-name-here]/FederationMetadata/2007-06/FederationMetadata.xml in a web browser. In my example I went to: https://adfs.theithollow.com/FederationMetadata/2007-06/FederationMetadata.xml.\nNote: If you added an SSL Certificate with a public DNS entry in it but not a local DNS name, this might fail. To get around that, ensure that you\u0026rsquo;ve got Split-DNS working so that you can reach the ADFS server with its public name from within the local area network.\nChoose your metadata file and complete the setup.\nNow you should have an identity provider listed in your AWS IAM console.\nCreate AWS Roles Next, we\u0026rsquo;re going to set up a few roles in AWS. The roles here are the same ones listed at the beginning of this post that match up with our Active Directory Security Groups. Note that you can use IAM Users or IAM Groups to set this stuff up, but an AWS role is an object with authorization to do stuff without credentials. This makes it very secure since something else must assume a role, like a user or even an EC2 instance. To create a role we\u0026rsquo;ll go into the IAM console and under the roles heading click \u0026ldquo;Create new role\u0026rdquo;. Under the role type, select the \u0026ldquo;Role for identity provider access\u0026rdquo; option and then click the \u0026ldquo;select\u0026rdquo; button next to the \u0026ldquo;Grant Web Single Sign-On (WebSSO) access to SAML providers\u0026rdquo; option.\nUnder the Establish trust step, select the SAML provider we created previously and click \u0026ldquo;Next Step\u0026rdquo;.\nThen verify the Role that\u0026rsquo;s listed and click \u0026ldquo;Next Step\u0026rdquo;.\nNow we need to attach a policy. The policy will depend on what permissions you want the role to have. One of my roles has administrator access so I\u0026rsquo;ve selected that policy for this role. Click \u0026ldquo;Next Step\u0026rdquo;.\nNext, give the role a name. The name will be important later. I\u0026rsquo;m matching up my AWS role names with my AD Group names. They must match, or the claim rules created later won\u0026rsquo;t work correctly. If you follow the post exactly you\u0026rsquo;ll understand what I mean and can modify it accordingly later on.\nNow, I\u0026rsquo;ve repeated this last section for a general user role so I can show both Admins and Users. Feel free to use one or multiple AD Groups and roles as you see fit.\nAdding a Relying Party Trust Now we must go back to ADFS. If you open up the ADFS management console, you\u0026rsquo;ll likely notice a link stating that you need to add a relying party trust.
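If you would rather script this step than click through the wizard, the trust can also be created with the ADFS PowerShell module from the same public metadata URL. Treat this as a rough sketch; the display name is just a label I made up, so use whatever matches your naming standard.
# Create the AWS relying party trust straight from Amazon's published SAML metadata.
# "Amazon Web Services" is only a display name and an assumption on my part.
Add-AdfsRelyingPartyTrust -Name "Amazon Web Services" -MetadataUrl "https://signin.aws.amazon.com/static/saml-metadata.xml"
You would still add the claim rules described below, either in the GUI or with Set-AdfsRelyingPartyTrust -IssuanceTransformRules.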
Click the link to open the wizard or go to the \u0026ldquo;Relying Party Trust folder and right click it to add one. We want to select a \u0026ldquo;Claims aware\u0026rdquo; trust for our wizard. A claim is used to pass attributes about the object authenticating over to the relying trust for authorization purposes. We\u0026rsquo;ll want that cause one of our claims will be a security group membership.\nOn the data source step, we want to select the \u0026ldquo;Import data about the relying party published online or on a local network. Remember that metadata file we uploaded to AWS earlier? Now we want to do the reverse with ADFS but this metadata file is publicly accessible so we can enter in the URL https://signin.aws.amazon.com/static/saml-metadata.xml.\nSpecify a Display Name. This name will show up in the ADFS web portal as the resource you\u0026rsquo;re authenticating into so make it descriptive.\nNext you can select who will have access to login to the ADFS portal for authentication to take place. You can select \u0026ldquo;Permit everyone\u0026rdquo; if you\u0026rsquo;d like which will allow anybody to login. I\u0026rsquo;ve selected \u0026ldquo;Permit Specific Group\u0026rdquo; and selected an AD Security Group as the members. Be aware though if you use this option. Even if your users are in an AD group that matches our AWS Roles, they\u0026rsquo;ll get a permissions error if they\u0026rsquo;re not in a security group with permissions to attempt to login through the ADFS portal. Permitting everyone is the easiest way to prevent this but specifying other options allows for more security.\nOn the Ready to Add Trust step you can review the settings before clicking next.\nSelect the checkbox to configure claims issuance policy to open our claims rules when finished.\nAdding Claims After a user authenticated to the Identity Provider (ADFS in this case) we must send the authentication response to the AWS SAML endpoint. This is basically an authentication token and a POST REST call. AWS Requires certain things to be added to that POST call which you can find at: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml_assertions.html The rest of this section shows how we modify the information in that POST call so that we have the proper attributes passed to the AWS SAML endpoint.\nIf you didn\u0026rsquo;t check the box in the last step you can always go to the \u0026ldquo;Relying Party Trusts\u0026rdquo; folder and click \u0026ldquo;Edit Claim Issuance Policy\u0026hellip;\u0026rdquo; by right clicking on the trust. Click Add Rule to add a new claim. The first claim we should do is the NameId. Select the \u0026ldquo;Transform an Incoming Claim\u0026rdquo; from the dropdown and click Next.\nGive the rule a name of \u0026ldquo;NameId\u0026rdquo;, select \u0026ldquo;Windows account name\u0026rdquo; from the claim type drop down and then select \u0026ldquo;Name ID\u0026rdquo; in the \u0026ldquo;outgoing claim type\u0026rdquo; dropdown. The \u0026ldquo;Outgoing name ID format\u0026rdquo; should be a \u0026ldquo;Persistent Identifier and we\u0026rsquo;ll select the \u0026ldquo;Pass through all claim values\u0026rdquo; radio button.\nClick Finish\nWe must add a couple of other claims to ensure all of the required information is passed. 
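As you build these rules up, it can be handy to dump what the trust currently issues and confirm each rule saved the way you expected. A tiny sketch, assuming the same made-up display name from earlier:
# Show the issuance transform rules configured on the AWS relying party trust so far.
(Get-AdfsRelyingPartyTrust -Name "Amazon Web Services").IssuanceTransformRules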
Add another claim and select the \u0026ldquo;Send LDAP Attributes as Claims\u0026rdquo; this time.\nThis new Rule should be named \u0026ldquo;RoleSessionName\u0026rdquo; and the Attribute store value will be \u0026ldquo;Active Directory\u0026rdquo;. Under the attributes select E-Mail-Addresses and the Outgoing Claim Type must be: \u0026ldquo;https://aws.amazon.com/SAML/Attributes/RoleSessionName\u0026quot; exactly.\nNote: We used E-Mail-Addresses as the RoleSession Attribute. This means that you want to have the email address attribute populated in Active Directory as well. It will be used in the AWS Console for identification.\nNow comes the magic part. We want to take the Group Membership of the person who has authenticated and match it to a role in AWS. Example the AD Group named \u0026ldquo;theITHollow-Admins\u0026rdquo; will map to the AWS role \u0026ldquo;theITHollowAdmins\u0026rdquo;. We can do this by using two additional claim rules. An explanation of this process is much better explained here: http://molikop.com/2014/04/adfs-claim-rules-filtering-groups/\nSo, Let\u0026rsquo;s add another claim rule to grab the group membership of the person who authenticated. Add a new claim rule using the custom rule template.\nName the rule something and then add the following code using the claim rule language.\nc:[Type == \u0026#34;http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname\u0026#34;, Issuer == \u0026#34;AD AUTHORITY\u0026#34;] =\u0026gt; add(store = \u0026#34;Active Directory\u0026#34;, types = (\u0026#34;http://theITHollow/groups\u0026#34;), query = \u0026#34;;tokenGroups;{0}\u0026#34;, param = c.Value); Now add a second claim rule using the custom rule language again. This time we\u0026rsquo;ll use the claim rule language and retrieve the variables placed in the prior claim rule, we\u0026rsquo;ll filter it based on our group names (Looking for groups named like theITHollow) and then we\u0026rsquo;ll transform them into an IAM role name.\nc:[Type == \u0026#34;http://theITHollow/groups\u0026#34;, Value =~ \u0026#34;(?i)^theITHollow-\u0026#34;] =\u0026gt; issue(Type = \u0026#34;https://aws.amazon.com/SAML/Attributes/Role\u0026#34;, Value = RegExReplace(c.Value, \u0026#34;theITHollow-\u0026#34;, \u0026#34;arn:aws:iam::YOURARNNUMBERGOESHERE:saml-provider/adfs,arn:aws:iam::YOURARNNUMBERGOESHERE:role/theITHollow\u0026#34;)); NOTE: In your code, you must replace the group prefixes with your own group names and the second piece of the RegExReplace command must be the arn of your SAML Provider/Group Role arn with prefix.\nFor example I\u0026rsquo;ve used the following because my AWS roles begin with theITHollow.\n\u0026#34;arn:aws:iam::YOURARNNUMBERGOESHERE:saml-provider/adfs,arn:aws:iam::YOURARNNUMBERGOESHERE:role/theITHollow\u0026#34; When you\u0026rsquo;re all done you\u0026rsquo;ll have a claim list like the one below from my lab.\nCustomizing your ADFS Portal You can skip this section if you\u0026rsquo;d like, but I wanted to customize my ADFS login portal just a bit to make it look fancy. You can do this by opening up a PowerShell console on your ADFS server and using the set-adfswebtheme cmdlets. Here is an example where I changed the illustration.\nSet-AdfsWebTheme -TargetName default -Illustration @{path=\u0026#34;C:\\path\\adfslogo.jpg\u0026#34;} Authenticate I found that I needed to make one more PowerShell request before I could login to the portal. 
On the ADFS server you may need to run the command:\nSet-AdfsProperties -EnableIdPInitiatedSignonPage $true This enables the login page and by default wasn\u0026rsquo;t set to true.\nOnce you\u0026rsquo;ve run that you can go to your ADFS portal and try to login. Go to https://[YOURADFSSERVERNAME.DOMAIN.NAME]/adfs/ls/IdpInitiatedSignOn.aspx\nHere you\u0026rsquo;ll hopefully see your branded page, and on the right side of the screen the opportunity to login to your relying trust. Select the trust and click the \u0026ldquo;Sign in\u0026rdquo; button.\nEnter your Active Directory Credentials and see if it worked. In my case I logged in and was redirected to my AWS portal and assigned the appropriate role based on my Active Directory Group membership. Remember that this login user must be in your AWS Groups, in my case named \u0026ldquo;theITHollow-Admin\u0026rdquo;.\nYou can see from my login attempt that my account was logged in through federation and that my login is based on my email address.\nSummary Now we can login to our AWS console using locally authenticated credentials and pass a SAML token to AWS for authorization. Roles are used in AWS to provide our authorizations so we can have all of our credentials stored in our protected Active Directory environment and never have to store them in two places making the solution more secure and easier to manage. Good luck on your tests, I hope this post will be useful while you\u0026rsquo;re setting up AWS Federation.\n","permalink":"https://theithollow.com/2017/05/15/setup-adfs-amazon-web-services-saml-authentication/","summary":"\u003cp\u003eIt\u0026rsquo;s a pretty common design request these days to have a single authentication source. I mean, do you really want to have to manage a bunch of different logins instead of having to remember one? Also, five different accounts give attackers five different avenues to try to exploit. So many times we use our existing Active Directory infrastructure as our single source of authentication. Amazon Web Services (AWS) needs a way for people to login and will allow you to use your own Active Directory credentials through Security Assertion Markup Language (SAML). This post will walk you through the setup of Active Directory Federation Services (ADFS) on Windows Server 2016 and configuring it to be your credentials for AWS.\u003c/p\u003e","title":"Setup ADFS for Amazon Web Services SAML Authentication"},{"content":"Version 7.2 of vRealize Automation introduced containers to the vRA solution. This post is designed to get you up and running with some basic containers and give you the tools needed to deploy some of your own containers in your environment.\nThe steps involved in this post assume that you have the \u0026ldquo;Container Administrator\u0026rdquo; and \u0026ldquo;Container Architect\u0026rdquo; roles. These are administrative roles that you\u0026rsquo;d need to get things started in your vRA instance.\nHost Setup Before you can deploy any containers, you\u0026rsquo;ll need to deploy some container hosts. To make things simple, I\u0026rsquo;ve downloaded the swarm blueprint from Ryan Kelly\u0026rsquo;s blog vmtocloud.com and imported the blueprint into my vRA instance through the cloud client. You can deploy one of your existing blueprints, but I love Ryan\u0026rsquo;s blueprint for deploying swarm and docker through the blueprint designer so why re-create the wheel. 
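One quick sanity check before getting into the vRA side of things: vRA registers container hosts by talking to the Docker remote API, so it is worth confirming that the endpoint answers before you wire anything up. Here is a minimal sketch from PowerShell; the hostname is a placeholder, and it assumes the unencrypted port 2375 that I use later in this lab (stick with TLS on 2376 for anything real).
# Hit the Docker remote API's /version endpoint on a container host.
# "swarm-node-01.lab.local" is a made-up name; a JSON reply with Version and ApiVersion
# means vRA should be able to reach the host the same way.
Invoke-RestMethod -Uri "http://swarm-node-01.lab.local:2375/version" | Select-Object Version, ApiVersion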
However you want to deploy your container hosts through vRA is fine.\nThe thing to note here is that vRA will automatically detect container hosts and add them to the inventory if one of the built-in property groups is used with the blueprint. The screenshot below shows the default property groups that came with vRA 7.2, and you\u0026rsquo;ll notice a third property group that I\u0026rsquo;m using, which is a copy of the \u0026ldquo;Container host properties with user/password authentication\u0026rdquo;. I like to copy the built-in stuff before I make any modifications.\nYou can see that I modified Ryan\u0026rsquo;s blueprint to add both my new container property group as well as one to pass some info over to vRO for DNS purposes. That one is not necessary for containers, but nice if you\u0026rsquo;ve got any event subscriptions that run during deployment like I do. (Adding to DNS is particularly useful.) Also note that I\u0026rsquo;ve only added the property group to the \u0026ldquo;Swarm Node\u0026rdquo; machines and not the manager, even though you could add it as a container host as well.\nHere are the changes to the Containers property group that I made from the default settings. The connection port is 2375 and the scheme is http. The default property group assumes you\u0026rsquo;re doing things properly and using an encrypted https connection over port 2376. Yeah, I cheated, but this is why I made a copy of the default property groups and modified them.\nPublish your blueprint and entitle it for your users to request from the service catalog. When you\u0026rsquo;re done, request the catalog item and wait for it to provision.\nOnce your catalog item has been requested, you can take a look in the \u0026ldquo;Containers\u0026rdquo; tab and you\u0026rsquo;ll see your container hosts listed there. I\u0026rsquo;ve deployed three in this example.\nPlacements Placements are used in much the same way a reservation is used when deploying virtual machines through vRA. Placements carve up the resources on a container host and set limits on how many resources can be used. For instance, you might only allow 5 GB of memory to be used out of a container host that has 6 GB of memory. This would leave memory for the host operating system to use.\nPlacements have a priority which is used to determine which placement might take precedence, and a placement is also tied to a business group, just like a reservation is on the IaaS side of vRA.\nThese placements are then placed into a placement zone, which is simply a grouping of hosts. So you might have three different placements as part of the same zone. When a user goes to deploy a container, the priority, zone and business group will determine which hosts the container may run on. The diagram below shows how you can split up placements across multiple placement zones to manage which hosts are available for the business group.\nUnder the \u0026ldquo;Placements\u0026rdquo; menu in the \u0026ldquo;Containers\u0026rdquo; tab, click the plus sign to add a new placement. Give it a name, and either select the default placement zone or create your own. Then set a priority (lower number means higher priority) and assign the business group. If you\u0026rsquo;d like, you may assign it some limits, or by default, the entire container host is available to be used.\nYou can further restrict your placements by using a deployment policy. If you\u0026rsquo;re savvy with VM deployment already, think of a deployment policy like a reservation policy.
When a deployment policy is tied to a placement and a template, it will ensure it runs on that hosts you specify in the placement zone.\nTemplates Now that you\u0026rsquo;ve got the container hosts and placement zones all figured out we can use one of the default templates to deploy as a test. I selected the hello-world-nginx template. Find your container template and click provision.\nAfter you click the \u0026ldquo;Provision\u0026rdquo; button on the template you want to deploy, a new box will appear for you to select the business group that\u0026rsquo;s requesting it. In my case I only belong to a single business group so I\u0026rsquo;ve got one option. Click the \u0026ldquo;Provision\u0026rdquo; button again to start the process.\nOnce the provisioning starts, don\u0026rsquo;t blink or you\u0026rsquo;ll miss it. The right hand side of the screen will show you your provisioning status.\nAfter the provisioning has completed, you can look at the containers tab under resources to see what containers are running.\nIf you click on the container you\u0026rsquo;ll get additional information such as the ports, address and any logs from the container host.\nIf you then go to the web address of the container, you\u0026rsquo;ll see that my container is running, although only a simple hello-world container running nginx.\nSummary I\u0026rsquo;m not sure that container purists or developers will be in love with the containers options in vRA. I think most of them would prefer to use the command line, but for an operations team this might be a nice feature. You can add your own templates as well if you have a docker-compose file so this does give you some nifty options. Take a peek when you have time and try it out for yourself.\n","permalink":"https://theithollow.com/2017/05/08/containers-vrealize-automation/","summary":"\u003cp\u003eVersion 7.2 of vRealize Automation introduced containers to the vRA solution. This post is designed to get you up and running with some basic containers and give you the tools needed to deploy some of your own containers in your environment.\u003c/p\u003e\n\u003cp\u003eThe steps involved in this post assume that you have the \u0026ldquo;Container Administrator\u0026rdquo; and \u0026ldquo;Container Architect\u0026rdquo; roles. These are administrative roles that you\u0026rsquo;d need to get things started in your vRA instance.\u003c/p\u003e","title":"Containers on vRealize Automation"},{"content":"To me, a home lab is an important piece of my ongoing education. It\u0026rsquo;s one thing to watch videos and take classes but getting some time to build, configure or run solutions in your own setting is an invaluable resource. In my life, I\u0026rsquo;ve never learned anything REALLY well until I\u0026rsquo;ve had to operate and troubleshoot it. Having a mission critical system crash and having to learn how to fix it is a great way to learn things very quickly but also pretty painful. 
So to me, a home lab is critical.\nSo what goes in my home lab?\nHardware Networking Core Switch: HP v1910-24G Ethernet Switch Wireless Switch: Ubiquiti UniFi 8 POE-150W Storage vMotion Switch: Netgear XS708E 10 Gigabit Perimeter Firewall: Cisco ASA Wireless Firewall: Ubiquiti UniFi Security Gateway Wireless: Ubiquiti AC Pro Storage vSphere Storage Array: Synology DS1815+ 8 TB available of spinning disks with dual 256 GB SSD for Caching File Storage and Backup Array: Synology DS1513+ 3.6 TB available of spinning Disk Compute ESXi Hosts: Five ESXi hosts Automation OK, I\u0026rsquo;m an automation freak so some of my hardware obviously has to be related to automation too right? I\u0026rsquo;m not sure you\u0026rsquo;d call this lab equipment, but you know you can write code for this automation too, don\u0026rsquo;t you? OK Fine! This isn\u0026rsquo;t really for learning stuff, but more for playing around with the house and turning stuff on with my voice or phone. Are you happy now!?\nAmazon Echo and Echo Dots Nest Protects for smoke and CO2 detection Nest Thermostat Wink Hub 2 Phillips Hue Bulbs WeMo Plugs GE Dimmer Switches and Add-on Switches Logitech Harmony Elite Remove Control Withings Body - Body Composition Wi-Fi Scale 9Lights, TV, Thermostats etc are mostly controlled through Amazon Echo or through the IFTTT.com service.\nArchitecture I\u0026rsquo;ve got wireless networks setup for my home devices for laptops, ipads and guests with a firewall between them and the lab core switches. My perimeter firewall is connected with Azure VNets and AWS VPCs.\nSolutions The hardware is fun to mull over, but the real value is what sits on the hardware. As you might know I do a decent bit of work on vRealize Automation and cloud. Cloud is a pretty easy one since it doesn\u0026rsquo;t even have to run on my hardware. If this is your desired path you can even get away with not having a lab at all since you can just swipe a credit card.\nFor me, though I do a lot of Hybrid cloud work, which requires several different solutions all ready to go whenever I need to test something else out.\nvRealize Automation: My Cloud Management Portal where new resources are deployed. vRA can deploy across my VPN tunnels to both AWS and Azure. My instance includes vRealize Business. vRealize Orchestrator and vRealize Code Stream as well for chargeback and code pipeline testing. I\u0026rsquo;ve written a Guide for vRealize Automation here on the blog. Cisco UCS Director: Cisco UCS Director is another popular cloud management platform. I\u0026rsquo;ve written a guide to getting started on Cisco UCS Director as well. Ansible: Sometimes you need to do some basic configuration management. Ansible is a simple tool to get up and running and gives me a free configuration management tool. Server 2016 Domain Controllers: The infrastructure needs a domain to run on. Server 2016 is the latest Windows OS so that\u0026rsquo;s what I\u0026rsquo;m running. Bitbucket: Git is a requirement for anything stored as code these days. I like to have my own Git server so I can store some bad code with passwords directly in my scripts. You shouldn\u0026rsquo;t do this, but it\u0026rsquo;s OK for me to do it. Jenkins: Adding a CI/CD tool to you deployments is coming up more and more. All your code may need to go through a testing pipeline so Jenkins is a must. I\u0026rsquo;ve written a \u0026quot; Getting Started Guide\u0026quot; here on the blog. 
Houdini: The vRealize Code Stream for IT DevOps solutions to move blueprints around between instances. Certificate Authority: I have a Microsoft Certificate Authority that I can issue new certificates to all of my storage arrays, vRA instances and other vSphere solutions. Management Server: I\u0026rsquo;ve got Wireless APs that need a controller, someplace to deploy Windows Updates from and sometimes I need a jump host to install software on. This server does the job. Monitoring Server: Sometimes I want to monitor my bandwidth and install other monitoring tools. This server does that job for me. NSX: Software Defined Networking and dynamic firewalling is a common use case for cloud. NSX manager and controllers are installed in my management cluster, and the workload cluster can use these resources. I also have a series of post on setting up NSX with vRA 6. SQL Server: You never know when you\u0026rsquo;ll need a database. Emulators: UCS Platform Emulator, VNX Emulator and the list goes on. Having a cluster to deploy these emulators is pretty useful in a pinch. vCenter: I\u0026rsquo;ve got a vCenter server appliance running version 6.5 VSAN: VMware\u0026rsquo;s virtual san solution is a pretty popular storage solution. My workload cluster is a 2 node VSAN cluster. vROps: Another part of the vRealize Suite. vROps plugins can be tried out here and my resources can be viewed. Veeam Server: Veeam server used to backup some of my solutions and used to write vRealize Orchestrator packages against the API. Kemp Load Balancer: Sometimes you need a load balancer to test things out and Kemp provides a free one for vExperts. Any any given time new solutions could also be installed, tested, deleted, or upgraded. Recently I\u0026rsquo;ve installed other cloud management solutions, ServiceNow midservers, partner appliances and other servers. It\u0026rsquo;s a lab, this is what it\u0026rsquo;s for!\nCode Writing up some code to test things out is great. Code is so important anymore. But no matter how minor your code snippets, it is a good idea to store these in a GIT repo for recall later. I don\u0026rsquo;t know how many times I\u0026rsquo;ve gone back to look for a single line of code in a different module. It saves a lot of time and prevents googling.\nSummary The home lab is my place for learning, testing and building solutions before I need to do them in a production environment. It\u0026rsquo;s some work to keep maintained and certainly some money involved in ensuring I can run the things I need to run in it but I feel like it pays for itself with the marketable skills gained from training.\n","permalink":"https://theithollow.com/2017/05/01/whats-lab-2017/","summary":"\u003cp\u003eTo me, a home lab is an important piece of my ongoing education. It\u0026rsquo;s one thing to watch videos and take classes but getting some time to build, configure or run solutions in your own setting is an invaluable resource. In my life, I\u0026rsquo;ve never learned anything REALLY well until I\u0026rsquo;ve had to operate and troubleshoot it. Having a mission critical system crash and having to learn how to fix it is a great way to learn things very quickly but also pretty painful. So to me, a home lab is critical.\u003c/p\u003e","title":"Whats in the Lab for 2017?"},{"content":"Rubrik has announced their latest revision of their Cloud Data Management solution, version 3.2. 
The new release has some \u0026ldquo;Snazzy\u0026rdquo; new features according to one unnamed source from the Rubrik technical marketing team, but I\u0026rsquo;m focused mainly on one specific capability in this post.\nI\u0026rsquo;ve written about Rubrik several times before and have written some of the vRealize Orchestrator workflows for automating deployments with the Rubrik appliance. The main reason I like the solution is how easy it is to manage and that everything is API first, which is a must for automation these days.\nThe latest version added another cool capability for cloud workloads. This one being, a cloud instance. Yes, version 3.2 is ready to have a Rubrik cluster deployed in either AWS or Azure. In previous versions, Rubrik has allowed you to archive your data to AWS S3 buckets or Azure blob storage accounts which is a really nice way to extend how long your backup appliance can store data. Let\u0026rsquo;s face it, backup appliances have finite storage, but being able to use cloud storage to augment the appliance storage can really help to make your backup solution more cost effective. Why buy more appliances and all their features, just for more disk space?\nWhat Do I Need to Get Started? Rubrik has had a remote office, branch office (ROBO) appliance for a while called \u0026ldquo;Rubrik Edge\u0026rdquo;, which lets you place a virtual appliance in a smaller branch office to do some basic backups and then replicate that data back to the corporate office where your big boy Rubrik Cluster lives. It really saves some customers who have ROBOs that have trouble backing up over the WAN.\nThink of the new Cloud Rubrik appliance just like the edge only here you\u0026rsquo;ll be deploying a minimum of 4 EC2 instances (for AWS) or 4 Azure VMs (for Microsoft Azure) as a Rubrik cluster. Having four instances allows the cluster to perform erasure coding and provide the necessary data availability and integrity you\u0026rsquo;d be accustomed to of any enterprise backup solution.\nWhy is a Rubrik Cluster Deployed in a Public Cloud so Useful? Big whoop right? Rubrik took their virtual backup appliance and shoved it in AWS and Azure. Is that really such a big deal? Well, maybe this wasn\u0026rsquo;t the most difficult of technological feats, but the applications you might use it for are pretty nice. Lets take a few use cases.\nThe biggest one I see is that now I can have my backup array close to my data. I can backup any SQL, Windows or Linux workloads in the public cloud and never have the data leave traverse a WAN link. Cloud providers are notorious for billing outgoing network traffic, so this could be a big deal. You\u0026rsquo;ll need to do some math to see if it pays for having the four Rubrik instances or not. If we do want to replicate our public cloud backups, we can do that just like we might have with the Rubrik Edge appliance. But thanks to another new feature in this release, we can replicate to multiple targets. Meaning, my Azure backups could be replicated to AWS and on-prem for the ultimate in copy data redundancy. OK, maybe this sounds overly redundant, but some customers will have this requirement and this might be a really nice way to migrate workloads between clouds, regions or accounts. Reading the Tea Leaves. The new features are certainly useful in some cases, but one does have to speculate about what might come next for Rubrik. 
I have no more information about this than you do, but currently this new Rubrik cloud instance can only backup SQL, Windows or Linux endpoints. I\u0026rsquo;m wondering if EC2 instances or Azure VM instance backups are in the future. It would sure be nice to backup and restore an entire cloud instance with one of these appliances, down the road, wouldn\u0026rsquo;t it? I wonder if Rubrik is thinking the same thing? (ANY RUBRIK EMPLOYEES ARE FREE TO TELL US ROADMAP DETAILS IN THE COMMENTS!)\nOther Enhancements in 3.2 I get excited about cloud, but Rubrik did have a few other enhancements which I\u0026rsquo;ve ignored in this post. If you\u0026rsquo;re interested, here is a bulleted list of enhancements beside cloud the new cloud instance.\nMulti-Target Replication Setting Cluster Time Zones Support for External Encryption Key Managers Retention times can be set for Local, Remote and Archival data independently Assigning SLAs to On-Demand Snapshots ","permalink":"https://theithollow.com/2017/04/25/your-rubrik-for-the-cloud/","summary":"\u003cp\u003eRubrik has announced their latest revision of their Cloud Data Management solution, version 3.2. The new release has some \u0026ldquo;Snazzy\u0026rdquo; new features according to one unnamed source from the Rubrik technical marketing team, but I\u0026rsquo;m focused mainly on one specific capability in this post.\u003c/p\u003e\n\u003cp\u003eI\u0026rsquo;ve written about Rubrik several times before and have written some of the vRealize Orchestrator workflows for automating deployments with the Rubrik appliance. The main reason I like the solution is how easy it is to manage and that everything is API first, which is a must for automation these days.\u003c/p\u003e","title":"Your Rubrik for the Cloud"},{"content":"vRealize Code Stream is a tool that is used to operationalize infrastructure code blueprints for release management. Code Stream plugs into vRealize Automation and includes a testing framework though Jenkins and vRealize Orchestrator as well as using JFrog Artifactory and Xenon for storing artifacts. This post is used to organize several blog posts on helping you to get started with vRealize Code Stream and Houdini.\nSetting up Code Stream and Jenkins Setting up Code Stream and Artifactory Installing vRealize Code Stream for IT DevOps Configuring Endpoints for vRealize Code Stream for IT DevOps Using vRealize Code Stream for IT DevOps Unit Testing with vRealize Code Stream for IT DevOps Official Documentation: vRealize Code Stream Information Center VMware vRealize Code Stream Management Pack for IT DevOps Installation Guide\n","permalink":"https://theithollow.com/2017/04/24/getting-started-vrealize-code-stream/","summary":"\u003cp\u003evRealize Code Stream is a tool that is used to operationalize infrastructure code blueprints for release management. Code Stream plugs into vRealize Automation and includes a testing framework though Jenkins and vRealize Orchestrator as well as using JFrog Artifactory and Xenon for storing artifacts. 
This post is used to organize several blog posts on helping you to get started with vRealize Code Stream and Houdini.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2017/04/Houdini-UT7.png\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2017/04/Houdini-UT7.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"setting-up-code-stream-and-jenkins\"\u003e\u003ca href=\"/2016/05/09/using-jenkins-vrealize-code-stream/\"\u003eSetting up Code Stream and Jenkins\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"setting-up-code-stream-and-artifactory\"\u003e\u003ca href=\"/2016/05/23/code-stream-artifactory/\"\u003eSetting up Code Stream and Artifactory\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"installing-vrealize-code-stream-for-it-devops\"\u003e\u003ca href=\"/2017/03/27/installing-code-stream-management-pack-devops/\"\u003eInstalling vRealize Code Stream for IT DevOps\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"configuring-endpoints-for-vrealize-code-stream-for-it-devops\"\u003e\u003ca href=\"/2017/04/04/configuring-vrealize-code-stream-management-pack-devops-endpoints/\"\u003eConfiguring Endpoints for vRealize Code Stream for IT DevOps\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"using-vrealize-code-stream-for-it-devops\"\u003e\u003ca href=\"/2017/04/10/using-vrealize-code-stream-management-pack-devops/\"\u003eUsing vRealize Code Stream for IT DevOps\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"unit-testing-with-vrealize-code-stream-for-it-devops\"\u003e\u003ca href=\"/2017/04/18/vrealize-code-stream-management-pack-devops-unit-testing/\"\u003eUnit Testing with vRealize Code Stream for IT DevOps\u003c/a\u003e\u003c/h1\u003e\n\u003ch2 id=\"official-documentation\"\u003eOfficial Documentation:\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"http://pubs.vmware.com/vrcs-22/index.jsp\"\u003evRealize Code Stream Information Center\u003c/a\u003e \u003ca href=\"https://c368768.ssl.cf1.rackcdn.com/product_files/25094/original/vRealize_Code_Stream_Management_Pack_for_IT_DevOps_1.0.0-Installation_Guide3618beabffbd8e695216793ec30aaf6f.pdf\"\u003eVMware vRealize Code Stream Management Pack for IT DevOps Installation Guide\u003c/a\u003e\u003c/p\u003e","title":"Getting Started with vRealize Code Stream"},{"content":"vRealize Code Stream Management Pack for IT DevOps (code named Houdini by VMware) allows us to treat our vRealize Automation Blueprints, or other objects, as pieces of code that can be promoted between environments. In previous posts we\u0026rsquo;ve done just this, but a glaring piece was missing in during those articles. Promoting code between environments is great, but we\u0026rsquo;ve got to test it first or this process is only good for moving code around. A full release pipeline including unit tests can make your environment much more useful for organizations trying to ensure consistency.\nHoudini Unit Testing The first thing that has to be done is to add a testing endpoint if not done already. The steps for creating endpoints can be found in a previous post on Houdini endpoints. Once this is done, we can start by writing up some unit tests in vRealize Orchestrator. The types of tests you\u0026rsquo;ll be running will depend on the type of blueprints, and the functions they provide. For the purposes of this post, we\u0026rsquo;ll be working with an XaaS Blueprint that snapshots a VM. 
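For a feel of what a test like that actually has to do, the same kind of catalog request can be sketched directly against the vRA REST API. This is purely illustrative and not part of the Houdini framework; the hostname, tenant, and account below are placeholders, and it assumes the appliance certificate is trusted.
# Grab a bearer token from the test vRA instance, then list the catalog items a unit
# test could request. Every name here (host, account, tenant) is a made-up example.
$body = @{ username = "houdini-svc@lab.local"; password = "********"; tenant = "vsphere.local" } | ConvertTo-Json
$token = (Invoke-RestMethod -Method Post -Uri "https://vra-test.lab.local/identity/api/tokens" -Body $body -ContentType "application/json").id
$headers = @{ Authorization = "Bearer $token" }
(Invoke-RestMethod -Uri "https://vra-test.lab.local/catalog-service/api/consumer/catalogItems" -Headers $headers).content | Select-Object name, id
In Houdini itself the equivalent request is made from a vRO workflow, which is what we build next.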
We\u0026rsquo;ll write a new unit test that requests the XaaS Blueprint in our testing endpoint.\nHere\u0026rsquo;s what you need to know before you write your unit tests in vRO. Each of the package types that can be tested have their own folder under \u0026ldquo;Content Management Tasks\u0026rdquo; that were created when you import the Houdini package during installation. There is a common folder under each of those package types. Anything in the common folder will be run during the testing phase. By default, there is a workflow created that checks to make sure that a blueprint was created and exists in the testing endpoint. You can create additional workflows if you\u0026rsquo;d like and place them in this common folder to be run during the testing phase. As you can see from the screenshot, I\u0026rsquo;ve created an additional workflow named \u0026ldquo;theITHollow-Request ASD Blueprint\u0026rdquo;. Now when a test is run, each of the two blueprints in the common folder will be run.\nNow let\u0026rsquo;s request another package and test it. Go to the Houdini catalog and request a \u0026ldquo;Single Package Request\u0026rdquo; and run it through our testing framework.\nUnder the Actions tab select at least the capture content and deploy to test actions. You could also release to production if you feel like it but we only need to demonstrate the testing during this post.\nSelect the package type, source endpoint and the specific package you want to test.\nOn the \u0026ldquo;Test Details\u0026rdquo; tab, select the testing endpoint.\nEnter a version comment and then submit the request.\nAs the package is requested we can switch over to the code stream tab to view the pipeline execution. You can see that our deployment was successful.\nIf we drill into the \u0026ldquo;Test Content\u0026rdquo; task we can see the details about the tests. 
You can see the package name, type etc.\nIf we have look at the \u0026ldquo;Raw Input Properties\u0026rdquo; we can see the JSON data that is used as inputs for the unit tests.\n[ { \u0026#34;name\u0026#34;: \u0026#34;workflowId\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;ae3cfd13-0dd9-4046-bdb0-d33b2a2d4f6e\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;workflowName\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;String\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;RP Test Content Remote\u0026#34; }, { \u0026#34;name\u0026#34;: \u0026#34;properties\u0026#34;, \u0026#34;type\u0026#34;: \u0026#34;JSON[]\u0026#34;, \u0026#34;value\u0026#34;: [ { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;packageVersionLink\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;/houdini/package-versions/80915bbf-243d-4282-92a3-945ec647a438\u0026#34; }, { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;clmRequestId\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;745be98d-1b9a-4121-b23a-c564a04c647c\u0026#34; }, { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;requestId\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;2766389f-cb55-4257-8073-38e64c9059ef\u0026#34; }, { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;unitTestConfig\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Default\u0026#34; }, { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;packageId\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;1927fb4c-6ae9-4c0b-b807-bcf3edc22ccd\u0026#34; }, { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;packageName\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;ES_Create a snapshot\u0026#34; }, { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;testContentEndpoints\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;houdinitests\u0026#34; }, { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;branch\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;master\u0026#34; }, { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;packageType\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;Automation-XaaSBlueprint\u0026#34; }, { \u0026#34;type\u0026#34;: \u0026#34;string\u0026#34;, \u0026#34;name\u0026#34;: \u0026#34;version\u0026#34;, \u0026#34;value\u0026#34;: \u0026#34;2\u0026#34; } ] } ] Lastly, if we look at our workflows in vRO we can see how the testing turned out and which workflows were executed during our deployment. Notice that all of the workflows in the common folder were executed. Also, take a look at the two other workflows in the parent folder. The one named \u0026ldquo;ES_Create a snapshot - Test1\u0026rdquo; executed while the \u0026ldquo;Other Tests - Test1\u0026rdquo; did not. This is because any workflows in the parent folder that are named the same as the package will run during the testing phase. You may further use this hierarchy to augment your testing methodology.\nSummary A code pipeline is relatively worthless without some sort of test to ensure that the functionality is working after moving environment. Now that you understand how to use these unit tests, your code pipeline can really start working to your advantage. 
Good luck with your unit testing and code promotion.\n","permalink":"https://theithollow.com/2017/04/18/vrealize-code-stream-management-pack-devops-unit-testing/","summary":"\u003cp\u003evRealize Code Stream Management Pack for IT DevOps (code named Houdini by VMware) allows us to treat our vRealize Automation Blueprints, or other objects, as pieces of code that can be promoted between environments. In \u003ca href=\"/2017/04/10/using-vrealize-code-stream-management-pack-devops/\"\u003eprevious posts\u003c/a\u003e we\u0026rsquo;ve done just this, but a glaring piece was missing in during those articles. Promoting code between environments is great, but we\u0026rsquo;ve got to test it first or this process is only good for moving code around. A full release pipeline including unit tests can make your environment much more useful for organizations trying to ensure consistency.\u003c/p\u003e","title":"vRealize Code Stream Management Pack for IT DevOps Unit Testing"},{"content":"In previous posts we covered how to install, configure and setup vRealize Code Stream Management Pack for IT DevOps (code named Houdini) so that we could get to this point. During this post we\u0026rsquo;ll take one of our vRA blueprints in the development instance and move it to the production instance. Let\u0026rsquo;s get started.\nTo set the stage, here is my development instance where I have several blueprints at my disposal. Some of them even work! (That was a joke) For this exercise, I want to move the \u0026ldquo;Server2016\u0026rdquo; catalog from my development instance to my production instance because I have it working perfectly with my vSphere environment.\nPackage Request Login to the default tenant where the Houdini solution was deployed with your Houdini admin account. Under the \u0026ldquo;Packages\u0026rdquo; service request the \u0026ldquo;Single Package Request\u0026rdquo; to being the move.\nJust like we saw in parts 1 and 2 the first screen that shows up during a request is an information page. Read the information and move on to the next tab. On the \u0026ldquo;Choose Actions\u0026rdquo; tab select the \u0026ldquo;Capture content from endpoint\u0026rdquo; and \u0026ldquo;Release content to production\u0026rdquo; check marks. NOTE: you could also run unit tests as part of this process but you\u0026rsquo;d also need to specify a test endpoint as well as setting up some unit tests. This functionality isn\u0026rsquo;t covered in this blog post, instead we\u0026rsquo;re focused on promoting our code from the development instance to the production instance only. Stay tuned for doing tests on your blueprints as part of this process. On the \u0026ldquo;Content Details\u0026rdquo; tab select the package type. Since I\u0026rsquo;m moving a vRA 7 blueprint I\u0026rsquo;ve chosen \u0026ldquo;Automation-CompositeBlueprint\u0026rdquo;. After this, select the source endpoint which will be our development instance. After the screen refreshes you should see a list of your vRA blueprints and you can then select the desired blueprint from the list. Select the \u0026ldquo;Include dependencies\u0026rdquo; check box to ensure any blueprints that this composite blueprint depends on will also be moved. On the \u0026ldquo;Release Details\u0026rdquo; tab you\u0026rsquo;ll need to select the content endpoint where you\u0026rsquo;ll be deploying the new blueprints. you can also select tags and place a comment on the request. 
The \u0026ldquo;Additional Details\u0026rdquo; tab will let you place some version information and decide if you want to make a mock request. When you\u0026rsquo;re ready, click Submit. After the request has been made you can move over to the \u0026ldquo;Code Stream\u0026rdquo; tab. Under \u0026ldquo;Pipeline Executions\u0026rdquo; you\u0026rsquo;ll be able to monitor the request through each phase. You\u0026rsquo;ll notice that the pipeline below is successful, but also notice that the \u0026ldquo;Test\u0026rdquo; phase was skipped because we didn\u0026rsquo;t enable that during our request.\nOnce the pipeline has executed successfully you can login to your vRA production instance. You\u0026rsquo;ll see that the Server2016 Blueprint has been moved over and is available as it is here in my production instance. Summary Preventing user error when migrating blueprints or workflows between instances can be mitigated by process. Using vRealize Code Stream for IT DevOps helps to ensure that code is properly moved and tested before going into a production environment. If you\u0026rsquo;ve got vRealize Code Stream licenses, I recommend trying out Houdini in your environment so you too can prevent those user errors when moving blueprints around. Happy coding!\n","permalink":"https://theithollow.com/2017/04/10/using-vrealize-code-stream-management-pack-devops/","summary":"\u003cp\u003eIn previous posts we covered how to \u003ca href=\"/2017/03/27/installing-code-stream-management-pack-devops/\"\u003einstall\u003c/a\u003e, \u003ca href=\"/2017/04/04/configuring-vrealize-code-stream-management-pack-devops-endpoints/\"\u003econfigure and setup\u003c/a\u003e vRealize Code Stream Management Pack for IT DevOps (code named Houdini) so that we could get to this point. During this post we\u0026rsquo;ll take one of our vRA blueprints in the development instance and move it to the production instance. Let\u0026rsquo;s get started.\u003c/p\u003e\n\u003cp\u003eTo set the stage, here is my development instance where I have several blueprints at my disposal. Some of them even work! (That was a joke) For this exercise, I want to move the \u0026ldquo;Server2016\u0026rdquo; catalog from my development instance to my production instance because I have it working perfectly with my vSphere environment.\u003c/p\u003e","title":"Using vRealize Code Stream Management Pack for IT DevOps"},{"content":"In the previous post we covered the architecture and setup of the vRealize Code Stream Management Pack for IT DevOps (also known as Houdini). In this post we\u0026rsquo;ll cover how we need to setup Houdini\u0026rsquo;s endpoints so that we can use them to release our blueprints or workflows to other instances.\nRemote Content Server Endpoint Setup To setup our endpoints we can use nicely packaged blueprints right in vRA. It\u0026rsquo;s pretty nice that our setup deployed some blueprints for us to use, right in the default tenant of our vRA server. Login to the vRA default tenant with your Houdini Administrator that you setup in part 1. Then go to the catalog and request the \u0026ldquo;Add Remote Content Endpoint\u0026rdquo; under the \u0026ldquo;Administration\u0026rdquo; service. A remote content server (RCS) is a vRA appliance that will cache your packages. It\u0026rsquo;s a pretty useful thing to have if you\u0026rsquo;ve got vRA appliances in different sites and you need to move vSphere VMs or other large objects over a WAN. 
Future releases can be copied from the remote content server instead of always copying from the source.\nThe first screen of the blueprint will show you information about the request. Read through it and then move on to the \u0026ldquo;Add Remote Content Server\u0026rdquo; tab. In the Remote Content Server Name field, give it a name. Then enter in the connection information for the content server.\nNext, vRA will do some verification checks before you submit the request. Check the SSH fingerprint and accept it. If an error is occurring, go back to the previous step and check your connection information. On the last screen, you can have the request data emailed to you and you can do a \u0026ldquo;mock request\u0026rdquo; which won\u0026rsquo;t really do anything other than test the request outcome to see if it would work. I\u0026rsquo;ve skipped those steps in my screenshot. Click Submit and you can watch the request happen in the requests tabs of vRA. Setup the vRealize Orchestrator/Automation Endpoints Now we need to setup our development server endpoints. These endpoints will be considered our \u0026ldquo;source\u0026rdquo; endpoint which is where our blueprints are built and then captured by Houdini. The endpoint setup requires us to add endpoints for both vRealize Orchestrator and vRealize Automation, IN THAT ORDER! This is done in the same manner that the remote content server endpoint is setup. This time go to the catalog and request the \u0026ldquo;Add Content Endpoint\u0026rdquo; blueprint in the \u0026ldquo;Endpoints\u0026rdquo; service.\nAgain, the first tab is the information tab which you can read through, when you\u0026rsquo;re ready, go to the Add Content Endpoint tab. Give the endpoint a name and select the Orchestration category. Under the supported package types select all of them and click the arrows to move them into the box on the right side that isn\u0026rsquo;t really labeled. After that you\u0026rsquo;ll want to enter the FQDN of your development vRealize Orchestrator instance (which may be embedded in your vRA appliance). Then enter connection information about your vRO instance and you can assign tags for describing the instance.\nOn the \u0026ldquo;Endpoint Policy\u0026rdquo; tab, select \u0026ldquo;Allow capturing from this endpoint\u0026rdquo; check box. You could select all of the check boxes but in my example this is a one way operation, meaning that I\u0026rsquo;ll capture objects from this endpoint and release them to another endpoint. This setup won\u0026rsquo;t support the opposite method in my configuration but your situation may be different so check the boxes that make sense in your environment. You\u0026rsquo;ll also notice that you can configure a vRA tenant for testing your blueprints before releasing if you\u0026rsquo;d like. The tests are run from vRealize Orchestrator and you\u0026rsquo;ll need to build some of your own tests for it to be truly useful.\nOn the \u0026ldquo;Connection Test\u0026rdquo; tab, click the \u0026ldquo;Test connection\u0026rdquo; check box to ensure that the endpoint is responding. When the certificate fingerprint comes back, verify that it\u0026rsquo;s correct and then accept the certificate.\nIn the \u0026ldquo;Submit Request\u0026rdquo; tab we have the same options as before. Do as you like here and then click submit.\nAfter you\u0026rsquo;ve configured the vRealize Orchestrator Endpoint, you\u0026rsquo;ll do the exact same operation for a vRealize Automation Endpoint. 
Run the same endpoint blueprint and fill out the endpoint name. This time in the content category drop down select \u0026ldquo;Automation\u0026rdquo; from the list. After that you\u0026rsquo;ll need to select the vRA version for the development vRA instance. Again select whichever vRA packages that you\u0026rsquo;ll want to manage and push them to the right hand box. Under the attached vRO servers, select the endpoint we setup in the previous steps. Then enter the connection information for the development vRA instance and the tenant which you\u0026rsquo;ll manage.\nAgain, we\u0026rsquo;ll select the \u0026ldquo;Allow capturing from this endpoint\u0026rdquo; check box since we\u0026rsquo;ll export blueprints out of this instance and not into it.\nTest the connection again, just like before and accept the certificate fingerprint. Submit the request.\nAdd vRealize Orchestration/Automation Production Endpoints Perform the same steps as above again. Setup an orchestrator endpoint and an automation endpoint for the production instance. The process for this is exactly the same except on the \u0026ldquo;Endpoint Policy\u0026rdquo; tab we need to select \u0026ldquo;Allow releasing to this endpoint\u0026rdquo; instead of the capturing from this endpoint check box. You can also select our remote content server which is optional for your setup. I\u0026rsquo;ve also gone through this setup again and added a testing endpoint which is just an additional tenant on my development instance. You could use an entirely different vRA instance or whatever you think would work best for your situation.\nSummary If you\u0026rsquo;ve made it this far, you\u0026rsquo;re ready to start using Houdini to move your blueprints around between instances. In the next post we\u0026rsquo;ll do just that by leveraging the vRealize Code Stream Management Pack for IT DevOps.\n","permalink":"https://theithollow.com/2017/04/04/configuring-vrealize-code-stream-management-pack-devops-endpoints/","summary":"\u003cp\u003eIn the \u003ca href=\"/2017/03/27/installing-code-stream-management-pack-devops/\"\u003eprevious post\u003c/a\u003e we covered the architecture and setup of the vRealize Code Stream Management Pack for IT DevOps (also known as Houdini). In this post we\u0026rsquo;ll cover how we need to setup Houdini\u0026rsquo;s endpoints so that we can use them to release our blueprints or workflows to other instances.\u003c/p\u003e\n\u003ch1 id=\"remote-content-server-endpoint-setup\"\u003eRemote Content Server Endpoint Setup\u003c/h1\u003e\n\u003cp\u003eTo setup our endpoints we can use nicely packaged blueprints right in vRA. It\u0026rsquo;s pretty nice that our setup deployed some blueprints for us to use, right in the default tenant of our vRA server. Login to the vRA default tenant with your Houdini Administrator that you setup in \u003ca href=\"/2017/03/27/installing-code-stream-management-pack-devops/\"\u003epart 1\u003c/a\u003e. Then go to the catalog and request the \u0026ldquo;Add Remote Content Endpoint\u0026rdquo;  under the \u0026ldquo;Administration\u0026rdquo; service. A remote content server (RCS) is a vRA appliance that will cache your packages. It\u0026rsquo;s a pretty useful thing to have if you\u0026rsquo;ve got vRA appliances in different sites and you need to move vSphere VMs or other large objects over a WAN. 
Future releases can be copied from the remote content server instead of always copying from the source.\u003c/p\u003e","title":"Configuring vRealize Code Stream Management Pack for IT DevOps Endpoints"},{"content":"Deploying blueprints in vRealize Automation is one thing, but with all things as code, we need to be able to move this work from our test instances to development and production instances. It\u0026rsquo;s pretty important to be sure that the code being moved to a new instance is identical. We don\u0026rsquo;t want to have a user re-create the blueprints or workflows because it\u0026rsquo;s prone to user error. Luckily for us, we have a solution. VMware has the vRealize Code Stream Management Pack for IT DevOps which I though about nicknaming vRCSMPITDO but that didn\u0026rsquo;t really roll off the tongue. VMware previously nicknamed this product \u0026ldquo;Houdini\u0026rdquo; so for the purposes of this post, we\u0026rsquo;ll use that too! This article will kick off a few more posts on using the product but for now we\u0026rsquo;ll focus on installing it.\nHoudini Architecture You could set this up in a few ways but we\u0026rsquo;ll keep this pretty simple for a starting point. Houdini needs a few things. For our scenario we\u0026rsquo;ll have two separate vRA instances, development and production and we\u0026rsquo;ll deploy a third vRA appliance (No IAAS Serves required) to serve as a remote content server which will cache our blueprints. These remote content servers can be used when passing artifacts over the WAN or between security zones can be an issue.\nWe\u0026rsquo;ll also need to install Houdini in the default tenant of one of our instances. This is important since right now, Houdini can only be installed in the default tenant.\nHoudini Prerequisites Before we click those install buttons, we\u0026rsquo;ll need to ensure that our default tenant is in tip top shape. We\u0026rsquo;ll need a vRealize Code Stream license applied since vRCS is the major mechanism used by Houdini. After this, we\u0026rsquo;ll need to ensure that we have a user with the appropriate permissions on the default tenant to operate Code Stream. Be sure that your Houdini user has the following roles:\nRelease Engineer Release Manager Tenant Administrator XaaS Architect Houdini Installation Houdini gets installed from vRealize Orchestrator and not as a windows installer package or OVA file or something. Download the vRealize Code Stream Management Pack for IT DevOps Package from the VMware product downloads page. Once done, open your vRealize Orchestrator console and change to the design view. Switch over to the packages tab and click the import button. Find the com.vmware.cse.clm.all.package file that you\u0026rsquo;ve downloaded from the VMware product page.\nImport and trust the provider of the package.\nReview all of the workflows, actions, and resources used by Houdini and click \u0026ldquo;Import selected element\u0026rdquo;.\nBasic Configuration Once your package has been imported, you can begin some of the configuration with vRealize Automation. This configuration is done by running one of those shiny new vRO workflows that got imported as part of the packages. Navigate to Library \u0026ndash;\u0026gt; Content Management \u0026ndash;\u0026gt; Configuration \u0026ndash;\u0026gt; Install Content Management. 
Run the workflow.\nOn the first screen of the workflow, accept (after thoroughly reading) the EULA and then click \u0026ldquo;Next\u0026rdquo;.\nGo ahead and read through the pre-requisites screen which will give you a pretty good idea of what the architecture will look like. Click Next when you\u0026rsquo;re ready.\nOn the next screen we need to enter in our vRealize Automation information that will house our Houdini blueprints. I\u0026rsquo;ve selected my development instance. Fill in the vRA FQDN, tenant admin user, password and SSH username and password of the vRA appliance. Also be sure to select the tenant which will need to be the default tenant \u0026ldquo;vsphere.local\u0026rdquo;.\nNext enter the vRealize Orchestrator username and password. I\u0026rsquo;ve changed my authentication groups from the default so yours might still be administrator@vsphere.local.\nNow enter in our Houdini user account information which has Code Stream permissions on the default tenant.\nNow we need to enter in the root username/password for our vRA content server which is just a simple vRA deploment. We didn\u0026rsquo;t even need the IaaS servers installed for that one. You can select \u0026ldquo;stop unused component services on the Primary Content Server\u0026rdquo; so that unnecessary services can be stopped. Enter in a password to be used for a repository admin account and a repository auditor account for the Xeon repository. This is where content will be cached on the content repository.\nYou can also move your artifactory content to the Xenon repository if you wish, but I\u0026rsquo;ve decided to skip this step. Click Submit to kick off the workflow. Be patient while the workflow runs. It took me about 15 minutes for the workflow to run. If the workflow fails, check to see if you have 403 errors in the vRO log. If you do, make sure that Houdini has the correct permissions.\nSummary This is only the first of a few posts on vRealize Code Stream Management Pack for IT DevOps. In future posts we\u0026rsquo;ll cover setting up endpoints for our Orchestrator instances and vRA instances and eventually move some blueprints. Stay tuned!\n","permalink":"https://theithollow.com/2017/03/27/installing-code-stream-management-pack-devops/","summary":"\u003cp\u003eDeploying blueprints in vRealize Automation is one thing, but with all things as code, we need to be able to move this work from our test instances to development and production instances. It\u0026rsquo;s pretty important to be sure that the code being moved to a new instance is identical. We don\u0026rsquo;t want to have a user re-create the blueprints or workflows because it\u0026rsquo;s prone to user error. Luckily for us, we have a solution. VMware has the vRealize Code Stream Management Pack for IT DevOps which I though about nicknaming vRCSMPITDO but that didn\u0026rsquo;t really roll off the tongue. VMware previously nicknamed this product \u0026ldquo;Houdini\u0026rdquo; so for the purposes of this post, we\u0026rsquo;ll use that too! This article will kick off a few more posts on using the product but for now we\u0026rsquo;ll focus on installing it.\u003c/p\u003e","title":"Installing Code Stream Management Pack for IT DevOps"},{"content":"As of vRealize Automation 7.2, you can now deploy workloads to Microsoft Azure through vRA\u0026rsquo;s native capabilities. Don\u0026rsquo;t get too excited here though since the process for adding an endpoint is much different than it is for other endpoints such as vSphere or AWS. 
The process for Azure in vRA 7 is to leverage objects in vRealize Orchestrator to do the heavy lifting. If you know things like resource mappings and vRO objects, you can do very similar tasks in the tool.\nAzure Prerequisite Setup Before you get going with vRA you\u0026rsquo;ll have to have some basic things set up in the Microsoft Azure Portal. For this post, I\u0026rsquo;m expecting that you\u0026rsquo;ve got the following things set up in Azure already. The list below shows what should already be up and running in your Azure portal and, if it isn\u0026rsquo;t already, includes a link to setting it up from my Azure guide.\nAzure Account and Subscription(s)
Virtual Networks (VNet)
Storage Account for Virtual Machines
Azure Resource Group
Microsoft Azure PowerShell Module installed
Azure Information Gathering Unfortunately, vRA won\u0026rsquo;t be our first stop in setting up an Azure endpoint. We\u0026rsquo;ll need to do some research first on our own Azure portal just to get some IDs and ensure we have the proper names for our networks, storage accounts, etc. We\u0026rsquo;ll also need to set up programmatic access to our Azure subscriptions so that vRA can deploy resources. I\u0026rsquo;m providing a handy PowerShell script here to gather this information automatically, but if you\u0026rsquo;d rather use the Azure portal then I recommend Jon Schulman\u0026rsquo;s blog, vaficionado.com, which is a great resource for this procedure.\nBelow is a script that I\u0026rsquo;ve used to do the following things prior to our vRA Setup. Again, it assumes that you\u0026rsquo;ve already got your subscriptions, VNets, Storage Accounts, Resource Groups and PowerShell Modules installed. To give you a quick overview of what\u0026rsquo;s happening in it, we\u0026rsquo;re logging into the Azure Portal and setting up an application registration and granting permissions to your subscription so that vRA may use the API to deploy resources. After it\u0026rsquo;s done this, the script gathers your VNets, Storage Accounts, Resource Groups, as well as your tenant and subscription IDs which will be needed as part of the vRA Setup later on. Copy the output of the script to a text file so you can enter it into vRA.\n#Install-Module AzureRM #Uncomment this if you need to Install the Azure Module
############### Variables - Update ME! ###############
$subscr = \u0026#34;vRASubscription\u0026#34; #This is the subscription name that you\u0026#39;ll be using to deploy vRA workloads
$password = \u0026#34;!QAZxsw23edc\u0026#34; #This will be your secret access key used for programmatic access to Azure\u0026#39;s API
$appname = \u0026#34;vRAApp\u0026#34; #The application name used to deploy resources in Azure. Make it whatever you want.
$url = \u0026#34;http://blogname.com\u0026#34; #URL for your application name. Required, but you can make it whatever you want.
Login-AzureRMAccount -SubscriptionName $subscr #Login to the Azure Portal through PowerShell
############### DO NOT UPDATE BELOW THIS LINE ###############
######################################################################
############### App Registration Information ###############
Write-host \u0026#34;Setting up Application Registration\u0026#34;
$Azure_app = New-AzureRmADApplication -DisplayName $appname -HomePage $url -IdentifierUris $url -Password $password
New-AzureRmADServicePrincipal -ApplicationId $Azure_app.ApplicationId | Out-Null
Write-host \u0026#34;Application Registration Done. Starting to create Azure Role Assignment. Please wait!!!!\u0026#34;
Start-Sleep 60 #Wait a bit to make sure the Application is created
New-AzureRmRoleAssignment -RoleDefinitionName \u0026#34;Contributor\u0026#34; -ServicePrincipalName $Azure_app.ApplicationId | Out-Null
Write-host \u0026#34;Role Assignment complete\u0026#34;
############### Gather Information ###############
$vnets = Get-AzureRmVirtualNetwork
$storageaccounts = Get-AzureRMStorageAccount
$subscription = Get-AzureRMSubscription -SubscriptionName $subscr
$tenant = Get-AzureRMSubscription -SubscriptionName $subscr
$resourcegrps = Get-AzureRMResourceGroup
############### LIST Info for vRA ###############
Write-host \u0026#34;Use the following information for vRA Setup\u0026#34; -ForegroundColor Green
Write-output \u0026#34;`n\u0026#34;
Write-host \u0026#34;Azure Service URI is likely: https://management.azure.com\u0026#34;
Write-host \u0026#34;Azure Login URL is likely: https://login.windows.net\u0026#34;
Write-output \u0026#34;`n\u0026#34;
Write-output \u0026#34;Subscription ID: \u0026#34; $subscription.SubscriptionID
Write-output \u0026#34;`n\u0026#34;
Write-Output \u0026#34;Keys: \u0026#34; $password
Write-output \u0026#34;`n\u0026#34;
Write-output \u0026#34;Tenant ID: \u0026#34; $subscription.TenantId
Write-output \u0026#34;`n\u0026#34;
Write-output \u0026#34;Storage Accounts: \u0026#34;
$storageaccounts | Select StorageAccountName, Location | FT
write-output \u0026#34;ApplicationID (Also called Client ID): \u0026#34; $Azure_app.ApplicationId
write-output \u0026#34;Resource Groups: \u0026#34;
$resourcegrps | select ResourceGroupName, Location | FT
write-output \u0026#34;VNETS and Subnets: \u0026#34;
$vnets | select Name, Subnets | FT
vRealize Automation Setup Now we can move on to setting up some stuff in vRA. The steps in vRA are a bit different from other endpoints so we\u0026rsquo;ll walk through each piece of this. Before you begin down this road we\u0026rsquo;ll want to make sure we have the following information so that we can plug it in at the appropriate time:\nSubscription
TenantID
ApplicationID
Keys
ResourceGroups
StorageAccount
VNet
Location
There are also two other pieces of information you might need which should be:\nAzure service URI: https://management.azure.com/\nLogin URL: https://login.windows.net\nAll of this information should have come out of the PowerShell script.\nCreating an Azure Endpoint in vRA Normally, endpoints are created in the Infrastructure tab under endpoints. In the case of Azure, we go into the Administration tab \u0026ndash;\u0026gt; vRO Configuration \u0026ndash;\u0026gt; Endpoints. Click the \u0026ldquo;New\u0026rdquo; button to add a new endpoint. On the first screen select the Azure plug-in in the drop down.\nIn the Endpoint tab, give the endpoint a descriptive name and a good description. Next, we come to the details tab. Here we\u0026rsquo;ll need to enter a connection name and fill in the information we\u0026rsquo;ve collected from our work in Azure. Fill in the subscription ID, Tenant ID, Client ID and Client secret (keys), as well as the two settings we didn\u0026rsquo;t pull from the portal itself: the Azure service URI and login URL. All of this info should be available from the PowerShell script output. Note that the ClientID is also called the ApplicationID in the script.\nCreate an Azure Reservation Now our next step after creating an endpoint is usually to add resources to our fabric groups. With an Azure endpoint we can skip that step and go right to reservations. 
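If you no longer have the script output handy, the same AzureRM cmdlets the script uses can quickly re-list the names you\u0026rsquo;ll be entering into the reservation. This is purely a convenience, not a required step:
Get-AzureRmResourceGroup | Select ResourceGroupName, Location
Get-AzureRmStorageAccount | Select StorageAccountName, Location
Get-AzureRmVirtualNetwork | Select Name, Subnets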
Go to the Infrastructure tab \u0026ndash;\u0026gt; Reservations \u0026ndash;\u0026gt; Reservations (yep, I said reservations twice).\nHere you\u0026rsquo;ll need to give the reservation a name, select a business group that it belongs to and add things like reservation policies as appropriate. Be sure to enable it.\nNext on the Resources tab, we\u0026rsquo;ll need some of our information again. Enter the subscription ID that we used earlier when adding the endpoint and then select the drop down from the Location tab that matches your setup in Azure. Click \u0026ldquo;New\u0026rdquo; under resource groups and add a resource group that you\u0026rsquo;ll be using in Microsoft Azure. In the box below that you\u0026rsquo;ll want to click new and add the storage account you set up in Azure earlier.\nUnder the Network tab, you\u0026rsquo;ll need to add your VNet that was set up in Azure. After this, click \u0026ldquo;Finish\u0026rdquo; to finalize your reservation setup.\nCreating an Azure Blueprint Now that the infrastructure pieces are set up in vRA we can focus on creating our blueprints. Open the design tab and drag in the Azure Machine object. After that the typical ID and description should be added.\nOn the Build Information tab you\u0026rsquo;ll need to add quite a bit of information. The first of which is the Location in which the machine will be deployed as well as how we\u0026rsquo;ll name the machine.\nBelow this, we\u0026rsquo;ll have a bit more work to do. We need to specify the image that will be used to deploy our server. The \u0026ldquo;Virtual Machine Image\u0026rdquo;, if set to type \u0026ldquo;Stock\u0026rdquo;, will be an identifier (called a URN) specified by Microsoft. The format of this image is:\npublisher:offer:sku:version
You can get this information through the Azure PowerShell module like I did, with the commands:\nInstall-Module AzureRM #To Install the Azure Module
Login-AzureRMAccount #To Login to your Azure Account
Get-AzureRmVMImagePublisher -Location \u0026#39;East US\u0026#39; | Get-AzureRmVMImageOffer | Get-AzureRmVMImageSku | Get-AzureRmVMImage | where {$_.PublisherName -eq \u0026#34;Canonical\u0026#34;} | Select PublisherName, Offer, Skus, Version
This command takes a bit to come back but you can modify the publisher and export the data to a text file if you like. You may also use the Azure command line tools like Jon does in his post, if you\u0026rsquo;d prefer to go that route. If you do, the command to find the same publisher would be:\nvm image list --publisher canonical --location eastus
After you\u0026rsquo;ve entered your URN for the Virtual Machine Image, you\u0026rsquo;ll need to add some authentication information so that when you deploy your machine, you\u0026rsquo;ll have a way to log in to it. Either SSH or a user/password combination. Lastly, you\u0026rsquo;ll pick a \u0026ldquo;Series\u0026rdquo; and a \u0026ldquo;Size\u0026rdquo; which determines how big the new machine will be.\nOn the Machine Resources tab, you\u0026rsquo;ll enter what resource group the machine will belong to or you can create a new one on the fly. You\u0026rsquo;ll also be able to add the machine to an availability set if necessary.\nOn the Storage tab, enter the storage account we found earlier in the Azure portal and entered into our reservations.\nThe Network tab lets you specify the VNet, Subnet and additional networking information for the virtual machine. 
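Before saving, it\u0026rsquo;s worth double checking the image URN you entered on the Build Information tab. As a purely illustrative example, a stock Ubuntu entry in publisher:offer:sku:version form would look something like:
Canonical:UbuntuServer:16.04-LTS:latest
Verify the exact offer and SKU names with the PowerShell or CLI commands shown above, since they vary by publisher and region.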
Fill out the desired configuration and save the blueprint.\nSummary Setting up an Azure Virtual Machine through vRealize Automation isn\u0026rsquo;t quite as simple as a vSphere machine but it can be done. After you\u0026rsquo;ve built your blueprint, you\u0026rsquo;ll still need to publish it, add it to a catalog and entitle it appropriately, but this is all standard operating procedures. If you need help with any of those tasks, I\u0026rsquo;ve got a vRA guide for that as well. Good luck to you in deploying your own hybrid cloud environments with Microsoft Azure.\n","permalink":"https://theithollow.com/2017/03/20/adding-azure-endpoint-vrealize-automation-7/","summary":"\u003cp\u003eAs of vRealize Automation 7.2, you can now deploy workloads to Microsoft Azure through vRA\u0026rsquo;s native capabilities. Don\u0026rsquo;t get too excited here though since the process for adding an endpoint is much different than it is for other endpoints such as vSphere or AWS. The process for Azure in vRA 7 is to leverage objects in vRealize Orchestrator to do the heavy lifting. If you know things like resource mappings and vRO objects, you can do very similar tasks in the tool.\u003c/p\u003e","title":"Adding an Azure Endpoint to vRealize Automation 7"},{"content":"Recently, I\u0026rsquo;ve been going through and updating my lab so that I\u0026rsquo;m all up to date with the latest technology. As part of this process, I\u0026rsquo;ve updated my certificates so that all of my URLs have the nice trusted green logo on them. Oh yeah, and because it\u0026rsquo;s more secure.\nI updated my vSphere lab to version 6.5 and moved to the vCenter Server Appliance (VCSA) as part of my updates. However, after I replaced the default self-signed certificates I had a few new problems. Specifically, after the update, NSX wouldn\u0026rsquo;t connect to the lookup service. This is particularly annoying because as I found out later, if I\u0026rsquo;d have just left my self-signed certificates in tact, I would never have had to deal with this. I thought that I was doing the right thing for security, but VMware made it more painful for me to do the right thing. I\u0026rsquo;m hoping this gets more focus soon from VMware.\nWhen I tried to connect my NSX Manager to my vCenter I\u0026rsquo;d get an error stating that the certificate change was not verified, like the following:\nIt turns out that NSX won\u0026rsquo;t connect because it\u0026rsquo;s getting the wrong fingerprint back from vCenter. The following KB article covers how to fix this. https://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=2132645\nMy steps to fix it While the KB article does an adequate job of describing the fix, hopefully my screenshots will help add some additional color if you\u0026rsquo;re trying to go through the same thing. The first thing you want to do is go to your PSC\u0026rsquo;s managed object browser to find the old fingerprint. To do this go to https://PSCADDRESS/lookupservice/mob?moid=ServiceRegistration\u0026amp;method=List in a web browser.\nOnce you\u0026rsquo;ve logged in, you\u0026rsquo;ll want to modify the information in the \u0026ldquo;Value\u0026rdquo; box so that it only contains the filterCriteria tags like the screenshot below. When you\u0026rsquo;ve removed the other tag information in that window click \u0026ldquo;Invoke Method\u0026rdquo;\nAfter you\u0026rsquo;ve done this, to a find on that page for \u0026ldquo;sts/STS\u0026rdquo;. 
You\u0026rsquo;ll find this string in a URL down on the page. When you find this, copy the sslTrust value from the line preceding this sts/STS string you searched for. It\u0026rsquo;ll look like a long string of garbled text. Copy this text.\nOpen up a text editor and paste this string into the file and save it as sts.cer someplace safe.\nNow open the sts.cer file with certmgr.exe on a Windows machine or similar program. Find the fingerprint by going to the details tab and looking for the \u0026ldquo;Thumbprint\u0026rdquo; attribute. Copy the thumbprint value into your clipboard.\nOpen a new txt file and paste the thumbprint into the text editor. Remove all of the spaces that may be present so that it\u0026rsquo;s a single string with no spaces in it. Save the file as old.fprint.txt.\nNow, SSH into your PSC, change to the root folder and create a new directory named \u0026ldquo;certs\u0026rdquo; there. After that we need to run the command:\n/usr/lib/vmware-vmafd/bin/vecs-cli entry getcert --store MACHINE_SSL_CERT --alias __MACHINE_CERT --output /certs/new_sts.crt\nNow we can update our thumbprints on the existing certificates. Run the command:\npython /usr/lib/vmidentity/tools/scripts/ls_update_certs.py --url VCENTERLOOKUPURL --fingerprint FINGERPRINTFROM_OLD.FPRINT.TXT --certfile /certs/new_sts.crt --user administrator@vsphere.local --password YOURPASSWORD\nWhen you execute this command, it may take a bit but it will run for a while returning data. When complete you can try to connect your NSX manager again.\nOnce I ran through the steps, I was able to successfully connect NSX to my lookup service again.\nSummary I hope that if you\u0026rsquo;ve reached this page, you\u0026rsquo;ll be able to find the answer to your question about connecting NSX to vCenter. And more importantly, hopefully VMware will fix this issue so that replacing your certificates won\u0026rsquo;t cause further issues in your environment.\n","permalink":"https://theithollow.com/2017/03/13/nsx-issues-replacing-vmware-self-signed-certs/","summary":"\u003cp\u003eRecently, I\u0026rsquo;ve been going through and updating my lab so that I\u0026rsquo;m all up to date with the latest technology. As part of this process, I\u0026rsquo;ve updated my certificates so that all of my URLs have the nice trusted green logo on them. Oh yeah, and because it\u0026rsquo;s more secure.\u003c/p\u003e\n\u003cp\u003eI updated my vSphere lab to version 6.5 and moved to the vCenter Server Appliance (VCSA) as part of my updates. However, after I replaced the default self-signed certificates I had a few new problems. Specifically, after the update, NSX wouldn\u0026rsquo;t connect to the lookup service. This is particularly annoying because as I found out later, if I\u0026rsquo;d have just left my self-signed certificates intact, I would never have had to deal with this. I thought that I was doing the right thing for security, but VMware made it more painful for me to do the right thing. I\u0026rsquo;m hoping this gets more focus soon from VMware.\u003c/p\u003e","title":"NSX Issues After Replacing VMware Self-Signed Certs"},{"content":"Packer is a free tool from Hashicorp that allows you to build new images. Keeping base vSphere templates up to date is not too difficult of a task for many, but as we add things like AWS accounts and regions, it\u0026rsquo;s pretty easy to have sprawl to deal with. 
We\u0026rsquo;d like to make sure that an image in our vSphere datacenter looks the same as an image in our public clouds.\nPacker gives us a great way to do this. The tool takes a JSON file and builds a new image based on the information contained within. This allows us to move our template patching and updating processes as a piece of code that can be stored in a version control repository like git. Now our builds are deployed in a repeatable fashion and we know that each month when our images are updated, that they\u0026rsquo;ll be consistent and the same across our organization. It\u0026rsquo;s pretty awesome to be able to use a file like the one below to build all of your images across your infrastructure.\nAWS images update pretty straight forward by cloning an existing Amazon Machine Image (AMI), updating it and creating a new AMI from a new machine. The vSphere process can be a bit different. For example, A VMware Workstation VM can be cloned, updated and then exported to vSphere through the OVFtool. This process is slightly different but achieves the same effect. The diagram below demonstrates how the processes look.\nThe video below should give you a pretty good idea about how to implement this in your own environment. Good luck, and happy coding.\nhttps://youtu.be/e3UuIqj7amg\n","permalink":"https://theithollow.com/2017/03/06/using-packer-create-vsphere-aws-images/","summary":"\u003cp\u003e\u003ca href=\"https://www.packer.io/\"\u003ePacker\u003c/a\u003e is a free tool from \u003ca href=\"https://www.hashicorp.com/\"\u003eHashicorp\u003c/a\u003e that allows you to build new images. Keeping base vSphere templates up to date is not too difficult of a task for many, but as we add things like AWS accounts and regions, it\u0026rsquo;s pretty easy to have sprawl to deal with. We\u0026rsquo;d like to make sure that an image in our vSphere datacenter looks the same as an image in our public clouds.\u003c/p\u003e","title":"Using Packer to Create vSphere and AWS Images"},{"content":"Many cloud initiatives require having a portal for users to choose which workloads can be deployed. Think of this as a supermarket full of servers, networks, databases, or all of the above. There are product offerings from VMware, Cisco, RightScale and Redhat, used for these deployment methodologies. If you\u0026rsquo;re an AWS customer though, you\u0026rsquo;ve got your own catalog available from the native AWS tools called the \u0026ldquo;Service Catalog\u0026rdquo; service. This service enables you to deploy and publish CloudFormation templates for your users so that they don\u0026rsquo;t have to know how RDS, or EC2 instances work. They can select from the catalog and deploy anything you can build in an Amazon CFT. Think of the possibilities.\nBuilding a Catalog If you open up the Service Catalog service for the first time you\u0026rsquo;ll get a pretty familiar \u0026ldquo;Get Started\u0026rdquo; page where you can click the button to start building.\nAdd a Portfolio The first step is to create a portfolio. The portfolio will be a list of available offerings to our users. For example, I might have one portfolio for the production environment and another for development. Or maybe I have a different portfolio by business unit. Really however you feel like you should group your products together will work here. Fill out a name, description and an owner for future reference, and then click \u0026ldquo;Create\u0026rdquo;\nAt this point you\u0026rsquo;ll see a portfolio listed in your AWS console. 
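If you\u0026rsquo;d rather script this step than click through the console, the AWS CLI can create the same portfolio. The names below are just examples:
aws servicecatalog create-portfolio --display-name \u0026#34;Lab Portfolio\u0026#34; --provider-name \u0026#34;IT Ops\u0026#34; --description \u0026#34;Products for the lab environment\u0026#34;
Most of the steps that follow have CLI equivalents as well, but I\u0026rsquo;ll stick with the console for the rest of the walkthrough.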
Add a Product Go back to the \u0026ldquo;Service Catalog\u0026rdquo; drop down and look for \u0026ldquo;Products list\u0026rdquo;.\nFrom here we\u0026rsquo;ll add a new product. The products are the items that will be listed in the dashboard. For instance, here you might have an EC2 instance or an entire VPC with VMs, Databases, Security Groups, NACLs, and custom applications. Whatever items you want your users to be able to request will be added as a product. If you haven\u0026rsquo;t guessed yet, we\u0026rsquo;ll later add these products to our portfolio. Click the \u0026ldquo;Upload new product\u0026rdquo; button to get started.\nThe first step to adding a product is to give it some product details. Enter a descriptive product name, a description, the person who publishes the product and a vendor if applicable. When you\u0026rsquo;ve named your product, click \u0026ldquo;next\u0026rdquo;.\nThe next screen we want to add some support details. You can totally skip over this page if it\u0026rsquo;s a lab, but some support information is good for handling issues later and being able to get the right help for the product deployments. I highly recommend adding info here. When you\u0026rsquo;re done, click \u0026ldquo;next\u0026rdquo;.\nThe version details screen is where the magic happens. On this screen you\u0026rsquo;ll need to either upload a new template file, or specify an S3 bucket URL where your CloudFormation template lives. Give the product a version and a description. The version might not seem to important right now, but you can guess later on you\u0026rsquo;ll need to update this CloudFormation template with an additional version and you\u0026rsquo;ll want your users to be informed. Once done, click \u0026ldquo;next\u0026rdquo;.\nReview the summary screen and click the \u0026ldquo;create\u0026rdquo; button.\nOnce the product has been uploaded, you\u0026rsquo;ll see it in your products list.\nAdd the Product to the Portfolio Now that we\u0026rsquo;ve got a product and a portfolio built, we want to go back into our portfolio and add our products. Once you open your portfolio screen, click the \u0026ldquo;Upload new product\u0026rdquo; button.\nSelect the products that should belong in the portfolio and click the \u0026ldquo;Add product to portfolio\u0026rdquo; button.\nAdding Constraints Once you\u0026rsquo;ve added your product to the portfolio, we can add constraints for it. Constraints are rules applied when a user launches a product from the portfolio. For instance, here we\u0026rsquo;ll add a launch constraint that specifies what IAM role will be used to do the deployment. Think of it this way, you may not let your end users deploy a brand new VPC, but if you specify how the VPC is deployed as part of a CloudFormation template, it might be OK. In that case you need to make sure the product launches with the correct permissions to add a new VPC while still not granting those permissions to your end users. Click the \u0026ldquo;constraints\u0026rdquo; section to add a new contraint.\nSelect a product and a constraint type from the dropdown. In my case I\u0026rsquo;m worries about the launch permissions. Click the \u0026ldquo;continue\u0026rdquo; button when ready.\nThe launch constraint I\u0026rsquo;ve selected requires some additional permissions so I\u0026rsquo;ve selected my VPCAdmin role.\nNow in your portfolio, you should see a new launch constraint listed. Add Permissions The last step we need to do with our portfolio is to assign it to our users. 
We can specify a single user, role or group to have access to request these catalog items. In the portfolio, expand the Users, Groups and roles section and click the \u0026ldquo;Add user, group or role\u0026rdquo; button. Select the appropriate IAM object and click the \u0026ldquo;Add Access\u0026rdquo; button. End Result When all said and done you can have multiple products like the ones shown in the screenshot below. Users can select the product and click the \u0026ldquo;Launch product\u0026rdquo; button to deploy new CloudFormation templates in the AWS console. Any products that have been deployed will show up in the \u0026ldquo;provisioned products\u0026rdquo; section of the dashboard. From there, users will have the opportunity to update or terminate the provisioned products. I look forward to seeing future updates from AWS on the Service Catalog. Right now, it can only do CloudFormation templates but would love to see if AWS adds Step Functions or Lambda calls as part of this as well. Stay tuned for updates!\n","permalink":"https://theithollow.com/2017/02/27/aws-service-catalog/","summary":"\u003cp\u003eMany cloud initiatives require having a portal for users to choose which workloads can be deployed. Think of this as a supermarket full of servers, networks, databases, or all of the above. There are product offerings from VMware, Cisco, RightScale and Redhat, used for these deployment methodologies. If you\u0026rsquo;re an AWS customer though, you\u0026rsquo;ve got your own catalog available from the native AWS tools called the \u0026ldquo;Service Catalog\u0026rdquo; service. This service enables you to deploy and publish \u003ca href=\"https://aws.amazon.com/cloudformation/\"\u003eCloudFormation templates\u003c/a\u003e for your users so that they don\u0026rsquo;t have to know how RDS, or EC2 instances work. They can select from the catalog and deploy anything you can build in an Amazon CFT. Think of the possibilities.\u003c/p\u003e","title":"AWS Service Catalog"},{"content":"It\u0026rsquo;s the moment you\u0026rsquo;ve all (really a few of you) been waiting for! The long anticipated sequel to the \u0026quot; Getting Started vRealize Automation Course\u0026quot; is now live on the Pluralsight catalog. This new course will join the likes of other sequels that were even better than the originals including:\nStar Trek: The Wrath of Khan Batman: The Dark Knight Star Wars: The Empire Strikes Back Indiana Jones and the Temple of Doom Predator 2 (Lol, Just kidding) The first course covered the basics behind vRealize Automation, but this new course will cover things like:\nIntegrating with VMware NSX Using Event Subscriptions Learning Custom Properties Understanding Placement Decisions Using Custom Actions Check out the trailer below to get more information or hop on over to Pluralsight to watch the new course on \u0026ldquo;Extending Capabilities of vRealize Automation 7\u0026rdquo;.\n\u0026amp;feature=youtu.be\n","permalink":"https://theithollow.com/2017/02/20/intermediate-vra-course-pluralsight/","summary":"\u003cp\u003eIt\u0026rsquo;s the moment you\u0026rsquo;ve all (really a few of you) been waiting for! The long anticipated sequel to the \u0026quot; \u003ca href=\"/2016/11/28/getting-started-vrealize-automation-course/\"\u003eGetting Started vRealize Automation Course\u003c/a\u003e\u0026quot; is now live on the \u003ca href=\"http://pluralsight.com\"\u003ePluralsight\u003c/a\u003e catalog. 
This new course will join the likes of other sequels that were even better than the originals including:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eStar Trek: The Wrath of Khan\u003c/li\u003e\n\u003cli\u003eBatman: The Dark Knight\u003c/li\u003e\n\u003cli\u003eStar Wars: The Empire Strikes Back\u003c/li\u003e\n\u003cli\u003eIndiana Jones and the Temple of Doom\u003c/li\u003e\n\u003cli\u003ePredator 2 (Lol, Just kidding)\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eThe first course covered the basics behind vRealize Automation, but this new course will cover things like:\u003c/p\u003e","title":"Intermediate vRA Course on Pluralsight"},{"content":"Not everyone who encrypts data uses a key management solution. Since the days we started worrying about storage of personally identifiable information (PII) we\u0026rsquo;ve had different methods of protecting it. In a small environment, simple PGP (Pretty Good Privacy) keys were used to manually encrypt data and decrypt it. Storing keys for a few different partners that you routinely exchange data with was simple enough to do. But what about today when we\u0026rsquo;re storing sensitive data in databases, on storage volumes and in other people\u0026rsquo;s data centers, like Amazon Web Services. How do we manage numerous keys and make sure that those keys are properly maintained?\nAmazon has a couple of ways to centrally manage encryption keys including the AWS Key Management Service or KMS. The KMS option from AWS gives some pretty simple mechanisms to centrally manage keys, but they also have a few drawbacks. If you\u0026rsquo;re trying to figure out what key management solution you should be using, consider these pros and cons.\nKMS Usage The first topic to discuss is how the keys are used. The KMS solution makes it incredibly easy to do things like encrypt EC2 volumes, Lambda variable or S3 buckets. In fact you may be using KMS without even knowing it. If you\u0026rsquo;ve used Lambda or S3 encryption, the keys come from KMS.\nEnvironment Variable in Lambda Encrypted by KMS in the Advanced Settings\nSSE of AWS S3 Folders\nSo an obvious advantage to using KMS is that you might even being using it without knowing it. This qualifies as easy to use if you ask me. Chalk that up as a positive for KMS. Now, if you plan any client side encryption, you can still use KMS but you\u0026rsquo;ll need to use the SDK to have KMS encrypt the data before storing it. Client side encryption might include encrypting files before storing them on an EBS volume, or encrypting field level data into a database. While this would take more work than simply checking a box, it is still able to do it so no real downfalls here.\nKey Rotation KMS allows you to use customer master keys (CMKs) created from withink KMS or you can import your own. If you choose to have KMS create the CMKs the keys can be rotated automatically by AWS on a yearly basis. This key rotation does not require you to decrypt and then re-encrypt the data that was encrypted by the key. Old keys are still available to decrypt the data and newly encrypted data will use new keys. If you are doing some client side encryption with KMS though, you may need to decrypt and re-encrypt during the key rotation process.\nThere are a couple of negatives about this key rotation, depending on your own perspective. KMS makes the rotation very easy but you don\u0026rsquo;t have much control over that rotation. AWS will rotate it for you on their own schedule (annually) but you can\u0026rsquo;t force that rotation. 
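You can at least confirm whether rotation is turned on for a given CMK from the CLI; the key ID below is just a placeholder:
aws kms get-key-rotation-status --key-id 1234abcd-12ab-34cd-56ef-1234567890ab
aws kms enable-key-rotation --key-id 1234abcd-12ab-34cd-56ef-1234567890ab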
Also, if you import your own keys, you must manage this key rotation by yourself which negates the nice rotation service provided by AWS and even worse, you must decrypt and re-encrypt the data ciphered by those keys. So keep this in mind if you\u0026rsquo;re thinking about using your own keys.\nKey Storage Types The KMS solution available for use in AWS manages symmetric keys only. To review symmetric key specifics, it refers to using the same key to both encrypt and decrypt data. Cryptography techniques these days also use asymmetric keys such as X.509 certificates and PGP keys. These types of keys can\u0026rsquo;t be stored in the KMS managed service from AWS.\nAWS Services If you\u0026rsquo;re planning on using AWS for a bulk of your workloads you should also consider what services won\u0026rsquo;t work with any other encryption key management solutions. These are the AWS services that must use AWS KMS as the encryption key store:\nAWS Lightsail AWS EC2 SSM AWS CodeCommit I would expect that as time moves forward the above services will allow for other key management solutions to be used but new services all seem to start with KMS first.\nAvailability KMS, for the most part, should be a considered a highly available solution. When you setup your encryption keys, they are stored by region. So if you encrypt EC2 volumes in the US-East region, there will be a CMK stored in KMS for that region. If you switch to another region to encrypt EC2 volumes, a different key will be used. While this segmentation is great, it means that if there was a disaster that caused an outage to a large section of the East coast of the US, then these keys may be affected. This is a pretty extreme case and you might not care anyway, cause the workloads you have in that region were probably affected as well.\nKMS keys are managed by AWS and as such are protected from an availability zone failure. This makes them pretty highly available since the loss of an entire data center won\u0026rsquo;t cause an outage for the service. Let\u0026rsquo;s face it, if you can\u0026rsquo;t access your encryption keys, the availability of your data may be irrelevant.\nSummary AWS KMS is a really easy way to centrally manage your keys. It comes with the audibility (through CloudTrail) that you\u0026rsquo;d need for an encryption key management solution. It has automatic key rotation which is a must for compliance purposes. It requires no setup to get started. If you\u0026rsquo;re looking for a more robust solution that can store asymmetric keys, or requires a Hardware Security Module, then KMS might not be for you. You\u0026rsquo;ll have to make your own decisions on whether or not KMS makes sense for your organization.\n","permalink":"https://theithollow.com/2017/02/13/pros-cons-amazons-key-management-service/","summary":"\u003cp\u003eNot everyone who encrypts data uses a key management solution. Since the days we started worrying about storage of personally identifiable information (PII) we\u0026rsquo;ve had different methods of protecting it. In a small environment, simple PGP (Pretty Good Privacy) keys were used to manually encrypt data and decrypt it. Storing keys for a few different partners that you routinely exchange data with was simple enough to do. But what about today when we\u0026rsquo;re storing sensitive data in databases, on storage volumes and in other people\u0026rsquo;s data centers, like Amazon Web Services. 
How do we manage numerous keys and make sure that those keys are properly maintained?\u003c/p\u003e","title":"Pros and Cons of Amazon's Key Management Service"},{"content":"It\u0026rsquo;s one of those \u0026ldquo;first world problems\u0026rdquo; where you have either not enough wireless coverage at home, or you\u0026rsquo;re getting too much interference from the neighbors to have satisfactory wireless coverage.\nI had an Linksys AC3200 providing all of my house\u0026rsquo;s wireless connectivity and for the most part, it did a good job. I have about twenty-five devices connected to it through wireless and all four of the 1Gbps network jacks filled up as well. Occasionally I found that I needed to restart the router but it was pretty good, no real complaints. However I did have it located in my office which is at the opposite side of my house from my bedroom, which meant some sketchy wireless over the 5Ghz band if working from bed which I did often. I\u0026rsquo;d have to switch over to the 2.4GHz band and then I was getting interference from neighbors. It was time to try something else. OK sure, I could\u0026rsquo;ve moved the router closer to the middle of the house, but let\u0026rsquo;s over engineer the solution instead right?\nI bought a pair of the Ubiquiti UAP AC Pro access points and ran some cabling. I wanted to make sure that the entire house is covered so I placed one access point in the basement and another in the upstairs bedroom, on opposite sides of the house. Then I ran cat 6 cabling to the access points and connected them to a new Ubiquiti UniFi 8 POE-150W switch so I could power the APs over ethernet instead of having to have a power cable for each access point. I mounted my basement access point, patch panel and Ubiquiti UniFi switch to a piece of plywood and screwed it to the wall.\nThe Ubiquti Switch connects upstream through a new Ubiquiti UniFi Security Gateway (USG) so that I can firewall my wireless access off from my home lab. Yep, this is completely overkill, but it turns out that without this device you\u0026rsquo;ll see some missing widgets in the UniFi dashboard. Ubiquiti, you have a pretty annoying way of making me want more. Kind of like my cable provider showing me whats on the channels that I don\u0026rsquo;t have access too. Well played! My compulsive need to have everything perfect wouldn\u0026rsquo;t allow this, so I purchased the security gateway as well. The USG does deep packet inspection and having a firewall on both sides of my lab isn\u0026rsquo;t a horrible idea, but is for sure unnecessary. If you don\u0026rsquo;t have a firewall in your lab though it\u0026rsquo;s worth looking at and is very affordable.\nI\u0026rsquo;m very happy with the APs thus far and really impressed with user interface from Ubiquiti. Using a controller based solution for wireless comes with a few challenges but it does make it nice to manage all of the devices from a single interface. Deep packet inspection at a low price for the USG is a nice feature to have though.\nThe home lab feels a lot more complete now that I\u0026rsquo;ve got some robust access points in the house and cleaned up some other cabling that was bothering my perfectionist tendencies. 
Looks pretty good to me.\n","permalink":"https://theithollow.com/2017/02/06/ubiquiti-in-the-lab/","summary":"\u003cp\u003eIt\u0026rsquo;s one of those \u0026ldquo;first world problems\u0026rdquo; where you have either not enough wireless coverage at home, or you\u0026rsquo;re getting too much interference from the neighbors to have satisfactory wireless coverage.\u003c/p\u003e\n\u003cp\u003eI had an \u003ca href=\"http://amzn.to/2gWQ36Y\"\u003eLinksys AC3200\u003c/a\u003e providing all of my house\u0026rsquo;s wireless connectivity and for the most part, it did a good job. I have about twenty-five devices connected to it through wireless and all four of the 1Gbps network jacks filled up as well. Occasionally I found that I needed to restart the router but it was pretty good, no real complaints. However I did have it located in my office which is at the opposite side of my house from my bedroom, which meant some sketchy wireless over the 5Ghz band if working from bed which I did often. I\u0026rsquo;d have to switch over to the 2.4GHz band and then I was getting interference from neighbors. It was time to try something else. OK sure, I could\u0026rsquo;ve moved the router closer to the middle of the house, but let\u0026rsquo;s over engineer the solution instead right?\u003c/p\u003e","title":"Ubiquiti in the Lab"},{"content":"My father was an electrician for over thirty years and has worked on houses, power plants, and manufacturing facilities for most of his life. When travelling around the region near the small town where I grew up, you can see physical structures that my Dad has helped to construct. There must be a certain sense of pride to see something that you built thirty years ago still standing and still being used today.\nIf you\u0026rsquo;re in the information technology field, can you still have this sort of sense of accomplishment? Let\u0026rsquo;s face it, things in this industry move far too fast for anything to last for a long time. Yes, there are some fundamental concepts still in use that were developed decades ago, and there are IT corporations that have been around a long time, but this is a pretty small percentage of the industry as a whole.\nMy current focus area revolves around cloud and automation so I talk to customers and colleagues about being able to provision and then destroy services quickly and easily, in a repeatable fashion.\nLiterally, all of the work that I do could be deleted in minutes in order to save on operational costs. It would be like building a house out of Lincoln logs and then putting them back in the box at the end of the day. So, if you\u0026rsquo;re in IT, especially cloud, would you have a thing that you\u0026rsquo;ve built that you\u0026rsquo;re especially proud of, or would it be destroyed just as fast as it was created?\nIf your family is anything like mine, they\u0026rsquo;re very supportive of your work and career goals, but let\u0026rsquo;s face it, they don\u0026rsquo;t really understand what you do for a living. Think about how hard it is to explain what virtualization, cloud, and DevOps are to people that are not in the industry and these are just the terms of today. Tomorrow there will be a whole new set of terms that will be equally as difficult to explain. With these types of language barriers, are you even able to really share in the accomplishments with them? 
Sure, they know that you obtained that shiny new certification or promotion at work, but they probably don\u0026rsquo;t really know the importance (or lack of) this accomplishment really is. Since you can\u0026rsquo;t share in the moment with them, you end up celebrating on your own.\nOK, enough whining from theITHollow guy. I\u0026rsquo;m not sure if anyone feels this way or not, but I can tell you this much for sure. The people that come and say hello at conferences, touch base over Linkedin or Twitter, or read my blog, are my houses, power plants and manufacturing facilities. I may not have a tangible thing to look back on and show my son when he grows up, but in some way I hope to have helped someone else, who does understand the industry, to move their career forward.\nIf we touched base recently at a conference, shared a hello over social media, or you completed a project due to one of my blog posts, thank YOU for being my legacy.\n","permalink":"https://theithollow.com/2017/01/30/proud-youve-done/","summary":"\u003cp\u003eMy father was an electrician for over thirty years and has worked on houses, power plants, and manufacturing facilities for most of his life. When travelling around the region near the small town where I grew up, you can see physical structures that my Dad has helped to construct. There must be a certain sense of pride to see something that you built thirty years ago still standing and still being used today.\u003c/p\u003e","title":"Are You Proud of What You've Done?"},{"content":"Cisco UCS Director Catalog Requests are the entire reason for having a cloud management platform in the first place. It\u0026rsquo;s the end user\u0026rsquo;s store for where they can request machines and services. To request a service, login to the UCS Director Portal with an account that has the \u0026ldquo;Service End-User\u0026rdquo; role. This role provides a different portal when logging in that only shows the user\u0026rsquo;s orders and catalogs and removes all of the administration options.\nOnce logged into the the user portal, look for a folder with your catalog requests in it. In the examples we\u0026rsquo;ve used in this series, I\u0026rsquo;ve got a folder called \u0026ldquo;Standard\u0026rdquo;. Click your folder.\nOnce you open the folder, you should see the catalog items that have been assigned to this user account. Select the catalog item from the menu.\nAs part of the standard request forms, select who will own the virtual machine. This could be the specific user, or a group of users who may roll back the requests and manage the VM later. Make your decision and click \u0026ldquo;Next\u0026rdquo;.\nOn the next screen, select the Virtual Data Center that his catalog should be deployed in. You may also add comments to the request and select a time to provision the virtual machine. If your VDC has a lease policy, this will be set for you, and depending on how you setup your polices, you may be able to change this lease. Click \u0026ldquo;Next\u0026rdquo;.\nSelect the size of the virtual machine. This will be dependent upon how you setup your computing policy in the VDC. Make your selections and click \u0026ldquo;Next\u0026rdquo;.\nAgain, depending upon how you setup the policies in the VDC, you may be able to select a post-deployment workflow to be executed after the VM is provisioned. 
Click \u0026ldquo;Next\u0026rdquo;.\nReview your summary page for any issues and click \u0026ldquo;Submit\u0026rdquo;.\nAfter the request has been submitted, go back to the user dashboard and look for your requests. You\u0026rsquo;ll see the status of any \u0026ldquo;In Progress\u0026rdquo; requests so that you can keep an eye on it. The dashboard will then show you any machines that you\u0026rsquo;ve provisioned, etc and be your default landing page for all things related to your cloud. The administrator will be able to see these requests as well so that he/she may help you administer these objects. Congratulations, you\u0026rsquo;ve just deployed your first virtual machine through the UCS Director request portal!\n","permalink":"https://theithollow.com/2017/01/23/cisco-ucs-director-catalog-request/","summary":"\u003cp\u003eCisco UCS Director Catalog Requests are the entire reason for having a cloud management platform in the first place. It\u0026rsquo;s the end user\u0026rsquo;s store for where they can request machines and services. To request a service, login to the UCS Director Portal with an account that has the \u0026ldquo;Service End-User\u0026rdquo; role. This role provides a different portal when logging in that only shows the user\u0026rsquo;s orders and catalogs and removes all of the administration options.\u003c/p\u003e","title":"Cisco UCS Director Catalog Request"},{"content":"This year at AWS re:Invent Amazon announced a new service called Step Functions. According to AWS, Step Functions is an easy way to coordinate the components of distributed applications and microservices using visual workflows. That pretty much sums it up! When you\u0026rsquo;ve got a series of small microservices that need to be coordinated, it can be tricky to write this code into each lambda function to call the next function. Step Functions gives you a visual editor to manage the calls to multiple Lambda functions to make your life easier. I\u0026rsquo;ve written about this before on the AHEAD blog.\nThis post will cover how to create a very basic AWS Step Function to get you started and on your way with your server-less application.\nPrerequisites Lambda Functions To get started, you\u0026rsquo;ll want at least one AWS Lambda function. I have two main Lambda functions that I\u0026rsquo;ll be using as examples. I have \u0026ldquo;AutoOn-Tagged\u0026rdquo; and \u0026ldquo;AutoOff-Tagged\u0026rdquo; Lamda functions that are written in Python. These functions look for any EC2 instances in my VPC that are tagged with a key name of \u0026ldquo;AutoOff\u0026rdquo; and a value of \u0026ldquo;True.\u0026rdquo; The idea behind this is that you can tag all your instances that can be shut down without affecting services, like a dev environment at the end of the workday, and power them all off to save money. If you want to see more specifics about these Lambda functions go see this Equinix link where you too can borrow some code.\nLambda Functions to be used in Step Function\nIAM Policy Before you create your Step Functions you will want to create an IAM role that has permissions to run your code. Open your AWS Console and go to the IAM section. Click Policies and then click \u0026ldquo;Create Policy.\u0026rdquo; Click \u0026ldquo;Select\u0026rdquo; next to \u0026ldquo;Create Your Own Policy. Give the policy a name and a description and then paste in the policy document. I\u0026rsquo;ve saved you some time and put the IAM Policy in my github repo so you can copy it. 
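If you just want to see the shape of it, a minimal policy that lets the state machine invoke your Lambda functions looks roughly like the sketch below. The version in the repo may include more than this, and you can tighten the Resource down to specific function ARNs:
{
  \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;,
  \u0026#34;Statement\u0026#34;: [
    {
      \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;,
      \u0026#34;Action\u0026#34;: \u0026#34;lambda:InvokeFunction\u0026#34;,
      \u0026#34;Resource\u0026#34;: \u0026#34;*\u0026#34;
    }
  ]
}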
Validate and then create the policy.\nIAM Role Once you have your IAM Policy created, click on the Roles link to attach the policy you just created and add it to a role. Click \u0026ldquo;Create New Role\u0026rdquo; to get started. Give the role a descriptive name and then click \u0026ldquo;Next Step.\u0026rdquo;\nOn the Role type screen select \u0026ldquo;AWS Lambda.\u0026rdquo;\nOn the Attach Policy screen find and select the policy you created earlier. and then click \u0026ldquo;Next Step.\u0026rdquo;\nReview the role setup and click the \u0026ldquo;Create Role\u0026rdquo; button.\nCreate Step Function Once you\u0026rsquo;ve got your Lambda functions tested and working, we can place them behind our Step Functions. Open up the Step Functions menu in the AWS Console and it will take you to the State Machine dashboard. Click \u0026ldquo;Create a State Machine\u0026rdquo; button. You\u0026rsquo;ll need to name the state machine something descriptive and then we can use some of the blueprints below to get started. The square boxes below the name field will show you different methods on using Step Functions. Feel free to click any of those blueprints to see different ways of arranging your Step Functions. The \u0026ldquo;Preview\u0026rdquo; pane will take the blueprint you\u0026rsquo;ve selected and visually draw the logic gates that would be used in that blueprint. If you look below this, you\u0026rsquo;ll see the \u0026ldquo;Code\u0026rdquo; pane and the JSON that would be used to create those logic gates.\nPlease note though, that you don\u0026rsquo;t need to use these blueprints at all, they\u0026rsquo;re just useful if you\u0026rsquo;re getting started. You can paste your JSON code in the code pane and click the refresh button in the preview pane at any time to see your code visually. I\u0026rsquo;ve decided that for this example I would use the \u0026ldquo;Choice State\u0026rdquo; blueprint as my getting started template and modify it to meet my needs. If you would like to download this code, check out my github repo here. I\u0026rsquo;ve made some notes in the screenshot to get you started on learning how the code is used.\nI\u0026rsquo;ve modified the code pane for my own purposes and then clicked the refresh button next to the preview pane to show what my State Machine will do. It begins with a choice where we\u0026rsquo;ll take some input and make a decision based on whether I entered On or Off. If the input is on, we\u0026rsquo;ll go to the OnMatchState, if Off we\u0026rsquo;ll go to the OffMatchState and if i were to enter anything else then we go to the Default State.\nPreview of Step Functions\nOnce you\u0026rsquo;ve uploaded your code and are happy with it, click the \u0026ldquo;Create State Machine\u0026rdquo; button at the bottom of the screen. The next window that pops up will be to select an IAM role with permissions to run code. Select the IAM role that we created earlier in this post.\nSelect the IAM Role with Lambda Execute Permissions\nExecute Step Functions Now that you\u0026rsquo;ve created the \u0026ldquo;State Machine\u0026rdquo; you can execute the Step Functions. Click on the \u0026ldquo;New Execution\u0026rdquo; button if you just created your Step Function. Otherwise, you can go to the Step Functions dashboard and find the one you want to run. On the first screen, you\u0026rsquo;ll be asked to enter some inputs. Depending on your Step Functions, you may or may not need any inputs. 
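The input is just a small JSON document that gets handed to the first state. For my example state machine it looks like this:
{ \u0026#34;onoff\u0026#34;: \u0026#34;on\u0026#34; }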
For the example code that I used, the variable needs to be named \u0026ldquo;onoff\u0026rdquo; and the value should be on or off. If you enter any other value, like \u0026ldquo;banana\u0026rdquo;, the Step Function will take the default path and end.\nAs the Step Function runs, you\u0026rsquo;ll be able to watch the state in the console, as well as the detailed audit log and inputs/outputs.\nIf you look at the State Machine dashboard again, you\u0026rsquo;ll see the execution runs so you can keep an eye on the successes and failures.\nSummary You can use Step Functions for a number of reasons but they are a great way to manage Lambda functions. The pricing of Step Functions breaks down to $0.025 per 1,000 state transitions and you get your first 4,000 for free in the free tier. Your Lambda functions are also not free, so remember you\u0026rsquo;re paying for those still. For more information check out the AWS documentation.\n","permalink":"https://theithollow.com/2017/01/17/aws-step-functions/","summary":"\u003cp\u003eThis year at AWS re:Invent Amazon announced a new service called \u003ca href=\"https://aws.amazon.com/step-functions/\"\u003eStep Functions\u003c/a\u003e. According to AWS, Step Functions is an easy way to coordinate the components of distributed applications and microservices using visual workflows. That pretty much sums it up! When you\u0026rsquo;ve got a series of small microservices that need to be coordinated, it can be tricky to write this code into each lambda function to call the next function. Step Functions gives you a visual editor to manage the calls to multiple Lambda functions to make your life easier. I\u0026rsquo;ve written about this before on the \u003ca href=\"https://www.thinkahead.com/blog/visual-orchestration-aws/\"\u003eAHEAD blog\u003c/a\u003e.\u003c/p\u003e","title":"AWS Step Functions"},{"content":"Many of you read my previous post about leaders being removed from VMUG for working for vendors that compete with VMware. My call to action was to get a response from VMUG about what was actually happening. I recently received a phone call from VMUG CEO Brad Tompkins to discuss what was actually happening and I\u0026rsquo;d like to pass on some information to clear the air.\nVMUG Leader Status To get started, yes some leaders were removed from leadership roles in their respective VMUG. And yes, some people will not be allowed to become a VMUG leader based on which company is their employer. What I would like to make clear is that this decision was not made to single out Nutanix. Most of the comments that I saw on twitter were focused on Nutanix employees who had been removed from their local VMUGs. While it\u0026rsquo;s true that Nutanix is one of these companies, they are not the only one so I want to make it clear that this was not directed solely at Nutanix. This was a decision focused on companies that compete directly with VMware\u0026rsquo;s products and comes down to a decision about business and competition.\nThe next logical question that comes to mind is, \u0026ldquo;Which companies are those that aren\u0026rsquo;t allowed to have employees as leaders?\u0026rdquo; Well, that is a good question but still not public knowledge. There is no such public list, but if you put your thinking cap on, you\u0026rsquo;ll probably come up with a few that might be on a list of competitors. Microsoft, Citrix, Turbonomics all sort of jump out in front in my mind but were not confirmed by Mr. 
Tompkins.\nAs the 2017 VMUG Leader Guidelines document states:\nThis leaves all companies subject to a review before any new leaders are added.\nVMUG Independence Mr. Tompkins made it very clear to me on the phone that the decisions that were made were NOT those of VMware but rather VMUG itself which is an independently run organization. This gave me some pause for a second because if it were really an independent organization, why would VMUG care at all about the competitors of VMware? Obviously, my questioning is very black and white here and I\u0026rsquo;m not naive enough to think that VMUG and VMware aren\u0026rsquo;t closely tied together. After another look at the VMUG description, I\u0026rsquo;ve come to grips with the reality that the group is about maximizing members use of VMware and partner\u0026rsquo;s solutions. This is difficult to do when you\u0026rsquo;re inadvertently furthering competitor\u0026rsquo;s solutions, so I can reconcile how the decision comes from an independent organization.\nSo where are we now? I don\u0026rsquo;t feel much better about the situation now than I did before I spoke with Brad. I still feel sorry for the leaders who have dedicated time and effort into their local chapters and are no longer able to keep doing this. It\u0026rsquo;s not fair to them, or their members, but this isn\u0026rsquo;t really about \u0026ldquo;fair\u0026rdquo; is it? Business is business and thats the way it goes. I\u0026rsquo;m sure the leaders will land on their feet and maybe start their own independent group that could be about any technology. I\u0026rsquo;m also positive that Brad Tompkins didn\u0026rsquo;t like to have to explain to these leaders about the decision either. Brad knows these guys/girls and what commitments they\u0026rsquo;ve made to the VMUG organization. This was probably a difficult thing to do.\nDo I fault VMUG? No. Am I sad that any of this has to go on at all? Yes I am.\nI want to also make sure that I show some appreciation to Mr Tompkins for taking time out of his schedule to call a former leader and blogger to explain the situation. This was certainly not required on his part but I appreciate that the information was passed along for more transparency. I hope that you will also respect the fact that he did this whether you agree with the decisions that were made or not. At least the decision was owned, right or wrong and we can all move on with more pressing issues.\nMy site tends to focus on how to help users learn about technology or provide ideas about ways to build solutions. My goal is not to provide a fake news service for people. I\u0026rsquo;m hoping the pair of posts on my site have helped to clear up any misconceptions that might have been overheard over social media and I can get back to blogging about nerdy stuff.\n","permalink":"https://theithollow.com/2017/01/12/vmug-response-clearing-air/","summary":"\u003cp\u003eMany of you read my \u003ca href=\"/2017/01/07/dont-like-mommy-daddy-fight-vmug-edition/\"\u003eprevious post about leaders being removed from VMUG\u003c/a\u003e for working for vendors that compete with VMware. My call to action was to get a response from \u003ca href=\"http://vmug.com\"\u003eVMUG\u003c/a\u003e about what was actually happening. 
I recently received a phone call from \u003ca href=\"https://twitter.com/VMUG_CEO\"\u003eVMUG CEO Brad Tompkins\u003c/a\u003e to discuss what was actually happening and I\u0026rsquo;d like to pass on some information to clear the air.\u003c/p\u003e\n\u003ch1 id=\"vmug-leader-status\"\u003eVMUG Leader Status\u003c/h1\u003e\n\u003cp\u003eTo get started, yes some leaders were removed from leadership roles in their respective VMUG. And yes, some people will not be allowed to become a VMUG leader based on which company is their employer. What I would like to make clear is that this decision was not made to single out \u003ca href=\"http://nutanix.com\"\u003eNutanix\u003c/a\u003e. Most of the comments that I saw on twitter were focused on Nutanix employees who had been removed from their local VMUGs. While it\u0026rsquo;s true that Nutanix is one of these companies, they are not the only one so I want to make it clear that this was not directed solely at Nutanix. This was a decision focused on companies that compete directly with VMware\u0026rsquo;s products and comes down to a decision about business and competition.\u003c/p\u003e","title":"A VMUG Response - Clearing the Air"},{"content":"At this point I assume everyone is tired of hearing about storage arrays. They seem to have saturated the market to the point where the new storage companies have all but evaporated, or got bought by a larger company. Couple that with a focus on moving to public clouds and the storage array seems to have been beaten to death.\nWhile I was at Tech Field Day 12 I had the opportunity to see the folks over at StorageOS present on their fancy new storage solution. I was fully prepared to be lulled to sleep with another storage device but StorageOS had an interesting new take on the storage array. Their solution is to use containers to provide a global namespace to a clustered file system. Having a lightweight 40MB container acting as a controller for your virtual storage array could be an interesting topic all by itself. Off of the top of my head the use cases would include:\nUsing containers in public cloud environments to provide enterprise features blue/green storage array upgrades to ensure compatibility spinning up a storage array along with your application even. add a storage array to a local desktop for testing a solution before moving to prod. (Think of that build, ship, run strategy) StorageOS was tackling the idea of portability of the underlying storage systems and how flexible you need storage to be with a container application farm.\nWhat struck me the most about their TFD12 presentation was something completely glossed over. Clearly, there are technical challenges with creating a distributed storage array leveraging a container operating system, but don\u0026rsquo;t forget that we need to have all of those enterprise capabilities to make it usable.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Tech Field Day 12. In addition, Igneous Systems provided a gift to all delegates but with no expectations about the coverage through this blog or social media.\nThe list of capabilities that the solution already has, blew me away. The company started in 2013 and has only recently started selling a product. 
Despite this fact, the following is a non-exhaustive list of capabilities that the storage solution has already.\nBlock Level De-Duplication Compression Volume Resizing and Thin Provisioning Replication Snapshots Cloning Volume High Availability Volume Encryption Advanced Erasure Coding Caching SSD Optimization Cluster Scale Out Restful API (There better be for a container OS) Role Based Access Control Reporting Tag Based rules and placements These are capabilities that the enterprise demands in today\u0026rsquo;s era, but most storage companies I\u0026rsquo;ve seen go to market with a subset of the capabilities listed above. If you\u0026rsquo;ve got a storage array in your environment, you\u0026rsquo;ve probably seen some of these features added through upgrades but not as part of the initial release; I know that I have.\nSo it begs the question from the title of this post: \u0026ldquo;What Capabilities are Needed for a Startup Storage Company?\u0026rdquo; Is this the new standard? If you take a storage solution to market in 2017, can you even survive without these capabilities? I\u0026rsquo;m interested in hearing what you think, so post your feedback in the comments section for this post.\nFind Out More about StorageOS If you want to find out more about StorageOS and their solution, go sign up for a free version of their product. You can use their AWS AMI, Azure Marketplace VM, Downloadable OVA or Native Container. If you\u0026rsquo;re looking for some of their advanced features, try the professional version, which gives you up to 1TB of storage at $29.95 per month. You can also check out what the other TFD12 delegates are thinking by visiting their blogs:\nPacket Pushers ActualTech.IO\n","permalink":"https://theithollow.com/2017/01/10/capabilities-needed-startup-storage-company/","summary":"\u003cp\u003eAt this point I assume everyone is tired of hearing about storage arrays. They seem to have saturated the market to the point where the new storage companies have all but evaporated, or got bought by a larger company. Couple that with a focus on moving to public clouds and the storage array seems to have been beaten to death.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2017/01/storage-os-logo.png\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2017/01/storage-os-logo-300x64.png\"\u003e\u003c/a\u003e While I was at \u003ca href=\"http://techfieldday.com/event/tfd12/\"\u003eTech Field Day 12\u003c/a\u003e I had the opportunity to see the folks over at StorageOS present on their fancy new storage solution. I was fully prepared to be lulled to sleep with another storage device but StorageOS had an interesting new take on the storage array. Their solution is to use containers to provide a global namespace to a clustered file system. Having a lightweight 40MB container acting as a controller for your virtual storage array could be an interesting topic all by itself. Off of the top of my head the use cases would include:\u003c/p\u003e","title":"What Capabilities are Needed for a Startup Storage Company?"},{"content":"VMware Users Group (VMUG) has been an important part of my career and an institution that has been close to my heart for many years. I\u0026rsquo;ve written about my experiences before and served as a leader for several years here in Chicago.
That\u0026rsquo;s why there was some concern when I saw this tweet from Anton Zhbankov last week.\nNow, at first this didn\u0026rsquo;t surprise me too much because there has been a rule that each VMUG chapter is supposed to comprise more customers than partners as leaders. So, naturally I assumed that this was just fixing an imbalanced VMUG chapter. But as I asked more questions, I found out that it really was because Anton worked for Nutanix and working at this specific company excludes you from being a VMUG leader. Fellow blogger Matt Crape also wrote a post about this on his site. So now this tweet that had me concerned has simply made me sad.\nLet me be up front with you. I\u0026rsquo;m a VMware vExpert, a Nutanix Technology Champion, and a former VMUG leader. I have ties to each of these organizations and feel about as impartial as a person could be about these companies. I do, however, want to see the best thing for the virtualization community, no matter what that means for these two companies.\nWhy is this a sad thing? VMware and Nutanix have often been at odds, and this is no big secret. Nutanix built their own hypervisor (Acropolis) and VMware built their own Hyper-Converged storage platform (VSAN), so the two companies have overlapping customer bases. You can certainly see why they would consider each other competitors. Competition can be healthy, but in this case, the community members are the ones who will suffer.\nI can tell you that it is difficult to find passionate people willing to sacrifice their time to put together meetings and organize a group like this. It\u0026rsquo;s even more difficult to find customers willing to lead. In my experience, it\u0026rsquo;s tough to get customers willing to stand up in front of people to talk, let alone organize meetings and conferences. So partners help to fill that void. Unfortunately, in some cases the partners have used the groups as a platform to sell their own products. Maybe Nutanix was guilty of this and VMware had to put a stop to it, but my guess is that this isn\u0026rsquo;t the reason. If specific leaders were using the platform to sell their own solutions, VMUG should expel those leaders instead of an entire company. Excluding passionate people just because they work for a certain company can only hurt the user group.\nIs VMUG Really Independent? From the VMUG website, here is the description of the organization:\nFrom what I\u0026rsquo;ve read on social media, VMware has made the decision to ban Nutanix employees from being leaders. If this is the case, then is VMUG really an independent organization?\nWhy am I writing this? Generally, I write posts to comment on things or educate, but in this case I really hope to see a response from VMUG. What are the rules for being a leader? Why did you make this decision? Are there other companies with restrictions on VMUGs?\nIf VMware is mandating this change about their competitors\u0026rsquo; involvement with the group, then fine. I disagree with it, but this is VMware\u0026rsquo;s prerogative. If there are companies with restrictions, though, be transparent about it. Own the decision that was made and make it clear to everyone what the rules are. If you are confident in your decisions, there should be no problem making a statement about it. The lack of any public statement about this is making the situation worse and breeds distrust of VMware in my opinion.\nVMUG has a specific Leader Guidelines document that was updated on January 1st of 2017.
There are no mentions of any companies being denied the ability to be leaders unless there are violations which are clearly listed:\nThere are guidelines for partners being leaders which does speak to the customer/partner ratio but there is a disclaimer in it:\nThe disclaimer does not mention any specific companies but does say it could be anyone:\nI\u0026rsquo;m not defending VMware nor Nutanix and clearly don\u0026rsquo;t know the whole story but sunlight is the best disinfectant so I\u0026rsquo;d like to see a formal explanation.\nWhat is the future of VMUG now? Like I said, VMUG is near and dear to my heart and I don\u0026rsquo;t want to see something bad happen to it. I also don\u0026rsquo;t want to see members leaving because of a loss of good leaders or bickering. It\u0026rsquo;s uncomfortable to see people fighting and people just don\u0026rsquo;t want to be around that kind of atmosphere.\nLast year I saw a significant drop in attendance for many VMUGs across the US and we certainly had more difficulty securing sponsors for our meetings. I worry that excluding other companies will make the group meeting even more difficult to organize.\nPlease work this out so nothing happens to this group that has taught me how to speak in public, share ideas, and learn from complete strangers who work in my profession. #SaveOurVMUG\n","permalink":"https://theithollow.com/2017/01/07/dont-like-mommy-daddy-fight-vmug-edition/","summary":"\u003cp\u003eVMware Users Group (VMUG) has been an important part of my career and an institution that has been close to my heart for many years. I\u0026rsquo;ve \u003ca href=\"/2016/09/26/a-farewell-to-vmug/\"\u003ewritten about my experiences\u003c/a\u003e before and served as a leader for several years here in \u003ca href=\"http://chicagovmug.com\"\u003eChicago\u003c/a\u003e. Thats why there was some concern when I saw this tweet from \u003ca href=\"https://twitter.com/antonvirtual\"\u003eAnton Zhbankov\u003c/a\u003e last week.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2017/01/VMUGTweet1.png\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2017/01/VMUGTweet1-300x110.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eNow, at first this didn\u0026rsquo;t surprise me too much because there has been a rule that each VMUG chapter is supposed to comprise of more customers than partners as leaders. So, naturally I assumed that this was just fixing an imbalanced VMUG chapter. But as I asked more questions found out that it really was because Anton worked for Nutanix and working at this specific company excludes you from being a VMUG leader. Fellow blogger \u003ca href=\"https://twitter.com/MattThatITGuy\"\u003eMatt Crape\u003c/a\u003e also wrote a post about this on \u003ca href=\"https://t.co/DhzO4jHRS2\"\u003ehis site\u003c/a\u003e. So now this tweet that had me concerned, has just simply made me sad.\u003c/p\u003e","title":"I Don't Like it When Mommy and Daddy Fight - VMUG Edition"},{"content":"Another year has come and gone. The aspirations that we had for our past trip around the sun have been extinguished and a new set of goals wrapped in confident optimism are on our horizons. For many, the end of the year is used to recharge and take a break from work to celebrate with our families. Now with rejuvenated ambition we can set our backlog of objectives for a new year\u0026rsquo;s worth of challenges. 
This post attempts to relate some agile principles used for work in your everyday struggle to meet the new year\u0026rsquo;s goals.\nFeedback Loop If we\u0026rsquo;ve learned anything from agile methodologies, we know that to be successful we need to have constant feedback. What\u0026rsquo;s working and what isn\u0026rsquo;t. Maybe you set a new year\u0026rsquo;s resolution last year and you can see how you did. Did you accomplish what you set out to do or not? If you didn\u0026rsquo;t, what held you back? For me, I had both personal as well as work goals. I wanted to finish my second VCDX, run my first 5k, complete a Pluralsight course, be home with family more, and lose some weight. I was able to get four of my goals done last year. (If you\u0026rsquo;re wondering, I left the weight loss goal alone so that I could use it as an example of adding it to my backlog. It\u0026rsquo;s my story, just let it go.) I think I was pretty productive last year since I worked on some big projects for my employer, wrote about 100 blog posts and completed a few other certifications.\nNow that we\u0026rsquo;ve identified what last year\u0026rsquo;s objectives were and whether we met the goals or not, take a closer look at those objectives. If you didn\u0026rsquo;t meet your goals, why not? This could be due to any number of reasons, including poor planning, an unforeseen issue like an illness, or just about anything. The important part of this exercise is to try to understand why it didn\u0026rsquo;t happen, and what could be done to fix it, if anything. Please don\u0026rsquo;t think I\u0026rsquo;m lecturing here; I don\u0026rsquo;t have all the answers either, but from my past experiences I know that you can\u0026rsquo;t fix things when you don\u0026rsquo;t know why they\u0026rsquo;re broken in the first place. Well, except when turning it off and on again actually works.\nUpdate the Backlog While you\u0026rsquo;re analyzing last year\u0026rsquo;s goals, try to decide what should be added to the new year\u0026rsquo;s backlog of activities. Obviously, new things that you want to do will be added, but so should the stuff that didn\u0026rsquo;t get accomplished in the previous year. It\u0026rsquo;s worth analyzing last year\u0026rsquo;s goals to determine if the work you did was of value to you. Maybe that shiny certification you were going for was more trouble than it was worth. Maybe you spent too much time trying to hit your goals, which took you away from family or other things that you find more important?\nI like to pick goals that I think are reasonably attainable, but sometimes a really lofty goal is needed to push yourself. If you\u0026rsquo;re going to pick a goal that seems daunting, consider adding milestones for that task. For example, instead of saying \u0026ldquo;VCDX Certification\u0026rdquo; as a goal, set smaller milestones to get you to your goal like \u0026ldquo;Pick a VCDX Design Project by March 1st\u0026rdquo; and \u0026ldquo;Submit VCDX Design by August\u0026rdquo;. This way you can keep your focus on a smaller, achievable task rather than the whole thing, which might feel impossible at first.\nAdd Sprints Whatever your goals may be, set them in your backlog and work on them throughout the year. Consider checking in with them on a periodic basis or even setting smaller goals in sprints. Maybe each week has a goal of losing one pound, or learning one small topic of a bigger subject. Then check back in with yourself and take stock with some honest feedback on how you did.
Remember that you can always add something to the backlog and start it again if you need to and you can change directions if something isn\u0026rsquo;t working.\nWhatever your goals may be in the new year good luck and have an agile new year!\n","permalink":"https://theithollow.com/2017/01/02/agile-new-year/","summary":"\u003cp\u003eAnother year has come and gone. The aspirations that we had for our past trip around the sun have been extinguished and a new set of goals wrapped in confident optimism are on our horizons. For many, the end of the year is used to recharge and take a break from work to celebrate with our families. Now with rejuvenated ambition we can set our backlog of objectives for a new year\u0026rsquo;s worth of challenges. This post attempts to relate some \u003ca href=\"http://www.allaboutagile.com/what-is-agile-10-key-principles/\"\u003eagile\u003c/a\u003e principles used for work in your everyday struggle to meet the new year\u0026rsquo;s goals.\u003c/p\u003e","title":"To an Agile New Year"},{"content":"If you follow me on twitter, you\u0026rsquo;ve probably seen a little bit of back and forth between myself and a Seattle fellow named Jason Langer. Jason and I have known each other for several years now over social media channels due to our similar interests in VMware technologies. I usually run into Jason only once a year at VMworld, but it\u0026rsquo;s one of these situations where I feel like we chat often enough just because of twitter conversations.\nIn any case, Jason and I both have home labs to test out software or strategies that we use for work. Now, a home lab is for sure a luxury because they\u0026rsquo;re not cheap, but in Jason and my case, really useful for testing out new strategies and getting our hands dirty. Home labs in the VMware circles are one of those things where the bigger toys wins. Much like some guys brag about how big the engine of their car is or something. Home labs are the geeks version of, \u0026ldquo;I\u0026rsquo;ve got the cooler toys\u0026rdquo; game.\nThis year at VMworld, Jason had decided to take this game up a notch and was taunting me about not having 10 gigabit networking in my lab. He had been posting things on twitter to needle me about this missing functionality. Enough that even other AHEAD colleagues of mine, and random twitter followers would tease me about only have 1 gigabit in my lab. Jason even posted pictures of my logo, with 10GbE switches during VMworld.\nThis was all good natured fun. I could\u0026rsquo;ve purchased a switch, but didn\u0026rsquo;t have a great reason to add 10 gigabit networking to my lab other than to rub it in Jason\u0026rsquo;s face so I ignored it. Well a few weeks ago, right before the Holiday season, a package arrived on my door step. You guessed it, it was a 10 gigabit switch. It turns out that Jason got my address somehow (creepy) and sent me a switch that perhaps he had no need for anymore.\nWhat an incredibly thoughtful gesture, but I can\u0026rsquo;t say that I\u0026rsquo;m totally surprised. Like Jason, I\u0026rsquo;ve met plenty of great people in the VMware or VMUG community that are really willing to help each other out and teach, as well as sometimes drop a 10 GbE switch on their doorstep.\nI added the switch to my home lab (After getting some 10 GbE Nics and a power cable. You\u0026rsquo;d think he could\u0026rsquo;ve taken care of that wouldn\u0026rsquo;t you?) 
and I\u0026rsquo;ve got some pretty fast vMotion\u0026rsquo;s happening now.\nI replaced some of my cables with a green color scheme to denote my storage and vMotion networks on 10 GbE. The Synology arrays are not 10 GbE but it seemed to make sense to plug them into the 10 GbE switch since I could only afford a few 10 GbE NICs for servers and had empty ports. The Netgear XS708E switch will auto sense so it worked well.\nIn response to Jason\u0026rsquo;s generous gift, I\u0026rsquo;ve decided to donate $700 of theITHollow.com\u0026rsquo;s revenue towards Lake County PADS. PADS is an organization that helps the homeless get on their feet, give them a warm place to stay and help them with job placement and interviews. Thanks Jason, Happy HollowDays to you, your family and theITHollow.com readers.\nJason and I at VMworld 2016\n","permalink":"https://theithollow.com/2016/12/19/unbelievable-gift-home-lab/","summary":"\u003cp\u003eIf you follow me on \u003ca href=\"https://twitter.com/eric_shanks\"\u003etwitter\u003c/a\u003e, you\u0026rsquo;ve probably seen a little bit of back and forth between myself and a Seattle fellow named \u003ca href=\"https://twitter.com/jaslanger\"\u003eJason Langer\u003c/a\u003e. Jason and I have known each other for several years now over social media channels due to our similar interests in VMware technologies. I usually run into Jason only once a year at VMworld, but it\u0026rsquo;s one of these situations where I feel like we chat often enough just because of twitter conversations.\u003c/p\u003e","title":"Unbelievable Gift for the Home Lab"},{"content":"A recent vendor product briefing during Tech Field Day 12 got me thinking about the term \u0026ldquo;pay-as-you-go\u0026rdquo;. In my line of work, I talk about public cloud a decent amount and maybe I take pay-as-you-go for granted. When I think about this term it means that as soon as I\u0026rsquo;m done with a resource, I can destroy it and no longer have to pay for it anymore. It also means that I can scale when I need to and just start paying for the new resources as I start consuming them.\nIgneous Systems presented at TFD12 and provided a briefing of their \u0026ldquo;Igneous Data Service\u0026rdquo; which was a solution to provide you a content store based on S3. This solution was a managed service that provided you an S3 appliance to run in your own private datacenter. The appliance was \u0026ldquo;zero touch\u0026rdquo; meaning that all updates, and management of the appliance was handled by Igneous Systems themselves. The appliance would report health back to Igneous Systems to ensure that everything was running correctly and that no drives needed replaced or anything. It\u0026rsquo;s a pretty interesting solution if you are a service provider or can\u0026rsquo;t put your data in a public cloud solution like AWS S3.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Tech Field Day 12. In addition, Igneous Systems provided a gift to all delegates but with no expectations about the coverage through this blog or social media.\nMy issue was that this solution used a 212 TB appliance and required a one-year commitment to get started. If you ran out of storage, you could add another appliance to your service and would be billed accordingly. Igneous Systems considers this \u0026ldquo;pay-as-you-go\u0026rdquo; because you can always add more appliances and pay the difference. 
This mentality is even demonstrated on the front page of their website.\nI know that everything has to have a base scale unit. Amazon EC2 won\u0026rsquo;t let you add a half of a CPU to your instances for example. In addition, everything has to have a time scale, such as Amazon EC2 started instances are billed for a full hour. So in that sense nothing is really pay as you go, because you have to make jumps between 1 CPU and 2 CPUs and you\u0026rsquo;ll have to pay for a full hour even if you don\u0026rsquo;t use the entire hour. But with Igneous Systems I need to pay for an entire year and use 212TB increments of S3 based storage!? That seems like a stretch of the \u0026ldquo;pay-as-you-go\u0026rdquo; rule to me. This seems more like how traditional storage arrays are purchased.\nI asked another question of the Igneous Systems team which was \u0026ldquo;Do your customers treat this as an operational expense or a capital expense?\u0026rdquo; I was told that their customers see this as an operational expense due to how flexible Igneous Systems works with the customers. See the video here: http://techfieldday.com/video/igneous-systems-hyperscale-management-with-jeff-hughes/\nSo I Ask You Where is the line that gets drawn when you determine something is \u0026ldquo;pay-as-you-go\u0026rdquo; vs buying a solution in increments? In Igneous Systems\u0026rsquo; mindset, their customers have so much data, that a 212 TB increment as the base unit is small enough to meet this requirement. Is that a fair assessment? You decide.\nI\u0026rsquo;d love to hear your feedback in the comments.\n","permalink":"https://theithollow.com/2016/12/12/everything-pay-go/","summary":"\u003cp\u003eA recent vendor product briefing during \u003ca href=\"http://techfieldday.com/event/tfd12/\"\u003eTech Field Day 12\u003c/a\u003e got me thinking about the term \u0026ldquo;pay-as-you-go\u0026rdquo;. In my line of work, I talk about public cloud a decent amount and maybe I take pay-as-you-go for granted. When I think about this term it means that as soon as I\u0026rsquo;m done with a resource, I can destroy it and no longer have to pay for it anymore. It also means that I can scale when I need to and just start paying for the new resources as I start consuming them.\u003c/p\u003e","title":"Is Everything Pay-as-You-Go?"},{"content":"I was pretty unsure of the value proposition from DriveScale in the weeks preceding Tech Field Day 12. Maybe the reason is because I\u0026rsquo;m not a Hadoop expert by any means. They have a pretty interesting idea though, so I wanted to make sure others were clear about what their solution was capable of.\nIn a virtualized world, we\u0026rsquo;re pretty familiar with decoupling disks from our storage. It\u0026rsquo;s done via storage arrays that present iSCSI, Fibre Channel, NFS or whatever. Once we\u0026rsquo;ve presented a pool of disks to our hypervisor, we can carve up small virtual disks to be used with our virtual machines. In a Hadoop world, we want to have direct access to our drives so that HDFS can manage the storage. For this, we usually have rack mounted pizza box type servers with a certain amount of storage in them and then we can add multiples of them to form a cluster. DriveScale wanted to give HDFS some extra flexibility by allowing a pool of disks to be added, or removed to our servers.\nIn the example below, we see a compute layer which is any number of physical servers, probably only providing compute and memory. 
Those servers attach to a DriveScale adapter (which is a one rack unit, four node server) attached via iSCSI. The DriveScale Adapter connects to direct attached JBOD (Just a Bunch of Disks) via a SAS connection.\nSo when you have connected everything, you can assign a group of disks to one of your servers for use with a Hadoop cluster. Drive sizes, speeds, models, etc can be swapped out, for the cluster and moved between servers if needed. The example below shows that a group of eight disks were assigned to a single compute node.\nThe DriveScale Adapter is shown below and contains dual power supplies and four DriveScale ethernet to SAS Adapters in the 1U chassis. DriveScale wants to use unused capacity in the top of rack switches for this iSCSI connection. It does not require any quality of service to the switches but does require the use of jumbo frames for the iSCSI traffic.\nSummary DriveScale seems like an interesting solution for Hadoop workloads. It\u0026rsquo;s going to give a Hadoop shop some much needed flexibility for their hardware but has a pretty specific use case and won\u0026rsquo;t work for everyone. You might not see these guys like a Dell EMC or HP which is in almost every enterprise data center, but they might have a serious play for those shops focused on Hadoop.\n","permalink":"https://theithollow.com/2016/12/05/decouple-disks-compute-drivescale/","summary":"\u003cp\u003eI was pretty unsure of the value proposition from \u003ca href=\"https://www.drivescale.com/\"\u003eDriveScale\u003c/a\u003e in the weeks preceding \u003ca href=\"http://techfieldday.com/events/tfd12\"\u003eTech Field Day 12\u003c/a\u003e. Maybe the reason is because I\u0026rsquo;m not a Hadoop expert by any means. They have a pretty interesting idea though, so I wanted to make sure others were clear about what their solution was capable of.\u003c/p\u003e\n\u003cp\u003eIn a virtualized world, we\u0026rsquo;re pretty familiar with decoupling disks from our storage. It\u0026rsquo;s done via storage arrays that present iSCSI, Fibre Channel, NFS or whatever. Once we\u0026rsquo;ve presented a pool of disks to our hypervisor, we can carve up small virtual disks to be used with our virtual machines. In a Hadoop world, we want to have direct access to our drives so that HDFS can manage the storage. For this, we usually have rack mounted pizza box type servers with a certain amount of storage in them and then we can add multiples of them to form a cluster. \u003ca href=\"https://www.drivescale.com/\"\u003eDriveScale\u003c/a\u003e wanted to give HDFS some extra flexibility by allowing a pool of disks to be added, or removed to our servers.\u003c/p\u003e","title":"Decouple Disks and Compute with DriveScale"},{"content":"I\u0026rsquo;ve always liked the idea of taking a series of Microsoft PowerShell scripts and putting them behind a user interface so that I can give the tool to other users. I\u0026rsquo;m not sure why this idea appeals to me, but probably because it makes me feel like a programmer, if only for a little while. I came across this post by Stephen Owen and I had to try it out.\nThe project that I picked for this was based on the AWS PowerShell tools that I hadn\u0026rsquo;t used yet. 
Let\u0026rsquo;s face it, this is a good way to check out two different things, I didn\u0026rsquo;t have much experience with: The AWS PowerShell Tools and XAML for creating GUIs.\nIf you want to create your own GUI form for PowerShell scripts, please read Stephen\u0026rsquo;s post, download the code that I used and modify it, or both of them!\nUse the AWS PowerShell Console Before you use this console, you must install and configure the AWS PowerShell tools provided by Amazon. They can be found here: https://aws.amazon.com/powershell/\nAfter the installation of the tools, run the Initialize-AWSDefaults to set your default region and user keys. These defaults will be leveraged by the custom PowerShell Console.\nNOTE: One thing that you may want to modify is the desktop background. I\u0026rsquo;ve used theITHollow logo for the background, but you may want a different image. This can be modified by changing this line of code in the PowerShell form:\n\u0026lt;ImageBrush ImageSource=\u0026#34;https://assets.theithollow.com/wp-content/uploads/2016/10/AWS-PoshBackground.png\u0026#34;/\u0026gt; Go to Github and download the code. From here, you can run the code in your favorite PowerShell ISE. Once the code has been executed, a GUI will pop open. From here, you can select the \u0026ldquo;Get-EC2Instance-State\u0026rdquo; button to populate the grid with your EC2 instances as well as their state. Select any of the EC2-Instances and then click the appropriate button in the EC2 Actions sub-window to manage them.\nI\u0026rsquo;ve created a short video to demonstrate the operations. Please feel free to modify, or reuse any of the provided code for your own projects and take a peak at Stephen Owen\u0026rsquo;s series on XAML!\nhttps://youtu.be/Z7qhOWUGp6Q\n","permalink":"https://theithollow.com/2016/11/29/aws-powershell-console-xaml/","summary":"\u003cp\u003eI\u0026rsquo;ve always liked the idea of taking a series of Microsoft PowerShell scripts and putting them behind a user interface so that I can give the tool to other users. I\u0026rsquo;m not sure why this idea appeals to me, but probably because it makes me feel like a programmer, if only for a little while. I came across this \u003ca href=\"https://foxdeploy.com/2015/04/10/part-i-creating-powershell-guis-in-minutes-using-visual-studio-a-new-hope/\"\u003epost\u003c/a\u003e by \u003ca href=\"https://twitter.com/foxdeploy\"\u003eStephen Owen\u003c/a\u003e and I had to try it out.\u003c/p\u003e\n\u003cp\u003eThe project that I picked for this was based on the AWS PowerShell tools that I hadn\u0026rsquo;t used yet. Let\u0026rsquo;s face it, this is a good way to check out two different things, I didn\u0026rsquo;t have much experience with: The AWS PowerShell Tools and XAML for creating GUIs.\u003c/p\u003e","title":"AWS PowerShell Console with XAML"},{"content":"If you\u0026rsquo;re trying to get started with vRealize Automation and don\u0026rsquo;t know where to get started, you\u0026rsquo;re in luck. Pluralsight has just released my course on \u0026ldquo;Getting Started with vRealize Automation 7\u0026rdquo;, which will give you a great leg up on your new skills. In this course you\u0026rsquo;ll learn to install the solution, configure the basics, connect it to your vSphere environment and publish your first blueprints. 
The course will explain why you\u0026rsquo;d want to go down the path of using vRA 7 in the first place and how to use the solution.\nPluralsight requires a subscription to view all of the great content provided by their many authors, but if you want to try it out first, check out their free trial here.\nIf you\u0026rsquo;re already familiar with the solution but just need a hand in configuring certain parts, you can always check out my getting started guide.\n","permalink":"https://theithollow.com/2016/11/28/getting-started-vrealize-automation-course/","summary":"\u003cp\u003eIf you\u0026rsquo;re trying to get started with vRealize Automation and don\u0026rsquo;t know where to get started, you\u0026rsquo;re in luck. \u003ca href=\"http://pluralsight.com\"\u003ePluralsight\u003c/a\u003e has just released my course on \u0026ldquo;Getting Started with vRealize Automation 7\u0026rdquo;, which will give you a great leg up on your new skills. In this course you\u0026rsquo;ll learn to install the solution, configure the basics, connect it to your vSphere environment and publish your first blueprints. The course will explain why you\u0026rsquo;d want to go down the path of using vRA 7 in the first place and how to use the solution.\u003c/p\u003e","title":"Getting Started with vRealize Automation Course"},{"content":"vRealize Automation has had a different upgrade process for about every version that I can think of. The upgrade from vRA 7.1 to 7.2 is no exception, but this time you can see that some good things are happening to this process. There are fewer manual steps to do to make sure the upgrade goes smoothly and a script is now used to upgrade the IaaS Components which is a nice change from the older methods. As with any upgrade, you should read all of the instructions in the official documentation before proceeding.\nUpgrade! To start your vRA 7.1 to 7.2 upgrade, grab a snapshot of the vRA appliance, IaaS Server(s) and a SQL database backup. Yeah, the new process is much easier to handle, but there is no excuse for not taking a backup (just in case). Once the backups are secured, login to the VAMI interface of your vRA appliance at: https://vRA_APPLIANCE.DOMAIN.LOCAL:5480. Once logged in as the root user, go to the Update tab and click \u0026ldquo;Check Updates\u0026rdquo;. In a moment the interface should show a new version available and then you can click \u0026ldquo;Install Updates\u0026rdquo;.\nThis next part is important. BE PATIENT!!!!!\nThe Upgrade process will run in the background and may take quite a long time. My upgrade took about 45 minutes to complete and I thought that my browser had stopped refreshing. If you want to keep up with exactly what\u0026rsquo;s happening you can monitor the /opt/vmware/var/log/vami/updatecli.log file on the vRA appliance.\nEventually, you\u0026rsquo;ll get a message stating that the appliance has been updated successfully. At this point, you can reboot the vRA appliance.\nOnce the vRA appliance has been upgraded and restarted, check to make sure that all of the services show \u0026ldquo;REGISTERED\u0026rdquo; again. This may take a few moments.\nOnce the vRA appliance is upgraded, SSH into the appliance and navigate to the /usr/lib/vcac/tools/upgrade directory. From here run ./generate_properties which will create a properties file in the same directory. Open this file with your favorite editor such as VI.\nThe properties file will want to know some information about the IaaS component servers. 
You\u0026rsquo;ll need service account information for the web services and DEMs. Enter in the information and save the file.\nNOTE: You might think this is pretty insecure to keep a file with passwords on it stored in the vRA appliance. You\u0026rsquo;re probably right, but the next stage will delete this file when it\u0026rsquo;s done running.\nOnce you\u0026rsquo;ve saved the file run ./upgrade to begin the upgrade process. You should see something similar to the output listed below.\nSummary The upgrade process is certainly getting better for vRA and I look forward to what is in store next from VMware. You might be wondering what happens to the vRO instance that is embedded in the vRA appliance. Well, it\u0026rsquo;s upgraded too. Enjoy your new 7.2 instance and get to deploying new containers and Azure VMs with those new fancy capabilities. Happy automating!\n","permalink":"https://theithollow.com/2016/11/24/upgrade-vra-7-1-7-2/","summary":"\u003cp\u003evRealize Automation has had a different upgrade process for about every version that I can think of. The upgrade from vRA 7.1 to 7.2 is no exception, but this time you can see that some good things are happening to this process. There are fewer manual steps to do to make sure the upgrade goes smoothly and a script is now used to upgrade the IaaS Components which is a nice change from the older methods. As with any upgrade, you should read all of the instructions in the \u003ca href=\"http://pubs.vmware.com/vrealize-automation-72/topic/com.vmware.ICbase/PDF/vrealize-automation-71to72-upgrading.pdf\"\u003eofficial documentation\u003c/a\u003e before proceeding.\u003c/p\u003e","title":"Upgrade from vRA from 7.1 to 7.2"},{"content":"It’s the time of year in the United States where we celebrate Thanksgiving. If you’re not familiar with this, it’s a holiday where we give thanks for those things which have blessed us and to take a moment to reflect on all the good things that we have. I recently came home from Tech Field Day 12 and was reflecting how some people have positively affected my career and possibly had no clue what kind of impact they\u0026rsquo;ve made.\nMy Story While working as a Systems Administrator several years ago I was invited by a vendor to go to my first VMworld. During a dinner in Las Vegas I was introduced to another fellow Chicagoan named Chris Wahl who was also interested in VMware. It was during this dinner when my eyes were opened to a larger community of people who shared experiences through twitter, blogs and social media in general. At the time I thought that this was a trivial thing, but soon after the conference I began participating in the VMUG in Chicago as a member, so this chance encounter introduced me to several new avenues of career growth.\nIt wasn’t long after attending my first VMUG when I began presenting. Yeah, this was a big deal for an introvert. Speaking in front of other people about things that I felt like I barely understood myself was difficult. But it helped me to learn, that by teaching a subject to other people, I learned it at a much deeper level than I originally understood it.\nThis experience made it apparent to me that I should start blogging about these things so I’d understand it better, while hopefully helping other people that may be looking for information. Blogging eventually introduced me to Stephen Foskett and the Tech Field Day crew. Stephen introduced me to a small group of other delegates including Scott Lowe and David Davis who were working on their own startup. 
Scott and David needed a hand with some keynotes and I was happy to oblige.\nBetween VMUGs, blogging, and speaking engagements I had learned enough to get a job at AHEAD which introduced me to a great team of people, probably the smartest group I’ve ever worked with. No doubt, a team I’ll likely never be able to top. Here I found technical expertise on more things than I’ll ever be able to learn myself, but on top of that I found people that have helped me to pursue career goals. Steve Pantol has been a guide and example on how to manage stressful challenges and Tim Carr has been a great VCDX partner and friend.\nIndianapolis VMUG User Conference 2016\nI’ve, for sure, missed many people in this post, but picked a few that I feel have directly or indirectly had a big impact to my work life besides my family members who have been a much bigger part of my career. I would guess that the people that I\u0026rsquo;ve mentioned have very little idea that they\u0026rsquo;ve had any impact at all on my career so hopefully they will see this and know that I\u0026rsquo;ve at least benefitted from their conversations, friendships, or guidance.\nWhat\u0026rsquo;s Your Story? It seems very odd writing about specific people that have had an impact on your career, especially ones that you still work with. But should it? It’s Thanksgiving and shouldn’t it be OK to acknowledge people who’ve helped you? There should be no shame in saying thank you to people who’ve impacted your lives and I encourage the community to share how individuals have helped them in the comments for this post.\nHappy Thanksgiving from theITHollow!\n","permalink":"https://theithollow.com/2016/11/21/unwitting-accomplices-career/","summary":"\u003cp\u003eIt’s the time of year in the United States where we celebrate Thanksgiving. If you’re not familiar with this, it’s a holiday where we give thanks for those things which have blessed us and to take a moment to reflect on all the good things that we have. I recently came home from Tech Field Day 12 and was reflecting how some people have positively affected my career and possibly had no clue what kind of impact they\u0026rsquo;ve made.\u003c/p\u003e","title":"Unwitting Accomplices in Your Career"},{"content":"Customers have a ton of requirements around log aggregation, file shares, media streaming repositories, and just a simple place to store objects. It can be difficult to manage all of these different use cases but Dell EMC Isilon might just be the solution that can help to manage these requirements. Many times customers have several small islands of storage used for different purposes. Maybe this is because of a brand new requirement like \u0026ldquo;all security camera data will be stored for seven years\u0026rdquo;, which might require some additional storage space. Whatever the reason, companies many times will have small islands of storage, possibly even from different storage companies. This can become tough to manage and require more storage administrators with differing skill sets.\nIsilon may be a good solution to aggregate all of these storage islands into a single scalable solution that can handle many different types of workloads. Throwing all of your data into a single pool is often described as a \u0026ldquo;Data Lake\u0026rdquo; due to the volume of the pool and the amount of stuff stored on it.\nWhat is an Isilon? Dell EMC Isilon provides a single namespace for a giant pool of storage. 
A single Isilon cluster can scale out to 144 nodes in the cluster and a total of about 60 Petabytes of data. Each node of the cluster provides additional performance and capacity to the overall Isilon cluster. One of the neat things about the solution is that you can add a different node type to the cluster where some nodes can provide more performance and some are better for dense capacity. You aren\u0026rsquo;t constrained by the first node type that you chose.\nAn Isilon cluster is designed to ingest a bunch of different types of workloads all at the same time and to be able to do this for an enterprise there has to be a variety of protocols that can be used to add data to it. Currently, Isilon can provide the following protocols which give you tons of flexibility:\nSMB v3 HTTP REST SWIFT HDFS NDMP NFS v4 FTP What Workloads do I Use on Isilon? During Tech Field Day 12, David Noy (VP Isilon Product Strategy) was very clear about what types of workloads are best suited for an Isilon. Streaming media, file shares, home directories, log files, analytical data, etc and just about any type of file level data. David was clear to mention that you can also use an Isilon for storing virtual machines and backups but probably not the best fit for the product. Dell EMC obviously has other technologies such as XtremIO / VMAX/ VNX for virtual machines and Avamar / Data Domain for backup related tasks which are much better suited for those workloads.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Tech Field Day 12. In addition, Dell EMC provided a gift to all delegates but with no expectations about the coverage through this blog or social media.\nIsilon Features If you\u0026rsquo;re going to be storing Petabytes of data, then you better be able to manage security and reliability of those bits. Isilon is a full featured enterprise solution which can provide capabilities such as snapshots through SnapshotIQ, replication through SyncIQ, dedupe through SmartDedupe and various other \u0026ldquo;Smart\u0026rdquo; products.\nEach of the nodes in the cluster can provide additional tiering for caching of the data. To provide additional performance, caching mechanisms can store data for read performance of hot data. Any data that hasn\u0026rsquo;t been accessed recently will be pushed down to spinning disk to allow for more recently used data. An additional feature has even been added so that you can archive the data off to public cloud storage on AWS or Azure.\nA big feature for financial and health care customers is the \u0026ldquo;Smart Lock\u0026rdquo; feature which allows you to prevent files from being changed. This is useful to protect sensitive data from attacks from worms or other malicious activity. Again, very important for an enterprise solution that is housing this much data. It must be protected and managed well. And of course, the directory tree housed by the Isilon can be managed through role-based access control (RBAC) which is a requirement at this point for almost all storage arrays. If you\u0026rsquo;re very concerned about security you can also use self-encrypting drives for storing encrypted data at rest.\nAdministrators will have access to analytics of the data being stored on the cluster to effectively manage the data being added to the Isilon. 
With the amount of data being stored on the solution, being able to identify which applications are hot spots and chewing up the data or resources is a must have to make sure the cluster is being used effectively and to manage future capacity needs.\nTry it yourself Yeah, it\u0026rsquo;s an enterprise product, but if you want to go get your hands on it, the Isilon team provides a free version of the Edge SD product which is a Isilon virtual appliance typically used for remote sites. The free version lacks a few features but you can get the look and feel of the solution and store up to 36 Tb of data for your home lab if you wish, just don\u0026rsquo;t use it for production purposes for fear of violating the EULA. http://www.emc.com/products-solutions/trial-software-download/isilonsd-edge.htm?PID=STORE-DL-ISD\n","permalink":"https://theithollow.com/2016/11/16/throw-isilon-data-lake/","summary":"\u003cp\u003eCustomers have a ton of requirements around log aggregation, file shares, media streaming repositories, and just a simple place to store objects. It can be difficult to manage all of these different use cases but \u003ca href=\"http://www.emc.com/en-us/storage/isilon/index.htm\"\u003eDell EMC Isilon\u003c/a\u003e might just be the solution that can help to manage these requirements. Many times customers have several small islands of storage used for different purposes. Maybe this is because of a brand new requirement like \u0026ldquo;all security camera data will be stored for seven years\u0026rdquo;, which might require some additional storage space. Whatever the reason, companies many times will have small islands of storage, possibly even from different storage companies. This can become tough to manage and require more storage administrators with differing skill sets.\u003c/p\u003e","title":"Throw Your Isilon in the Data Lake"},{"content":"I was pretty unfamiliar with Cohesity until the recent Tech Field Day 12 presentation but they\u0026rsquo;ve been receiving a lot of buzz in the industry. If you\u0026rsquo;re like I was and weren\u0026rsquo;t paying enough attention, you should at least check them out. Cohesity\u0026rsquo;s go to market strategy is based around covering all aspects of the secondary storage market. The thought being that there are way too many solutions in use by the enterprise and that all of these different solutions makes it difficult to manage. For example, the secondary storage solutions include media servers, backup managers, target storage for backups, cloud gateways, test/dev storage, file shares for archives and a copy of data for analytics. This is a big task to tackle but the real goal for Cohesity is to replace all of these individual server types into a single scale-out solution.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Tech Field Day 12. In addition, Cohesity provided a gift to all delegates but with no expectations about the coverage through this blog or social media.\nOverview The Cohesity appliance comes as a 2U chassis capable of housing four nodes (three nodes are required to get started with a replication factor of two). Each of the nodes provides roughly 24TB of capacity and 1.6 TB of PCIe flash connected by a pair of 10GbE adapters. This is a pretty standard architecture in the hyperconverged market these days. Several other hyperconverged vendors have a VERY similar appliance that is out on the market. The Cohesity appliances also scale out in a similar manner to the other hyperconverged players. 
Scale out as a necessary and purchase what you need at the time that you need it. The solution includes some of the enterprise features that you\u0026rsquo;d hope from a storage array these days including global dedupe, and an API so that all of the work can be orchestrated. If you\u0026rsquo;d like more information about their products please see their product page for the latest updates.\nBackups It seems to me that the primary function of a Cohesity solution is to take over for a complicated (and traditionally boring) backup solution. Cohesity can register your vCenters or physical servers and pull in the inventory so that it can be backed up as part of a schedule. Its a nice thing to be able to add a virtual machine to a cluster and have it automatically protected by a backup plan. Let\u0026rsquo;s face it, no one wants to manage backups these days. Once the virtual machines are backed up, the data can be searched and restored very quickly or used for test/dev workloads or analytics. It\u0026rsquo;s pretty nice to not have to make multiple copies of this data for use in a test environment or for analytical purposes.\nBackups can be done at a VM Image level and can perform SQL level backups. Any restore requests will be mounted on the internal Cohesity drives and then storage vMotioned back to your vSphere environment if you have the correct vSphere licensing. This includes SQL restores which can be selected at any point in time, something that your database administrators will certainly require out of a backup solution.\nCloud Like many appliance solutions these days, Cohesity is embracing (not ignoring) the public cloud. Cohesity allows you to have data archived out to S3 targets like AWS S3. This data can be stored for a lower cost than some more costly on premises storage arrays so companies will like the ability to take advantage of cheap public cloud storage.\nOne of the newer features currently still under development is that a Cohesity appliance can be spun up on the public cloud. During Tech Field Day 12, Cohesity demonstrated spinning up a new instance in Microsoft Azure right from the Azure Marketplace (still in staging, not available to the public yet). Once an instance is deployed in the public cloud, it can be added as a replication target for your on-premises instance and then you can bring up your test/dev environments on your favorite public cloud vendors (AWS, Azure, Google Compute).\nSummary If you\u0026rsquo;re in the market for some secondary storage solutions, it is probably worth doing some demos with Cohesity before deciding what solution you\u0026rsquo;d eventually like to purchase. Simplifying your enterprise platforms has a big benefit for companies and you should not overlook the value in this.\n","permalink":"https://theithollow.com/2016/11/15/cohesity-provides-secondary-storage-needs/","summary":"\u003cp\u003eI was pretty unfamiliar with \u003ca href=\"http://cohesity.com/\"\u003eCohesity\u003c/a\u003e until the recent \u003ca href=\"http://techfieldday.com/event/tfd12/\"\u003eTech Field Day 12\u003c/a\u003e presentation but they\u0026rsquo;ve been receiving a lot of buzz in the industry. If you\u0026rsquo;re like I was and weren\u0026rsquo;t paying enough attention, you should at least check them out. Cohesity\u0026rsquo;s go to market strategy is based around covering all aspects of the secondary storage market. 
The thought being that there are way too many solutions in use by the enterprise and that all of these different solutions makes it difficult to manage. For example, the secondary storage solutions include media servers, backup managers, target storage for backups, cloud gateways, test/dev storage, file shares for archives and a copy of data for analytics. This is a big task to tackle but the real goal for Cohesity is to replace all of these individual server types into a single scale-out solution.\u003c/p\u003e","title":"Cohesity Provides All of Your Secondary Storage Needs"},{"content":"Today begins the Gestalt IT Tech Field Day 12 in Palo Alto California. If you\u0026rsquo;ve been in IT for a while and want to remember what it\u0026rsquo;s like to be just \u0026ldquo;keeping up\u0026rdquo; with the conversation, join in on the live stream which you can watch right here.\nCompanies presenting include:\nCohesity Dell EMC Docker Drive Scale Igneous Rubrik StorageOS Pay attention to these other bloggers on twitter to get their perspective on the solutions presented:\nAlex Galbraith Ethan Banks James Green Jody Lemoine John White Jon Hildebrand Josh De Jong Matt Crape Mike Preston Tim Miller Tim Smith Eric Shanks ","permalink":"https://theithollow.com/2016/11/15/7248/","summary":"\u003cp\u003eToday begins the Gestalt IT Tech Field Day 12 in Palo Alto California. If you\u0026rsquo;ve been in IT for a while and want to remember what it\u0026rsquo;s like to be just \u0026ldquo;keeping up\u0026rdquo; with the conversation, join in on the live stream which you can watch right here.\u003c/p\u003e\n\u003cp\u003eCompanies presenting include:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"http://cohesity.com/\"\u003eCohesity\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.delltechnologies.com/en-us/index.htm\"\u003eDell EMC\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"http://docker.com\"\u003eDocker\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"https://www.drivescale.com/\"\u003eDrive Scale\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"http://www.igneous.io/\"\u003eIgneous\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"http://www.rubrik.com/\"\u003eRubrik\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"http://storageos.com/\"\u003eStorageOS\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003ePay attention to these other bloggers on twitter to get their perspective on the solutions presented:\u003c/p\u003e","title":"Tech Field Day 12 Live Stream"},{"content":"Creating a Cisco UCS Director Catalog is a critical step because it\u0026rsquo;s what your end users will request new virtual machines and services from. There are a couple types of catalogs that will deploy virtual machines, advanced and standard. Standard selects a virtual machine template from vSphere. Advanced selects a pre-defined workflow that has been built in UCSD and then published to the catalog.\nCreate a Standard Catalog To create a “standard” object go to the Policies drop down and select catalogs. From there click \u0026ldquo;Add\u0026rdquo;. Select a catalog type and then click \u0026ldquo;Submit\u0026rdquo;. In this example, I\u0026rsquo;ve chosen the \u0026ldquo;Standard\u0026rdquo; catalog type.\nGive the catalog a name and a description. Then select the catalog icon from the list. 
From there select “Applied to all groups” and “Publish to end users”.\nNote: Just selecting the \u0026ldquo;Applied to all groups\u0026rdquo; doesn\u0026rsquo;t assign the permissions to those users. This just makes this catalog item available to those users should you choose to assign permissions later.\nSelect the cloud that the catalog will be deployed on and then select the “image” (vSphere Template) that will be used to clone. You may use linked clones as well and have options for the virtual disks. Click Next when finished.\nSelect the Category.\nNote: Application Categories can be used to override the policies assigned at the VDC. This may be useful in order to change a VLAN for a specific catalog item or a different datastore, etc.\nEnter the OS type and any application tags that should be assigned to a catalog item. Click \u0026ldquo;Next\u0026rdquo;.\nSelect your preference for credential items. Click \u0026ldquo;Next\u0026rdquo;.\nSelect lease times, guest customization and cost computations that match your desired design. Click Next.\nSelect what kinds of access that the end users will have to the new catalog item from the portal. Click Next.\nReview the summary and click Submit.\nSummary After completing the steps in this post, a catalog item will be available to have permissions assigned to it. This catalog item will then be available for requests to be made to deploy virtual machines from. A group of catalog items will be your one-stop shop for a self service portal and will become your online store of services that you\u0026rsquo;re providing to your users. Iterate on your first catalog item and continue to build new solutions that your users can deploy to make their work lives better. No more tickets asking for new virtual machines.\n","permalink":"https://theithollow.com/2016/11/14/creating-cisco-ucs-director-catalog/","summary":"\u003cp\u003eCreating a Cisco UCS Director Catalog is a critical step because it\u0026rsquo;s what your end users will request new virtual machines and services from. There are a couple types of catalogs that will deploy virtual machines, advanced and standard. Standard selects a virtual machine template from vSphere. Advanced selects a pre-defined workflow that has been built in UCSD and then published to the catalog.\u003c/p\u003e\n\u003ch1 id=\"create-a-standard-catalog\"\u003eCreate a Standard Catalog\u003c/h1\u003e\n\u003cp\u003eTo create a “standard” object go to the Policies drop down and select catalogs. From there click \u0026ldquo;Add\u0026rdquo;. Select a catalog type and then click \u0026ldquo;Submit\u0026rdquo;. In this example, I\u0026rsquo;ve chosen the \u0026ldquo;Standard\u0026rdquo; catalog type.\u003c/p\u003e","title":"Creating a Cisco UCS Director Catalog"},{"content":"I\u0026rsquo;m a big fan of Terraform from Hashicorp but many organizations are using cloud management platforms like Cisco UCS Director or vRealize Automation in order to deploy infrastructure. If you read my blog often, you\u0026rsquo;ll know that I\u0026rsquo;ve got some experience with both of these products and if you\u0026rsquo;re looking to get up to speed on either of them, check out one of these links: UCS Director 6 Guide or vRealize Automation 7 Guide. But why not use Terraform with Cisco UCS Director and have the best of both worlds?\nUCS Director can deploy virtual machines pretty easily, but what if you want to deploy a more complex stack, like a pair of virtual machines behind a load balancer? 
Well, UCS Director could do this, but Terraform makes it really easy. So here we\u0026rsquo;ll use a Terraform configuration file to do it.\nThe Terraform Configuration File Below is a configuration file that could be used with Terraform to deploy a pair of EC2 instances and place them behind a load balancer. You can see that the instances are deployed in separate availability zones, and the instances are web servers that I created through \u0026ldquo;packer\u0026rdquo; which is another Hashicorp product.

variable "access_key" {}
variable "secret_key" {}

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "us-east-1"
}

resource "aws_elb" "elb1" {
  name               = "hollow-elb"
  availability_zones = ["us-east-1a", "us-east-1b"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  instances                   = ["${aws_instance.instance1.id}", "${aws_instance.instance2.id}"]
  cross_zone_load_balancing   = true
  idle_timeout                = 400
  connection_draining         = true
  connection_draining_timeout = 400

  tags {
    Name = "HollowELB"
  }
}

resource "aws_instance" "instance1" {
  ami           = "ami-f9b3e4ee"
  instance_type = "t2.micro"
}

resource "aws_instance" "instance2" {
  ami           = "ami-f9b3e4ee"
  instance_type = "t2.micro"
}

UCS Director Workflow In UCS Director, we\u0026rsquo;ll deploy a workflow that makes an SSH call to a Linux machine that has Terraform installed and my configuration files stored there. You can see from the screenshot below that I\u0026rsquo;ve got the Terraform binary, the Terraform config file and a variable file which is used to store the EC2 keys for the configuration file.\nThe UCS Director workflow will SSH into our Linux VM, create a new directory named after the service request in UCSD, copy the files to the new directory and execute a \u0026ldquo;terraform apply\u0026rdquo; to start the build. The full workflow is listed below and it only requires a single task.\nThe task is a custom SSH task that you can download from the Cisco communities website. You might get away with using the out-of-the-box SSH task, but the custom task that can be imported includes a rollback section to \u0026ldquo;undo\u0026rdquo; the deployment later.\nBelow you can see that I\u0026rsquo;ve loaded a bash profile and will make a new directory \u0026ldquo;/root/terraform/{ServiceRequest}\u0026rdquo;. Then we\u0026rsquo;ll copy files and run the \u0026ldquo;terraform apply\u0026rdquo; command. If you look in the \u0026ldquo;Undo Commands\u0026rdquo; section we run the \u0026ldquo;terraform destroy\u0026rdquo; command and then remove the directory we created.\nExecute the UCS Director Workflow When you execute the workflow, you can see that a new directory is made and the files are copied over. 
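If you want to replicate what that SSH task does outside of UCS Director (or just test the commands by hand first), here is a rough Python sketch of the same sequence. This is illustrative only: it assumes the paramiko library is installed, and the hostname, credentials and paths are placeholders rather than values from the actual workflow.

import paramiko

def run_terraform(service_request_id, destroy=False):
    """Mimic the UCSD custom SSH task: per-request directory, then apply or destroy."""
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Placeholder host and credentials for illustration.
    ssh.connect("terraform-host.example.com", username="root", password="changeme")

    workdir = "/root/terraform/{0}".format(service_request_id)
    if destroy:
        # Mirrors the "Undo Commands" section: tear down, then clean up the directory.
        # "-force" skips the interactive confirmation in Terraform of this era.
        commands = [
            "cd {0} && ./terraform destroy -force".format(workdir),
            "rm -rf {0}".format(workdir),
        ]
    else:
        # Mirrors the main commands: new directory, copy the files, then apply.
        commands = [
            "mkdir -p {0}".format(workdir),
            "cp /root/terraform/terraform /root/terraform/*.tf /root/terraform/*.tfvars {0}".format(workdir),
            "cd {0} && ./terraform apply".format(workdir),
        ]

    for cmd in commands:
        stdin, stdout, stderr = ssh.exec_command(cmd)
        print(stdout.read().decode(), stderr.read().decode())

    ssh.close()

The two command lists above correspond to what goes into the task's command and undo-command boxes; Python is just standing in for the SSH session here.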
The new directory is named after the service request ID from UCSD.\nLooking in the AWS console, we can see that a pair of EC2 instances were created in different availability zones.\nAnd a load balancer was created with those two instances added to it.\nOne of the differences between vRA and UCSD is that vRA will only manage the virtual machines that were deployed through vRA. UCSD on the other hand can manage machines that were not deployed through the solution. This means that when I look at the virtual machines in UCS Director, they will show up and can be powered on, powered off or destroyed.\nThanks to the undo commands in the Custom SSH task, we can roll back the deployment, which will terminate the two instances and destroy the load balancer.\nSummary Terraform is a pretty neat tool to use to define your infrastructure as a piece of code. If you combine it with an existing cloud management platform you can extend your capabilities even further.\n","permalink":"https://theithollow.com/2016/11/07/terraform-cisco-ucs-director/","summary":"\u003cp\u003eI\u0026rsquo;m a big fan of Terraform from Hashicorp but many organizations are using cloud management platforms like Cisco UCS Director or vRealize Automation in order to deploy infrastructure. If you read my blog often, you\u0026rsquo;ll know that I\u0026rsquo;ve got some experience with both of these products and if you\u0026rsquo;re looking to get up to speed on either of them, check out one of these links: \u003ca href=\"/2016/10/13/cisco-ucs-director-6-guide/\"\u003eUCS Director 6 Guide\u003c/a\u003e or \u003ca href=\"/2016/01/11/vrealize-automation-7-guide/\"\u003evRealize Automation 7 Guide\u003c/a\u003e. But why not use Terraform with Cisco UCS Director and have the best of both worlds?\u003c/p\u003e","title":"Terraform with Cisco UCS Director"},{"content":"Creating a Cisco UCS Director Catalog is the first step to publishing services to your end users. The second step is to assign permissions. This post will show you how to assign permissions to UCS Director Catalogs.\nTo allow users to access a catalog item they must be granted permissions. To do this, go to the Administration drop down \u0026ndash;\u0026gt; Users and Groups. From there click on the \u0026ldquo;User Groups\u0026rdquo; tab and find the group which should be entitled.\nClick on the group that should be entitled and then click “Catalog Setup”.\nSelect the catalog that you want to entitle. Once complete the user will be able to access the UCSD portal and request the item from the portal.\nSummary Adding a group of users to a catalog is a really simple procedure but I had some trouble locating this information when I was learning how to do it. I hope that this brief blog post has helped you with its setup.\n","permalink":"https://theithollow.com/2016/11/02/assigning-permissions-ucs-director-catalogs/","summary":"\u003cp\u003eCreating a Cisco UCS Director Catalog is the first step to publishing services to your end users. The second step is to assign permissions. This post will show you how to assign permissions to UCS Director Catalogs.\u003c/p\u003e\n\u003cp\u003eTo allow users to access a catalog item they must be granted permissions. To do this, go to the Administration drop down \u0026ndash;\u0026gt; Users and Groups. 
From there click on the \u0026ldquo;User Groups\u0026rdquo; tab and find the group which should be entitled.\u003c/p\u003e","title":"Assigning Permissions to UCS Director Catalogs"},{"content":"The Cisco UCS Director end user self-service policy is used to determine which of the out-of-the-box day 2 operations are available on catalogs in a VDC. By \u0026ldquo;day 2\u0026rdquo; I mean the types of operations that can be performed on a virtual machine after it\u0026rsquo;s been deployed, such as reboot, power on, snapshot, etc.\nTo configure these, go to the Policies drop down and select Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery. Then select the “End User Self-Service Policy” and click the Add button.\nClick the cloud type and click Submit. This example will use a VMware cloud but you may select whichever type of cloud makes sense for your environment.\nNext, give the policy a name and description and then select the day 2 operations that can be available in that VDC.\nOnce you\u0026rsquo;ve got a Self-Service Policy, this will be added to your Virtual Data Center Policy. If you have multiple users with different permission requirements, build more self-service policies to accommodate this.\n","permalink":"https://theithollow.com/2016/10/31/cisco-ucs-director-end-user-self-service-policy/","summary":"\u003cp\u003eThe Cisco UCS Director end user self-service policy is used to determine which of the out-of-the-box day 2 operations are available on catalogs in a VDC. By \u0026ldquo;day 2\u0026rdquo; I mean the types of operations that can be performed on a virtual machine after it\u0026rsquo;s been deployed, such as reboot, power on, snapshot, etc.\u003c/p\u003e\n\u003cp\u003eTo configure these, go to the Policies drop down and select Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery. Then select the “End User Self-Service Policy” and click the Add button.\u003c/p\u003e","title":"Cisco UCS Director End User Self-Service Policy"},{"content":"Cisco UCS Director VMware Management Policy is used to determine how virtual machines will behave and, more specifically, how they will be cleaned up. In the cloud world, the removal of inactive and unnecessary virtual machines may be more important than the deployment of them. The VM Management Policy is used to configure leases, notifications about when leases expire, and determining when a VM is inactive. This policy is very useful for keeping your cloud clean and removing unneeded virtual machines once they\u0026rsquo;re past their usefulness.\nAdvanced Controls Before you get started with setting up a VM Management Policy, be sure to set the advanced control that allows for virtual machines to be deleted automatically by UCSD when they are inactive. The VM Management Policy you create will set up the rules, but until the advanced control is set up, the policy won\u0026rsquo;t do anything.\nTo set the advanced control go to the Administration drop down \u0026ndash;\u0026gt; System. From there click the \u0026ldquo;Advanced Controls\u0026rdquo; tab and then select the \u0026ldquo;Delete Inactive VMs Based on vDC Policy\u0026rdquo; check box.\nVM Management Policy To create a VM Management Policy go to the Policies drop down \u0026ndash;\u0026gt; Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery and then the VM Management Policy tab.\nClick \u0026ldquo;Add\u0026rdquo;.\nGive the policy a name and description. Then configure lease notification settings and the number of days before an inactive VM is considered unused. 
This also requires setting follow-up notifications and a grace period prior to deletion.\nConfigure VM Lease Notification: Check this box to set notifications for any VMs with an expiring lease.\nHow many days before VM Lease expiry should notifications be sent: When notifications should start being sent before a VM lease expires.\nHow many notifications should be sent: The number of notifications that will be sent via email.\nInterval between notifications: How long to wait between notifications before sending another one.\nDelete after inactive VM days: The number of days a VM is unused before being considered inactive.\nAdditional grace period for deleting expired VMs: An additional number of days before finally deleting the VM.\nAction for failed rollback tasks: What happens if the rollback fails; notify and delete the VM, or just notify.\nConfigure VM Delete Notification: Decide whether or not to send notifications about VM deletions.\nHow many days before VM deletion should notifications be sent: How long before a VM is deleted notifications should be sent out.\nHow many notifications should be sent: The number of notifications to send for the VM.\nInterval between notifications: The time period between notifications.\nClick Submit and then repeat for any additional management policies.\nSummary The VM Management Policy is useful for setting up notifications for virtual machine removal. Let\u0026rsquo;s face it, users might be a little skittish if they know their virtual machines are going to be deleted without any notifications. This policy is crucial to keeping your data center clean of unused virtual machines.\n","permalink":"https://theithollow.com/2016/10/26/ucs-director-vmware-management-policy/","summary":"\u003cp\u003eCisco UCS Director VMware Management Policy is used to determine how virtual machines will behave and, more specifically, how they will be cleaned up. In the cloud world, the removal of inactive and unnecessary virtual machines may be more important than the deployment of them. The VM Management Policy is used to configure leases, notifications about when leases expire, and determining when a VM is inactive. This policy is very useful for keeping your cloud clean and removing unneeded virtual machines once they\u0026rsquo;re past their usefulness.\u003c/p\u003e","title":"UCS Director VMware Management Policy"},{"content":"Chargeback or at least showback is an important thing for any cloud environment. Cisco UCS Director can provide cost information back to managers but you need to create a UCS Director cost model. This cost model will define how all the costs are calculated.\nAdd a Cost Model To create a cost model, go to the Policies drop down and select Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery. Then select the Cost Model tab.\nEither edit the default cost model if it will be used throughout the environment or create a new one by clicking the add button.\nGive the model a name and description. Then select \u0026ldquo;Standard\u0026rdquo; for the model type unless advanced cost structures are to be defined. Advanced cost structures would be things like tiered models for CPU (an example being 10-20 CPUs at one rate but 21-30 CPUs at a different rate). Select the charge duration. This determines the nearest whole unit that charges will be billed against. Reports will show weekly or monthly but not hourly. The costs, though, will be calculated based on this setting. From here, fill out how the costs will be assigned per hour for each of the parameters. 
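Before walking through each field, it might help to see how these hourly rates roll up into a showback number. The rates below are invented for illustration only (they are not recommendations, and UCS Director does the real calculation internally based on the charge duration you selected), but the back-of-the-envelope arithmetic is useful when deciding what to type into each box:

# Illustrative only: the rates below are placeholders, not recommended values.
one_time_cost = 25.00            # charged once at provisioning
active_vm_cost = 0.02            # per hour while the VM is powered on
provisioned_cpu_cost = 0.01      # per vCPU per hour
provisioned_memory_cost = 0.005  # per GB of memory per hour
committed_storage_cost = 0.0002  # per GB of committed storage per hour

vcpus = 2
memory_gb = 8
storage_gb = 100
hours = 24 * 30                  # roughly one month, always powered on

monthly = one_time_cost + hours * (
    active_vm_cost
    + vcpus * provisioned_cpu_cost
    + memory_gb * provisioned_memory_cost
    + storage_gb * committed_storage_cost
)
print("Estimated monthly showback: ${0:.2f}".format(monthly))
# 25 + 720 * (0.02 + 0.02 + 0.04 + 0.02) = 97.00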
Each parameter will need a value and this must be calculated by hand.\nOne Time Cost: A fixed one time cost for a virtual machine being provisioned.\nActive VM Cost: A per-hour cost of the virtual machine while running.\nInactive VM Cost: A per-hour cost of the virtual machine while not running.\nCPU Charge Unit: CPU Cores or GHz\nProvisioned CPU Cost: CPU costs per hour as a percentage of CPU provisioned\nReserved CPU Cost: Reserved CPU costs per GHz\nUsed CPU Cost: Actual CPU Usage if the CPU charge unit is in GHz\nProvisioned Memory Cost: Cost for memory in GB per hour\nReserved Memory Cost: Reserved memory costs per hour\nUsed Memory Cost: Used memory costs per hour\nReceived Network Data Cost: Incoming network bandwidth in GB per hour\nTransmitted Network Data Cost: Outgoing network bandwidth in GB per hour\nCommitted Storage Cost: Storage costs for actual storage used in GB per hour\nUncommitted Storage Cost: Storage costs for provisioned but unused GB per hour\nTag Based Cost Model:\nThere are also costs for physical servers which is outside the scope of this post. I think you\u0026rsquo;ll get the idea of how they work from this post though.\nRepeat this process for additional cost models.\nSummary A cost model is an absolute necessity in the cloud age. Cisco UCS Director gives you a good way to track these costs but does take some work on your part to calculate how much these costs should be. Determining how much each of the units will cost is usually not a simple process.\n","permalink":"https://theithollow.com/2016/10/24/ucs-director-cost-model/","summary":"\u003cp\u003eChargeback or at least showback is an important thing for any cloud environment. Cisco UCS Director can provide cost information back to managers but you need to create a UCS Director cost model. This cost model will define how all the costs are calculated.\u003c/p\u003e\n\u003ch1 id=\"add-a-cost-model\"\u003eAdd a Cost Model\u003c/h1\u003e\n\u003cp\u003eTo create a cost model, go to the Policies drop down and select Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery. Then select the Cost Model tab.\u003c/p\u003e","title":"UCS Director Cost Model"},{"content":"UCS Director System Policies are kind of a catch all for any settings that need to be defined prior to a virtual machine being deployed, and that don\u0026rsquo;t fit into a neat little category like Network, Storage or Compute. This post reviews two types of system policies: VMware and AWS.\nVMware System Policy This policy is used to configure things like the Time Zones, DNS Settings, virtual machine naming conventions and guest licensing information. The policy can be found under the Policies drop down \u0026ndash;\u0026gt; Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery screen and from there you\u0026rsquo;ll be looking for the VMware System Policy tab.\nClick \u0026ldquo;Add\u0026rdquo;. 
Give the policy a name and description and then begin filling in the rest of the deployment details.\nVM Name Template: This is the name of the virtual machine as it will show up in vCenter.\nVM Name Validation Policy: If a validation policy is created, you can select it here to ensure that when end users pick a name, that it conforms to a set convention.\nEnd User VM Name or VM Prefix: Allow users to enter a VM Prefix to the beginning of the name.\nPower On after Deploy: Should new virtual machines be powered on once deployed?\nHost Name Template: This is the naming strategy for the guest hostname\nHost Name Validation Policy: If you create a validation policy, this ensures that if end users pick a name, that it conforms to a set convention.\nLinux Time Zone: What time zone linux machines should be set configured with\nLinux VM Max Boot Wait Time: How long before a Linux VM takes to boot before considering it timed out.\nDNS Domain: The DNS domain\nDNS Suffix List: The DNS Suffixes separated by a comma.\nDNS Server List: The DNS Servers in the environment, separated by a comma.\nVM Image Type: The type of machine deployed. This is either (Windows and Linux) or Linux. If you choose Windows and Linux, you\u0026rsquo;ll need to enter some additional information such as licensing owners, license mode, organizations, and some specific windows server requirements.\nWhen complete click Submit and repeat this process with additional system policies.\nAmazon Deployment Policy An Amazon Deployment Policy defines how EC2 instances will be deployed in AWS. To create an Amazon Deployment Policy go to the Policies drop down and select Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery. Then select the “Amazon Deployment Policy” tab.\nGive the policy a name and description.\nKeypair Type: Either Unique for a new key for each VM, or a single key shared by group.\nEnable CloudWatch: Check if you want AWS CloudWatch monitoring to be enabled. Additional charges apply.\nSecurity Group: This is the AWS Security Group that will be assigned to the virtual machine in AWS.\nFirewall Specifications: The firewall rules created in the security group. These are in the format of: protocol, port_range_start, port_range_end, source_CIDR 32bit VM Instance Type: The image size for a 32 bit VM\n64bit VM Instance Type: The image size for a 64 bit VM\nUser Data: Additional information passed to an AWS EC2 instance at provisioning time to be executed. Example: yum update -y\nSummary A System Policy is required for a Cisco UCS Director Virtual Data Center. These settings will help define how the virtual machine should behave and additional policy may need to be created for KVM or Hyper-V environments. Settings for system policies may differ by type, but are all to add additional customization to the virtual machines.\n","permalink":"https://theithollow.com/2016/10/19/ucs-director-system-policies/","summary":"\u003cp\u003eUCS Director System Policies are kind of a catch all for any settings that need to be defined prior to a virtual machine being deployed, and that don\u0026rsquo;t fit into a neat little category like Network, Storage or Compute. This post reviews two types of system policies: VMware and AWS.\u003c/p\u003e\n\u003ch1 id=\"vmware-system-policy\"\u003eVMware System Policy\u003c/h1\u003e\n\u003cp\u003eThis policy is used to configure things like the Time Zones, DNS Settings, virtual machine naming conventions and guest licensing information. 
The policy can be found under the Policies drop down \u0026ndash;\u0026gt; Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery screen and from there you\u0026rsquo;ll be looking for the VMware System Policy tab.\u003c/p\u003e","title":"UCS Director System Policies"},{"content":"The UCS Director Virtual Data Center construct requires several underlying policies in order to become an item that virtual machines can be deployed on. One of these items is the networking policy which includes IP Pools, VLANs, vNIC rules and port group selection.\nIP Pool Policy Before creating any Network Policies it may be necessary to create an IP Pool Policy. The IP Pool is used to distribute IP Addresses from UCS Director instead of an IPAM solution or DHCP. If either of those methods is to be used, this section can be skipped.\nTo create an IP Pool Policy go to the Policies drop down and select “Virtual/Hypervisor Policies”\u0026ndash;\u0026gt; Network. Then select the “Static IP Pool Policy” tab.\nClick \u0026ldquo;Add\u0026rdquo; to add a new policy. Enter a policy name and description and then you are able to specify whether the IP addresses are allowed to overlap or if they’re assigned to a specific person or container. Click the plus sign to define the pool.\nDefine your IP Pool here along with a subnet mask, gateway and VLAN ID. Then click “Submit”.\nRepeat this process for any additional pools.\nVMware Network Policy The VMware Network Policy determines how vNICs will be placed in the vSphere environments. To create a new VMware Network Policy go to the Policies drop down and select “Virtual/Hypervisor Policies”\u0026ndash;\u0026gt; Network. Then select “VMware Network Policy”.\nYou’ll notice that there may be some default network policies listed here. These can be deleted if you wish to create your own policies from scratch. Network policies are created by default when you add a cloud account.\nClick \u0026ldquo;Add\u0026rdquo; to define a new network policy. Enter a policy name and description. Then select which cloud this policy belongs with.\nFrom here, we’ll enter a list of VM Networks that can be added. Click the Plus button.\nAdd a NIC Alias name and select the options appropriate for your environment.\nAllow end user to choose portgroups: Select this if the person requesting the VM will pick a portgroup manually.\nShow policy level portgroups: Checking this check box along with the \u0026ldquo;Allow end users to choose portgroups\u0026rdquo; check box lists all the selected portgroups of NICs in the policy.\nCopy Adapter Type from Template: Select this to use either Flexible, E1000, VMXNET3, etc. from the vSphere template.\nAllow end user to override IP Address: Use this option to allow the requester to plug in an IP Address instead of using a policy.\nAdapter Type: Define the VM adapter type that will be used, unless you\u0026rsquo;ve selected \u0026ldquo;Copy Adapter Type from Template\u0026rdquo;.\nClick the plus sign to select the port groups associated with your cloud.\nSelect the portgroup that virtual machines will be deployed on and then the IP Address configurations for those VMs. Select Static for the type, and IP Pool Policy for the Address Source to use the IP Pool Policy that was created in the previous section. Otherwise select DHCP.\nClick Submit and OK, three times. Repeat this process for additional clouds and port groups.\nSummary After completing these steps, a VMware Network Policy should be available for selection as part of a VDC. 
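One quick note on the static IP pool piece above: before typing a range into the GUI, it can be handy to enumerate exactly what the pool will hand out. Here is a small illustrative sketch using Python's built-in ipaddress module; the subnet, gateway and VLAN are made-up examples, not values UCS Director requires:

import ipaddress

# Example pool definition to sanity-check before entering it into the Static IP Pool Policy.
network = ipaddress.ip_network("192.168.50.0/24")
gateway = ipaddress.ip_address("192.168.50.1")
vlan_id = 50

# hosts() already excludes the network and broadcast addresses; skip the gateway too.
pool = [ip for ip in network.hosts() if ip != gateway]
print("VLAN {0}: {1} - {2} ({3} usable addresses), mask {4}, gateway {5}".format(
    vlan_id, pool[0], pool[-1], len(pool), network.netmask, gateway))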
Network policies are essential to determine how virtual machines will be placed onto vSphere portgroups. If you have a different environment such as Hyper-V then you\u0026rsquo;ll want to choose a different network policy type, but the same kinds of questions and concepts will still apply.\n","permalink":"https://theithollow.com/2016/10/17/ucs-director-network-policies/","summary":"\u003cp\u003eThe UCS Director Virtual Data Center construct requires several underlying policies in order to become an item that virtual machine can be deployed on. One of these items is the networking policy which includes IP Pools, VLANs, vNic rules and port group selection.\u003c/p\u003e\n\u003ch1 id=\"ip-pool-policy\"\u003eIP Pool Policy\u003c/h1\u003e\n\u003cp\u003eBefore creating any Network Policies it may be necessary to create an IP Pool Policy. The IP Pool is used to distribute IP Addresses from UCS Director instead of an IPAM solution or DHCP. If either of those methods are to be used, this section can be skipped.\u003c/p\u003e","title":"UCS Director Network Policies"},{"content":"The storage policy defines how virtual disks will be deployed on vSphere datastores. This policy will be added to the Cisco UCS Director Virtual Data Center construct to provide a comprehensive policy on how to deploy new virtual machines on VMware vSphere.\nVMware Storage Policies To configure a VMware Storage Policy, go to the Policies drop down “Virtual/Hypervisor Policies” \u0026ndash;\u0026gt; Storage. Then click on the “VMware Storage Policy” tab.\nYou’ll notice that there may be some default storage policies listed here. These can be deleted and you can create your own policies from scratch. VMware storage polices are created by default when you add the cloud. Click \u0026ldquo;Add\u0026rdquo;.\nGive the policy a name and description and then select the cloud associated with the policy.\nUse Linked Clone: If you plan to use a linked clone from your vSphere image, select the check box\nStorage Profile: If you are using VMware VSAN, you can select a storage profile to select the correct datastores based on criteria.\nData Stores/Datastore Clusters Scope: Select which datastores and datastore clusters can be considered for virtual machine deployments as part of this policy.\nBelow this, select the type of storage options that will determine the datastores being used by UCS Director to deploy virtual machines. Be sure to put in a filter that will match your desired datastores and be sure to put in a free space filter so that you won’t over allocate your datastores and knock them offline.\nIn the section below this, allow the resizing of virtual machine disks upon provisioning and enter in any criteria that describes the desired state of the virtual disks. On the bottom check box, you\u0026rsquo;re able to allow your end users to specify the datastores that are part of the scope, during the provisioning request.\nClick “Next”.\nOn the disk policies screen, you\u0026rsquo;re able to set different capabilities for different disk types. By default any additional disks use the system disk policies you created previously.\nSelect Next on the Disk Policies tab.\nIf additional disks are allowed to be added during provisioning time, click the plus sign and add information for the additional disks on the last screen. By default a system disk is the only disk added and you do not need to add one to this screen. 
If you want to create a new disk click the plus button and fill out the required information for an additional data disk.\nThen click \u0026ldquo;Submit\u0026rdquo;.\nRepeat this process for other clouds and other disk policy sets as necessary.\nSummary The VMware Storage Policy is important to complete so that virtual machines will be able to find a suitable datastore and have guardrails around how many disks can be created and what sizes are available. If you have a Hyper-V or KVM environment, please select those tabs and complete those sections. This policy will be added to a VDC for use later on.\n","permalink":"https://theithollow.com/2016/10/17/ucs-director-vmware-storage-policy/","summary":"\u003cp\u003eThe storage policy defines how virtual disks will be deployed on vSphere datastores. This policy will be added to the Cisco UCS Director Virtual Data Center construct to provide a comprehensive policy on how to deploy new virtual machines on VMware vSphere.\u003c/p\u003e\n\u003ch1 id=\"vmware-storage-policies\"\u003eVMware Storage Policies\u003c/h1\u003e\n\u003cp\u003eTo configure a VMware Storage Policy,  go to the Policies drop down “Virtual/Hypervisor Policies” \u0026ndash;\u0026gt; Storage. Then click on the “VMware Storage Policy” tab.\u003c/p\u003e\n\u003cp\u003eYou’ll notice that there may be some default storage policies listed here. These can be deleted and you can create your own policies from scratch. VMware storage polices are created by default when you add the cloud. Click \u0026ldquo;Add\u0026rdquo;.\u003c/p\u003e","title":"UCS Director VMware Storage Policy"},{"content":"Yesterday it was announced that VMware and Amazon Web Services are partnering to provide vSphere\u0026rsquo;s hypervisor and toolsets on the AWS platform. Since this time there have been plenty of articles written questioning the motives of both parties involved and whether or not one of these two companies is going to regret this partnership. I invite you to read other perspectives on this and a few of them are listed here: Cloud Opinion, Enrico Signoretti, Frank Denemman (VMware), Jeff Barr (AWS) and there will be more.\nWhat Does VMware Get Out of This? VMware gets a quick, short term entry point into the clear number 1 public cloud vendor\u0026rsquo;s infrastructure. VMware has been trying to get vCloud Air off the ground and never really generated any lift. This deal, will allow VMware to run their solid hypervisor inside Amazon\u0026rsquo;s data centers and give users access to regions that would have been very costly for VMware to get into. AWS is likely providing the data center gear (Power, Cooling, Networking, Security) already and VMware just needs to manage the hypervisor, servers and overlays to their product such as vCenter. VMware seems to understand that customers are moving to the public cloud anyway, and want to have a viable solution ready for them before they make the decision to move. For VMware this is a way to hold on to customers who may be migrating to public cloud for a little while longer.\nMy Take: This is a good move for VMware for the short term, but probably also shortsighted.\nWhat Does AWS Get Out of This? Amazon Web Services Q2 revenue numbers reached 2.8 Billion and show no signs of slowing down. 
Cloud adoption is still increasing quarter after quarter and many customers are feeling pressure to move to an operating expense model where you pay for what you use, instead of depreciating equipment over three to five years regardless of how much it\u0026rsquo;s being utilized. Amazon has grabbed many of the early adopters but in order to continue this growth without a diminishing return, new revenue streams always have to be considered. Amazon should have little trouble bringing in new revenue from startups and development shops, but many of the larger enterprises have such a large investment in VMware that they aren\u0026rsquo;t going to re-architect their applications for cloud at a very rapid pace. Sure, these customers will do some new development and architect it for cloud but what about all of the older \u0026ldquo;legacy\u0026rdquo; solutions that they have to maintain? This VMware partnership makes it very simple to \u0026ldquo;lift and shift\u0026rdquo; the workloads out of the physical data center and into one of AWS\u0026rsquo;s data centers. How do I know that these companies aren\u0026rsquo;t going to be rapidly refactoring their applications to move them to the cloud, you ask? Because I work with companies every day that still have physical servers, mainframes, and other solutions that haven\u0026rsquo;t been able to migrate to a virtualized environment yet.\nSo, Amazon gets some more revenue by making it easy to move workloads into their data centers, right? Many of us already know that just picking up a workload and sticking it in the public cloud will work, but it usually becomes more expensive if the application isn\u0026rsquo;t re-architected to take advantage of scaling for on-demand resources. While VMware is busy selling AWS\u0026rsquo;s services, customers may start to see that these costs are just too expensive to deal with, at which time Amazon can swoop in and show customers how they can eliminate costs by moving off the VMware solution and onto native AWS capabilities. I mean, the workloads are already in Amazon\u0026rsquo;s data center so this should be an easy transition.\nMy Take: Amazon is the clear winner in the long game. VMware will be selling Amazon\u0026rsquo;s product indirectly and getting users to adopt the AWS public cloud. Later, Amazon will swallow up those workloads anyway.\nWhat Does the Customer Get Out of This? The good news in all of this is that the customer really wins. There is significant pressure on CIOs to move to \u0026ldquo;cloud\u0026rdquo; because it\u0026rsquo;s cheaper or to move to this Opex model for accounting purposes, or any of the other reasons businesses think cloud is important. This solution will make it easy for CIOs that have initiatives to move a percentage of their data centers to the cloud by 2018. The migration should be pretty seamless, with little downtime, no re-training of administrators, and no re-architecting needed by developers. The straight-to-AWS option is still there, the on-premises option is still there and now there is a new option which is VMware on AWS. Choice should always be good for a customer and this partnership gives them another one.\nMy Take: This additional choice will be an excuse for System Administrators to be complacent and to not learn new skills that may help them in their career. Instead of learning new skills, admins can be content to leverage their existing skill sets. 
Besides this caveat though, customers will have an excellent choice for hitting their corporate mandates without putting the environment at risk by sloppily moving things to a new platform.\nSummary Its WAY too soon to tell whether any of this is going to really play out like I see it, but these are my predictions. Hopefully this is one of those deals where it works out for everyone involved.\n","permalink":"https://theithollow.com/2016/10/14/aws-and-vmware-what-is-happening-here/","summary":"\u003cp\u003eYesterday it was announced that VMware and Amazon Web Services are partnering to provide vSphere\u0026rsquo;s hypervisor and toolsets on the AWS platform. Since this time there have been plenty of articles written questioning the motives of both parties involved and whether or not one of these two companies is going to regret this partnership. I invite you to read other perspectives on this and a few of them are listed here: \u003ca href=\"https://medium.com/@cloud_opinion/aws-blinked-20cddbb537ed#.z7ghrynut\"\u003eCloud Opinion\u003c/a\u003e, \u003ca href=\"http://www.juku.it/en/vmwonaws-really-cool-not/\"\u003eEnrico Signoretti\u003c/a\u003e, \u003ca href=\"https://blogs.vmware.com/vsphere/2016/10/vmware-cloud-on-aws-a-closer-look.html\"\u003eFrank Denemman\u003c/a\u003e (VMware), \u003ca href=\"https://aws.amazon.com/blogs/aws/in-the-works-vmware-cloud-on-aws/\"\u003eJeff Barr\u003c/a\u003e (AWS) and there will be more.\u003c/p\u003e","title":"AWS and VMware, What is Happening Here?"},{"content":"Cisco UCS Director 6 is a cloud management platform that can deploy virtual machines and services across vSphere, KVM, Hyper-V and AWS endpoints. UCS Director will manage the orchestration, lifecycle and governance of virtual machines deployed through it and can also help in the automatic provisioning of hardware resources. Cisco has plenty of documentation on how to click the buttons to create constructs used for deployment, but I was not able to find any great resources on what order they should be performed in and why I\u0026rsquo;m making the choices in the GUI. If you follow this guide in the order of posts listed, it should help you to get a Cisco UCS Director 6 environment setup and be able to use it to deploy virtual resources. This guide does not cover many of the additional benefits that UCSD can provide when dealing with a physical environment. I hope that this guide can give you a good starting point on how the solution works and what you can do with it.\nCisco UCS Director Basic Setup Configurations Cisco UCS Director Infrastructure Setup Cisco UCS Director Computing Policies Cisco UCS Director Network Policies Cisco UCS Director VMware Storage Policies Cisco UCS Director System Policies Cisco UCS Director Cost Models Cisco UCS Director VMware Management Policies Cisco UCS Director End User Self-Service Policies Cisco UCS Director VDCs Cisco UCS Director Catalogs Cisco UCS Director Catalog Permissions Assignments Cisco UCS Director Catalog Request Additional Resources Cisco UCS Director Official Documentation - https://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-director-6-0/model.html\nCisco UCS Director Community Workflows - https://communities.cisco.com/docs/DOC-56419\n","permalink":"https://theithollow.com/2016/10/13/cisco-ucs-director-6-guide/","summary":"\u003cp\u003eCisco UCS Director 6 is a cloud management platform that can deploy virtual machines and services across vSphere, KVM, Hyper-V and AWS endpoints. 
UCS Director will manage the orchestration, lifecycle and governance of virtual machines deployed through it and can also help in the automatic provisioning of hardware resources. Cisco has plenty of documentation on how to click the buttons to create constructs used for deployment, but I was not able to find any great resources on what order they should be performed in and why I\u0026rsquo;m making the choices in the GUI. If you follow this guide in the order of posts listed, it should help you to get a Cisco UCS Director 6 environment set up and be able to use it to deploy virtual resources. This guide does not cover many of the additional benefits that UCSD can provide when dealing with a physical environment. I hope that this guide can give you a good starting point on how the solution works and what you can do with it.\u003c/p\u003e","title":"Cisco UCS Director 6 Guide"},{"content":"The Computing Policies determine how vCPUs and vMEM will be assigned to a virtual machine deployed through UCS Director as well as which clusters and hosts can have virtual machines placed on them.\nAdd a VMware Computing Policy To add a computing policy go to the Policies drop down and select “Virtual/Hypervisor Policies” \u0026ndash;\u0026gt; Computing. Then select the VMware Computing Policy tab.\nYou’ll notice that there may be some default VMware computing policies listed here. These can be deleted and you can create your own policies from scratch. VMware computing policies are created by default when you add the cloud.\nClick \u0026ldquo;Add\u0026rdquo;.\nGive the policy a name and description, then select the cloud that it\u0026rsquo;s associated with, any resource pools and a filter condition to determine the correct host to land on. Memory is typically used to determine host placement.\nHost Node/Cluster Scope: Select a cluster or group of hosts that may be used to deploy virtual machines.\nResource Pool: The resource pool doesn\u0026rsquo;t have to be selected, but you can select any VMware resource pools you want, or it will select the default \u0026ldquo;resources\u0026rdquo; pool at the root of your cluster.\nESX Type: Select ESX or ESXi. SIDE NOTE: you should really be on ESXi by now.\nESX Version: You may be able to filter by ESX/ESXi version here.\nFilter Conditions: Select criteria for determining which hosts will have new virtual machines deployed on them. It\u0026rsquo;s important to select some criteria to ensure that hosts aren\u0026rsquo;t over-utilized. Memory Usage %, CPU Usage % and Memory Swap are good options.\nFurther down on the setup screen, you can select virtual machine options such as:\nPermitted Values for vCPUs: This is the number of CPUs that a VM can have. Separated by commas.\nPermitted Values for Memory in MB: The amount of virtual memory that a VM can have assigned to it. Separated by commas.\nDeploy to Folder: Virtual machine folder in vSphere that new virtual machines will be listed under.\nClick Submit and then repeat this process for any additional clouds or policies that are in use.\nSummary Computing policies are a requirement for creating a Virtual Data Center in UCS Director. 
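As a side note on those permitted-value fields, they are plain comma-separated lists, so it is easy to sanity-check a requested size against a policy before anyone files a request. A tiny illustrative sketch follows; the permitted values shown are examples, not defaults from the product:

# Illustrative check of a requested VM size against a computing policy's permitted values.
permitted_vcpus = "1,2,4,8"                  # example "Permitted Values for vCPUs" entry
permitted_memory_mb = "1024,2048,4096,8192"  # example "Permitted Values for Memory in MB" entry

def is_permitted(value, permitted_csv):
    return value in {int(v) for v in permitted_csv.split(",")}

print(is_permitted(4, permitted_vcpus))         # True
print(is_permitted(3072, permitted_memory_mb))  # False: 3 GB isn't in the list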
If you have a KVM or Hyper-V infrastructure, these options may be a bit different, but a computing policy for each of these should be created as well.\n","permalink":"https://theithollow.com/2016/10/13/ucs-director-computing-policy/","summary":"\u003cp\u003eThe Computing Polices determine how vCPUs and vMEM will be assigned to a virtual machine deployed through UCS Director as well as which clusters and hosts can have virtual machines placed on them.\u003c/p\u003e\n\u003ch1 id=\"add-a-vmware-computing-policy\"\u003eAdd a VMware Computing Policy\u003c/h1\u003e\n\u003cp\u003eTo add a computing policy got to the Policies drop down and select “Virtual/Hypervisor Polices” \u0026ndash;\u0026gt; Computing. Then select the VMware Computing Policy tab.\u003c/p\u003e\n\u003cp\u003eYou’ll notice that there may be some default VMware computing policies listed here. These can be deleted and you can create your own policies from scratch. VMware computing polices are created by default when you add the cloud.\u003c/p\u003e","title":"UCS Director Computing Policy"},{"content":"UCS Director is a cloud management platform and thus requires some infrastructure to deploy the orchestrated workloads. In many cases UCS Director can also orchestrate the configuration and deployment of bare metal or hardware as well, such as configuring new VLANs on switches, deploying operating systems on blades and setting hardware profiles etc. This post focuses on getting those devices to show up in UCS Director so that additional automation can be performed.\nCreate Pods A Pod is a logical grouping of infrastructure objects. For instance, a pod may include vCenters, UCS Blades, Nexus switches and a storage array. This could be a packaged pod like a Vblock or Flexpod, or it could be a generic pod meaning that the equipment is put together on your own. By default, a pod is created out of the box and is considered the default pod but you can create your own pods by navigating to the Administration drop down and select “Physical Accounts”. From here select the “Pods” tab.\nClick Add and give the pod a name, type and select a site that was previously created. Fill out the rest of the information and select “Add”. If you\u0026rsquo;re not working with a pre-packaged pod like a Vblock, select \u0026ldquo;Generic\u0026rdquo; for the type.\nRepeat this step for any additional pods that should be created.\nCredential Policies Before adding accounts that will grant UCS Director access to physical and virtual infrastructure solutions, it’s beneficial to create a credential policy which saves the accounts as a template. To setup these credential policies go to the Policies drop down and select “Physical Infrastructure Policies” \u0026ndash;\u0026gt; “Credentials Policies”.\nClick \u0026ldquo;Add\u0026rdquo; to add a new policy.\nThe credential policy setup screen will be different depending on which type of account is getting a new policy. The example below shows a VMware credential policy and requires a username/password combination as well as a port and access URL. A credential policy for a network switch may look vastly different.\nOnce the Credential Policy information has been created click “Submit” and then continue to add additional policies for any other technologies that will be in use. This could include firewalls, UCS Manager credentials, VMAX, VNX and other storage devices.\nCreate Virtual Accounts Virtual Accounts are the method in which UCSD communicates with vCenter, AWS or Microsoft Virtual Machine Managers. 
These “Virtual Accounts” are the connections used between UCSD and these endpoints. Once created, these virtual endpoints will be known as a “Cloud”.\nTo create a Virtual Account go to the Administration drop down and select “Virtual Accounts”. Click \u0026ldquo;Add\u0026rdquo; to add a virtual account. Select the cloud type and based on that decision different requirements will be asked of you. This example uses VMware.\nEnter a name for the cloud that is created here and fill out any information about connecting to that endpoint. Since credential policies were created earlier, you may select that check box to not have to type in any user ID or passwords.\nOnce you get down to the Datacenters and Clusters, you’ll notice a checkbox that says “Discover Datacenters/Clusters”. If you click this, you’ll see a list of datacenters and clusters discovered by UCS Director. This is also a good way to test the connectivity between UCSD and the Cloud Endpoint.\nAfter selecting the correct information be sure to assign this cloud to one of the pods that you created earlier so that it\u0026rsquo;s associated with your equipment correctly.\nWhen finished click “Add”. Then add any additional vCenters, AWS endpoints and any other virtual endpoints in the design.\nCreate Physical Accounts In the same manner that virtual accounts are created, a physical account should be setup for each of the physical devices that will be managed with UCS Director. To add a physical account go to the Administration drop down and select “Physical Accounts”, then select the Physical Accounts tab.\nClick the \u0026ldquo;Add\u0026rdquo; button and enter in the Pod in which the equipment should be assigned, the category of the account (either Computing or Storage) and the account type. The example below assumes a Cisco UCS Manager.\nOnce you enter the basic information about the account, additional questions will be asked such as the login methods, and which pieces of the equipment should be managed by UCS Director. The questions about the account may be different depending on the type of account being setup. The example below uses a a UCS Manager configuration setup.\nOnce done repeat this process with any additional physical accounts.\nManaged Network Elements A managed network element is similar to a physical account. A managed network element needs to be created and added to a pod just like the computing and storage elements. To add a new networking switch, F5, firewall or storage switch go to the Administration drop down and select “Physical Accounts”. From there select the “Managed Network Element” tab.\nClick the “Add Network Element” tab. Enter in the IP addressing, category and port information before clicking \u0026ldquo;Submit\u0026rdquo;. Be sure to place the Managed network Element in your correct pod.\nAfter creating the first device, repeat this process for any other network devices.\nSummary Once you\u0026rsquo;ve completed all your credential policies, account setup and pods, you\u0026rsquo;ll likely have a pod that looks similar to the screenshot below. Not every piece of your environment needs to be part of a pod, but anything that is to be automated should be in here. 
Remember that if you want to change your sites, you can select the side drop down and show different pods by your site definitions.\n","permalink":"https://theithollow.com/2016/10/12/ucs-director-infrastructure-setup/","summary":"\u003cp\u003eUCS Director is a cloud management platform and thus requires some infrastructure to deploy the orchestrated workloads. In many cases UCS Director can also orchestrate the configuration and deployment of bare metal or hardware as well, such as configuring new VLANs on switches, deploying operating systems on blades and setting hardware profiles etc. This post focuses on getting those devices to show up in UCS Director so that additional automation can be performed.\u003c/p\u003e","title":"UCS Director Infrastructure Setup"},{"content":"The basic deployment of UCS Director consists of deploying an OVF file that is available from the Cisco downloads site. This post won\u0026rsquo;t go through the deployment of the OVF but this should be a pretty simple setup. The deployment will ask for IP Addressing information and some passwords. Complete the deployment of the OVF in your virtual environment and then continue with this post.\nOnce the OVF has been deployed, open a web browser and place the IP Address of the appliance in the address bar.\nLogin to the portal with the default username and password:\nUsername: admin Password: admin\nOnce you\u0026rsquo;re logged in, the default password should be changed and UCS Director will prompt you to change this.\nChange the password and click \u0026ldquo;save\u0026rdquo;.\nAfter you’ve changed the password, you’ll need to re-login to the UCS Director portal.\nClose the guided setup wizard.\nMail Setup Setup the mail server to send alerts from UCS Director. To do this go to the Administration Drop Down and select System. Then click the Mail Setup tab and enter in the information for the SMTP server.\nNOTE: The \u0026ldquo;Server IP Address\u0026rdquo; is not the IP Address of the mail server, but should match the IP Address of the UCS Director appliance. It is used for approval links in the emails.\nAdd Licenses The Cisco UCS Director appliance comes with a trial license for 90 days. To add another license go to the Administration drop down and select license. Click the Update License and provide the license that was purchased.\nLDAP Configuration Before going through a bunch of setup processes, it\u0026rsquo;s nice to be able to login to UCS Director with the accounts from your own Active Directory environment. The Active Directory setup will sync user accounts from AD and then re-sync them on a schedule. Setup the connection with Active Directory by navigating to the Administration drop down \u0026ndash;\u0026gt; Ldap Integration. Then click \u0026ldquo;Add\u0026rdquo; to add a new LDAP account.\nNote: it might be a good idea to take a snapshot of your appliance before setting this up. If you have a large directory and sync the whole thing this may take a long time and the cleanup can be tedious. A backup might be very useful.\nFill out the first page of the LDAP Server Configuration Wizard that makes the connection with the Active Directory domain controller. 
Then click “Next”.\nFill out the information in the wizard such as:\nAccount Name: This is the name that UCSD will use to identify the directory.\nServer Type: What type of LDAP connection is being used.\nServer: Server Address for your LDAP connection\nPort: Which port should be used to connect to the LDAP server\nDomain Name: What is the domain for your Active Directory\nUsername / Password: This is an account with read permissions on your AD instance for UCSD to query accounts\nSynchronization Frequency: How long between LDAP Syncs? The minimum frequency is 1 hour\nOn the next screen select the base AD Object. This is the organization unit that will sync with Cisco UCS Director on the scheduled frequency. Many times it\u0026rsquo;s useful to only sync a specific OU instead of an entire directory tree for performance reasons. I created a new OU just for UCSD and place any of my users and groups in this OU. You can create your structure however it seems best for your organization. Then click “Next”.\nOn the user and group filters, you may filter out unnecessary users and groups from syncing with UCSD but at least 1 filter must be created. I use a default filter that looks for anything just to keep it simple. My filtering has already come from the OU selection that I created earlier. Create a filter and then click “Next”.\nOn the final screen, you may add an LDAP Role Filter but do not have to add one. Click “Submit”.\nAdd Users and Groups Now that LDAP has been synced go to the Administration drop down and select “Users and Groups”. You should see a list of Active Directory Groups that have been synced under the “User Groups” tab as well as a list of users listed under the “Users” tab. Here you can select any users that require elevated privileges and then click “Edit”.\nThe properties of the user will open and you can enter in any additional details such as an email address that will be used by UCSD and change the User Role from the default of “Service End-User” to something else like a System Admin. Then click “Save”.\nCreate Sites To make UCSD more manageable, it uses the concept of sites to help limit the number of objects that are displayed. You may switch between sites whenever necessary, but you should setup sites first before adding hardware or virtualization endpoints.\nTo add a Site go to the Administration drop down and the select “Physical Accounts”. Under the “Site Management” tab click \u0026ldquo;Add\u0026rdquo; to enter in information about your site. Click \u0026ldquo;Submit\u0026rdquo; and then enter in any additional sites that UCS Director will manage.\nSummary This post has walked you through the basic setup tasks that you should complete before configuring infrastructure and orchestration components. Much like other solutions a solid foundation will help speed up the time required for the rest of your configurations. In the next post we\u0026rsquo;ll cover some infrastructure setup tasks.\n","permalink":"https://theithollow.com/2016/10/11/ucs-director-basic-setup-configurations/","summary":"\u003cp\u003eThe basic deployment of UCS Director consists of deploying an OVF file that is available from the Cisco downloads site. This post won\u0026rsquo;t go through the deployment of the OVF but this should be a pretty simple setup. The deployment will ask for IP Addressing information and some passwords. 
Complete the deployment of the OVF in your virtual environment and then continue with this post.\u003c/p\u003e\n\u003cp\u003eOnce the OVF has been deployed, open a web browser and place the IP Address of the appliance in the address bar.\u003c/p\u003e","title":"UCS Director Basic Setup Configurations"},{"content":"Cisco UCS Director utilizes the idea of a Virtual Data Center (VDC) to determine how and where virtual machine should be placed. This includes which clusters to deploy to, networks to use, datastores to live on, as well as the guest customization and cost models that will be used for those virtual machines. According to the UCS Director Administration Guide, a Virtual Data Center is \u0026ldquo;a logical grouping that combines virtual resources, operational details, rules, and policies to manage specific group requirements\u0026rdquo;. Cisco UCS Director VDCs are the focal point of a virtual machine deployment.\nIf you\u0026rsquo;re trying to understand how to get things setup for your own implementation you should really be focusing on the underlying policies that will make up that VDC. The Virtual Data Center is really used to aggregate all of those individual polices into a single construct so that virtual machines will have all of the information necessary to be deployed in your environment.\nThe image below shows a list of policies that are necessary to be created before you start building your Virtual Data Center.\nService Delivery Policy - This policy is used to configure things like the Time Zones, DNS Settings, virtual machine naming conventions and guest licensing information. The policy can be found under the Policies drop down \u0026ndash;\u0026gt; Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery screen and from there you\u0026rsquo;ll be looking for the VMware System Policy or Hyper-V System Policy tabs. Computing Policy - The computing policy is used to make decisions like which clusters should be used, if any affinity rules should be applied and the number of virtual CPUs and virtual Memory can be assigned to virtual machines. The policy can be found under the Policies drop down \u0026ndash;\u0026gt; Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Computing and from there look for your hypervisor type. Network Policy - The network policy is used to configure the number of vNICs assigned to a virtual machine, what networks they can be placed on, what virtual adapter type should be used and how they\u0026rsquo;ll obtain an IP Address (Static, DHCP). If you choose static, an IP Pool Policy may also need to be created to dole out the IP Addresses. The policy can be found under the Policies drop down \u0026ndash;\u0026gt; Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Network and from there look for your hypervisor type. Storage Policy - Much like the computing and network policies this is used to determine which datastores are used to deploy virtual machines and what virtual disk sizes can be selected for new virtual machines. The policy can be found under the Policies drop down \u0026ndash;\u0026gt; Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Storage and from there look for your hypervisor type. Cost Model - The Cost model determines how much your virtual machines are costing the company or business unit. For the cost model to work several parameters need to be determined such as one time costs, active/inactive vm costs, CPU costs, memory costs, network costs, storage costs etc. 
Be prepared for this if you\u0026rsquo;re doing any kind of chargeback/showback in your organization. Filling this out is simple, but finding these costs may be very time consuming. The policy can be found under the Policies drop down \u0026ndash;\u0026gt; Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery and then the Cost Model tab. User Action Policy - User Actions are a way to add new day 2 operations to an object. I\u0026rsquo;ve written about this in the past where you can add a custom workflow to a virtual machine once it\u0026rsquo;s been deployed. This could be an action to snapshot a virtual machine, add it to DNS, reconfigure it or whatever automation task you can write. Once you setup your orchestration workflow, the policy can be created by going to the Policies drop down \u0026ndash;\u0026gt; Orchestration and then the User VM Action Policy tab. VM Management Policy - The VM Management Policy is used to configure leases, notifications about when leases expire, and determining when a VM is inactive. This policy is very useful to keep your cloud clean, and removing unneeded virtual machines when they\u0026rsquo;re past their usefulness. The policy can be found under the Policies drop down \u0026ndash;\u0026gt; Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery and then the VM Management Policy tab. End User Self-Service Policy - This policy is much like a User Action Policy where you\u0026rsquo;re selecting which day 2 operations are available to the virtual machines. The difference here is that these are out of the box capabilities that you\u0026rsquo;re adding. The policy can be found under the Policies drop down \u0026ndash;\u0026gt; Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery and then the End User Self-Service Policy tab. Once you\u0026rsquo;ve created all of the above policies you\u0026rsquo;re now ready to create a Virtual Data Center in UCS Director. The VDC Setup will need some basic information like a name and description and some approvers that will be used to approve new virtual machine deployments, but after that you\u0026rsquo;ll simply need to select from the polices that you created above. Now that you have a Virtual Data Center created, your catalogs will have to make a single selection during deployment that will contain all of the settings necessary to deploy it in your environment.\nAfter virtual machines have been deployed to your VDCs you can monitor the statistics for those virtual data centers by going to the Virtual drop down \u0026ndash;\u0026gt; Compute and then selecting the vDCs tab. All of the VDCs for your environment will be listed and then you can drill down on them to get more statistics.\n","permalink":"https://theithollow.com/2016/10/10/cisco-ucs-director-vdcs/","summary":"\u003cp\u003eCisco UCS Director utilizes the idea of a Virtual Data Center (VDC) to determine how and where virtual machine should be placed. This includes which clusters to deploy to, networks to use, datastores to live on, as well as the guest customization and cost models that will be used for those virtual machines. According to the UCS Director Administration Guide, a Virtual Data Center is \u0026ldquo;a logical grouping that combines virtual resources, operational details, rules, and policies to manage specific group requirements\u0026rdquo;. 
Cisco UCS Director VDCs are the focal point of a virtual machine deployment.\u003c/p\u003e","title":"Cisco UCS Director VDCs"},{"content":"One of the new features of vRealize Automation in version 7.1 is the ability to scale out or scale in your servers. This sort of scaling is a horizontal scaling of the number of servers. For instance, if you had deployed a single web server, you can scale out to two, three etc. When you scale in, you can go from four servers to three and so on.\nUse Cases The use cases here could really vary widely. The easiest to get started with would be some sort of a web / database deployment where the web servers have some static front end web pages and can be deployed over and over again with the same configurations. If we were to place the web servers behind a load balancer (yep, think NSX here for you vSphere junkies) then your web applications can be scaled horizontally based on when you run out of resources.\nThe example that we\u0026rsquo;ll see in this post are done manually, but you could get really elaborate and have a monitoring solution like vRealize Operations, trigger a script or vRO workflow whenever CPU % got too high on your web servers and automatically scaled out using the vRA API. This would be like a homemade version of Amazon\u0026rsquo;s Auto Scaling Groups in AWS.\nScale a vRealize Automation Item To get started, I\u0026rsquo;ve chosen a multi-machine blueprint that I downloaded from vmtocloud.com that deploys some Docker hosts and a Docker Swarm server to manage them. I\u0026rsquo;ve placed these servers in an on-demand network through NSX. For your deployment tests, any multi-machine deployment will work fine, but I wanted to show NSX and the great work done by Ryan Kelly over at vmtocloud.com some love.\nTo start I\u0026rsquo;ve deployed a catalog item that has a single Docker host (Docker 15 in the screenshot) and a Swarm Host (Swarm01 in the screenshot). We\u0026rsquo;re going to scale out the Docker host because evidently, we\u0026rsquo;ve got so many containers running, we need another host. To start click on the Actions dropdown on the item.\nUnder the Actions drop down, choose \u0026ldquo;Scale Out\u0026rdquo; if you wish to add additional servers and \u0026ldquo;Scale In\u0026rdquo; if you wish to decrease the number of servers used.\nOnce the Scale Window appears, select the object that you wish to scale and change the number of virtual machines to your desired amount. The number of machines provisioned is still dependent upon the rules that were created during the blueprint creation. Remember this if you find that you can\u0026rsquo;t scale out an item, check the underlying blueprint to ensure that more than one virtual machine is available to be deployed.\nOnce you hit \u0026ldquo;Submit\u0026rdquo; a summary dialog window will open to show you what the changes to the item will be when finished. Click OK if your settings look correct.\nWhen the deployment finishes, the vRA item should show the additional objects that you have requested to scale out to. In this case an additional virtual machine named \u0026ldquo;Docker16\u0026rdquo; was deployed.\nIt\u0026rsquo;s important to note that when scaling out an item through this process, event subscriptions still run as they should. For example, whenever a machine is deployed in my lab, a vRO workflow is kicked off through an event subscription that adds the machine to Active Directory. 
Scaling out still executes this event subscription workflow.\n","permalink":"https://theithollow.com/2016/10/06/scaling-vrealize-automation/","summary":"\u003cp\u003eOne of the new features of vRealize Automation in version 7.1 is the ability to scale out or scale in your servers. This sort of scaling is a horizontal scaling of the number of servers. For instance, if you had deployed a single web server, you can scale out to two, three etc. When you scale in, you can go from four servers to three and so on.\u003c/p\u003e\n\u003ch1 id=\"use-cases\"\u003eUse Cases\u003c/h1\u003e\n\u003cp\u003eThe use cases here could really vary widely. The easiest to get started with would be some sort of a web / database deployment where the web servers have some static front end web pages and can be deployed over and over again with the same configurations. If we were to place the web servers behind a load balancer (yep, think NSX here for you vSphere junkies) then your web applications can be scaled horizontally based on when you run out of resources.\u003c/p\u003e","title":"Scaling in vRealize Automation"},{"content":"Azure scale sets are a way to horizontally increase or decrease resources for your applications. Wouldn\u0026rsquo;t it be nice to provision a pair of web servers behind a load balancer, and then add a third or fourth web server once the load hit 75% of capacity? Even better, when the load on those web servers settles down, they could be removed to save you money? This is what an Azure scale set does. Think of the great uses for this; seasonal demand for a shopping site, event promotions that cause a short spike in traffic, or even end of the month data processing tasks could automatically scale out to meet the demand and then scale in to save money when not needed.\nBuilding a Scale set In the Azure resource manager portal, browse the services and click on \u0026ldquo;virtual machine scale sets\u0026rdquo; item. From there, enter in the information below for the set.\nVirtual machine scale set name: A name for the scale set you\u0026rsquo;re deploying. Make this descriptive for documentation purposes of course. OS type: The operating system that will live behind the scale set. Choose Windows or Linux. Username: The username that has permissions on the virtual machines. This user account is used to monitor performance of the virtual machine so that it can be scaled out when needed. Password: The password associated with the user account. Subscription: The subscription account used to pay for the services. Resource Group: A group of resources that will be used in conjunction with each other. Location: The region where the scale set will be deployed. On the next blade, we\u0026rsquo;ll need to enter some information about what will exist in the scale set and some logic to determine when to scale out or in.\nPublic IP address: The IP Address to be assigned to the public side of the load balancer. Domain name label: The domain name that will be associated with the load balancer. Operating system disk image: The OS image that will be housed behind the scale set load balancer. The list of images will be dependent upon the OS type you selected earlier. Instance count: The number of virtual machines to be deployed to begin with. Scale set virtual machine size: The size of each of the virtual machines in the scale set. Number of CPU, amount of memory, etc. Autoscale: The option to have it automatically scale for you or to require manual intervention. 
Autoscale minimum number of virtual machines: The lowest number of virtual machines in your scale set. Remember to think about redundancy here. You may not want to let the set go down to a single host. Autoscale maximum number of virtual machines: The largest number of virtual machines that can be in the scale set. You might think this should be a really high number, but consider your workloads. Scale out CPU Percentage threshold: The performance metric used to determine if more servers are required. Number of VMs to increase by on scale out: When it\u0026rsquo;s time to scale out your machines, how many more machines should be added at a time? Scale in CPU Percentage threshold: The percentage when you\u0026rsquo;ll scale in your virtual machines. For example, machines hitting under 25% utilization means removing a virtual machine from the scale set. Number of VMs to decrease by on scale in: The number of virtual machines to remove when a scale in action is triggered. On the next blade we\u0026rsquo;ll need to review the summary after it has passed all of the validation checks.\nThe last blade requires us to accept the terms of use and purchase the services. Click Finish and your scale set will be created in the Azure Resource Manager portal.\n","permalink":"https://theithollow.com/2016/10/03/azure-scale-sets/","summary":"\u003cp\u003eAzure scale sets are a way to horizontally increase or decrease resources for your applications. Wouldn\u0026rsquo;t it be nice to provision a pair of web servers behind a load balancer, and then add a third or fourth web server once the load hit 75% of capacity? Even better, when the load on those web servers settles down, they could be removed to save you money? This is what an Azure scale set does. Think of the great uses for this; seasonal demand for a shopping site, event promotions that cause a short spike in traffic, or even end of the month data processing tasks could automatically scale out to meet the demand and then scale in to save money when not needed.\u003c/p\u003e","title":"Azure Scale Sets"},{"content":"The Chicago chapter of the VMware Users Group had its annual conference at the Rosemont Convention Center on Thursday of last week and it was again a success thanks in no small part to the VMUG corporate team. Over six hundred people walked through the doors to experience sponsored sessions, community sessions, keynotes from Kit Colbert and Phoummala Schmidt, as well as plenty of other fun things. This was the fourth official Chicago VMUG Conference that I\u0026rsquo;ve attended as a member of the leadership team. This was also my final event as a leader. Typically I use this blog as a place to post technical information, but in this case I felt it was important to reflect on what this group meant to me.\nOut of my Comfort Zone When I first started attending VMUGs I was a Systems Administrator for a collection agency and my day to day role was administering the virtualization environment. I attended my first event to learn more about the products and other ways the solutions could be used to help me with my job. What I found was a community of passionate people who were also trying to learn more to help their company, boost their career, and mingle with people who had similar interests. I remember some great presentations on products and vividly remember a session about the vCLI. There were some giveaways and lunch provided, it was great. 
Where else could I go to a free session to learn more stuff that I wouldn\u0026rsquo;t get to see during my daily job activities? I knew then that I was signing up for the next event a few months later.\neric_srm2\nAt the second meeting someone asked for volunteers to present on a topic. At first I kind of hid my face like the kid in school who didn\u0026rsquo;t want the teacher to call on him/her, but soon I decided that I wanted to present. I knew some products pretty well and had spent plenty of hours working with VMware Site Recovery Manager, so why not? I nervously presented what my company had done at the next meeting and when I was done, I felt great about it. I came to find out that the people in the audience weren\u0026rsquo;t the only people that were learning something from the presentation. I began to realize that preparing for speaking in front of a group made me learn the material much more in depth. No one wants to get up in front of people and tell them something that isn\u0026rsquo;t true. It could be embarrassing if someone knows the real answer, and worse, someone might act on your information. So it turns out that I learned a lot about a product I thought I already knew very well, just by presenting on it.\nSoon after the presentation, I was asked to volunteer and help out with the leadership team as part of the steering committee and later went on to be a full fledged leader. Why not, I had talents that I could leverage and just by speaking at my first meeting, I could already see some doors opening for my career.\nOpening New Doors Indianapolis VMUG User Conference 2016\nVMUG itself isn\u0026rsquo;t responsible for opening any new doors to my career, but it was the avenue that allowed me to take advantage of new opportunities. I started speaking at more meetings and at conferences and soon switched jobs and went into consulting. VMUG and my new job sort of fed off of each other and I was learning at an increased rate. All of this new information made me want to share it even more which made me focus more on my blog, and additional speaking events. I eventually spoke at some conferences in other parts of the country including Connecticut, Philadelphia and Indianapolis. These conferences had between 400-900 attendees so now I was speaking in front of a larger audience. It was a new level of fear, but each time I did one of these sessions I honed my craft and learned more about myself. The confidence I gained from these engagements helped me in my job, speaking with customers, and helping to feed my desire to share information with other users who might need a hand.\nEventually, I decided it was time to move my career forward and I went to work at AHEAD, here in Chicago. AHEAD was one of the top consulting companies and I knew I\u0026rsquo;d be thrown into a lot of new things that I didn\u0026rsquo;t yet understand but I could gain a lot of knowledge in a short period of time. I knew in the back of my mind that a top tier certification might be achievable if I committed to embracing all of the things I might learn soon. The move over to AHEAD was a scary change since I had just bought a new house and I was making very good money at my previous job, but knew I needed to take a step back if I wanted to move forward with something more lasting. 
VMUG is one of the reasons that I took this chance on a new company.\nPeople Knowledge is a great thing that comes out of the VMUG organization, but by far the most important thing that came out of my time as a VMUG Leader were the people. Through VMUG and transitively my move to AHEAD, I finally found a few select people that I would consider mentors which was something I previously had been lacking in my career. This is not to say that I had bad leaders, or dumb people that I had worked with before, but either directly or indirectly due to VMUG I met some people that were invested in my career development. I got encouragement and advice with my career, with my blog, with how I approached designs and found a great partner to attempt the VMware Certified Design Expert (VCDX) certification with. You gentleman know who you are, thank you for your support and encouragement.\nThe corporate team from VMUG made the meetings very simple to put together and made the leaders look much better than we probably deserved. I owe a special thanks to Andrea, Emily, Renee, Brandi, Abby, Colleen, Maria, Jean and many more whom I\u0026rsquo;m sure I\u0026rsquo;ve left out here. Thank you for your support and hard work over the years and I hope to see you again soon at some of your great events. Keep up the good work.\nTo the other Chicago VMUG Leaders that I\u0026rsquo;ve worked with, Chris Wahl, Jason Bertini, Patrick Benson, Adam Cavaliere, Brian Suhr, Justin Lauer and Neil Soderstrom; Thank you for the opportunity to work with you, to learn from you, and for the opportunity to share these wonderful experiences with you.\nSo long\u0026hellip;\n2016leaders\nVMUG Leaders with Home Lab Winner\nConnecticut VMUG Keynote\nIndianapolis VMUG Keynote\nVCDX Partner - Tim Carr\nwahl-phoummala-me\n","permalink":"https://theithollow.com/2016/09/26/a-farewell-to-vmug/","summary":"\u003cp\u003eThe Chicago chapter of VMware Users Group had it\u0026rsquo;s annual conference at the Rosemont Convention Center on Thursday of last week and it was again a success thanks in no small part to the VMUG corporate team. Over six hundred people walked through the doors to experience sponsored sessions, community sessions, keynotes from Kit Colbert and Phoummala Schmidt, as well as plenty of other fun things. This was the fourth official Chicago VMUG Conference that I\u0026rsquo;ve attended as a member of the leadership team. This was also my final event as a leader. Typically I use this blog as a place to post technical information but in this case I felt that it is important to reflect on the importance of what this group meant to me.\u003c/p\u003e","title":"A Farewell to VMUG"},{"content":"Microsoft Azure has a neat way to store and run code right from within Microsoft Azure called \u0026ldquo;Azure Automation\u0026rdquo;. If you\u0026rsquo;re familiar with Amazon\u0026rsquo;s Lambda service, Azure Automation is similar in many ways. The main difference is that in Azure, we\u0026rsquo;re working with PowerShell code instead of Python or Node.js.\nCreate An Azure Automation Account To get started, the first thing that we need to do is to setup an Azure Automation Account. In your Azure instance, browse for \u0026ldquo;Automation Accounts\u0026rdquo; and then click Add. Give the account a name and a subscription that the PowerShell commands should run under. As with any Azure objects, select a resource group or create your own and then select a location. 
The last setting is to decide whether or not the account will be an \u0026ldquo;Azure Run As\u0026rdquo; account. If you select \u0026ldquo;Yes\u0026rdquo; then the account will have access to other Azure Resources within your instance. For our examples, this account should be a \u0026ldquo;run as\u0026rdquo; account.\nBuilding a Runbook Now that the account has been created, we can open the account properties and go to Runbooks. A runbook is the script that will be executed through the Azure portal. Click Create a new runbook (you can import one if you already have existing runbooks as well) and give the runbook a name. Then under the \u0026ldquo;Runbook type\u0026rdquo; field, you\u0026rsquo;ll need to select the type of runbook. There are two main types of runbooks: PowerShell or Graphical. PowerShell runbooks allow you to type PowerShell code directly into the Azure window, while the Graphical type will show you cmdlets and they can be added by clicking on them. For our purposes we\u0026rsquo;ll be typing the PowerShell code in directly, so we won\u0026rsquo;t use the Graphical type. Add a description and click the \u0026ldquo;Create\u0026rdquo; button.\nNow we can edit our runbook and put our PowerShell code into the work pane. I\u0026rsquo;ve added some code here to authenticate with Microsoft Azure, get a list of all virtual machines with \u0026ldquo;test\u0026rdquo; somewhere in their name, and then stop those virtual machines. Maybe you even have the code delete virtual machines with \u0026ldquo;test\u0026rdquo; in their name to clean up your Azure subscriptions and stop paying for those resources. You can make the code do whatever you need.\nOnce you\u0026rsquo;ve written your PowerShell code, you can click the \u0026ldquo;Test pane\u0026rdquo; link to take you to the test window. Here you can enter any parameters that might be involved in the script and execute it. The status will show up as queued in the test pane and the job will eventually run. It usually isn\u0026rsquo;t executed immediately, but it should run fairly quickly after being submitted. The code will execute and show you any results in the pane.\nOnce you\u0026rsquo;ve tested your code and are happy with it, you can \u0026ldquo;Publish\u0026rdquo; the code to make it available to your users. The Runbooks screen will show you the current status.\nCreate an Entry Point Now that the code is published you can run it from the Azure portal, but you can also set it on a schedule, or create a webhook to call through a REST call. In our example a schedule might be a good way to execute the code and shut down your test VMs on a specific routine to cut costs, but here I\u0026rsquo;m actually going to add a Webhook so that I can execute it through a REST call.\nI click Add Webhook and create a new one. The webhook also gets a name and expiration date, as well as the ability to enable or disable it from the Internet. The last box will show the URL used to make a POST request. Be sure to copy that URL to use it later on. You can also set some of the configuration parameters ahead of time so that you don\u0026rsquo;t pass them through the REST call; it\u0026rsquo;s up to you.\nExecute the Code I used an online REST call maker and pasted in my Request URL. 
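If you prefer the command line over an online REST tool, the same POST can be made straight from PowerShell. This is a minimal sketch, assuming only the webhook behavior described above; the URL is a placeholder for the one copied from the Add Webhook blade, and the JobIds property reflects the job ID JSON that Azure Automation returns for webhook calls.
# Sketch: trigger the published runbook through its webhook (placeholder URL)
$webhookUri = "<paste the webhook URL you copied earlier>"
$response = Invoke-RestMethod -Uri $webhookUri -Method Post
# The response contains the queued job ID(s), which you can check on the Runbooks screen
$response.JobIds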
This triggers the PowerShell code we just created through the webhook.\nOnce I run it, a job will be returned, and we can view the job status on the runbook screen.\nWhen I look in my virtual machines screen, I see that my virtual machine named \u0026ldquo;testvm\u0026rdquo; is now stopped thanks to our script running.\nAdditional Options Credentials Under your Automation Account, in Assets you can also add credentials so that they can be used in future scripts. Perhaps you need credentials to log in to some of your virtual machines to execute PowerShell scripts on them. The credentials screen lets you store these in the portal for reuse.\nInstall PowerShell Modules In some cases you may want to add additional PowerShell functionality to your Azure instance. For instance, you might want to install the Posh-SSH module so that you can execute ssh commands from PowerShell. You can do this in a couple of ways. The first is to go to your Azure Automation account and open the Assets option. Select Modules and then you can browse the gallery for the module you want to install in your Azure instance.\nThe second way is to go right to the PowerShell Gallery and find the module. Microsoft has added a very nice \u0026ldquo;Deploy to Azure Automation\u0026rdquo; button to add the module to your Azure instance.\n","permalink":"https://theithollow.com/2016/09/19/get-started-azure-automation/","summary":"\u003cp\u003eMicrosoft Azure has a neat way to store and run code right from within Microsoft Azure called \u0026ldquo;Azure Automation\u0026rdquo;. If you\u0026rsquo;re familiar with Amazon\u0026rsquo;s Lambda service, Azure Automation is similar in many ways. The main difference is that in Azure, we\u0026rsquo;re working with PowerShell code instead of Python or Node.js.\u003c/p\u003e\n\u003ch1 id=\"create-an-azure-automation-account\"\u003eCreate An Azure Automation Account\u003c/h1\u003e\n\u003cp\u003eTo get started, the first thing that we need to do is to setup an Azure Automation Account. In your Azure instance, browse for \u0026ldquo;Automation Accounts\u0026rdquo; and then click Add. Give the account a name and a subscription that the PowerShell commands should run under. As with any Azure objects, select a resource group or create your own and then select a location. The last setting is to decide whether or not the account will be an \u0026ldquo;Azure Run As\u0026rdquo; account. If you select \u0026ldquo;Yes\u0026rdquo; then the account will have access to other Azure Resources within your instance. For our examples, this account should be a \u0026ldquo;run as\u0026rdquo; account.\u003c/p\u003e","title":"Get Started with Azure Automation"},{"content":"If you\u0026rsquo;re getting started with Microsoft Azure, you may feel confused about where things are located. One of the reasons for this confusion is the current use of multiple portals. It\u0026rsquo;s hard enough to learn how subscriptions work, how to access the resources through PowerShell and all of those new concepts without having to navigate different sites. This post should shed some light on what the portals are and how they\u0026rsquo;re used.\nBilling Portal account.windowsazure.com\nThe services for Azure aren\u0026rsquo;t free and we will want to know how much these services cost. 
The billing portal allows us to manage our subscriptions, create new ones and view how much those subscriptions cost our organization.\nClassic Portal manage.windowsazure.com\nMicrosoft jumped into the public cloud offerings long ago and the classic portal is where it all began. This was Microsoft\u0026rsquo;s first public cloud portal. Initial services were published here in the catalog and are still available for use. If you\u0026rsquo;re new to Microsoft Azure, you may start with the new portal, but many services will redirect you here if they haven\u0026rsquo;t been ported over yet.\nNew Portal portal.azure.com\nThe new portal was previously known as the Azure Resource Manager (ARM) portal. So anytime you hear ARM, think of this new portal. The new portal should be the go-forward location to build new services, and many of the services in the classic portal are being ported over to this site. Each of the services here is represented, not by a different web page, but by a \u0026ldquo;blade\u0026rdquo;. These blades open depending on the properties of the service that is opened.\nSome services here in the new portal haven\u0026rsquo;t been ported over yet. If you browse for services, some of them will have a special icon to let you know you\u0026rsquo;re changing to a different portal to set them up.\nSummary Hopefully this post has shed some light on the purpose of each of the portals. I neglected to tell you that there is actually a fourth portal that\u0026rsquo;s not in this list, called the Enterprise Portal. This portal gives customers with enterprise agreements (EA) the ability to manage multiple accounts, and yes, this has its own portal to do so.\n","permalink":"https://theithollow.com/2016/09/12/microsoft-azure-portals/","summary":"\u003cp\u003eIf you\u0026rsquo;re getting started with Microsoft Azure, you may feel confused about where things are located. One of the reasons for this confusion is the current use of multiple portals. It\u0026rsquo;s hard enough to learn how subscriptions work, how to access the resources through PowerShell and all of those new concepts without having to navigate different sites. This post should shed some light on what the portals are and how they\u0026rsquo;re used.\u003c/p\u003e","title":"Microsoft Azure Portals"},{"content":"Azure provides a Platform-as-a-Service offering called a \u0026ldquo;Cloud Service.\u0026rdquo; Instead of managing every part of a virtual machine (the middleware and the application) it might be desirable to only worry about the application that is being deployed. An Azure cloud service allows you to just focus on the app, but does give you access to the underlying virtual machine if you need to use it.\nSo what makes up an Azure Cloud Service? There are two main types of virtual machines that are deployed through a cloud service: web roles and worker roles. Web roles are Windows servers with IIS installed and ready to use on them. Worker roles are Windows servers without IIS installed. In addition to the Windows instances that will be deployed, a cloud service also includes a load balancer that will automatically load balance the web roles, and an IP Address will be assigned to the load balancer. 
One thing to note is that the web server roles have an agent installed on them as well so that the load balancer can determine if the server is working correctly and if it needs to remove a server from the load balancer.\nDeploy a Cloud Service through Microsoft Visual Studio One of the great features of a cloud service is that it can be completely deployed through Microsoft Visual Studio. This puts the application deployment control directly into the hands of the developers who are coding it. To get started, you\u0026rsquo;ll need to install Microsoft Visual Studio and then install the Azure SDK. Once this is complete, create a new project and select the \u0026ldquo;Azure Cloud Service\u0026rdquo; template under the \u0026ldquo;Cloud\u0026rdquo; menu.\nNext, I\u0026rsquo;m selecting a Web Role to be added.\nNOTE: Even though a cloud service can include worker nodes, this example is only using web roles for a simple demo.\nNext I\u0026rsquo;m selecting a Model-View-Controller type of web role.\nNow that you have a project created, you can make modifications to your web page, things like adding graphics and the main work of building a web service for whatever your business purpose is. I\u0026rsquo;ve made a few modifications from the default template, but you can just deploy the plain template if you\u0026rsquo;re learning. One thing I did do was click on my WebRole1 object and change the instance count to two. This means that I\u0026rsquo;ll be deploying two web servers as part of this cloud service.\nPublish Your Cloud Service to Azure Once your web service is ready to go, right-click on the service and click \u0026ldquo;Publish\u0026hellip;\u0026rdquo;\nYou may need to provide your Azure portal login first. After that, select the subscription that this service will be billed under and give the cloud service a name. This name must be unique in Azure, not just in your account. Select a region or affinity group and the replication methodology for your servers. Next you\u0026rsquo;ll be able to select an environment (either staging or production) and will also have an opportunity to enable remote desktop for all of your roles in case you need to log in to these servers for further configuration.\nWait for your service to be deployed. After this is complete you can look at the \u0026ldquo;Cloud services (classic)\u0026rdquo; menu in Azure and will see an entry that matches your cloud app. Notice that you\u0026rsquo;ll have a Site URL, Public IP Address and a status for all of your instances listed. It\u0026rsquo;s also worth noting that the roles have been added to separate update and fault domains so that an outage won\u0026rsquo;t take down your whole cloud service.\nGo to your URL and you should see your web service running and load balanced.\nAdd Functionality We can look at our Visual Studio code again and add whatever options we need. One thing that might need to be added is some worker nodes. Worker nodes may be scaled out in the same manner that the web roles were. The worker nodes may have an HTTP listener to perform actions, or really anything you need them to do. One thing to be aware of, though, is that many times you don\u0026rsquo;t want your worker roles to be accessed from the Internet. Generally the worker roles only communicate with the front-end web services and not the Internet. To do this we can change our endpoint inputs for the worker roles. In the example below, I\u0026rsquo;ve added a worker role and added an endpoint on port 8088. 
Then under type we can change it from an \u0026ldquo;Input\u0026rdquo; endpoint to an \u0026ldquo;Internal\u0026rdquo; endpoint so that it is only accessible from within the cloud service container.\nSummary Cloud Services is a nice way to add some functionality when you need to deploy web and app servers together. It can ease your operational tasks because it will deploy your load balancers operating systems for web and app servers for you. You can also combine this with some PaaS Services like a service bus so that you can even more functionality and deploy the whole thing through Visual Studio.\n","permalink":"https://theithollow.com/2016/09/07/azure-cloud-services/","summary":"\u003cp\u003eAzure provides a Platform-as-a-Service offering called a \u0026ldquo;Cloud Service.\u0026rdquo; Instead of managing every part of a virtual machine (the middle-wear and the application) it might be desirable to only worry about the application that is being deployed. An Azure cloud service allows you to just focus on the app, but does give you access to the underlying virtual machine if you need to use it.\u003c/p\u003e\n\u003cp\u003eSo what makes up an Azure Cloud Service? There are two main types of virtual machines that are deployed through a cloud service; web roles and worker roles. Web roles are Windows servers with IIS installed and ready to use on them. Worker roles are Windows servers without IIS installed. In addition to the Windows instances that will be deployed, a cloud service also includes a load balancer that will automatically load balance the web roles, and an IP Address will be assigned to the load balancer. One thing to note is that the web server roles have an agent installed on them as well so that the load balancer can determine if the server is working correctly and if it needs to remove a server from the load balancer.\u003c/p\u003e","title":"Azure Cloud Services"},{"content":"Azure allows you to manage network interfaces as an object that can be decoupled from the virtual machine. This is important to note, because when you delete your virtual machine, the Network Interface will still be in the Azure Portal. This NIC and all of it\u0026rsquo;s settings will still exist for reuse if you wish. This would include keeping the Public IP Address that is associated with it, subnets, and Network Security Groups.\nNOTE: At the time of this writing, Microsoft has a pair of Azure portals including the new Azure Resource Manager portal and the Azure Classic portal. Not all abilities are in the new portal yet. This post focuses solely on the new Azure Resource Manager Portal.\nCreate a Network Interface Creating a network interface is pretty simple, but using them can be somewhat confusing to a first timer. To setup a NIC, go to Network Interfaces and click the \u0026ldquo;Add\u0026rdquo; button. Give the NIC a descriptive but unique name, then select the VNet and subnet that it is associated with. After this, you can select whether or not the IP Address will be assigned dynamically (think DHCP) or Statically set. If you choose static, you\u0026rsquo;ll need to provide the private IP Address to be used. Lastly, select a security group if you need one, and then you must select both a subscription as well as a location.\nOnce you\u0026rsquo;ve setup the NIC, it will be stored in Azure as an object with its own list of properties. 
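For those who would rather script this step, the portal workflow above maps to a couple of AzureRM cmdlets. This is a rough sketch only; the resource group, VNet, location, and address values are made-up examples, and my understanding is that supplying -PrivateIpAddress is what makes the assignment static, while omitting it leaves the address dynamic.
# Sketch: create a standalone NIC with the AzureRM module (example names and values)
$vnet = Get-AzureRmVirtualNetwork -Name "DemoVNet" -ResourceGroupName "DemoRG"
$nic = New-AzureRmNetworkInterface -Name "demo-nic1" -ResourceGroupName "DemoRG" -Location "centralus" -SubnetId $vnet.Subnets[0].Id -PrivateIpAddress "10.0.1.10"
# Omit -PrivateIpAddress above to let Azure hand out the address dynamically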
You can drill down into the object to see the IP Addresses, security groups, and DNS servers that are associated with it.\nDeploying a Multi-NIC Virtual Machine OK, here\u0026rsquo;s the rub: when you deploy a new virtual machine through the ARM portal, you aren\u0026rsquo;t allowed to select the Network Interface! Yep, this is true, and I sort of expect this to change in the future because we have a great mechanism to build Network Interfaces, but we have no way to attach the NIC to a virtual machine through the portal. I\u0026rsquo;ll wait for you to work through your disbelief. When you deploy a new virtual machine, a new network interface will be added to it automatically. Having a secondary NIC, though, is something that is desirable; maybe your virtual machine needs a front-end/back-end setup like in a DMZ.\nThere are several things that you should know before you try to deploy a multi-NIC virtual machine in Azure.\nYou can\u0026rsquo;t deploy a multi-NIC virtual machine from the Azure Resource Manager Portal. It must be done through the Azure CLI, API or PowerShell. Not every virtual machine size can support multiple NICs. Make sure that the size you choose will support more than one NIC. You must connect every NIC in a virtual machine to the same VNet. The subnets can be different, but the VNet must be the same. No cross-VNet virtual machines. If you plan to add a NIC to an existing virtual machine, it must be a multi-NIC virtual machine to start with. You are not allowed to add a second virtual NIC to a machine if it started with one NIC. To add a NIC to an existing VM you can go from two NICs to three, but not from one NIC to two. If you try to add another NIC to an existing virtual machine, it will be rebooted. When you add a dual-NIC VM, one NIC must be marked as primary. OK, so now we need to know how to deploy our virtual machine through PowerShell instead of through the portal, so that we can make it a multi-NIC virtual machine. I\u0026rsquo;ve provided some sample code below that I used in my deployment. 
I\u0026rsquo;ve also got the code stored in Github for public usage if you\u0026rsquo;d like that.\n#Azure Account Variables $subscr = \u0026#34;AZURERM\u0026#34; $StorageAccountName = \u0026#34;AZURERMSTORAGEACCOUNT\u0026#34; $StorageResourceGroup = \u0026#34;AZURESTORAGERESOURCEGROUP\u0026#34; #set Azure Subscriptions and Storage Account Defaults Get-AzureRmSubscription -SubscriptionName $subscr | Select-AzureRmSubscription -WarningAction SilentlyContinue $StorageAccount = Get-AzureRmStorageAccount -name $StorageAccountName -ResourceGroupName $StorageResourceGroup | set-azurermstorageaccount -WarningAction SilentlyContinue ##Global Variables $resourcegroupname = \u0026#34;AZURERESOURCEGROUPNAME\u0026#34; $location = \u0026#34;AZURERMLOCATION\u0026#34; ## Compute Variables $VMName = \u0026#34;AZURERMVIRTUALMACHINENAME\u0026#34; $ComputerName = \u0026#34;AZUREMACHINENAME\u0026#34; $VMSize = \u0026#34;AZURERMSIZE\u0026#34; $OSDiskName = $VMName + \u0026#34;OSDisk\u0026#34; ## Network Variables $Interface1Name = $VMName + \u0026#34;_int1\u0026#34; $Interface2Name = $VMName + \u0026#34;_int2\u0026#34; $Subnet1Name = \u0026#34;AZURERMSUBNET1\u0026#34; $Subnet2Name = \u0026#34;AZURERMSUBNET2\u0026#34; $VNetName = \u0026#34;AZURERMVNET\u0026#34; ########################################################### #Do Not Edit Below This Point # ########################################################### ## Network Interface Creation $PIp1 = New-AzureRmPublicIpAddress -Name $Interface1Name -ResourceGroupName $ResourceGroupName -Location $Location -AllocationMethod Dynamic -WarningAction SilentlyContinue $VNet = Get-AzureRmVirtualNetwork -name $VNetName -ResourceGroupName $resourcegroupname -WarningAction SilentlyContinue $Interface1 = New-AzureRmNetworkInterface -Name $Interface1Name -ResourceGroupName $ResourceGroupName -Location $Location -SubnetId $VNet.Subnets[0].Id -PublicIpAddressId $PIp1.Id -WarningAction SilentlyContinue $Interface2 = New-AzureRmNetworkInterface -Name $Interface2Name -ResourceGroupName $ResourceGroupName -Location $Location -SubnetId $VNet.Subnets[0].Id -WarningAction SilentlyContinue ## Create VM Object $Credential = Get-Credential $VirtualMachine = New-AzureRmVMConfig -VMName $VMName -VMSize $VMSize -WarningAction SilentlyContinue $VirtualMachine = Set-AzureRmVMOperatingSystem -VM $VirtualMachine -Windows -ComputerName $ComputerName -Credential $Credential -ProvisionVMAgent -EnableAutoUpdate -WarningAction SilentlyContinue $VirtualMachine = Set-AzureRmVMSourceImage -VM $VirtualMachine -PublisherName MicrosoftWindowsServer -Offer WindowsServer -Skus 2012-R2-Datacenter -Version \u0026#34;latest\u0026#34; -WarningAction SilentlyContinue $VirtualMachine = Add-AzureRmVMNetworkInterface -VM $VirtualMachine -Id $Interface1.Id -WarningAction SilentlyContinue $VirtualMachine = Add-AzureRmVMNetworkInterface -VM $VirtualMachine -Id $Interface2.Id -WarningAction SilentlyContinue $OSDiskUri = $StorageAccount.PrimaryEndpoints.Blob.ToString() + \u0026#34;vhds/\u0026#34; + $OSDiskName + \u0026#34;.vhd\u0026#34; $VirtualMachine = Set-AzureRmVMOSDisk -VM $VirtualMachine -Name $OSDiskName -VhdUri $OSDiskUri -CreateOption FromImage -WarningAction SilentlyContinue $VirtualMachine.NetworkProfile.NetworkInterfaces.Item(0).Primary = $true ## Create the VM in Azure New-AzureRmVM -ResourceGroupName $ResourceGroupName -Location $Location -VM $VirtualMachine -WarningAction SilentlyContinue Once you\u0026rsquo;ve deployed your new virtual machine\nAdd Another NIC to an Existing VM Now that we\u0026rsquo;ve 
got a dual NIC virtual machine and our pre-created Network Interface, we can add this nic to our virtual machine. Again, to do this (as of the time of this blog post) you must do this through the Azure CLI, API, or PowerShell module. The code below is what I used to add my nic to the virtual machine, but I\u0026rsquo;ve also added the code to the Github Repo as well for your use.\n#Set Subscription Variables Add-AzureAccount $subscr = \u0026#34;AZURERMSUBSCRIPTION\u0026#34; Get-AzureRmSubscription -SubscriptionName $subscr | Select-AzureRmSubscription #Set Variables $VMname = \u0026#34;VMNAME\u0026#34; $VMRG = \u0026#34;VIRTUALMACHINERESOURCEGROUP\u0026#34; $NIC = \u0026#34;VIRTUALNICNAME\u0026#34; $NicRG = \u0026#34;NICRESOURCEGROUP\u0026#34; ################################################### #DO NOT edit below this point # ################################################### #Get the VM $VM = Get-AzureRmVM -Name $VMname -ResourceGroupName $VMRG #Get the NIC $NewNIC = Get-AzureRmNetworkInterface -Name $NIC -ResourceGroupName $NICRG #Add the NIC to the VM $VM = Add-AzureRmVMNetworkInterface -VM $VM -Id $NewNIC.Id #Reconfigure the VM #NOTE ------- VM WILL BE RESTARTED!!!!!!!!!!!!!!!!!!!! Update-AzureRmVM -VM $VM -ResourceGroupName $VMRG The result is that my HollowNic1 has been added to my virtual machine.\nSummary Network Interfaces might be an important part to you deployment strategy but for now, remember that you must take this into account when you first deploy your machines and it must be done through the CLI, API, or PowerShell. For the time being, this can\u0026rsquo;t be done through the Azure Resource Manager Portal. Happy coding.\n","permalink":"https://theithollow.com/2016/09/06/azure-network-interfaces/","summary":"\u003cp\u003eAzure allows you to manage network interfaces as an object that can be decoupled from the virtual machine. This is important to note, because when you delete your virtual machine, the Network Interface will still be in the Azure Portal. This NIC and all of it\u0026rsquo;s settings will still exist for reuse if you wish. This would include keeping the Public IP Address that is associated with it, subnets, and Network Security Groups.\u003c/p\u003e","title":"Azure Network Interfaces"},{"content":"An issue serious enough to require servers in your data center to be failed over to a secondary site will probably keep you busy enough all on it\u0026rsquo;s own. You don\u0026rsquo;t want to have to think about how complicated your disaster recovery tool is. I\u0026rsquo;ve been impressed with Zerto since the first time that I worked with it. The tool requires a piece of software called the Zerto Virtual Manager, to be installed at each of your sites and connected to your vCenters. This manager will then deploy replication appliances on each of your ESXi hosts to manage the replication. From there on, all the replication settings, orchestration options, and fail over tasks are completed through this manager.\nZerto has been a long time sponsor of theITHollow.com. This sponsorship had no influence on the writing or content of this article.\nZerto is a simple to install, simple to configure and simple to manage disaster recovery orchestration tool. 
The video in this post will demonstrate how to get it set up, configure a basic virtual protection group, and both test and fail over your virtual machines.\nhttps://youtu.be/2HYUpSaUzVk\n","permalink":"https://theithollow.com/2016/08/24/simple-disaster-recovery-options-zerto/","summary":"\u003cp\u003eAn issue serious enough to require servers in your data center to be failed over to a secondary site will probably keep you busy enough all on its own. You don\u0026rsquo;t want to have to think about how complicated your disaster recovery tool is. I\u0026rsquo;ve been impressed with Zerto since the first time that I worked with it. The tool requires a piece of software called the Zerto Virtual Manager, to be installed at each of your sites and connected to your vCenters. This manager will then deploy replication appliances on each of your ESXi hosts to manage the replication. From there on, all the replication settings, orchestration options, and fail over tasks are completed through this manager.\u003c/p\u003e","title":"Simple Disaster Recovery Options with Zerto"},{"content":"Congratulations! If you\u0026rsquo;ve made it this far in the Microsoft Azure Series, you\u0026rsquo;re finally ready to start deploying virtual machines in Microsoft Azure. Let\u0026rsquo;s face it, the whole series has led up to this post because most of you are probably looking at getting started in Azure with the virtual machine. It\u0026rsquo;s familiar and can house applications, databases, data or whatever you\u0026rsquo;ve been housing in your on-premises data center. If you\u0026rsquo;re trying to benchmark Azure with your own data center apps, virtual machines are probably where you\u0026rsquo;ll spend your time. As you learn more about the platform, Azure\u0026rsquo;s PaaS offerings might be more heavily used to prevent you from having to manage those pesky operating systems, but for now we\u0026rsquo;re focusing on the VM.\nDeploy In the new portal (formerly called the ARM portal) click the \u0026ldquo;New\u0026rdquo; button. Select Virtual Machines and then select your Featured App. The featured app can contain bare operating systems in a variety of versions and flavors.\nOnce you\u0026rsquo;ve selected an app, review the terms and select a deployment model. We\u0026rsquo;re focusing on the new portal for this series primarily, so choose Resource Manager.\nNow a new set of \u0026ldquo;blades\u0026rdquo; opens and we\u0026rsquo;ll need to enter some information.\nNOTE: These options might vary a bit depending on the location and the operating system.\nEnter a server name, and a disk type of either SSD or HDD. Then enter a username for the person who will have administrator access on the box. If you choose a Linux distribution you may need to select an authentication type so that you can use an SSH key instead of a password. As with all Azure resources, choose a resource group, and then finally select a location.\nThe next screen will show you some recommended \u0026ldquo;t-shirt\u0026rdquo; sizes for the virtual machine. Select the size that is appropriate for your workload. On the next blade we need to select some placement information. Select the disk type, storage account, VNet and subnet, public IP address if needed and a security group. You can also enable monitoring for this machine. Monitoring will allow you to track information about the performance of the virtual machine and possibly alert based on those metrics. 
Another cool thing we can do on this blade is add a virtual machine extension. Extensions allow you to have some automated tasks performed on your virtual machine like installing the Chef Client. To learn more about extensions check out https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-extensions-features/. Finally, you can select the availability set that the machine should belong to so that your app can withstand an outage of a single Azure fault domain.\nReview the settings and click create.\nIn a few moments your machine will be built and you\u0026rsquo;ll be able to RDP or SSH into it with the credentials provided in the setup screens. If you are unable to login, check to make sure your subnet is reachable and that no network security groups are blocking the RDP or SSH traffic to the new machine. Install your apps and have fun with your new virtual machine.\nDeploy from PowerShell If you\u0026rsquo;re one of those people who love to do their work from the command line, or you want to programmatically deploy your servers for an application pipeline or something, you can also deploy your virtual machines through PowerShell. Here is some sample code to get you started.\nThe code below requires you to enter in a few variables and will deploy a virtual machine in the Azure Resource Manager (new) Portal. It requires you to login and enter local administrator credentials for the machine. You will be prompted to enter these two sets of credentials unless you modify the below code. You may also download this code from my git repo: https://github.com/theITHollow/AzureRMDeployVM\n#Azure Account Variables
$subscr = \u0026#34;AZURERM\u0026#34;
$StorageAccountName = \u0026#34;AZURERMSTORAGEACCOUNT\u0026#34;
$StorageResourceGroup = \u0026#34;AZURESTORAGERESOURCEGROUP\u0026#34;
#set Azure Subscriptions and Storage Account Defaults
Login-AzureRmAccount
Get-AzureRmSubscription -SubscriptionName $subscr | Select-AzureRmSubscription -WarningAction SilentlyContinue
$StorageAccount = Get-AzureRmStorageAccount -name $StorageAccountName -ResourceGroupName $StorageResourceGroup | set-azurermstorageaccount -WarningAction SilentlyContinue
##Global Variables
$resourcegroupname = \u0026#34;AZURERESOURCEGROUPNAME\u0026#34;
$location = \u0026#34;AZURERMLOCATION\u0026#34;
## Compute Variables
$VMName = \u0026#34;AZURERMVIRTUALMACHINENAME\u0026#34;
$ComputerName = \u0026#34;AZUREMACHINENAME\u0026#34;
$VMSize = \u0026#34;AZURERMSIZE\u0026#34;
$OSDiskName = $VMName + \u0026#34;OSDisk\u0026#34;
## Network Variables
$VNetName = \u0026#34;AZURERMVNET\u0026#34;
$Subnet1Name = \u0026#34;AZURERMSUBNET1\u0026#34;
$Interface1Name = $VMName + \u0026#34;_int1\u0026#34;
###########################################################
#Do Not Edit Below This Point #
###########################################################
## Network Interface Creation
$PIp1 = New-AzureRmPublicIpAddress -Name $Interface1Name -ResourceGroupName $ResourceGroupName -Location $Location -AllocationMethod Dynamic -WarningAction SilentlyContinue
$VNet = Get-AzureRmVirtualNetwork -name $VNetName -ResourceGroupName $resourcegroupname -WarningAction SilentlyContinue
$Interface1 = New-AzureRmNetworkInterface -Name $Interface1Name -ResourceGroupName $ResourceGroupName -Location $Location -SubnetId $VNet.Subnets[0].Id -PublicIpAddressId $PIp1.Id -WarningAction SilentlyContinue
## Create VM Object
$Credential = Get-Credential
$VirtualMachine = New-AzureRmVMConfig -VMName $VMName -VMSize $VMSize -WarningAction SilentlyContinue
$VirtualMachine = Set-AzureRmVMOperatingSystem -VM $VirtualMachine -Windows -ComputerName $ComputerName -Credential $Credential -ProvisionVMAgent -EnableAutoUpdate -WarningAction SilentlyContinue
$VirtualMachine = Set-AzureRmVMSourceImage -VM $VirtualMachine -PublisherName MicrosoftWindowsServer -Offer WindowsServer -Skus 2012-R2-Datacenter -Version \u0026#34;latest\u0026#34; -WarningAction SilentlyContinue
$VirtualMachine = Add-AzureRmVMNetworkInterface -VM $VirtualMachine -Id $Interface1.Id -WarningAction SilentlyContinue
$OSDiskUri = $StorageAccount.PrimaryEndpoints.Blob.ToString() + \u0026#34;vhds/\u0026#34; + $OSDiskName + \u0026#34;.vhd\u0026#34;
$VirtualMachine = Set-AzureRmVMOSDisk -VM $VirtualMachine -Name $OSDiskName -VhdUri $OSDiskUri -CreateOption FromImage -WarningAction SilentlyContinue
$VirtualMachine.NetworkProfile.NetworkInterfaces.Item(0).Primary = $true
## Create the VM in Azure
New-AzureRmVM -ResourceGroupName $ResourceGroupName -Location $Location -VM $VirtualMachine -WarningAction SilentlyContinue
","permalink":"https://theithollow.com/2016/08/23/deploying-virtual-machines-microsoft-azure/","summary":"\u003cp\u003eCongratulations! If you\u0026rsquo;ve made it this far in the \u003ca href=\"/2016/07/18/guide-getting-started-azure/\"\u003eMicrosoft Azure Series\u003c/a\u003e, you\u0026rsquo;re finally ready to start deploying virtual machines in Microsoft Azure. Let\u0026rsquo;s face it, the whole series has led up to this post because most of you are probably looking at getting started in Azure with the virtual machine. It\u0026rsquo;s familiar and can house applications, databases, data or whatever you\u0026rsquo;ve been housing in your on premises data center. If you\u0026rsquo;re trying to benchmark Azure with your own data center apps, virtual machines are probably where you\u0026rsquo;ll spend your time. As you learn more about the platform, Azure\u0026rsquo;s PaaS offerings might be more heavily used to prevent you from having to manage those pesky operating systems but for now we\u0026rsquo;re focusing on the VM.\u003c/p\u003e","title":"Deploying Virtual Machines in Microsoft Azure"},{"content":"It\u0026rsquo;s a weird thing to say, but we can install PowerShell on Mac after the announcement from Microsoft that PowerShell will be available for both Macintosh and Linux. It\u0026rsquo;s pretty easy to accomplish but having a great scripting language like PowerShell available for Mac is really cool and deserves a blog post. I mean, now I don\u0026rsquo;t even need to fire up my Windows virtual machine just to run PowerShell!\nTo get started, download the OSX .pkg file from the github page: https://github.com/PowerShell/PowerShell/releases/\nOnce the file is downloaded, find it, right-click, and choose Open. This will start the installation process.\nClick Next on the Introduction Page.\nSelect the Destination Disk and click Next.\nSelect the installation location and click Install.\nEnter in the administrative credentials to install the software and click Install Software.\nWhen the installation is finished click Close.\nOnce the install is complete, open a Terminal Window and run \u0026ldquo;powershell\u0026rdquo;. Once you do this, you\u0026rsquo;ll be able to execute PowerShell commands. In the example below, I ran \u0026ldquo;get-host\u0026rdquo; to find the PowerShell version that was installed. 
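For reference, a first session looks something like the sketch below; the exact version numbers you see will vary with the release you downloaded.
powershell                               # launch PowerShell from the Terminal prompt
Get-Host | Select-Object Name, Version   # shows the host name and the installed PowerShell version
$PSVersionTable                          # another way to check the engine version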
I will note that the Install-Module commands don\u0026rsquo;t work quite yet so adding things like PowerCLI and AzureRM modules won\u0026rsquo;t be super easy to accomplish yet. This will probably change soon. After all, this is a very early release.\n","permalink":"https://theithollow.com/2016/08/22/install-powershell-mac/","summary":"\u003cp\u003eIt\u0026rsquo;s a weird thing to say, but we can install PowerShell on Mac after the \u003ca href=\"https://azure.microsoft.com/en-us/blog/powershell-is-open-sourced-and-is-available-on-linux/\"\u003eannouncement from Microsoft\u003c/a\u003e that PowerShell will be available for both Macintosh and Linux. It\u0026rsquo;s pretty easy to accomplish but having a great scripting language like PowerShell available for Mac is really cool and deserves a blog post. I mean, now I don\u0026rsquo;t even need to fire up my Windows virtual machine just to run PowerShell!\u003c/p\u003e\n\u003cp\u003eTo get started, download the OSX .pkg file from the github page: \u003ca href=\"https://github.com/PowerShell/PowerShell/releases/\"\u003ehttps://github.com/PowerShell/PowerShell/releases/\u003c/a\u003e\u003c/p\u003e","title":"Install PowerShell on Mac"},{"content":" Today Rubrik announced the release of their latest version of the Rubrik Cloud Data Management (RCDM) operating system and this one has some really neat enhancements. If you\u0026rsquo;re not familiar with Rubrik, and hate managing backups, then you really should take a closer look at them. Their Cloud Data Management Platform makes managing backups a very simple task. Think Apple\u0026rsquo;s Time Machine, only for your data center.\nThe latest version of their operating system is named \u0026ldquo;Firefly\u0026rdquo;, instead of having a boring old number distinction like 2.0. I\u0026rsquo;m told that future versions will also be named in a similar fashion, following a bio-luminescence naming scheme. So if you\u0026rsquo;re not into fireflies, just hang tight for the Angler fish version which I\u0026rsquo;m speculating will be next.\nHighlights of the new release include:\nAdditional Archive Backup Targets Erasure Coding Physical Backup Targets Edge Virtual Appliances Additional Archive Backup Targets If you\u0026rsquo;re familiar with Rubrik already, you\u0026rsquo;ll know that you can have your backups \u0026ldquo;age out\u0026rdquo; and be moved to an object based storage platform such as Amazon S3, or your own on-premises object storage target. In the latest release, Microsoft Azure Blob Storage is now an option for archival as well. This should really make those companies with Microsoft ELAs that have Azure credits burning a hole in their pocket very happy.\nErasure Coding RAID is so last year at this point. All the cool kids are using erasure coding to ensure data durability. Rubrik is no exception and has moved towards erasure coding to ensure that failed disks do not compromise the data which is stored on them. The best thing about the switch to erasure coding is that now the \u0026ldquo;brik\u0026rdquo; can store twice as much data with the exact same hardware. Instead of only two blocks out of six holding data (with the other four used for protection), erasure coding means that four blocks out of six can hold data while the remaining two hold erasure codes. Thus the doubling of usable capacity. This also should increase performance since more data will be pushed to disk during a write operation. 
Physical Backup Sources One of the main reasons I heard customers pass on Rubrik in the past was that it would only backup vSphere virtual machines. Well, as expected, this is changing as well. In the latest Firefly version, Rubrik will now allow you to backup both physical Linux servers as well as SQL Server. Linux can be setup to backup a file set through a template. For instance, you can set a template that only backs up \u0026ldquo;/home\u0026rdquo;, or a partition, or whatever you can think of. For SQL Server, Rubrik can set a policy to backup the data and logs.\nEdge Virtual Appliance We need backups across all of our sites, but some of those sites are really small and may have a tiny infrastructure footprint. It would be a shame to need to add a whole 2U appliance to the site just to backup a couple of machines. Rubrik is introducing a virtual appliance to maintain backups in those small and remote offices. This is an OVA that can be deployed on your existing disks and the virtual appliance can replicate back to your own Rubrik physical appliance or archive to object storage, much like a physical brik. If you\u0026rsquo;ve got a home lab setup, I would keep an eye on this one!\n","permalink":"https://theithollow.com/2016/08/16/rubrik-announces-firefly/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2016/08/firefly.png\"\u003e\u003cimg alt=\"firefly\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2016/08/firefly.png\"\u003e\u003c/a\u003e Today \u003ca href=\"http://rubrik.com\"\u003eRubrik\u003c/a\u003e announced the release of their latest version of the Rubrik Cloud Data Management (RCDM) operating system and this one has some really neat enhancements. If you\u0026rsquo;re not familiar with Rubrik, and hate managing backups, then you really should take a closer look at them. Their Cloud Data Management Platform makes managing backups a very simple task. Think Apple\u0026rsquo;s Time Machine, only for your data center.\u003c/p\u003e\n\u003cp\u003eThe latest version of their operating system is named \u0026ldquo;Firefly\u0026rdquo;, instead of having a boring old number distinction like 2.0. I\u0026rsquo;m told that future versions will also be named in a similar fashion around a bio-luminescence naming scheme. So if you\u0026rsquo;re not into fireflies, just hang tight for the Angler fish version which I\u0026rsquo;m speculating will be next.\u003c/p\u003e","title":"Rubrik Announces Firefly"},{"content":"Microsoft Azure has its own command line that can be used to script installs, export and import configurations and query your portal for information. Being a Microsoft solution, this command line is accessed through PowerShell.\nInstall Azure PowerShell Using PowerShell with Microsoft Azure is pretty simple to get up and going. The first step to getting started is to install the Azure PowerShell modules. Open up your PowerShell console and run both \u0026ldquo;Install-Module AzureRM\u0026rdquo; and then \u0026ldquo;Install-Module Azure\u0026rdquo;.\nAzure PowerShell Login Once you\u0026rsquo;ve installed the PowerShell Modules, the next step is to login to your Azure account. This can be done by using the \u0026ldquo;Login-AzureRMAccount\u0026rdquo; command.\nWhen you run the login command, you\u0026rsquo;ll be asked to authenticate. 
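Put together, the install and login steps from this section are only a few lines of PowerShell; this is a minimal sketch using the module and cmdlet names mentioned above.
Install-Module AzureRM     # Azure Resource Manager cmdlets
Install-Module Azure       # classic (service management) cmdlets
Login-AzureRmAccount       # opens the interactive Azure sign-in prompt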
Login with your Azure credentials.\nRun Some Commands Now that we\u0026rsquo;ve gotten logged in, we can run some simple commands to ensure that we\u0026rsquo;re getting the correct data. To start, I\u0026rsquo;m running a Get-AzureRmSubscription command to list my subscriptions.\nAfter I\u0026rsquo;ve listed my subscriptions, I\u0026rsquo;ll set my default subscription. This will ensure that my future commands for this session are run against the correct subscription. I\u0026rsquo;ve run \u0026ldquo;Get-AzureRmSubscription -SubscriptionName $NAME | Select-AzureRmSubscription\u0026rdquo;.\nNow that the subscription has been set, I\u0026rsquo;ve run a couple of simple get commands to list VMs and network interfaces. ","permalink":"https://theithollow.com/2016/08/15/get-started-azure-powershell/","summary":"\u003cp\u003eMicrosoft Azure has its own command line that can be used to script installs, export and import configurations and query your portal for information. Being a Microsoft solution, this command line is accessed through PowerShell.\u003c/p\u003e\n\u003ch1 id=\"install-azure-powershell\"\u003eInstall Azure PowerShell\u003c/h1\u003e\n\u003cp\u003eUsing PowerShell with Microsoft Azure is pretty simple to get up and going. The first step to getting started is to install the Azure PowerShell modules. Open up your PowerShell console and run both \u0026ldquo;Install-Module AzureRM\u0026rdquo; and then \u0026ldquo;Install-Module Azure\u0026rdquo;.\u003c/p\u003e","title":"Get Started with Azure PowerShell"},{"content":"Azure storage accounts provide a namespace in which to store data objects. These objects could be blobs, files, tables, queues and virtual machine disks. This post focuses on the pieces necessary to create a new storage account for use within the Azure Resource Manager portal.\nSetup To setup a storage account go to the Azure Resource Manager Portal, select storage accounts and then click the \u0026ldquo;Add\u0026rdquo; button. From there you\u0026rsquo;ll have some familiar settings that will need to be filled out such as a unique name for the account, a subscription to use for billing, a resource group for management, and a location for the region to be used. The rest of this article explains the additional settings shown in the screenshot below.\nAccount Kind In the \u0026ldquo;Account Kind\u0026rdquo; section, there are two types of storage available, General Purpose and Blob. I found this option to be incredibly confusing because the general purpose storage supports blob storage as well. Blob storage accounts are for applications requiring block or append type blob storage and don\u0026rsquo;t support page blobs. The blob storage referred to in the \u0026ldquo;General Purpose\u0026rdquo; type is page blobs, which provide persistent block storage similar to Amazon\u0026rsquo;s EBS.\nIf your goal is to setup storage for virtual machine use, then you\u0026rsquo;re looking for General Purpose storage for the account kind property. I actually found that I was able to select a blob storage account when deploying a virtual machine, but the deployment failed and the error message shows that the storage account type was not supported.\nWhen you drill down into your storage account you\u0026rsquo;ll notice that there are services listed for the storage account and you can drill into each service to get more information.\nIf we look at the blob service, we\u0026rsquo;ll see a container named \u0026ldquo;vhds\u0026rdquo; which is where the Azure machine virtual disks are stored. 
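If you would rather check that container from the command line, a rough sketch using the storage cmdlets looks like this, assuming the AzureRM and Azure modules from the PowerShell post in this series are installed; the resource group, account, and blob names are placeholders for your own environment.
$account = Get-AzureRmStorageAccount -ResourceGroupName \u0026#34;MyResourceGroup\u0026#34; -Name \u0026#34;mystorageaccount\u0026#34;   # grab the storage account
Get-AzureStorageBlob -Container \u0026#34;vhds\u0026#34; -Context $account.Context | Select-Object Name, Length   # list the VHD blobs
# Remove-AzureStorageBlob -Container \u0026#34;vhds\u0026#34; -Blob \u0026#34;myvmOSDisk.vhd\u0026#34; -Context $account.Context   # delete a leftover disk once you are sure it is unused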
You\u0026rsquo;ll need to go into this container to delete the vhd when you remove the virtual machine from Azure unless you are removing the entire resource group.\nPerformance - General Purpose Only When you create a storage account you must pick a performance tier and this can\u0026rsquo;t be modified afterwards so choose wisely. There are two types of performance as of the time of this writing.\nStandard - Backed by magnetic disks and provides a lower cost per GB. These are best used for applications requiring bulk storage with infrequently accessed data.\nPremium - Backed by solid state drives offering low latency performance. Recommended for I/O intensive applications such as databases.\nReplication In the replication box there are several options. These options are dependent upon the \u0026ldquo;Account Kind\u0026rdquo; and \u0026ldquo;Performance\u0026rdquo; selections that you made earlier. There are four main types of replication that can be used in your Azure storage account.\nLocally Redundant Storage (LRS) - This type of storage replicates data within the region where your storage account is created. Data in the storage account is replicated three times across different fault domains.\nZone Redundant Storage (ZRS) - This storage replicates data across two or three facilities within a region or across regions. ZRS provides protection against a facility failure.\nGeo-redundant Storage (GRS) - Replicates data to a second region that is hundreds of miles away and provides protection from regional outages. For example GRS would have the East US region replicate to the West US region to provide regional durability. It would not have East US replicate to East US 2 because they are geographically located too close together. GRS maintains six copies of the data, three within each datacenter. The data in the secondary location is replicated asynchronously meaning that it is eventually consistent. The recovery point objective on GRS is about fifteen minutes but there is no published SLA on this.\nRead-access Geo-redundant Storage (RA-GRS) - This storage is similar to the Geo-Redundant Storage described earlier except that it allows for read-only access at the secondary location.\nAccess Tier - Blob Storage Only If you\u0026rsquo;ve selected \u0026ldquo;Blob storage\u0026rdquo; as part of your Account kind, you\u0026rsquo;ll need to select an access tier. There are two tiers to choose from: a hot tier for data that is frequently accessed and a cool tier for data that is infrequently accessed. The cool tier provides cost savings but offers lower performance.\n","permalink":"https://theithollow.com/2016/08/11/azure-storage-accounts/","summary":"\u003cp\u003eAzure storage accounts provide a namespace in which to store data objects. These objects could be blobs, files, tables, queues and virtual machine disks. This post focuses on the pieces necessary to create a new storage account for use within the Azure Resource Manager portal.\u003c/p\u003e\n\u003ch1 id=\"setup\"\u003eSetup\u003c/h1\u003e\n\u003cp\u003eTo setup a storage account go to the Azure Resource Manager Portal, select storage accounts and then click the \u0026ldquo;Add\u0026rdquo; button. From there you\u0026rsquo;ll have some familiar settings that will need to be filled out such as a unique name for the account, a subscription to use for billing, a resource group for management, and a location for the region to be used. 
The rest of this article explains the additional settings shown in the screenshot below.\u003c/p\u003e","title":"Azure Storage Accounts"},{"content":"Unless you\u0026rsquo;re starting up a company from scratch, you probably won\u0026rsquo;t host all of your workloads in a public cloud like Microsoft Azure. If you\u0026rsquo;re building a hybrid cloud, you probably want to have network connectivity between the two clouds and that means a VPN. Microsoft Azure uses a Virtual Network Gateway to provide this connectivity.\nNOTE: As of the writing of this blog post, Microsoft has two portals that can be used to provide cloud resources: the Classic portal and the Azure Resource Manager portal. This post focuses on setting up a VPN tunnel using the new Azure Resource Manager portal.\nAzure Setup To begin setting up a VPN tunnel, we first need to deploy a virtual network gateway in our Azure VNet. To do this, go to the Azure portal and browse for the \u0026ldquo;Virtual Network Gateways\u0026rdquo; tab. Click Add to deploy a new one.\nGive the gateway a name, and select the VNet that it will belong in. Give the gateway a subnet address range. To give some background, virtual network gateways are deployed into their own subnet, so you\u0026rsquo;ll need to create that subnet when you deploy your virtual network gateway. Add a new public IP address or select an existing one. This IP address will serve as the public cloud\u0026rsquo;s VPN endpoint. Select \u0026ldquo;VPN\u0026rdquo; for the gateway type and then in the VPN type select \u0026ldquo;Policy-based\u0026rdquo;. As with any Azure service you\u0026rsquo;ll need to select the subscription for proper billing services. Select a resource group and location.\nWait! It may take up to twenty minutes to deploy the virtual network gateway, so be patient. When your virtual network gateway is deployed open up your gateway menu and click \u0026ldquo;Connections\u0026rdquo;. Add a new connection to add your VPN information. Here you\u0026rsquo;ll need to name the VPN tunnel and choose the connection type. Since this is going to connect to a Cisco ASA Firewall in my lab, I\u0026rsquo;m choosing Site-to-Site (IPSec). Select the network gateway if not already selected, and add a new local network gateway.\nThe local network gateway is not an Azure object to be deployed. It is a reference to your on-premises VPN endpoint. In my case this is the public IP address of the firewall in my lab. I\u0026rsquo;ve blurred out the IP Address for obvious reasons. After adding the public IP Address, add the local IP space in your on-premises network. This is the internal network that will be accessible over the tunnel.\nOnce done, go back to your Connection menu and finish adding a PreShared Key for authentication purposes, the subscription to use and a resource group.\nOn-Premises VPN Setup Your mileage will vary here, but this is my setup through a Cisco ASA 5505 Firewall. Also, I\u0026rsquo;m using the wizard (Don\u0026rsquo;t judge me). I started the VPN wizard and the first question asks where the Peer IP Address is. This is the public IP Address of the virtual gateway we deployed earlier. If you don\u0026rsquo;t remember what that is, check the properties of your virtual network gateway in the Azure portal.\nNext, I\u0026rsquo;m asked which local and remote network should be part of the IPsec encryption. 
On my lab side, I want the 10.10.50.0/24 network, and in the Azure network I need to add my VNet information of 172.30.0.0/16.\nNow we\u0026rsquo;re asked how to setup authentication. We select the simple configuration and then enter the same pre-shared key that we entered in the Azure setup screen. They must match exactly, including the case. It\u0026rsquo;s a password, they must match.\nNext, I\u0026rsquo;m exempting any network address translation on my internal networks destined for the Azure VNet. Then review the summary and finish the wizard. The Results After the VPN is configured, I can check the IPSec sessions in my ASA and see that I\u0026rsquo;ve got a successful connection.\nIn Microsoft Azure, I can look at the VPN and will see that I have a \u0026ldquo;Connected\u0026rdquo; state along with data in and out. I can deploy a virtual machine in azure now and connect to it through the VPN tunnel in my home lab.\nWhen connecting to my Linux VM in my Azure VNet, I\u0026rsquo;m able to ping my lab network over the VPN even though I can\u0026rsquo;t ping any public IP Address.\nTroubleshooting Tips A couple of notes if you had issues.\nIf you have a successful tunnel created, but can\u0026rsquo;t connect to the machine, check to see if you have a security group on the subnets. Remember that they act as a firewall on your networks and may not allow SSH for example. Make sure you waited long enough for the Virtual Network Gateway to be deployed in Azure. It takes a little while to deploy. Make sure your pre-shared key matches, local and remote networks match on both sides and public IP Addresses are correct. ","permalink":"https://theithollow.com/2016/08/08/create-azure-vpn-connection/","summary":"\u003cp\u003eUnless you\u0026rsquo;re starting up a company from scratch, you probably won\u0026rsquo;t host all of your workloads in a public cloud like Microsoft Azure. If you\u0026rsquo;re building a hybrid cloud, you probably want to have network connectivity between the two clouds and that means a VPN. Microsoft Azure uses a Virtual Network Gateway to provide this connectivity.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eNOTE: As of the writing of this blog post, Microsoft has two portals that can be used to provide cloud resources. The Classic portal and the Azure Resource Manager portal. This post focuses on setting up a VPN tunnel using the new Azure Resource Manager portal.\u003c/p\u003e","title":"Create Azure VPN Connection"},{"content":"An Azure network security group is your one stop shop for access control lists. Azure NSGs are how you will block or allow traffic from entering or exiting your subnets or individual virtual machines. In the new Azure Resource Manager Portal NSGs are applied to either a subnet or a virtual NIC of a virtual machine, and not the entire machine itself.\nNOTE: At the time of this post, Azure has a pair of Azure portals, including the classic portal where NSGs are applied to a virtual machine, or the Resource Manager Portal where NSGs are applied to a VNic of a virtual machine.\nHere are some notes that you should know about Azure Network Security Groups.\nThe NSGs in Azure are Stateful. Meaning that if you open an incoming port, the outgoing port will be open automatically to allow the traffic. The default rules in a Network Security Group allow for outbound access and inbound access is denied by default. Access within the VNet is allowed by default. Like normal ACLs the rules are processed based on a priority. 
An NSG can only be used in the Azure region in which it was created. There is a soft limit of 100 NSGs per subscription and a soft limit of 200 rules per NSG. Create NSG To create a network security group in the Azure Resource Manager browse to the \u0026ldquo;Network Security Groups\u0026rdquo; section in the ARM Portal. Click the Add button to create a new one.\nGive the NSG a name that will be descriptive for you and select the subscription that it will belong with. Be sure to use a name that won\u0026rsquo;t need to be repeated; an NSG name must be unique within the region. Then, select a resource group that it should belong to, or create a new one and then select the location or region. Remember that the resource group can be exported to deploy later on, so this way the access control lists can be deployed with the networks and virtual machines later on.\nCreate Rules Once the NSG has been created, locate the NSG and go to the properties. Select either Inbound or Outbound security rules to add your customized rules. Click Add in the security rules page.\nGive the rule a name, typically a protocol, port or application name. Then give the rule a priority; remember, the lower the priority number, the earlier the rule is evaluated. It\u0026rsquo;s also a good idea to leave space between rules so that you can insert new rules later. Don\u0026rsquo;t place your priorities at 10,11,12 because you may need to inject a rule between rules 10 and 11 but won\u0026rsquo;t be able to. A better method would be adding them as 10,20,30 so that there is room between them.\nNext, select a source IP address, CIDR block or Azure Tag that the traffic will be coming from, assuming you\u0026rsquo;re doing an inbound security rule creation. Then select the protocol, source port range and the destination if you need to specify them. Lastly select the destination port range and either allow or deny the traffic.\nAssociate NSG Once your Network Security Group is created and configured with your desired rules, you will associate that NSG with a subnet or network interfaces. Under the NSG properties select either the \u0026ldquo;Network interfaces\u0026rdquo; or \u0026ldquo;Subnets\u0026rdquo;.\nIf you\u0026rsquo;ve selected the Network interfaces you\u0026rsquo;ll click the \u0026ldquo;Associate\u0026rdquo; button to connect the NSG with a virtual machine NIC. That NSG will stay with the NIC so if you move it to a different virtual machine later, the NSG will follow it.\nIf you selected \u0026ldquo;subnets\u0026rdquo;, you\u0026rsquo;ll click the \u0026ldquo;Associate\u0026rdquo; button to connect the NSG with an Azure subnet.\nSummary Network security groups are in place to provide access controls to your virtual machines. These can be set at the subnet level to affect all of the virtual NICs attached in those subnets, or directly on the NICs themselves. These can be used to segment a tiered application, setup DMZs or just limit the access to the outside world so they should be planned out carefully for your new Azure networks.\n","permalink":"https://theithollow.com/2016/08/03/azure-network-security-groups/","summary":"\u003cp\u003eAn Azure network security group is your one stop shop for access control lists. Azure NSGs are how you will block or allow traffic from entering or exiting your subnets or individual virtual machines. 
In the new Azure Resource Manager Portal NSGs are applied to either a subnet or a virtual NIC of a virtual machine, and not the entire machine itself.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eNOTE: At the time of this post, Azure has a pair of Azure portals, including the classic portal where NSGs are applied to a virtual machine, or the Resource Manager Portal where NSGs are applied to a VNic of a virtual machine.\u003c/p\u003e","title":"Azure Network Security Groups"},{"content":"Setting up networks in Microsoft Azure is a pretty simple task, but care should be taken when deciding how the address space will be carved out. To get started, let\u0026rsquo;s cover a couple of concepts about how Azure handles networking. To start we have the idea of a \u0026ldquo;VNet\u0026rdquo; which is the IP space that will be assigned to smaller subnets. These VNets are isolated from each other and the outside world. If you want your VNet to communicate with another VNet or your on-premises networks, you\u0026rsquo;ll need to setup a VPN tunnel. You might be wondering, how do you do any segmentation between servers without having to setup a VPN then? The answer there is using subnets. Multiple subnets can be created inside of a VNet and security groups can be added to them so that they only allow certain traffic, sort of like a firewall does.\nThe example below shows a pair of VNets each with their own address space of 172.3X.0.0/16. The two VNets can\u0026rsquo;t communicate with each other without a VPN tunnel. However, inside of the left VNet there are two subnets named Management and Workloads and they are carved out of the larger VNet address space. These two subnets can communicate by default, with no additional configuration needed. Traffic between subnets is allowed and IP routes are automatically added by default. The last subnet is a Gateway Subnet and this is created when you deploy a Virtual Network Gateway, to be covered in another post.\nNOTE: At the time of this writing, Microsoft has two separate portals for managing Azure, Classic and the Resource Manager Portal. The setup of the VNets in the rest of this post is based on the new Azure Resource Manager Portal.\nSetup To setup your first VNet, login to your Azure Portal and browse to \u0026ldquo;Virtual Networks\u0026rdquo;. Click the Add button.\nGive the VNet a descriptive name and then enter an Address space. Remember here that the subnets that you create will be a subset (should go without saying based on the name) of the larger VNet Address space. Also remember that you don\u0026rsquo;t want this address space to overlap with other VNets, or your on-premises network if you plan to connect them together at a later date. So, take care about what IP space you use here.\nThe next box to fill out will be a Subnet Name. When you create a new VNet you have to create at least one subnet with it so we\u0026rsquo;re doing that as part of the VNet setup. You can add more later if you wish. Give the subnet a descriptive name and then an address range, again being a subset of your larger Address space.\nAs with most of the concepts in Azure, select which subscription the VNet belongs to and then select an existing Resource Group or create a new one for now. We\u0026rsquo;ll cover Resource groups in another post. Lastly select your location.\nSummary You\u0026rsquo;re well on your way to deploying workloads on Azure now. We\u0026rsquo;ve got a VNet setup and a subnet to deploy virtual machines on. 
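If you prefer to script the build instead of clicking through the portal, a rough PowerShell equivalent of the VNet and subnet described above might look like the following sketch; the names, resource group, location, and address ranges are placeholders to match your own plan.
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name \u0026#34;Workloads\u0026#34; -AddressPrefix \u0026#34;172.30.1.0/24\u0026#34;   # subnet carved out of the VNet space
New-AzureRmVirtualNetwork -Name \u0026#34;HollowVNet\u0026#34; -ResourceGroupName \u0026#34;MyResourceGroup\u0026#34; -Location \u0026#34;East US\u0026#34; -AddressPrefix \u0026#34;172.30.0.0/16\u0026#34; -Subnet $subnet   # the VNet with its first subnet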
In a future post we\u0026rsquo;ll add some additional connectivity to make this subnet more usable.\n","permalink":"https://theithollow.com/2016/08/01/setup-azure-networks/","summary":"\u003cp\u003eSetting up networks in Microsoft Azure is a pretty simple task, but care should be taken when deciding how the address space will be carved out. To get started, let\u0026rsquo;s cover a couple of concepts about how Azure handles networking. To start we have the idea of a \u0026ldquo;VNet\u0026rdquo; which is the IP space that will be assigned to smaller subnets. These VNets are isolated from each other and the outside world. If you want your VNet to communicate with another VNet or your on-premises networks, you\u0026rsquo;ll need to setup a VPN tunnel. You might be wondering, how do you do any segmentation between servers without having to setup a VPN then? The answer there is using subnets. Multiple subnets can be created inside of a VNet and security groups can be added to them so that they only allow certain traffic, sort of like a firewall does.\u003c/p\u003e","title":"Setup Azure Networks"},{"content":"The use cases here are open for debate, but you can setup a serverless call to vRealize Orchestrator to execute your custom orchestration tasks. Maybe you\u0026rsquo;re integrating this with an Amazon IoT button, or you want voice deployments with Amazon Echo, or maybe you\u0026rsquo;re just trying to provide access to your workflows based on a CloudWatch event in Amazon. In any case, it is possible to setup an Amazon Lambda call to execute a vRO workflow. In this post, we\u0026rsquo;ll actually build a Lambda function that executes a vRO workflow that deploys a CentOS virtual machine in vRealize Automation, but the workflow could really be anything you want.\nIf you\u0026rsquo;re not familiar with AWS Lambda, it is a service that allows you to execute Python, Node.js, and Java functions without having to have a server to run them on, or \u0026ldquo;serverless\u0026rdquo;. The functions can be used for about anything you\u0026rsquo;d like to use them for.\nArchitecture To get started, let\u0026rsquo;s take a look at the architecture involved here. A lambda function is built in Amazon Web Services within a VPC. By default, lambda functions are executed against publicly accessible objects unless you place them within a VPC where they can access your own resources. In our case, the lambda function is created in a VPC that has a VPN connection to my on-premises vRealize Orchestrator Instance. The vRO workflow that I\u0026rsquo;m calling requests a vRealize Automation deployment of a CentOS virtual machine. From there I could deploy to a variety of clouds, or the workflow could be doing something else like adding an Active Directory User or creating a snapshot on a virtual machine. The world is your oyster.\nBuild Lambda Function Before you create the function call, go to Github to download the lambda function that you\u0026rsquo;ll update. You\u0026rsquo;ll find a zip file in the repo that will have a python script in it called \u0026ldquo;vro-Lambda.py\u0026rdquo;. Open up this file to make changes to your environment variables.\nThe code is shown below, and you\u0026rsquo;ll need to enter in your own environment variables to call whatever vRO workflow you\u0026rsquo;re trying to execute. Notice that you\u0026rsquo;ll need to update the username and password used to login to the vRO instance, the vRO IP Address or DNS name if available in AWS, and also the workflow GUID that should be executed. 
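Under the hood, the function is making an authenticated REST call to vRealize Orchestrator to start that workflow. If you want to sanity check the call outside of Lambda first, a rough PowerShell sketch looks like this; the host name, port, and GUID are placeholders, and you may need to trust the vRO SSL certificate before the request succeeds.
$cred = Get-Credential   # vRO username and password
$vroUri = \u0026#34;https://vro.lab.local:8281/vco/api/workflows/11111111-2222-3333-4444-555555555555/executions\u0026#34;   # placeholder vRO host and workflow GUID
Invoke-RestMethod -Method Post -Uri $vroUri -Credential $cred -ContentType \u0026#34;application/json\u0026#34; -Body \u0026#39;{\u0026#34;parameters\u0026#34;:[]}\u0026#39;   # start the workflow with no inputs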
Depending on your workflow, you can also update the payload data to provide additional inputs, but the workflow I\u0026rsquo;m executing doesn\u0026rsquo;t require any inputs. When you\u0026rsquo;re done modifying the file, make sure you create a new ZIP file and name it \u0026ldquo;lambda-vRO.zip\u0026rdquo; just as it was before you open the compressed file.\nCreate a Lambda Role Before we can execute a lambda call within a VPC, we need to create a role that will give the function enough permissions to execute. To do this, go to the IAM section of your AWS instance and create a new role.\nWhen the create new role wizard opens, give the role a name. I\u0026rsquo;ve named my role \u0026ldquo;lambda-VPC\u0026rdquo;.\nNext, select the role type. Here we\u0026rsquo;re looking for the \u0026ldquo;AWS Lambda\u0026rdquo; role type.\nNext, select the policy that has the correct permissions. In this case we\u0026rsquo;re looking for the \u0026ldquo;AWSLambdaVPCAccessExecutionRole\u0026rdquo; policy.\nOn the Review screen, check it to make sure it looks correct and then create the role. Deploy the Lambda Code Now we create the lambda function that will enable us to execute the vRealize Orchestrator Workflow. To get started access the lambda section of the AWS portal and click \u0026ldquo;Create a Lambda function\u0026rdquo; button.\nOn the next few screens you\u0026rsquo;ll see blueprints and other things that are able to quickly get you started. We don\u0026rsquo;t need any of that stuff so you can click next until you get to the \u0026ldquo;Configure function\u0026rdquo; section. Here you\u0026rsquo;ll start the real work. Give the function a name and a description and select \u0026ldquo;Python 2.7\u0026rdquo; as the Runtime. Under the Lambda function code section leave the \u0026ldquo;Code entry type\u0026rdquo; field as \u0026ldquo;Upload a .ZIP file\u0026rdquo;. After this, you\u0026rsquo;ll need to upload a ZIP file with all of our request information in it and the function call. This is the lambda-vRO.zip file that we created in a previous section of this post.\nFurther down the page we\u0026rsquo;ll have to enter the handler information. The handler should be the zip file name of the function and then the handler name separated by a period. So for us it should be vro-Lambda.vro_handler. For the \u0026ldquo;Role\u0026rdquo; field leave it at \u0026ldquo;Choose an existing role\u0026rdquo; and then the existing role name should be the name of the role we created in the above section. Under the advanced settings section, we can leave the Memory (MB) fields and the timeout fields at their defaults. But in the VPC sections we need to select which VPC the lambda code will run in, and the subnets that its able to run within. For high availability purposes two subnets should be selected. Under security groups, select a security group with access to your vRealize Orchestrator instance. As you can see in my inbound rules I\u0026rsquo;m allowing access to this subnet from my 10.10.0.0/16 network which is my home lab over the VPN, my outbound rules are just as open as the inbound rules. Your mileage may vary here. Test your code. Now in the Input test event editor of the lambda function, I\u0026rsquo;ve left the values as {} because I\u0026rsquo;m not providing any inputs. 
When you run the test, you\u0026rsquo;ll see a return value of \u0026ldquo;null\u0026rdquo; and some info about the request.\nIf you look in your vRO instance, you\u0026rsquo;ll see a token object of the request that has been executed.\nIn my vRA instance I can see a CentOS machine being provisioned. Summary This post should give you an understanding of how Lambda functions are setup within a VPC. I\u0026rsquo;ve chosen to write a script to call a vRO workflow, but you could use Lambda for just about anything you wanted a script for. I\u0026rsquo;d love to hear how you\u0026rsquo;re using Lambda and if you\u0026rsquo;ve got a specific use case for calling your vRO workflows from the AWS portal. Happy Coding.\n","permalink":"https://theithollow.com/2016/07/26/vro_from_aws_lambda/","summary":"\u003cp\u003eThe use cases here are open for debate, but you can setup a serverless call to vRealize Orchestrator to execute your custom orchestration tasks. Maybe you\u0026rsquo;re integrating this with an \u003ca href=\"http://amzn.to/2a0VHhe\"\u003eAmazon IoT button\u003c/a\u003e, or you want voice deployments with \u003ca href=\"http://amzn.to/2a0VFG8\"\u003eAmazon Echo\u003c/a\u003e, or maybe you\u0026rsquo;re just trying to provide access to your workflows based on a CloudWatch event in Amazon. In any case, it is possible to setup an Amazon Lambda call to execute a vRO workflow. In this post, we\u0026rsquo;ll actually build a Lambda function that executes a vRO workflow that deploys a CentOS virtual machine in vRealize Automation, but the workflow could really be anything you want.\u003c/p\u003e","title":"Execute vRO Workflow from AWS Lambda"},{"content":"It\u0026rsquo;s about time to head to the US VMworld conference again and this year it\u0026rsquo;s in Las Vegas, Nevada. VMworld is always a time that is full of excitement for virtualization junkies. Will there be new product announcements that will disrupt the established virtual design principles? Will a new product vendor make a big splash at the event? Can I learn brand new ways to enable my company? All of these questions build the anticipation for the event.\nThis year I\u0026rsquo;ll be presenting a session with Chris Wahl, author of wahlnetwork.com and Technical Evangelist for Rubrik. If you are filling out your VMworld schedule and are looking for anything related to backups, automation or just learning how vRealize Automation or Orchestrator work, be sure to check out session MGT8714-SPO.\nI hope to see you on Tuesday August 30th from 4pm - 5pm Las Vegas time! Stay tuned for more details about the session and go check out Chris\u0026rsquo;s blog as well.\n","permalink":"https://theithollow.com/2016/07/25/vmworld-2016-sessions/","summary":"\u003cp\u003eIt\u0026rsquo;s about time to head to the US VMworld conference again and this year it\u0026rsquo;s in Las Vegas, Nevada. VMworld is always a time that is full of excitement for virtualization junkies. Will there be new product announcements that will disrupt the established virtual design principles? Will a new product vendor make a big splash at the event? Can I learn brand new ways to enable my company? All of these questions build the anticipation for the event.\u003c/p\u003e","title":"VMworld 2016 Sessions"},{"content":"I was asked to provide the PowerPoint deck used in the 2016 Indianapolis VMUG Conference Keynote. If you are interested in this presentation, it can be found here. 
Indy-Keynote v6 ","permalink":"https://theithollow.com/2016/07/21/indianapolis-vmug-keynote-2016/","summary":"\u003cp\u003eI was asked to provide the PowerPoint deck used in the 2016 Indianapolis VMUG Conference Keynote. If you are interested in this presentation, it can be found here. \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2016/07/Indy-Keynote-v6.pptx\"\u003eIndy-Keynote v6\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2016/07/Spilled.png\"\u003e\u003cimg alt=\"Spilled\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2016/07/Spilled.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"Indianapolis VMUG Keynote 2016"},{"content":"Following the posts in order, this guide should help you to understand and get familiar with Microsoft Azure. This is a guide to getting started with Azure that you can build upon to deploy your own public cloud environment. Azure Accounts and Subscriptions Azure Active Directory Integration Azure Resource Groups Setup Azure Networks Azure Network Security Groups Create Azure VPN Connection Azure Storage Accounts Setup Azure PowerShell Azure Virtual Machine Deployment Azure Network Interfaces Azure Cloud Services Azure Scale Sets Understanding the Multiple Azure Portals Using Azure Automation Microsoft Azure Official Links Azure Resource Manager Portal - https://portal.azure.com Azure Classic Portal - http://manage.windowsazure.com Microsoft Azure Documentation and Resources - https://azure.microsoft.com\nBlogs and Videos focusing on Microsoft Azure Microsoft Azure Official Blog - https://azure.microsoft.com/en-us/blog/ Azure Friday Videos - https://channel9.msdn.com/Shows/Azure-Friday/ Azure Tutorials - https://channel9.msdn.com/Series/Windows-Azure-Cloud-Services-Tutorials/\nPeople to follow on twitter for more Microsoft Azure related goodies Haishi Bai - @haishibai2010\nMicrosoft Channel 9 - @ch9\n","permalink":"https://theithollow.com/2016/07/18/guide-getting-started-azure/","summary":"\u003cp\u003eFollowing the posts in order, this guide should help you to understand and get familiar with Microsoft Azure. This is a guide to getting started with Azure that you can build upon to deploy your own public cloud environment. 
\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2016/07/Azure-Guide.png\"\u003e\u003cimg alt=\"Azure Guide\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2016/07/Azure-Guide.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"azure-accounts-and-subscriptions\"\u003e\u003ca href=\"http://wp.me/p32uaN-1J8\"\u003eAzure Accounts and Subscriptions\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"azure-active-directory-integration\"\u003e\u003ca href=\"/2016/06/27/setup-azure-ad-connector/\"\u003eAzure Active Directory Integration\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"azureresource-groups\"\u003e\u003ca href=\"http://wp.me/p32uaN-1Iz\"\u003eAzure Resource Groups\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"setup-azure-networks\"\u003e\u003ca href=\"/2016/08/01/setup-azure-networks/\"\u003eSetup Azure Networks\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"azure-network-security-groups\"\u003e\u003ca href=\"/2016/08/03/azure-network-security-groups/\"\u003eAzure Network Security Groups\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"createazure-vpnconnection\"\u003e\u003ca href=\"http://wp.me/p32uaN-1In\"\u003eCreate Azure VPN Connection\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"azure-storage-accounts\"\u003e\u003ca href=\"/2016/08/11/azure-storage-accounts/\"\u003eAzure Storage Accounts\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"setup-azure-powershell\"\u003e\u003ca href=\"/2016/08/15/get-started-azure-powershell/\"\u003eSetup Azure PowerShell\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"azure-virtual-machine-deployment\"\u003e\u003ca href=\"/2016/08/23/deploying-virtual-machines-microsoft-azure/\"\u003eAzure Virtual Machine Deployment\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"azure-network-interfaces\"\u003e\u003ca href=\"/2016/09/06/azure-network-interfaces/\"\u003eAzure Network Interfaces\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"azure-cloud-services\"\u003e\u003ca href=\"/2016/09/07/azure-cloud-services/\"\u003eAzure Cloud Services\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"azure-scale-sets\"\u003e\u003ca href=\"/2016/10/03/azure-scale-sets/\"\u003eAzure Scale Sets\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"understanding-the-multiple-azure-portals\"\u003e\u003ca href=\"/2016/09/12/microsoft-azure-portals/\"\u003eUnderstanding the Multiple Azure Portals\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"using-azure-automation\"\u003e\u003ca href=\"/2016/09/19/get-started-azure-automation/\"\u003eUsing Azure Automation\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"microsoft-azure-official-links\"\u003eMicrosoft Azure Official Links\u003c/h1\u003e\n\u003cp\u003e\u003cstrong\u003eAzure Resource Manager Portal -\u003c/strong\u003e \u003ca href=\"https://portal.azure.com\"\u003ehttps://portal.azure.com\u003c/a\u003e \u003cstrong\u003eAzure Classic Portal\u003c/strong\u003e - \u003ca href=\"http://manage.windowsazure.com\"\u003ehttp://manage.windowsazure.com\u003c/a\u003e \u003cstrong\u003eMicrosoft Azure Documentation and Resources -\u003c/strong\u003e \u003ca href=\"https://azure.microsoft.com\"\u003ehttps://azure.microsoft.com\u003c/a\u003e\u003c/p\u003e","title":"Guide to Getting Started with Azure"},{"content":"An Azure resource group is a way for you to, you guessed it, group a set of resources together. This is a useful capability in a public cloud so that you can manage permissions, set alerts, built deployment templates and audit logs on a subset of resources. 
Resource groups can contain, virtual machines, gateways, VNets, VPNs and about any other resource Azure can deploy.\nMost items that you create will need to belong to a resource group but an item can only belong to a single resource group at a time. Resources can be moved from one resource group to another.\nCreate a Resource Group To add a resource group navigate to the \u0026ldquo;Resource groups\u0026rdquo; object and click the Add button. Give the Resource Group a name and assign it to a subscription. Lastly enter a location and click submit.\nYou can click on your resource group to see the list of resources assigned to it.\nAssign Permissions One of the uses for resource groups is to assign permissions to the resources. You can assign a user limited access to a group of resources all at once. Go to the resource groups settings and select Users. Click Add.\nSelect a role.\nThen select the user accounts to assume the role.\nAdd a Lock Locks are useful to restrict permissions on a set of resources. Locks can prevent items from being deleted and a read-only lock can prevent changes to the resource group.\nThe benefits to resource groups goes on. You can tag resource groups and view costs for a resource group which might be a subset of a subscription. Alerts can be assigned to a group of resources and the most useful is an export template. Assume that you have a tiered application that requires several servers. Build the servers out once, add them to a resource group and then use the \u0026ldquo;Export template\u0026rdquo; option to then use it later on to redeploy it all from a script.\n","permalink":"https://theithollow.com/2016/07/18/azure-resource-groups/","summary":"\u003cp\u003eAn Azure resource group is a way for you to, you guessed it, group a set of resources together. This is a useful capability in a public cloud so that you can manage permissions, set alerts, built deployment templates and audit logs on a subset of resources. Resource groups can contain, virtual machines, gateways, VNets, VPNs and about any other resource Azure can deploy.\u003c/p\u003e\n\u003cp\u003eMost items that you create will need to belong to a resource group but an item can only belong to a single resource group at a time. Resources can be moved from one resource group to another.\u003c/p\u003e","title":"Azure Resource Groups"},{"content":"Azure is a great reservoir of resources that your organization can use to deploy applications upon and the cloud is focused around pooling resources together. However, organizations need to be able to split resources up based on cost centers. The development team will be using resources for building new apps, as well as maybe an e-commerce team for production uses. Subscriptions allow for a single Azure instance to separate these costs, and bill to different teams.\nWhen you signed up to try out Azure, you created at least one subscription to get started. Every resource that is deployed must be associated with only one subscription so that billing for that item can be processed. When I signed up for my account there was a free-trial going on so I have two subscriptions.\nThe screenshot below shows that I have a pair of subscriptions in use.\nWhen we drill down into each of these subscriptions, we can see that they show statistics about the billing of those subscriptions. Stats like which objects are using the resources, and the burn rate to see how the costs have changed over time. 
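The same subscription information is also visible from PowerShell if you have the AzureRM module loaded; a quick sketch follows, where the subscription name is a placeholder.
Login-AzureRmAccount   # sign in
Get-AzureRmSubscription   # list every subscription tied to the account
Get-AzureRmSubscription -SubscriptionName \u0026#34;Pay-As-You-Go\u0026#34; | Select-AzureRmSubscription   # make one subscription the default for this session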
We\u0026rsquo;re also able to further break down what resources and resource groups are part of the subscription. We can also assign permissions to this subscription object. One thing you\u0026rsquo;ll notice is missing is the \u0026ldquo;Add a Subscription\u0026rdquo; button.\nNOTE: At the time of this writing, Microsoft has a pair of Azure portals including the new Azure Resource Manager portal and the Azure Classic portal. Not all abilities are in the new portal yet.\nAdd a New Subscription To get to the subscription additions page you must go to the account center. https://account.windowsazure.com/Home/Index The Account page will allow the Azure administrator to add additional subscriptions. Go to the Subscription tab and then click the \u0026ldquo;add subscription\u0026rdquo; link on that page.\nThe next screen gives you the opportunity to select an \u0026ldquo;offer\u0026rdquo; which equates to a payment plan. You can pick \u0026ldquo;Pay-As-You-Go to pay monthly with a credit card or a prepaid plan.\nI selected the Pay-As-You-Go plan which required me to enter a credit card number and accept the agreement and privacy statements. When your subscription build is complete you\u0026rsquo;ll get a message about managing your subscriptions. If you click the link, it will take you back to the new Azure Resource Manager portal.\nManage Subscription Security Sometimes you need to modify the person that is the administrator for the subscription. By default, the account administrator (User who setup the Azure Account) is also the Service Administrator for the subscription. The Service Administrator has admin access to the subscription including modifying the directory connected to the subscription. If you want to, you can add a co-administrator which has all of the administrative permissions of the subscription except for modifying the directories connected to the subscription. This prevents the co-administrator from modifying which sets of users are connected to that subscription.\nTo do this go to the classic portal at: https://manage.windowsazure.com and go to settings. Select the Administrators tab and then select the subscription to manage.\nEnter the co-administrator that you want to add to the subscription and then click the check box.\nWhen you\u0026rsquo;re done, you\u0026rsquo;ll notice that your new user has been added an is listed as a Co-Administrator.\nYou can also add the user to the subscription through the new Azure Resource Manager portal but won\u0026rsquo;t see \u0026ldquo;Co-Administrator\u0026rdquo; anywhere. Go to Subscriptions, and add a user. Select the role of \u0026ldquo;Owner\u0026rdquo; and then your user.\nWhen you\u0026rsquo;re done you\u0026rsquo;ll see a list of users again and your new user will show as \u0026ldquo;Owner\u0026rdquo; which gives it the same rights. You\u0026rsquo;ll also notice \u0026ldquo;Subscription Admins\u0026rdquo; which includes all Service Administrators and Co-Administrators. They of course also have \u0026ldquo;Owner\u0026rdquo; rights. ","permalink":"https://theithollow.com/2016/07/11/azure-subscriptions/","summary":"\u003cp\u003eAzure is a great reservoir of resources that your organization can use to deploy applications upon and the cloud is focused around pooling resources together. However, organizations need to be able to split resources up based on cost centers. The development team will be using resources for building new apps, as well as maybe an e-commerce team for production uses. 
Subscriptions allow for a single Azure instance to separate these costs, and bill to different teams.\u003c/p\u003e","title":"Azure Subscriptions"},{"content":"Join me on July 20th in Indianapolis Indiana for a day of fun and learning at the annual Indianapolis VMware Users Group Conference! For those of you not familiar with VMUG, its an independent customer-led organization created to maximize members\u0026rsquo; use of VMware and partner solutions through knowledge sharing, training, collaboration and events. VMUG is the largest organization worldwide focused on virtualization users. Don\u0026rsquo;t worry if you just want the day off from work, that\u0026rsquo;s just one of the benefits, but BE SURE TO REGISTER FOR FREE HERE: https://www.vmug.com/p/cm/ld/fid=13570 VMUGs are available all over the world, but the Indianapolis VMUG conference is one of the largest. There will be opportunities to learn from peers, partners and industry experts throughout the day through panels and sessions. Be sure to get their early to listen to the Keynote provided by yours truly, entitled \u0026quot; You Spilled Your Cloud in My DevOps.\u0026quot; The Full Event Schedule can be found below, or on the Indy VMUG event page. There is also an app for Android and iPhone so you will always know the updates on the day of the event.\nEvent Agenda TIMEEVENTTRACK****LOCATION7:45 AM - 8:20 AMRegistration | BreakfastRegistrationSagamore Ballroom Foyer8:20 AM - 8:35 AMVMUG WelcomeGeneral SessionSagamore Ballroom 1-38:35 AM - 9:00 AMVMware Update - Adam Eckerle, VMwareGeneral SessionSagamore Ballroom 1-39:00 AM - 9:45 AMKeynote Presentation: You Spilled Your Cloud in My DevOps - Eric ShanksGeneral SessionSagamore Ballroom 1-39:45 AM - 10:10 AMBreak | Mingle with Partners | Demo ZoneBreakSagamore Ballroom 4-79:45 AM - 1:00 PMUserCon Career ConsultCommunitySagamore Ballroom 4-710:10 AM - 10:50 AMConverged Platforms, Building Blocks for the Modern Data Center, EMCData Center Management 201Achieving Cloud-Based IT Resilience for Virtualized Environments - Mike Dupuy, AxcientHybrid Cloud \u0026amp; Emerging Technologies202First Look @ vRealize Network Insight, Formerly Arkin - Sean O\u0026rsquo;Dell, VMwareNetworking \u0026amp; Security203Demystifying Backup – 4 Tips When Considering a Backup Solution, Hewlett Packard EnterpriseStorage \u0026amp; Availability20418 Things NOT To Do with Your vSphere Backups - Joe Morton, VeeamvSphere \u0026amp; Virtualization205vCenter 6.0 Deployment \u0026amp; Availability - Adam Eckerle, VMwarevSphere \u0026amp; Virtualization206vRealize Suite Standard: Intelligent Operations - Jack White, VMwareData Center Management208Community Session: SaaS is an Important Part of Your Cloud Strategy - Gina RosenthalCommunity20910:50 AM - 11:00 AMBreak | Mingle with PartnersBreakSagamore Ballroom 4-711:00 AM - 11:40 AMVirtual Volumes Backstory and Benefits - Andy Banta, NetApp SolidFireData Center Managment201What Is Your Cloud Strategy? 
- Jason Nizich, Data StrategyHybrid Cloud \u0026amp; Emerging Technologies 202Addressing Latency, IT\u0026rsquo;s Menace - Brent Earls, MirazonOther203Five Requirements of VM-Aware Storage - TintriStorage \u0026amp; Availability204State of the Union of Converged Infrastructure - SISOther205Troubleshooting Storage Performance - Adam Osterholt, VMwarevSphere \u0026amp; Virtualization 206The Practical Path Using VMware NSX - Brad Christian, VMware 208Community Session: Design Fundamentals - Building a Solid Foundation for Your EUC Environment - Sean MasseyCommunity20911:40 AM - 11:50 AMBreak | Mingle with PartnersBreakSagamore Ballroom 4-711:50 AM - 12:30 PMThe Application-Centered Approach: A Primer to Modern User Environment Management - Sean Massey, AHEADEUC Desktop Virtualization201Why You Need DR - Greg Reasner, ZertoData Center Management 202Mastering PowerShell to Call RESTful API Endpoints - Chris Wahl, RubrikHybrid Cloud \u0026amp; Emerging Technologies203BYOD, The Silent Digital Disruptor - Scott DeShong, PresidioOther204Top 10 Things You MUST Know Before Implementing VVols - 2016 Edition, Hewlett Packard EnterpriseStorage \u0026amp; Availability205Virtual SAN 6.2 Technical What\u0026rsquo;s New - Josh Fidel, VMwareStorage \u0026amp; Availability206Community Session: Tales from the Trenches: Upgrading to vSphere to 6.x - Paul Woodward Jr.Community20912:30 PM - 1:00 PMLunch | Mingle with PartnersLunchSagamore Ballroom 1-31:00 PM - 1:50 PMLaughs Over Lunch with Comedian Don McMillanGeneral SessionSagamore Ballroom 1-31:50 PM - 2:00 PMBreak | Mingle with PartnersBreakSagamore Ballroom 4-72:00 PM - 2:40 PMVMs, Containers, and Cloud, Oh My! Controlling and Automating Your Stack and the New Stack, VMTurboData Center Management201Cryptolocker, DR, and End User Oopsies: All in a Minute\u0026rsquo;s Work, SimpliVityHybrid Cloud \u0026amp; Emerging Technologies202Hyperconvergence in Secondary Storage for VMware Environments, CohesityOther203Introducing Cisco HyperFlex – Next Generation Hyperconverged, CiscovSphere \u0026amp; Virtualization204Beyond the Hypervisor. When, Where, and How Flash Improves SAN, SDS, HCI, and Cloud, SanDiskOther205Performance Best Practices for vSphere - G. 
Blair Fritz, VMwarevSphere \u0026amp; Virtualization206Enabling Faster Application Delivery and Unified Management with VMware App Volumes - VMwareEUC Desktop Virtualization208Community Session: The Evolution of iSCSI and VMware - Andy BantaCommunity2092:40 PM - 3:20 PMBreak | Mingle with PartnersBreakSagamore Ballroom 4-72:40 PM - 3:00 PMIndy Bacon WalkBreakSagamore Ballroom 4-73:00 PM - 3:20 PMPartner Giveaways | Exhibits ClosedBreakSagamore Ballroom 4-73:20 PM - 4:00 PMSimple and Secure with Dell Wyse - Cloud Client Computing, Dell Client Cloud ComputingEUC Desktop Virtualization201Virtually In-Scope – Securing PCI Workloads for Hybrid Cloud, Trend MicroHybrid Cloud \u0026amp; Emerging Technologies202How to Protect Your VM Environment from Ransomware - Unified Recovery in Minutes for Backup/DR, QuorumHybrid Cloud \u0026amp; Emerging Technologies203What\u0026rsquo;s New in Log Management and Analytics with vRealize Suite 7 - Jack White, VMwareData Center Management206Highly Secure Horizon View and NSX Design - Brad Christian, VMwareEUC Desktop Virtualization208Automatically Deliver VMware Services to the Right Users, at the Right Place, at the Right Time - Thomas BrownCommunity2094:00 PM - 4:05 PMBreakBreakSagamore Ballroom 4-74:05 PM - 5:00 PMClosing Reception \u0026amp; Giveaways sponsored by _VMware NSX_ReceptionSagamore Ballroom 1-3\nDon\u0026rsquo;t forget there will be plenty of vendors there giving away free stuff and performing demos as well. A large group of partners will be on hand to answer questions and get you familiar with other solutions in the ecosystem. You can\u0026rsquo;t help but to learn something new.\nOne last thing, there will be a group of industry experts on hand to provide career advice to anyone who is looking for some. Are you not sure what to do next? Are you confused about the direction the industry is heading? Stop by and talk to one of the consultants and get a professional headshot photo as well. Great for your Linkedin page for sure.\nLocation Indianapolis Convention Center 100 S. Capitol Avenue Indianapolis, IN 46225\nDate/Time Wednesday July 20, 2016\n8 a.m. to 5 p.m.\n","permalink":"https://theithollow.com/2016/07/08/come-indianapolis-vmug-conference/","summary":"\u003cp\u003eJoin me on July 20th in Indianapolis Indiana for a day of fun and learning at the annual Indianapolis VMware Users Group Conference! For those of you not familiar with VMUG, its an independent customer-led organization created to maximize members\u0026rsquo; use of VMware and partner solutions through knowledge sharing, training, collaboration and events. VMUG is the largest organization worldwide focused on virtualization users. Don\u0026rsquo;t worry if you just want the day off from work, that\u0026rsquo;s just one of the benefits, but \u003cstrong\u003eBE SURE TO REGISTER FOR FREE HERE:\u003c/strong\u003e \u003ca href=\"https://www.vmug.com/p/cm/ld/fid=13570\"\u003ehttps://www.vmug.com/p/cm/ld/fid=13570\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2012/09/vmuglogo.png\"\u003e\u003cimg alt=\"VMUGLogo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2012/09/vmuglogo.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"Join Me At The Indianapolis VMUG Conference!"},{"content":"vRealize Automation lets us publish vRealize Orchestrator workflows to the service catalog, but to get more functionality out of these XaaS blueprints, we can add the provisioned resources to the items list. 
This allows us to manage the lifecycle of these items and even perform secondary \u0026ldquo;Day 2 Operations\u0026rdquo; on these items later.\nFor the example in this post, we\u0026rsquo;ll be provisioning an AWS Security group in an existing VPC. For now, just remember that AWS Security groups are not managed by vRA, but with some custom work, this is all about to change.\nPrerequisites If you\u0026rsquo;re following along at home, ensure that you\u0026rsquo;ve got the following items available.\nAn AWS VPC with already setup AWS vRO Plugin Installed The AWS vRO Plugin is configured to work with your Amazon Account vRealize Automation 7 already installed with basics. For more info, check out my guide. Add a Custom Resource Here we need to define a new type that vRA will understand how to manage. To do this go to the Design tab \u0026ndash;\u0026gt; XaaS \u0026ndash;\u0026gt; Custom Resources. Click the green plus \u0026ldquo;+\u0026rdquo; sign to add a new custom resource. Under the resource type, we need to look for an orchestrator type. NOTE: if you don\u0026rsquo;t have the AWS vRO plugin installed, you won\u0026rsquo;t be able to find the \u0026ldquo;AWS:\u0026rdquo; type listed here. These are objects in vRealize Orchestrator that must exist before you can use them in vRA. I\u0026rsquo;ve added AWS:EC2SecurityGroup as the Orchestrator type and given it a description.\nOn the next screen, the option to modify the form will be available, just like with an XaaS Blueprint. Feel free to modify the form however necessary.\nCreate XaaS Blueprint Creating an XaaS Blueprint is probably nothing new if you\u0026rsquo;ve gone through the vRA guide. Lets go to the XaaS Blueprint section and add a new workflow to create a new AWS Security Group. In the workflow tab, find the \u0026ldquo;Create Security Group\u0026rdquo; workflow from vRealize Orchestrator. Notice that in the right hand pane, the output is \u0026ldquo;AWS:EC2SecurityGroup.\u0026rdquo;\nNext, on the \u0026ldquo;General\u0026rdquo; tab enter a name for the blueprint and a description.\nAgain, we\u0026rsquo;re able to modify the blueprint form format on the next tab.\nThe last tab is the \u0026ldquo;Provisioned Resource\u0026rdquo; tab. Prior to setting up our own custom resource earlier, the only option for this dropdown was \u0026ldquo;No Provisioning.\u0026rdquo; Now we have a new resource called \u0026ldquo;securityGroup.\u0026rdquo; Be sure to select the new resource here so that vRealize Automation knows that the result from the workflow will be the new custom resource in vRA.\nCreate a Custom Action This part is optional if your goal is understanding how custom resources can be provisioned in vRA, but it makes good sense to have an action so that it can be used to remove the security group later on and demonstrate more robust capabilities. Go to the Design Tab \u0026ndash;\u0026gt; XaaS \u0026ndash;\u0026gt; Resource Actions and click the green plus \u0026ldquo;+\u0026rdquo; sign to add a new resource action. In the workflow tab, select the action we will want to be able to perform on our AWS security group. In our case, we select the \u0026ldquo;Delete Security Group.\u0026rdquo;\nNext select the resource type which will be Security Group, our newly created resource type. The input parameter will be group.\nNext give the action a descriptive name as well as a description and version number. As with the rest of our XaaS blueprints we can modify the form. 
In this case there is nothing to add to the form.\nPublish and Entitle For each of the blueprints and actions that we\u0026rsquo;ve built, ensure that they are published. If you need help with publishing catalog items please see the following post for more details.\nNext, go to the Administration tab to add the appropriate entitlements.\nDon\u0026rsquo;t forget to add the \u0026ldquo;Delete Security Group\u0026rdquo; to the entitled actions. If you need more help with entitlements please see the guide post here. Provision Custom Item Lets go and request our item from the blueprint now to see what happen. In the picture below you can see that I have AWS Service and one of my blueprints is \u0026ldquo;Create Security Group.\u0026rdquo; I\u0026rsquo;ll request this blueprint.\nAdd a description for the request and a reason for the request. Standard practice for any normal XaaS Request.\nOn the next tab of the request, we select and enter our information to run the vRO workflow. Such as which AWS account to use, the name of the security group and a description for the security group.\nThe Results Once the blueprint is requested, the vRealize Orchestrator workflow will run and hopefully create the security group. The screenshot below shows the token where the workflow ran successfully. If we take a quick peak over at my AWS console, the security group was created with the name and description that I specified in the request.\nHere\u0026rsquo;s the neat part. If you go to the items tab in vRA, we\u0026rsquo;ll see a new tab called \u0026ldquo;AWS.\u0026rdquo; Click that AWS tab.\nIn the AWS tab, we see a new tab called \u0026ldquo;Security Group\u0026rdquo; and in the items list, we see the Security Group that we just added, only now as an item managed by vRealize Automation!\nSince we created a resource action, we can click on our new item and we\u0026rsquo;ll find an action associated with our security group. This action allows us to delete the group.\nSummary vRealize Automation does a great job of managing servers by default, but not all of the things we do with vRA are server related. XaaS allows us to give our end users access to perform a wide variety of tasks that aren\u0026rsquo;t related to server provisioning. vRA can be extended to allow for other types of objects to be managed through the cloud management portal. Perhaps, adding or disabling AD Users might be another great use case for this extensibility. Can you create those workflows?\n","permalink":"https://theithollow.com/2016/07/05/add-custom-items-vrealize-automation/","summary":"\u003cp\u003evRealize Automation lets us publish vRealize Orchestrator workflows to the service catalog, but to get more functionality out of these XaaS blueprints, we can add the provisioned resources to the items list. This allows us to manage the lifecycle of these items and even perform secondary \u0026ldquo;Day 2 Operations\u0026rdquo; on these items later.\u003c/p\u003e\n\u003cp\u003eFor the example in this post, we\u0026rsquo;ll be provisioning an AWS Security group in an existing VPC. For now, just remember that AWS Security groups are not managed by vRA, but with some custom work, this is all about to change.\u003c/p\u003e","title":"Add Custom Items to vRealize Automation"},{"content":"The cloud doesn\u0026rsquo;t need to be a total shift to the way you manage your infrastructure. Sure, it has many differences, but you don\u0026rsquo;t have to redo everything just to provision cloud workloads. 
One thing you\u0026rsquo;ll probably want to do is connect your Active Directory Domain to your cloud provider so that you can continue to administer one group of users. Face it, you\u0026rsquo;re not going to create a user account in AD, then one in Amazon and then another one in Azure. You want to be able to manage one account and have it affect everything. Microsoft Azure allows you to extend your on-prem domain to the Azure portal. This post focuses on the AD Connector and doing a sync.\nNOTE: Microsoft has recently been moving services from its \u0026ldquo;Classic Portal\u0026rdquo; over to the new Azure portal. However, some resources you\u0026rsquo;ll notice look like they come from an older portal. While this is confusing, Microsoft has added the services to the Azure portal menu, but you may be redirected to a different screen to configure them. Active Directory is one of these services at the time of this blog post.\nAzure Preparation Make sure that you have a routable domain name listed in directory services. To do this, go to your Azure portal login and go to Browse \u0026ndash;\u0026gt; Active Directory\nNOTE: Microsoft recently ported Azure Active Directory over from the Classic Portal to the new Resource Manager portal. Steps in this post should be similar, but screen shots may be different as of 9/27/16 when Azure AD went GA on ARM.\nWhen the Active Directory Screen comes up, go to the Domain\u0026rsquo;s page. Here you\u0026rsquo;ll notice an on.microsoft.com domain which is the default domain added when you setup the service. You can add a domain here as long as you own the domain. I\u0026rsquo;ve added my domain name here and got it verified by following the process located here: Guided Walk Through\nNOTE: You may notice that your account already had a custom domain name added if you\u0026rsquo;re using Office365 for email services. If the account used to setup an Azure account is the same as Office365 account, they will share these resources.\nOnce you\u0026rsquo;ve gotten your domain setup, you can set your primary domain so that your users show up with the custom domain instead of the .onmicrosoft.com default domain thats given to you when you sign up. Install Azure AD Connect Now that Azure is setup and ready, we need to install the Azure AD Connect Utility on your server. The first thing to be done is to download the utility. This utility will give you several options for installation. This post focuses on a directory sync but federation is also an available option.\nBefore you begin, make sure that .Net Framework 4.2.1 or later is installed. I\u0026rsquo;m assuming for this post that you\u0026rsquo;re running the AD Connect installer on a Windows Server 2012R2 server. Run the installer and agree to the license terms and privacy notice. (There may or may not be a quiz on those at the end so be prepared.)\nThe next screen will let you choose whether or not to use a customized install or the express settings. Even if you want to go express, doesn\u0026rsquo;t it make you feel better to use the customized one so you can at least see the pieces being configured? I chose customize.\nThe next screen lets you change the location of the install, whether you want to use an existing SQL database, a service account of your own, and customized sync groups. I modified the SQL server to use my own SQL database, and entered my own service account that the utility will run under. Click Install.\nUnder User Sign-In, select the method that will be used. 
Password Synchronization might not be the best way to manage this for an Enterprise since there will be a bit of a lag between syncs and stores a password hash in the cloud. The benefit is that it\u0026rsquo;s easy to setup. A Federation instance with AD FS may be considered the more secure option for an enterprise so that it keeps all the data on-prem and only passes a token for authentication. Anyway, this post is for a Password Synchronization setup. Click Next.\nNext, enter Azure Credentials that will be used to create a service account. Click Next.\nNext, add your local domain information. A verification step will be completed after the directory is added, to ensure that it has proper login information. Click Next.\nNext you\u0026rsquo;ll need to select the attribute to use as the login. I\u0026rsquo;ve chosen the User Principal Name. Remember when we added our domain to the Azure portal and verified it? The name here has to match one of the UPN suffixes that are in Azure.\nNOTE: If you are like me and had a .local domain this won\u0026rsquo;t work and you\u0026rsquo;ll need to add a UPN suffix to your domain in the Domains and Trusts console.\nClick Next.\nNext, you can select specific OUs to sync or sync everything. I limited my domain sync to only a few OUs. Click Next.\nThe next screen asks how to handle duplicate ids across directories. Since I\u0026rsquo;m in a single forest with users that won\u0026rsquo;t overlap, the default selection works for me. Click Next.\nNext, you can further filter users. For instance if you earlier selected the whole domain to sync, but only want one security group to be synced, you can add that filter here. I\u0026rsquo;ve synchronized all users on this screen because I\u0026rsquo;ve filtered by a couple of specific OUs that will only contain my Azure users. Click Next.\nThe next screen allows for some additional features to be added. Additional filtering can be added through \u0026ldquo;Azure AD app and attribute filtering\u0026rdquo;. \u0026ldquo;Password writeback\u0026rdquo; allows for Azure account passwords to be changed and then sync\u0026rsquo;d back to the on-prem AD. \u0026ldquo;Directory extension attribute sync\u0026rdquo; allows for your AD attributes to be sync\u0026rsquo;d to Azure. Click Next.\nOn the last screen, you\u0026rsquo;ll have the option to start a sync as soon as the configuration completes and/or stage the config so you can see what will happen before the sync occurs. Click Install.\nWait for the configuration to complete.\nThe Results Once the install and configuration is complete, you\u0026rsquo;ll notice a new database named \u0026ldquo;ADSync\u0026rdquo; has been created in your SQL server. Log back into your Azure portal and look at your list of users. There should be more listed now and you should be able to see a \u0026ldquo;Sourced From\u0026rdquo; field that shows where they came from. Also notice that there is an On-Premises Directory Synchronization Service Account as well.\nNow you can login to your Azure portal with the accounts provisioned in your own on-premises Active Directory domain.\n","permalink":"https://theithollow.com/2016/06/27/setup-azure-ad-connector/","summary":"\u003cp\u003eThe cloud doesn\u0026rsquo;t need to be a total shift to the way you manage your infrastructure. Sure, it has many differences, but you don\u0026rsquo;t have to redo everything just to provision cloud workloads. 
One thing you\u0026rsquo;ll probably want to do is connect your Active Directory Domain to your cloud provider so that you can continue to administer one group of users. Face it, you\u0026rsquo;re not going to create a user account in AD, then one in Amazon and then another one in Azure. You want to be able to manage one account and have it affect everything. Microsoft Azure allows you to extend your on-prem domain to the Azure portal. This post focuses on the AD Connector and doing a sync.\u003c/p\u003e","title":"Setup the Azure AD Connector"},{"content":"If you\u0026rsquo;re brand new to Ansible but have some vRealize Automation and Orchestration experience, this post will get you started with a configuration management tool.\nThe goal in this example is to deploy a CentOS server from vRealize Automation and then have Ansible configure Apache and deploy a web page. It assumes that you have no Ansible server setup, but do have a working vRealize Automation instance. If you need help with setting up vRealize Automation 7 take a look at the guide here.\nInstall Ansible\nTo get started, we need Ansible set up. So I deployed a CentOS server and performed my necessary environment configurations like IP addressing, adding SSH, and disabling my firewall (uh, yeah, it\u0026rsquo;s a lab). Next I SSH into the Ansible server and run the following two commands.\nsudo easy_install pip\nsudo pip install ansible\nTHAT\u0026rsquo;S IT!\nSetup Ansible\nTo use Ansible, we need to create an inventory of the servers that the Ansible server should be going out to execute commands on. Create a file called inventory and, if you want to run a test, add a DNS name or IP address of a machine to configure Apache on.\nNow we need to create a playbook. I\u0026rsquo;ve named the file playbook1.yml. This file will be the desired configuration state of the machines listed in inventory. The exact code that I used is listed below. Even without knowing anything about Ansible code, you can probably guess that it\u0026rsquo;s going to ensure httpd is installed via yum, copy some files of mine from the hollowweb directory over to the html directory, and finally make sure the httpd service is started.\n---\n- hosts: all\n  tasks:\n    - name: Install Apache Web Server\n      yum: pkg=httpd state=installed\n      notify:\n        - start httpd\n    # Copy website files\n    - name: Upload Hollow Web Files -index\n      copy: src=hollowweb/index.html dest=/var/www/html\n    - name: Upload Hollow Web Files -image001.png\n      copy: src=hollowweb/image001.png dest=/var/www/html\n  handlers:\n    - name: start httpd\n      service: name=httpd state=started\nWhen you\u0026rsquo;re done, your directory should have an inventory file and a playbook1.yml file. My example also has a hollowweb directory, which is where I\u0026rsquo;m storing my web server files in my vSphere templates. Your actual web server files could reside in GIT, Artifactory or a file share. If you want to test out your configuration now, simply run the following command on your Ansible server.\nansible-playbook -i inventory playbook1.yml\nThe server listed in inventory should install your Apache server and run all the tasks listed earlier.\nNOTE: Ansible is agentless and requires SSH access to the guest machines. This means you\u0026rsquo;ll need to set up your SSH keys and known_hosts files to allow the communication, as sketched below. 
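One way to get that passwordless access working is sketched below. This is a minimal example rather than the exact setup from my lab: the 192.168.1.50 address and the root user are placeholders for whatever hosts and account you put in your inventory file.

# Generate a key pair on the Ansible server (no passphrase, so automation can run unattended)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
# Copy the public key to a managed host; repeat for each host in your inventory
ssh-copy-id root@192.168.1.50
# Confirm Ansible can reach the host without prompting for a password
ansible all -i inventory -m ping

If the ping module comes back with "pong" for every host, the vRO workflows described next can run ansible-playbook without any interactive prompts.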
We won\u0026rsquo;t want to have to put in passwords or anything when we start to automate the process.\nSetup vRO Workflows\nNow, all we\u0026rsquo;re going to do is add a vRO workflow that will add machines to inventory and execute the playbook commands that we ran earlier. I added a workflow that formats a command and then passes that command over to a workflow that executes it on my Ansible server.\nThe script formatting is shown below; in this one, I\u0026rsquo;m going to add an IP address to the inventory file. Next, create a second workflow that again formats a script and passes it to the Ansible server to run via SSH. This time the script runs the Ansible playbook. The script I used is below.\nLastly, create a third script (well, that\u0026rsquo;s what happens when you build modular scripts to be reused). This script will gather the variables passed over from vRA and then execute the two workflows we built above.\nSetup vRA\nWe\u0026rsquo;ll assume for now that you can publish a new server in the catalog and have set up event subscriptions so vRO workflows can be executed during provisioning. Point the MachineProvisioned event subscription to the third workflow we built above.\nRequest your blueprint! With any luck you\u0026rsquo;ll have your webpage displayed on your newly provisioned machine.\nLearn More\nIf you want to dive deeper into Ansible and figure out what else you can do with it, please check out the Ansible guides.\nhttp://docs.ansible.com/ansible/playbooks.html\nI\u0026rsquo;d love to hear what kind of things you created with your own Ansible instances. Post them in the comments!\n","permalink":"https://theithollow.com/2016/06/20/ansible-vrealize-automation/","summary":"\u003cp\u003eIf you\u0026rsquo;re brand new to Ansible but have some vRealize Automation and Orchestration experience, this post will get you started with a configuration management tool.\u003c/p\u003e\n\u003cp\u003eThe goal in this example is to deploy a CentOS server from vRealize Automation and then have Ansible configure Apache and deploy a web page. It assumes that you have no Ansible server setup, but do have a working vRealize Automation instance. If you need help with setting up vRealize Automation 7 take a look at the \u003ca href=\"/2016/01/11/vrealize-automation-7-guide/\"\u003eguide here\u003c/a\u003e.\u003c/p\u003e","title":"Ansible with vRealize Automation Quickstart"},{"content":"The number of clusters that should be used for a vSphere environment comes up for every vSphere design. The number of clusters that should be used isn’t a standard number and should be evaluated based on several factors.\nNumber of Hosts\nLet’s start with the basics, if the design calls for more virtual machines than can fit into a single cluster, then it’s obvious that multiple clusters must be used. The same is true for a design that calls for more hosts than can fit into a single cluster, or that exceeds any other cluster maximum.\nHosts per Cluster: 64 (vSphere 6), 32 (vSphere 5.5)\nVMs per Cluster: 8,000 (vSphere 6), 4,000 (vSphere 5.5)\nMaybe the design is simple enough that if there are 128 ESXi 6.0 hosts, they are just split into two 64-host clusters. Designs are rarely this simple, so we’ll look at some other considerations.\nLicensing\nSoftware vendors have adapted to dealing with virtual infrastructures. Some software vendors even require you to license every host that might house their software, even if the software is in a virtual machine.
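Here is a rough, hypothetical illustration of why that matters. The host counts and per-core price below are made-up round numbers for the sake of the arithmetic, not quotes from any vendor. Suppose a 16-host cluster of 2-socket, 10-core servers could ever run a database product licensed at $7,000 per core:

16 hosts x 2 sockets x 10 cores = 320 licensed cores, roughly $2,240,000
4 hosts x 2 sockets x 10 cores = 80 licensed cores, roughly $560,000

Confining the database VMs to a smaller dedicated cluster, the approach described next, covers the same workloads for a fraction of the licensing spend.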
With Distributed Resource Scheduling (DRS) we’re used to vMotioning workloads between hosts to better utilize our resources, but software licensing can have a serious cost associated with it if you have to license every host in the environment.\nIf software licensing such as Microsoft SQL Server is used, consider having a SQL cluster where all hosts in only this cluster are licensed, and all of your SQL VMs are placed in this cluster. This can dramatically cut down on licensing costs.\nHost Types Maybe you don’t have the luxury of buying new servers as part of the design and you have to re-use what already exists. This presents a new challenge.\nHosts that have different processor types might want to be placed into a different cluster. If the processors differ by vendor (AMD and Intel) then VMs won’t be able to be vMotioned without powering them off first. This would really hamper things like DRS so it makes sense to have an AMD cluster and an Intel Cluster. If the processors are from the same vendor, you can get away with turning on Enhanced vMotion Compatibility (EVC) which will mask capabilities of newer processors to find the least common instruction set. Depending on the situation, clusters might want to still be arranged by processor types so that that EVC doesn’t have to mask newer features from the latest processors.\nWasted Resources As a general rule, fewer large clusters is better for performance than several smaller clusters. A simple example is demonstrated below. On the left there are four hosts that are split between two clusters. We would start to encounter performance issues if we tried to add another virtual machine into cluster one even though there are plenty of resources in cluster two. If we combine all of the four hosts into a single cluster, like the example on the right, there are plenty of resources left to deploy new workloads. In addition, a single cluster keeps it simple for an administrator deploying a new workload by removing the need for a placement decision.\nTake a look at the previous example again and look at how many hosts would be needed to protect the virtual machines though VMware HA. Clusters one and two would each need to reserve one host, or 50% of their resources, while Cluster three would only need one host, or 25% of it’s resources.\nManagement Cluster It might be a good idea to have one cluster that is dedicated to housing management components. Many times there are components that are responsible for managing the environment that shouldn’t be in the cluster that they manage. A great example of this is NSX. If you set the wrong firewall rule for the cluster, you may inadvertently firewall yourself from the NSX management components and won’t be able to easily fix it. An additional benefit to a management cluster is that they are usually only a couple of hosts so its easy to find a tier 0 virtual machine in the event that vCenter is down.\nPhysical Location It’s a no-brainer that slow links between hosts would be a good reason to separate hosts into different clusters. A vMotion over a slow T-1 link would really ruin your day. This could also come into consideration if you’ve got a converged infrastructure and want to keep resources from traversing pods. For example, keeping all of your hosts that connect to the same pair of Cisco Fabric Interconnects in a UCS environment might be a good idea to limit the number of network hops.\nSummary There are a lot of considerations to take into account when picking how many clusters to use. 
Hopefully this has given you some ideas on how to lay them out for your design.\n","permalink":"https://theithollow.com/2016/06/13/cluster-decision-sizing/","summary":"\u003cp\u003eThe number of clusters that should be used for a vSphere environment comes up for every vSphere design. The number of clusters that should be used isn’t a standard number and should be evaluated based on several factors.\u003c/p\u003e\n\u003ch1 id=\"number-of-hosts\"\u003eNumber of Hosts\u003c/h1\u003e\n\u003cp\u003eLet’s start with the basics, if the design calls for more virtual machines than can fit into a single cluster, then it’s obvious that multiple clusters must be used. The same is true for a design that calls for more hosts that can fit into a single cluster or any other cluster maximums.\u003c/p\u003e","title":"Determine the Number of vSphere Clusters to Use"},{"content":"If you do a lot of work with orchestration, you\u0026rsquo;re almost certain to be familiar with working with a REST API. These REST APIs have become the primary way that different systems can interact with each other. How about database operations? How about the ability to use a generic database to house CMDB data, change tracking or really anything you can think of.\nI came across a nifty program called DreamFactory that allows us to add an API to our databases. The examples in this post are all around MS SQL Server, but it also has support for PostgreSQL, NO SQL, SQL Lite, DB2, Salesforce and even Active Directory or LDAP.\nSo my environment has a basic SQL Server with a custom database housing a custom table. Assume that I wanted to be able to create, update, delete entries in this database by using our go to REST API tools.\nI signed up for an account with DreamFactory and got my download. After this I spun up my CentOS 7 instance copied over the file and ran it.\nOnce the package is installed, I was able to login. The first time logging in, I was asked to setup an account. And then login.\nThe program has the RBAC stuff you\u0026rsquo;d expect, but for this post we\u0026rsquo;re just worried about setting up my API for my SQL Database. Go to Services and create an API. My Service type is SQL DB and then entered the name of the service, a label and description. When done, go to the Config tab.\nUnder the Config tab, select the SQL Server driver. After this, replace the Connection String entries to match your environment. The connection string was really easy since they gave you an example to use and you just change the parameters to match your settings. Then enter login credentials to your database.\nSave your service! Now if you go to the API Docs, tab, you\u0026rsquo;ll be able to browse your service in a Swagger-UI looking browser.\nYou can see that my database shows up and I can run test API calls against it.\nNow I can go back to my orchestrator appliance or custom web applications to make API calls into my database. This is a pretty neat tool that I wanted to share with the rest of the community. Check this out on your own when you have time.\n","permalink":"https://theithollow.com/2016/06/06/add-rest-sql-database/","summary":"\u003cp\u003eIf you do a lot of work with orchestration, you\u0026rsquo;re almost certain to be familiar with working with a REST API. These REST APIs have become the primary way that different systems can interact with each other. How about database operations? 
How about the ability to use a generic database to house CMDB data, change tracking or really anything you can think of.\u003c/p\u003e\n\u003cp\u003eI came across a nifty program called \u003ca href=\"https://www.dreamfactory.com/\"\u003eDreamFactory\u003c/a\u003e that allows us to add an API to our databases. The examples in this post are all around MS SQL Server, but it also has support for PostgreSQL, NO SQL, SQL Lite, DB2, Salesforce and even Active Directory or LDAP.\u003c/p\u003e","title":"Add REST to a SQL Database"},{"content":"Its a hot buzzword these days and probably on a lot of people\u0026rsquo;s Linkedin Profile as well. The idea that you are an engineer that knows many things about many different silos of technology. You\u0026rsquo;re the guy that can break down those walls between storage, networking, servers, cloud and all these specific disciplines. Companies are finding lots of value in these type of engineers who can see the big picture, but just remember there are a few caveats that come with this job function.\nHere\u0026rsquo;s what you need to know\nIf you plan to go down this \u0026ldquo;full stack\u0026rdquo; path you should know this things.\nBe prepared to feel completely clueless much of the time - You\u0026rsquo;re a \u0026ldquo;Jack-of-all-Trades\u0026rdquo; and likely a \u0026ldquo;Master of None\u0026rdquo;, or at least a master of little. You\u0026rsquo;ll be working with subject mater experts (SME) much of the time and they\u0026rsquo;ll know more than you do about a specific technology. Don\u0026rsquo;t let this intimidate you. These SMEs may be clueless as to how their technology works with others and they\u0026rsquo;ll look to you to be the person who can figure it out for them.\nExpect to be considered an Expert by others - You likely won\u0026rsquo;t agree with people when they consider you an expert and may even cringe when they say it out loud. You\u0026rsquo;ll know enough to know that there is so much to learn.\nYou can be an SME in one or more areas - Just because you know a little about a lot, doesn\u0026rsquo;t mean you can know a lot about a little. You can be an \u0026ldquo;Expert\u0026rdquo; in one subject area, but don\u0026rsquo;t expect to be an expert in all of them.\nYou\u0026rsquo;ll be a busy person - You\u0026rsquo;re the go to person when it comes to all things computer related. You seem to know how the pieces fit together so you\u0026rsquo;ll be called upon often when new projects arise. The more you show your talents, the more people will think you must know EVERYTHING.\nSoft skills will be important - As if it\u0026rsquo;s not enough to learn all these different technologies, you\u0026rsquo;ll be expected to communicate with executives, as well as each of the IT disciplines (storage, compute, etc). You understand how things fit together and you\u0026rsquo;ll need to articulate that vision between teams and up to IT management. Public speaking, and writing skills will be important to you.\nLearn how to play education wack-a-mole - You have a wide variety of new things to learn and old things to keep up with. You understand switching, but new projects require you to know routing protocols. You understand Amazon Web Services and new projects require you to know Microsoft Azure. You understand vSphere but now you\u0026rsquo;re required to do Hyper-V work. Not only do you have to learn this new stuff, but Amazon has released a new service, and VMware has released a new version. Your current certifications need to be updated. 
This is going to happen constantly and you\u0026rsquo;ll need to be able to keep up with it. Stephen Foskett once explained this process as walking UP the DOWN escalator. You have to keep moving at a certain speed just to keep up, and if you want to get ahead, walk faster, or in this case learn faster.\nDon\u0026rsquo;t be disheartened - Knowing how many things you\u0026rsquo;re not an expert on can be difficult to cope with. You\u0026rsquo;re good at things and want to be good at all the things you work with, but it\u0026rsquo;s just not possible to know everything. People won\u0026rsquo;t expect you to either. There is no shame in saying \u0026ldquo;I don\u0026rsquo;t know\u0026rdquo; now and then. Try to remember that knowing a little about everything is a skill in itself and there is real value in it.\nYou\u0026rsquo;ll have fun - If you are interested in being a full stack engineer already, then you\u0026rsquo;ll have a lot of fun. There are so many new things that you get to touch and play with that you can\u0026rsquo;t help but enjoy what you\u0026rsquo;re doing. Just be prepared to feel overwhelmed as well.\nDon\u0026rsquo;t get caught up in the buzzwords - \u0026ldquo;Full Stack\u0026rdquo; engineer is just a cooler name for a \u0026ldquo;Jack-of-all-Trades\u0026rdquo; so don\u0026rsquo;t get caught up in any of the hype. It\u0026rsquo;ll go away when the next buzzword comes around. I\u0026rsquo;m thinking it might be something like \u0026ldquo;Cloud Jockey?\u0026rdquo; \u0026lt;\u0026ndash;Trademarked.\nSummary\nIt\u0026rsquo;s tough to keep up with everything, especially in the Information Technology sector, but you do the best that you can. If your role is a full stack engineer you\u0026rsquo;re going to have to try really hard to know everything, but expect that it won\u0026rsquo;t happen. Your actual job is to be an expert at seeing how all the pieces fit together, not an expert in every single piece.\n","permalink":"https://theithollow.com/2016/05/31/wanna-full-stack-engineer/","summary":"\u003cp\u003eIts a hot buzzword these days and probably on a lot of people\u0026rsquo;s Linkedin Profile as well. The idea that you are an engineer that knows many things about many different silos of technology. You\u0026rsquo;re the guy that can break down those walls between storage, networking, servers, cloud and all these specific disciplines. Companies are finding lots of value in these type of engineers who can see the big picture, but just remember there are a few caveats that come with this job function.\u003c/p\u003e","title":"So You Wanna Be a Full Stack Engineer..."},{"content":"vRealize Code Stream now comes pre-packaged with JFrog Artifactory which allows us to do some cool things while we\u0026rsquo;re testing and deploying new code. To begin this post, lets take a look at what an artifactory is and how we can use it.\nAn artifactory is a version control repository, typically used for binary objects like .jar files. You might already be thinking, how is this different from GIT? My Github account already has repos and does its own version control. True, but what if we don\u0026rsquo;t want to pull down an entire repo to do work? Maybe we only need a single file of a build or we want to be able to pull down different versions of the same file without creating branches, forks, additional repos or committing new code? This is where an artifactory service can really shine.\nJFrog Artifactory\nAs I mentioned, vRA Code Stream allows us to use JFrog Artifactory as well. To access this, go to https://Your_vRA_Appliance/Artifactory. 
The login by default is:\nUsername: vmadmin\nPassword: vmware\nFrom there, You can add your own repo under the Admin tab. I\u0026rsquo;ve created a Generic Repo for housing my files.\nNow if we look at the Artifacts tab, we can see my new repo and I\u0026rsquo;ve added two different zip files. These zip files contain some files that I\u0026rsquo;m using for a web server.\nNext, I\u0026rsquo;ve added a property to each of the files in this repo. My first file I added the property named \u0026ldquo;Build\u0026rdquo; and a Value of \u0026ldquo;1.\u0026rdquo; My second file in the repo has the same property name of \u0026ldquo;Build\u0026rdquo; but the value is \u0026ldquo;2.\u0026rdquo;\nSetup Artifactory with vRealize Automation Go to the Administration tab \u0026ndash;\u0026gt; Artifactory Management. Enter in a server name, and the url for your artifactory server, as well as the username and password.\nCreate Code Stream Pipeline Now we can build a new Code Stream Pipeline. For this I\u0026rsquo;ve gone to the Code Stream tab, and then created a new pipeline. Give the pipeline a name, and description before moving on. HEY, YOU THERE! Enter a Description. Don\u0026rsquo;t skip over it, Enter a description! We\u0026rsquo;ll wait for you.\nNext, I\u0026rsquo;ve added a property named \u0026ldquo;Build.\u0026rdquo; This is an input value that can be changed when the pipeline is executed. For our example, the version of the Build that is entered for the pipeline will match up with the value of the build files in Artifactory.\nNext we need to add our development pipeline stages and tasks. The example below has several things going on, but the most relevant ones to this post are the \u0026ldquo;Resolve Artifact\u0026rdquo; and \u0026ldquo;Deploy Website Files.\u0026rdquo; Let\u0026rsquo;s take a closer look at the configuration of these two tasks.\nFirst, the \u0026ldquo;Resolve Artifact\u0026rdquo; task is going to search through artifactory to find the package, binaries or files that match our rules. You can see below that this task is going to look through the \u0026ldquo;HollowRepo\u0026rdquo; repository, for an object with a property named \u0026ldquo;Build\u0026rdquo; and the value of this property should equal to \u0026ldquo;${pipeline.build}.\u0026rdquo; This last value is a variable and it maps to the input variable we created when first setting up the pipeline.\nNow the next task\u0026rsquo;s job is to deploy some files onto a server of ours. Here the first screen displays some information about the server I\u0026rsquo;m deploying the files to, the login credentials and the script I use to grab the files, unzip them and put them in the right directory. Your mileage will vary here, but the main part to understand for this post is on the advanced tab setting.\nOn the advanced tab, we\u0026rsquo;ll see a field called \u0026ldquo;Artifact Group.\u0026rdquo; Here I\u0026rsquo;ve got another variable called \u0026ldquo;${Development.Resolve Artifact.ARTIFACT_OUTPUT\u0026rdquo; which is the output of our Resolve Artifact task. Then I\u0026rsquo;ve got a few properties that I\u0026rsquo;m using in my script. The main thing to understand here is that the deploy task will deploy files that the \u0026ldquo;Resolve Artifact\u0026rdquo; task has found.\nRun a Pipeline Now that we\u0026rsquo;ve got everything built, we execute the pipeline. We\u0026rsquo;re asked for a comment and we\u0026rsquo;ve learned our lesson about not putting in descriptions right!? 
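Behind the scenes, the Resolve Artifact task is doing roughly what a property search against Artifactory's REST API does. The command below is illustrative rather than what Code Stream literally executes; it reuses the default vmadmin credentials, the HollowRepo repository, and the Build property from the steps above.

# Ask Artifactory for every artifact in HollowRepo whose Build property equals 2
curl -u vmadmin:vmware "https://Your_vRA_Appliance/Artifactory/api/search/prop?Build=2&repos=HollowRepo"

The JSON response lists the matching artifact URIs, which is the same artifact the pipeline resolves when we pass 2 as the Build input.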
And most importantly we can modify the build value that is our input property.\nSummary Ok, so why did we do this? Now we can use this pipeline to deploy new code to our web servers just by changing the value of the input property. Code stream will allow us to do neat stuff like get whatever artifact that you want for your test, deploy that on a new server spun up by vRA and tested with Jenkins. If it passes the Jenkins test, we can then do stuff like automatically deploy the code to production. The sky is the limit here with what you can do for a release pipeline. I\u0026rsquo;d love to hear how you did it in the comments.\n","permalink":"https://theithollow.com/2016/05/23/code-stream-artifactory/","summary":"\u003cp\u003evRealize Code Stream now comes pre-packaged with JFrog Artifactory which allows us to do some cool things while we\u0026rsquo;re testing and deploying new code. To begin this post, lets take a look at what an artifactory is and how we can use it.\u003c/p\u003e\n\u003cp\u003eAn artifactory is a version control repository, typically used for binary objects like .jar files. You might already be thinking, how is this different from GIT? My Github account already has repos and does its own version control. True, but what if we don\u0026rsquo;t want to pull down an entire repo to do work? Maybe we only need a single file of a build or we want to be able to pull down different versions of the same file without creating branches, forks, additional repos or committing new code? This is where an artifactory service can really shine.\u003c/p\u003e","title":"vRealize Code Stream with Artifactory"},{"content":"As some of you may know, I recently obtained the VMware Certified Design Expert - Cloud Management and Automation (VCDX-CMA) certification. This was the second VCDX that I\u0026rsquo;ve earned, the first of which being in Data Center Virtualization (DCV). This is a pretty difficult process and less than 250 people globally have the distinction of VCDX at this time. There are 4 unique tracks that a VCDX can be earned in, seen below and abbreviated as DCV, EUC, NV, CMA.\nIf you\u0026rsquo;re not familiar with the process to obtain a VCDX certification, you must first complete a VMware Certified Professional (VCP) exam, a pair of VMware Certified Advanced Professional (VCAP) exams and then you can submit a full design including supporting documentation. The design is reviewed by other VCDXs and its determined whether or not you will be allowed to defend that design in front of a panel of three VCDXs and a moderator. This process is the same regardless of which of the four tracks in which you are trying to obtain the certification.\nA second VCDX I went through the above process for the DCV and was elated that I completed the journey. This was a mountain that I actually didn\u0026rsquo;t think I could climb, but I surprised myself and hit my goal. Right after my first VCDX though, my job role changed and I was focusing more on Cloud Management so the CMA certification started to tickle the cert hoarding gene. One of the best things about having your first VCDX was that you didn\u0026rsquo;t need to defend your design again if you went for a second VCDX. 
I think that VMware Education assumed that you have the design process down pretty well if you make it through your first defense since the panelist can be pretty tough cookies when it comes to grilling you on your decision making process.\nWell, why not I thought, I have the prerequisites for the CMA, I just need a design and I\u0026rsquo;ll have that when I finish the work I\u0026rsquo;m doing for my current customer. I just need to write up the supporting documentation (which is no small task) but then I could potentially get my second VCDX without too much hassle. So I did. A long time seemed to go by after I submitted my design document for review but sure enough one day I got an email back from the VCDX team stating that my design had all of the qualities sufficient for a VCDX design! Wow, that\u0026rsquo;s amazing news\u0026hellip; but what is the rest of this email saying???? Why are there other dates listed on this email????\nA VCDX Interview I read on in the email to find out that a new requirement has been added for individuals attempting their second VCDX certification. \u0026ldquo;OH NOES\u0026rdquo; I say to myself in my head, and probably out loud at this point. I read on to find out that I don\u0026rsquo;t need to do a full defense, but I must be available for a 1 hour phone \u0026ldquo;interview\u0026rdquo;. This is all the information that I get about the process which may be the most nerve racking part. Will I have to go through a mock design session? Will I be grilled again about my submission? Will this be a formality and I\u0026rsquo;ll have to answer some basic questions about the product to ensure that I know what I\u0026rsquo;m talking about for a different track? My mind races with possibilities. Nevertheless I study a bit on my own design as well as areas of the product that I feel are weaker than others.\nThe day of the \u0026ldquo;interview\u0026rdquo; I\u0026rsquo;m ready to go at my computer in case I need to do a screen sharing session with someone. I dial the phone number I\u0026rsquo;ve been given and prepare to do battle with other highly qualified individuals. (Its nothing like battle, and this is more like a conversation I\u0026rsquo;d have at work, but seems like it\u0026rsquo;s ratcheted up to a higher notch with who might be on the phone.) When I get on the conference call i have figured out that there will again be three VCDX panelists who will be asking me questions about the product, and my design for 1 hour. This is JUST like a defense, but no mock design sessions, no troubleshooting session and the design review is a shorter time period. Also, while it\u0026rsquo;s nice that this second \u0026ldquo;interview\u0026rdquo; is over the phone instead of in person, it seems more difficult to convey an idea and tougher to scribble something out on a whiteboard online.\nAftermath The \u0026ldquo;Interview\u0026rdquo; (more like a blood bath) was over, I was sure that I had not wow\u0026rsquo;d any of the panelists. I was sure that I had failed, it was over and I had wasted a lot of time writing documents, studying and mostly worrying. I tried to forget about the defense (that\u0026rsquo;s what it was a mini defense) and go on with my weekend. I told myself, it didn\u0026rsquo;t make any difference if I passed or not, even if I didn\u0026rsquo;t, I still have a VCDX for DCV right? Then I started thinking that there was hope and maybe I passed. Then a wave of doubt would wash over me again. 
This is pretty much the same set of feelings I had when I defended my first design when I passed. Maybe this was a good sign?\nA few short days later I woke up to an email stating that I had passed. Then a big feeling of relief came over me. It was pretty cool to have two of these certifications, since when i set out on this journey, I had doubted that I could do it at all.\nI hope those post gives some info to prospective double VCDXs on what they can expect for a second attempt and furthermore hope that it sets other potential first timers minds at ease. Remember, I had a VCDX already and there were still nerves and doubts about my skills. It\u0026rsquo;s ok to be nervous about things, how you push through them is what is important.\n","permalink":"https://theithollow.com/2016/05/16/second-vcdx-design-interview/","summary":"\u003cp\u003eAs some of you may know, I recently obtained the VMware Certified Design Expert - Cloud Management and Automation (VCDX-CMA) certification. This was the second VCDX that I\u0026rsquo;ve earned, the first of which being in Data Center Virtualization (DCV). This is a pretty difficult process and less than 250 people globally have the distinction of VCDX at this time. There are 4 unique tracks that a VCDX can be earned in, seen below and abbreviated as DCV, EUC, NV, CMA.\u003c/p\u003e","title":"Second VCDX Design \"Interview\" experience"},{"content":"By now, we\u0026rsquo;re probably Jenkins experts. So lets see how we can use Jenkins with vRealize Code Stream. To give you a little background, vRealize Code Stream is a release automation solution that can be added to VMware\u0026rsquo;s vRealize Automation solution. It\u0026rsquo;s a nifty little tool that will let us deploy a server from blueprint, call some Jenkins jobs and deploy code from an artifactory repository. One of the best features is that you can build your release in stages and have gating rules between them so you can automate going from Development to UAT to Production or whatever else you can think of.\nJenkins Project Setup To get started, lets begin with the Jenkins job itself. Here we\u0026rsquo;ll setup a new project and the only requirement for the job to be called from Code Stream is to have an input parameter of \u0026ldquo;vRCSTestExecutionId\u0026rdquo;. So We need to parameterize the project and add this variable.\nNext, for my job, it requires me to pass a serverip address. This is used in a Selenium test on a website to ensure that the website is working properly.\nThe last part of my project is to execute a batch command which runs my Selenium job against the server IP address that is passed to the Jenkins project.\nvRealize Code Stream Setup Before we can build a code stream pipeline, we must do some initial setup. First, it requires installing a license on your vRA appliance to allow access to the product. After this, we need to go to the Administration tab of your vRealize Automation Tenant and add an endpoint for Jenkins. Add a new endpoint and select \u0026ldquo;Jenkins (Code Stream) as the Plug-in type. Click Next.\nGive the new endpoint a name and a description. Click Next.\nGive the Jenkins Instance a name, and enter login credentials that will have access to run your Jenkins code. Also be sure to enter the URL for your Jenkins server. Click Finish.\nBuild a Code Stream Pipeline Now we can build our Code Stream pipeline. The screenshot below shows the pipeline that will be executed during my test. 
To describe whats happening, when it\u0026rsquo;s executed, the development stage will first deploy a server from blueprint. The next task in that stage is to run our Jenkins job that we setup above. After this there is a gate that requires a manual approval to go to the next stage, but this could always be an automatic gate based on the results of your Jenkins job. Then when the Production stage kicks off, it deploys a new web server. The idea here is that we automatically deploy to development for testing and if its successful, then we can automatically deploy to production.\nTo give you some more detail, the configuration of WebServer task finds a blueprint that we\u0026rsquo;ve specified and request it. In my case it\u0026rsquo;s a blueprint called WebServer-Git which deploy a CentOS server, installs apache and downloads my web server files through GIT. The next task is the Jenkins task. Here we choose the endpoint from the dropdown and select our Jenkins endpoint. After that select the job from the list. Remember that the only jobs that will be available in this list are the ones with the vRCSTestExecutionId as a input parameter. After you select the job, the Code Stream task will enumerate the input parameters so that you can enter in the appropriate values. Here I need to add the server ip address that was built in the previous task. To do that I can use the output attributes from the build task.\nSummary Now that all of the pieces are setup, you can run your pipeline and have machines deployed and tested with Jenkins before moving on to another environment like UAT or Production. Keep an eye on your release pipelines to see if your code is working and being deployed in a brand new environment each time.\n","permalink":"https://theithollow.com/2016/05/09/using-jenkins-vrealize-code-stream/","summary":"\u003cp\u003eBy now, we\u0026rsquo;re probably Jenkins experts. So lets see how we can use Jenkins with vRealize Code Stream. To give you a little background, vRealize Code Stream is a release automation solution that can be added to VMware\u0026rsquo;s vRealize Automation solution. It\u0026rsquo;s a nifty little tool that will let us deploy a server from blueprint, call some Jenkins jobs and deploy code from an artifactory repository. One of the best features is that you can build your release in stages and have gating rules between them so you can automate going from Development to UAT to Production or whatever else you can think of.\u003c/p\u003e","title":"Using Jenkins with vRealize Code Stream"},{"content":"If you\u0026rsquo;ve been following the rest of this series about using Jenkins, you\u0026rsquo;re starting to see that there are a lot of capabilities that can be used to suit whatever use case you have for deploying and testing code. This post focuses on a great plugin that was recently pushed out by Kris Thieler (aka inkysea) and Paul Gifford. These guys have published a Jenkins Plugin for vRealize Automation.\nJust like we\u0026rsquo;ve done in other posts, the first step is to install the plugin in the Manage Plugins section of Jenkins.\nSample Use Case So here is my basic use case for the purposes of this article. Assume I\u0026rsquo;ve got a web server with a website. I\u0026rsquo;m using vRA and vRO to deploy a virtual machine and then download my site code from GIT. The example website is shown below.\nNow I need a MAJOR overhaul to the website and will be pushing some code changes to GIT. 
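For reference, the kind of change that kicks everything off later in this post is just an ordinary commit and push. The repository URL below is a placeholder standing in for my Bitbucket repo, and the file name assumes the site layout from the earlier screenshots.

# Hypothetical repo URL; clone the site, make the edit, and push it back to Bitbucket
git clone https://bitbucket.example.com/hollow/website.git
cd website
# ...edit index.html with the new content...
git add index.html
git commit -m "Major overhaul of the website"
git push origin master

Once that push lands, the plugins described below take over.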
We can use the GIT plugin that we\u0026rsquo;ve used before along with the vRA plugin to deploy a new web server and pull down my new code each time I deploy my code.\nTo get started, I\u0026rsquo;ll create a new Jenkins Project. The project will be configured to monitor my GIT branch for changes. This could be done either with polling, or via a webhook. Next, we configure the vRA settings in the project. Some more detailed instructions can be found over at the wiki page but the basics require you to enter in the vRA URL, Tenant, login information and the Blueprint that will be deployed. More settings are available such as adding JSON configuration parameters and using variables instead of hard coding in the vRA URLs. If you\u0026rsquo;ll be doing this a lot, I encourage you to go check out the details on the wiki page.\nTime for a Test So here we go. I\u0026rsquo;ve made my serious changes to my website code.\nYou can see that I\u0026rsquo;ve made my changes in GIT to the index.html file. Now I push my code to my Bitbucket (Git) server. Jenkins gets notified of the changes, and starts its build.\nvRA will then kick off the build of the blueprint and any stubs, event subscriptions or approvals that might be setup for that blueprint. After a few moments, the machine will be built and show up in the vRA console.\nIf I go login to the new machine that was provisioned, I can see that the new code has been deployed.\nSummary It\u0026rsquo;s really awesome that Jenkins can not only test your code changes automatically and on a schedule, but can also leverage other tools like vRealize Automation to build an entire environment. The example in this article was very basic, but you could imagine using this in concert with a configuration management tool like Puppet, Chef, Ansible, or a way to test changes to blueprints in vRA as well as app or web code. The sky is the limit here and I\u0026rsquo;d love to hear in the comments what you came up with.\n","permalink":"https://theithollow.com/2016/05/02/use-vrealize-automation-jenkins/","summary":"\u003cp\u003eIf you\u0026rsquo;ve been following the rest of this series about using Jenkins, you\u0026rsquo;re starting to see that there are a lot of capabilities that can be used to suit whatever use case you have for deploying and testing code. This post focuses on a great plugin that was recently pushed out by \u003ca href=\"http://twitter.com/inkysea\"\u003eKris Thieler\u003c/a\u003e (aka \u003ca href=\"http://inkysea.com\"\u003einkysea\u003c/a\u003e) and Paul Gifford. These guys have published a Jenkins Plugin for vRealize Automation.\u003c/p\u003e\n\u003cp\u003eJust like we\u0026rsquo;ve done in other posts, the first step is to install the plugin in the Manage Plugins section of Jenkins.\u003c/p\u003e","title":"Use vRealize Automation with Jenkins"},{"content":"Today Rubrik announced not only their new 2.2 code base, but also a brand new appliance that is heavily focused towards environments requiring higher levels of security.\nr528 Hybrid Cloud Appliance Today Rubrik has announced their new r528 Hybrid Cloud appliance that has a serious focus on ensuring that data breaches don\u0026rsquo;t come from your backup solution. How does it help prevent breaches you might ask? Encrypt everything. First the r528 \u0026ldquo;brik\u0026rdquo;, as they call their appliances, encrypts the backups in flight between your vCenter server and the Rubrik appliance. 
Once the data gets to the appliance, it is placed onto its FIPS 140-2 Level 2 Self Encrypting Drives (SEDs). It\u0026rsquo;s important to note that since Rubrik chose not to do encryption through their Operating System, but rather at the hardware level, there is virtually no performance hit for encryption.\nThe new brik allows you to use your own Key Management Solution that is KMIP 1.0 compliant, or you could also use the Trusted Platform Module that comes with the appliance. This ensures that even if a hard drive thief absconds with your equipment, it will be unreadable to them.\nAdditional security features included with the new brik:\n— AES-256 hardware circuitry\n— Encrypts everything written\n— Decrypts everything read\n— Completely secure even if a drive is removed\n— Encrypts data at rest at a cluster-wide level\n— Instantaneous Secure Erase\n— Rotate passwords per security policy\nIt\u0026rsquo;s easy to see why Rubrik has added this functionality. Being able to sell these briks to uber-secure government agencies, or even healthcare or financial organizations looking for a product that will breeze through a compliance audit, is surely desirable.\nFor those of you looking for specs on the new appliance, here you go:\nCPU - 4 X Intel 8-Core 2.4 GHz Haswell\nMemory - 256 GB DDR4\nStorage - 12 X 8 TB SED HDDs and 2 X 800 GB SED SSDs\nNetwork - 2 Dual Port 10GbE, 2 Dual Port 1 GBase-T, 2 1GBase-T (IPMI)\nSize - 2 Rack Units\nMax Power Consumption - 847 Watts\nMax Thermal Dissipation - 2890 BTU/hour\n2.2 Codebase Released Not only is there a new appliance that\u0026rsquo;s now available, but Rubrik has new code being announced as well. Rubrik has added new functionality at a pretty quick pace since they came out of stealth mode in June of 2015 at VFD 5. In their latest code release, they\u0026rsquo;re continuing to make the appliance do more stuff while making it simple to use. One of these features is the addition of Auto Protect and SLA inheritance. Now you can set protection on a vCenter object such as a folder or cluster, and any machines added to this object, or below it, can be protected automatically by Rubrik. No more adding VMs to backups after you deploy them; now they can be grabbed automatically.\nAdditional features are aimed at controlling backup operations. The new features are described below:\nInternal Task Schedules - Rubrik can halt backups if storage latency is already high on the VMware datastore\nHalting Running Activities - The cluster can halt tasks when performance exceeds latency thresholds\nRecurring First Full Snapshots - The ability to take additional full backups during the backup lifecycle\nRetention Period Additions - Rubrik increased the retention values for SLA policies\nBlackout Windows - Rubrik can set time frames where no backup jobs will kick off\nGlobal Pause - The appliance will pause backups from running for planned maintenance operations\nOptional App Consistency - The UI will present whether a VM has application or crash consistency\nIn-Place File Restore - Now files can be restored to their original location if desired\nSummary Backups are still boring, and no one wants to deal with them, but they\u0026rsquo;re still critical to your corporation\u0026rsquo;s data. Security has to be maintained not only for your production workloads, but also when those workloads are backed up or archived. Rubrik is helping make this process easier and has added some more functionality to manage your backup operations with this latest code base. 
If you need even more control, check out the vRO Workflows that colleague Nick Colyer and I helped write for Rubrik, available on Github.\nRubrik has previously sponsored theITHollow.com and some promotional items were received from Rubrik. These factors did not influence the content of the article and no guidance was given about how the article should be written.\n","permalink":"https://theithollow.com/2016/04/26/rubrik-gets-serious-security/","summary":"\u003cp\u003eToday Rubrik announced not only their new 2.2 code base, but also a brand new appliance that is heavily focused towards environments requiring higher levels of security.\u003c/p\u003e\n\u003ch1 id=\"r528-hybrid-cloud-appliance\"\u003er528 Hybrid Cloud Appliance\u003c/h1\u003e\n\u003cp\u003eToday Rubrik has announced their new r528 Hybrid Cloud appliance that has a serious focus on ensuring that data breaches don\u0026rsquo;t come from your backup solution. How does it help prevent breaches you might ask? Encrypt everything. First the r528 \u0026ldquo;brik\u0026rdquo;, as they call their appliances, encrypts the backups in flight between your vCenter server and the Rubrik appliance. Once the data gets to the appliance, it is placed onto its FIPS 140-2 Level 2 Self Encrypting Drives (SEDs). Its important to note that since Rubrik chose not to do encryption through their Operating System, but rather at the hardware level, there is virtually no performance hit for encryption.\u003c/p\u003e","title":"Rubrik Gets Serious about Security"},{"content":"in previous posts we discussed how you can use Jenkins to test various pieces of code including Powershell. Jenkins is a neat way to test your code and have a log of the successes and failures but let\u0026rsquo;s face it, you were probably testing your code as you were writing it anyway right? Well, what if you could push your code to GIT and have that code tested each time a GIT push was executed? Then you can have several people working on the same code and when the code gets updated in your repositories, it will be tested and logged. This makes it really nice to see when the code stopped working and who published the code to GIT. Now we\u0026rsquo;re really starting to see the power of this CI/CD stuff.\nPrerequisites Before we begin, you must make sure of a couple of things. First, ensure that you have the GIT plugin for Jenkins installed on your server. You\u0026rsquo;ll need this plugin to configure the additional settings for your project. Go to \u0026ldquo;Manage Jenkins\u0026rdquo; and then \u0026ldquo;Manage Plugins\u0026rdquo; to ensure your plugins are installed.\nSecond, any nodes that will run your code, will first need to download the code from GIT, so make sure that the GIT client is installed on them, and that they have access to your GIT repo as well.\nBuild a GIT Project Now go and setup a new Jenkins project and give it a name and description like we\u0026rsquo;ve done in the past. Scroll down until you reach the \u0026ldquo;Source Code Management\u0026rdquo; section. (If you\u0026rsquo;re missing this section, make sure your plugins are installed.) Here, select Git and then enter the repository that will store your code. You\u0026rsquo;ll also need to add some credentials that will be used to connect to the repo. You can enter a username password, certificate, or SSH keys. Below that you\u0026rsquo;ll need to select which branch will be built from. I\u0026rsquo;ve chosen to build from master.\nNext, scroll down until you see the build triggers. 
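As a quick aside, the repo access prerequisite mentioned above is easy to sanity check from each node before relying on it. Something along these lines should print the Git version and list the remote refs if the client is installed and the credentials work; the repository URL here is only a placeholder.
git --version
git ls-remote https://bitbucket.example.local/scm/demo/website.git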
Here select \u0026ldquo;Poll SCM\u0026rdquo; and then enter a schedule. In my example below I\u0026rsquo;m checking for updated builds every two minutes. Don\u0026rsquo;t worry, if there are no new commits, the job won\u0026rsquo;t run anything. Also, I want to point out here that it is possible to setup a web hook from your Git repo to trigger a Jenkins build every time code is pushed into the repository. I\u0026rsquo;m doing this with Bitbucket in my lab, but it can be accomplished with Github Enterprise and many others as well.\nNext, enter the code that should be run after the Git clone is executed on your node. In my case I\u0026rsquo;m executing a powershell script that calls my Host_Settings PowerCLI script.\nPush Code to Git! That\u0026rsquo;s it, you\u0026rsquo;ve setup your Jenkins job and it will run every time you push your code into your Git repository. Pretty neat huh? It\u0026rsquo;s really nice to be able to pull code down from a repo, update it, push it back to the repo and have it automatically tested. Maybe you go a few steps further and have Jenkins email your team on failed builds. Even better, maybe you test code in your branches and have Jenkins push it to your Master branch when it completes a successful test! Happy coding!\n","permalink":"https://theithollow.com/2016/04/25/push-code-git-test-jenkins/","summary":"\u003cp\u003ein previous posts we discussed how you can use Jenkins to test various pieces of code including Powershell. Jenkins is a neat way to test your code and have a log of the successes and failures but let\u0026rsquo;s face it, you were probably testing your code as you were writing it anyway right? Well, what if you could push your code to GIT and have that code tested each time a GIT push was executed? Then you can have several people working on the same code and when the code gets updated in your repositories, it will be tested and logged. This makes it really nice to see when the code stopped working and who published the code to GIT. Now we\u0026rsquo;re really starting to see the power of this CI/CD stuff.\u003c/p\u003e","title":"Push Code to GIT and test with Jenkins"},{"content":"Jenkins is a Continuous Integration / Continuous Development (CI/CD) tool that can be used to deploy code and test it based on a schedule, triggered by a commit in GIT or after other jobs have been completed. Jobs can all be kicked off manually.\nThe pages below might help you to get familiar with Jenkins and how it could be leveraged in an organization.\nJenkins Installation Create Jenkins Project Add Jenkins Nodes Test PowerCLI Code Commit Code to GIT to Trigger Job Use vRealize Automation with Jenkins Integrate Jenkins with vRealize Code Stream ","permalink":"https://theithollow.com/2016/04/19/getting-started-jenkins-guide/","summary":"\u003cp\u003eJenkins is a Continuous Integration / Continuous Development (CI/CD) tool that can be used to deploy code and test it based on a schedule, triggered by a commit in GIT or after other jobs have been completed. 
Jobs can all be kicked off manually.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"jenkins\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2016/04/jenkins-300x300.png\"\u003e\u003c/p\u003e\n\u003cp\u003eThe pages below might help you to get familiar with Jenkins and how it could be leveraged in an organization.\u003c/p\u003e\n\u003ch1 id=\"jenkins-installation\"\u003e\u003ca href=\"/2016/03/28/jenkins-installation/\"\u003eJenkins Installation\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"create-jenkins-project\"\u003e\u003ca href=\"/2016/04/04/create-a-jenkins-job/\"\u003eCreate Jenkins Project\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"add-jenkins-nodes\"\u003e\u003ca href=\"/2016/04/11/add-a-jenkins-node-for-windows-powershell/\"\u003eAdd Jenkins Nodes\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"test-powercli-code\"\u003e\u003ca href=\"/2016/04/18/test-powercli-code-with-jenkins/\"\u003eTest PowerCLI Code\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"commit-code-to-git-to-trigger-job\"\u003e\u003ca href=\"/2016/04/25/push-code-git-test-jenkins/\"\u003eCommit Code to GIT to Trigger Job\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"use-vrealize-automation-with-jenkins\"\u003e\u003ca href=\"http://wp.me/p32uaN-1Em\"\u003eUse vRealize Automation with Jenkins\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"integrate-jenkins-with-vrealize-code-stream\"\u003e\u003ca href=\"http://wp.me/p32uaN-1EO\"\u003eIntegrate Jenkins with vRealize Code Stream\u003c/a\u003e\u003c/h1\u003e","title":"Getting Started with Jenkins Guide"},{"content":"In the previous post we discuss how to setup a Windows Node to test PowerShell code. In this post, we\u0026rsquo;ll configure a new Jenkins project to test some very basic PowerCLI code.\nTo start, we need to have some basics setup on our Windows Node that we setup previously as a slave. In our case, we need to make sure that we have PowerCLI installed on the host. Let\u0026rsquo;s think about this logically for a second. Jenkins is going to tell our Windows node to execute some PowerCLI scripts as a test. If the Windows node doesn\u0026rsquo;t understand PowerCLI, then our tests just won\u0026rsquo;t work. I would suggest that you install PowerCLI on your Windows node and then do a quick test to make sure you can connect to your vCenter server.\nOnce those pre-requisites are complete, lets setup a new project. On the Jenkins homepage click \u0026ldquo;New Item\u0026rdquo;. Then give the project a name and choose the \u0026ldquo;Freestyle project\u0026rdquo; option.\nFor the code that I\u0026rsquo;m running, I need to pass a username and a password into the script that is used to later connect to my vCenter server. To do this I click the \u0026ldquo;This build is paramaterized\u0026rdquo; option in the Jenkins project. Then I add a pair of parameters. The first one will be a \u0026ldquo;Password Parameter\u0026rdquo; which we can select from the drop down list. The second parameter is just a string. When all of my parameters have been added they\u0026rsquo;ll show up in the project and look something like this. Next, we need to ensure that when we try to run this job, that it doesn\u0026rsquo;t run on the local node since it\u0026rsquo;s a Linux server and we clearly need Windows to run PowerShell. To do this select the \u0026ldquo;Restrict where this project can be run\u0026rdquo; box and then in the Label Expression window enter the name of our Jenkins Windows node. Next we scroll down and add a build step. 
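For anyone following along without the screenshots, a parameterized PowerCLI build step like the one configured over the next couple of steps might look roughly like the sketch below. The vCenter address, parameter names, and CSV path are just placeholders, not the exact script from my job.
# Jenkins exposes the job parameters to the build step as environment variables
Connect-VIServer -Server vcenter.lab.local -User $env:VIUsername -Password $env:VIPassword
# Pull some basic host settings and export them for later review
Get-VMHost | Select-Object Name, ConnectionState, Version, Build |
    Export-Csv -Path 'C:\Jenkins\host_settings.csv' -NoTypeInformation
Disconnect-VIServer -Server * -Confirm:$false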
Here we\u0026rsquo;ll select the \u0026ldquo;Windows PowerShell\u0026rdquo; build type. Next I\u0026rsquo;ve added my code to the step and you can see it listed below. Notice that there are two variables listed in the code and we\u0026rsquo;ve accounted for them in the project already. Also notice that at the end of the script we\u0026rsquo;re exporting some info to a CSV file.\nRun the Job When you execute the job, there is an opportunity to change the values of the parameters or leave them as their defaults. The job should execute and you can review the job and console log to see how it ran. Also, in the script we just executed, it should have exported a CSV file, which we can find on our Jenkins Windows node.\nSummary This is another brief post about how you can use Jenkins to test your code. It\u0026rsquo;s a simplified project to get you familiar with the tool. You may be thinking to yourself, \u0026ldquo;why is this better than just running it on a windows server and then storing it in GIT?\u0026rdquo; Well, in the next post we\u0026rsquo;ll show how Jenkins can be integrated with GIT. Once we do that, you\u0026rsquo;ll see the real value of these continuous integration tools to automatically test code when it gets pushed to your code repository.\n","permalink":"https://theithollow.com/2016/04/18/test-powercli-code-with-jenkins/","summary":"\u003cp\u003eIn the previous post we discuss how to setup a Windows Node to test PowerShell code. In this post, we\u0026rsquo;ll configure a new Jenkins project to test some very basic PowerCLI code.\u003c/p\u003e\n\u003cp\u003eTo start, we need to have some basics setup on our Windows Node that we setup previously as a slave. In our case, we need to make sure that we have PowerCLI installed on the host. Let\u0026rsquo;s think about this logically for a second. Jenkins is going to tell our Windows node to execute some PowerCLI scripts as a test. If the Windows node doesn\u0026rsquo;t understand PowerCLI, then our tests just won\u0026rsquo;t work. I would suggest that you install PowerCLI on your Windows node and then do a quick test to make sure you can connect to your vCenter server.\u003c/p\u003e","title":"Test PowerCLI Code with Jenkins"},{"content":"Not all of your Jenkins projects will consist of \u0026ldquo;Hello World\u0026rdquo; type routines. What if we want to run some PowerShell jobs? Or better yet, PowerCLI? Our Jenkins instance was built on CentOS and doesn\u0026rsquo;t run Windows PowerShell very well at all. Luckily for us, in situations like this, we can add additional Jenkins nodes and yes they can also be Windows hosts!\nLogin to your Jenkins Instance and go to Manage Jenkins and then click on Manage Nodes.\nYou should see the master Jenkins node listed already in the grid. Now click on \u0026ldquo;New Node\u0026rdquo; to add another Jenkins Node to the list.\nGive the node a name and select \u0026ldquo;Dumb Slave\u0026rdquo;. Add a number of executors and a remote root directory. This controls how many simultaneous tests the new node can run at a time and what directory to be run under. The next item to select is the Launch Method. I\u0026rsquo;ve chosen to run the Jenkins agent through the Java Web Start. If this is a production instance you probably want to do this by adding a Windows service but this is a lab and is quicker to get up and running. Click Save.\nConfiguration is done, now you should be able to see a new node in the list, but it\u0026rsquo;s got a dreary red \u0026ldquo;X\u0026rdquo; next to it. 
Click on the name of the new node.\nHere we see some instructions on how to start the agent on the Windows Node. Copy the command from this site. Open the URL from the windows node and wait for the Jenkins app to open and say \u0026ldquo;Connected.\u0026rdquo;\nNow if we look at the Jenkins Nodes again, we\u0026rsquo;ll see that they\u0026rsquo;re both online and ready to run our tests.\nSummary Now we can run both Linux tests as well as Windows tests on our shiny new Jenkins hosts. In the next post we\u0026rsquo;ll cover how to run a PowerShell script on our Windows machine.\n","permalink":"https://theithollow.com/2016/04/11/add-a-jenkins-node-for-windows-powershell/","summary":"\u003cp\u003eNot all of your Jenkins projects will consist of \u0026ldquo;Hello World\u0026rdquo; type routines. What if we want to run some PowerShell jobs? Or better yet, PowerCLI? Our Jenkins instance was built on CentOS and doesn\u0026rsquo;t run Windows PowerShell very well at all. Luckily for us, in situations like this, we can add additional Jenkins nodes and yes they can also be Windows hosts!\u003c/p\u003e\n\u003cp\u003eLogin to your Jenkins Instance and go to Manage Jenkins and then click on Manage Nodes.\u003cimg alt=\"JenkinsWIN1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2016/02/JenkinsWIN1-1024x649.png\"\u003e\u003c/p\u003e","title":"Add a Jenkins Node for Windows Powershell"},{"content":"In this post we\u0026rsquo;ll create a Jenkins project on our brand new shiny server that we just deployed. The project we create will be very simple but should show off the possibilities of using a Jenkins server to test your code.\nTo get started login to your Jenkins server at the http://jenkinsservername:8080 port and then click the \u0026ldquo;New Item\u0026rdquo; link. From there give your new project a name. In this example our project is a Freestyle project which will let us throw code right into the project and run it on the Jenkins server or subsequent Jenkins Nodes.\nGive the Project a name and a description. We\u0026rsquo;re seasoned programers and we always put great comments in our code right? Let\u0026rsquo;s be sure to add great project names and descriptions to our projects in Jenkins too shall we?\nWe can scroll down until we get to the Build section and then click the \u0026ldquo;Add build step.\u0026rdquo; Select the \u0026ldquo;Execute shell\u0026rdquo; option. This will mean that whatever code we decide to run will be run from a shell prompt, and in this case will be run directly on the Jenkins server.\nNow in the command window we\u0026rsquo;ll put our code to run. In this example we\u0026rsquo;re just going to write some text. When you\u0026rsquo;re done click \u0026ldquo;Save.\u0026rdquo;\nNow we\u0026rsquo;re back at the Jenkins main screen where we see the new project we created. Notice under the S (S for status) column the status is grey which means its never been run. Also notice that under the W (W for Weather) column, we have bright sunshine. Weather is the aggregated status of multiple builds. Since there have been no failed builds yet, everything is sunny!\nClick the clock icon on the right side of the project. This will schedule the build. Click on the name of the project and we\u0026rsquo;ll see the build history in the left hand side of the screen. This will show any builds for this project. Click on the build that we just ran. 
(Click #1)\nFrom there we can select the \u0026ldquo;Console Output\u0026rdquo; to show what the command shell ran and any results. You can see that the build finished with a result of \u0026ldquo;SUCCESS.\u0026rdquo; We\u0026rsquo;re amazing coders so this should not be a surprise to us.\nLet\u0026rsquo;s see what were to happen if we had a failure. Go back in and configure the same project but this time let\u0026rsquo;s make a typo in the shell command like the one below. (Hint: \u0026ldquo;Checko\u0026rdquo; isn\u0026rsquo;t a valid shell command in Linux)\nRun the project again and we see the output as \u0026ldquo;FAILURE.\u0026rdquo;\nNow on the main screen we\u0026rsquo;ll see that the status shows a red ball. This means that the last run was a failure. In addition, the Weather shows that it\u0026rsquo;s cloudy which isn\u0026rsquo;t good.Go back and fix your code in Jenkins and run the build again. Once the build is successful again we\u0026rsquo;ll get our blue status ball, but the weather won\u0026rsquo;t be bright and sunny. It will be \u0026ldquo;partly\u0026rdquo; sunny. Run the build a few more times successfully and the weather will go back to sunny. Summary ","permalink":"https://theithollow.com/2016/04/04/create-a-jenkins-job/","summary":"\u003cp\u003eIn this post we\u0026rsquo;ll create a Jenkins project on our brand new shiny server that we just deployed. The project we create will be very simple but should show off the possibilities of using a Jenkins server to test your code.\u003c/p\u003e\n\u003cp\u003eTo get started login to your Jenkins server at the http://jenkinsservername:8080 port and then click the \u0026ldquo;New Item\u0026rdquo; link. From there give your new project a name. In this example our project is a Freestyle project which will let us throw code right into the project and run it on the Jenkins server or subsequent Jenkins Nodes.\u003c/p\u003e","title":"Create a Jenkins Project"},{"content":"Installing a Jenkins instance is pretty simple if you\u0026rsquo;re a Linux guy. But even if you\u0026rsquo;re not a Linux admin, this isn\u0026rsquo;t going to make you sweat too much. First, start by deploying yourself a Linux instance. The OS version in this post is based on CentOS 7 if you are interested in following along.\nOnce you\u0026rsquo;re up and running, make sure you can ping into the box and have SSH access. If you\u0026rsquo;re new to this, you can find instructions on setting up an SSH daemon here. Now that it\u0026rsquo;s setup we can install Jenkins by running the following commands.\nsudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key sudo yum install jenkins\nIf you want to see all the stuff going on in the background, let my screen capture be your guide.\nOnce completed the Jenkins install, lets go through some quick configurations that we can use in the future. Go to the DNS name or IPAddress of your new Jenkins node with a port of 8080.\nOnce there go to the \u0026ldquo;Manage Jenkins\u0026rdquo; link. and then click \u0026ldquo;Configure Global Security\u0026rdquo;.\nOn the \u0026ldquo;Configure Global Security\u0026rdquo; page click the check mark to \u0026ldquo;Enable Security\u0026rdquo; and for our case we\u0026rsquo;ll use the Jenkins\u0026rsquo; own user database but you could connect it to LDAP if you so desire. Also click the \u0026ldquo;Allow users to sign up\u0026rdquo; so that new users can be added. 
For the authorization, leave the \u0026ldquo;Anyone can do anything\u0026rdquo; as long as this is your test lab environment.\nNow that you\u0026rsquo;ve configured security, you can click the \u0026ldquo;sign up\u0026rdquo; link to add your user information. This will be your login for the Jenkins server from now on. Once you\u0026rsquo;ve setup a user, click the \u0026ldquo;log in\u0026rdquo; link and login with the credentials used in the previous step.\nNext go back to the \u0026ldquo;Manage Jenkins\u0026rdquo; page and click the \u0026ldquo;Manage Plugins.\u0026rdquo;\nGo to the Available tab and search for \u0026ldquo;Powershell Plugin.\u0026rdquo; You don\u0026rsquo;t need this right away but in future posts we\u0026rsquo;ll use the PowerShell plugin to do some PowerCLI against a vSphere environment. Click the \u0026ldquo;Download now and Install after restart\u0026rdquo; link.\nWhen you\u0026rsquo;ve installed the plugin it might also be a good idea to go to the \u0026ldquo;Updates\u0026rdquo; tab and select any out-dated plugins. Update those as well before you get going with future posts just to add new functionality and remove bugs.\nSummary We haven\u0026rsquo;t done anything useful with Jenkins yet, but you can just feel that we\u0026rsquo;re about to, right!? The install isn\u0026rsquo;t so scary either, even if you aren\u0026rsquo;t a Linux guy. We\u0026rsquo;ve got ourselves a nice web GUI up and running that we can poke around a bit to get familiar and we\u0026rsquo;ll be adding a simple job in the next post.\n","permalink":"https://theithollow.com/2016/03/28/jenkins-installation/","summary":"\u003cp\u003eInstalling a Jenkins instance is pretty simple if you\u0026rsquo;re a Linux guy. But even if you\u0026rsquo;re not a Linux admin, this isn\u0026rsquo;t going to make you sweat too much. First, start by deploying yourself a Linux instance. The OS version in this post is based on CentOS 7 if you are interested in following along.\u003c/p\u003e\n\u003cp\u003eOnce you\u0026rsquo;re up and running, make sure you can ping into the box and have SSH access. If you\u0026rsquo;re new to this, you can find instructions on \u003ca href=\"https://www.digitalocean.com/community/tutorials/initial-server-setup-with-centos-7\"\u003esetting up an SSH daemon here\u003c/a\u003e. Now that it\u0026rsquo;s setup we can install Jenkins by running the following commands.\u003c/p\u003e","title":"Jenkins Installation"},{"content":"I had some extra materials left over from a home improvement project I had been working on and decided to put them to use on a custom made rack for my lab. My requirements for the rack design were pretty simple.\nHold my equipment Make it somewhat portable Needed to be able to work on the equipment from both the front and the back side Able to discretely hide cabling Here is what I came up with. It\u0026rsquo;s a set of three shelves attached to four posts. The posts in the back are longer because I thought I might add some additional patch paneling in the back. The rack is built on top of casters so I can roll the lab to a different area of my basement if I need to move it\u0026rsquo;s location for some reason. From the back, I added some cable management which was easy to screw right into the rack with some wood screws. 
I am also able to tie up any cables that are too long by tucking them up underneath the two by fours with some fasteners.\nI added a sheet of whiteboard from Home Depot, to the side of the rack so I can make notes or scribble any design changes that I might need to make.\nI took things a bit too far with the cabling and have two cables that run up one side of the rack and info my rafters. These cables go to my wireless router and to my cable internet. In addition, I have a single power cable that is plugged into a outlet right next to the rack. It\u0026rsquo;s nice that my Dad is an electrician and could help me install a new receptacle exactly where my lab goes.\nThe three cables are the only thing that ties my lab to any place in my basement. Since my single power cable goes to a UPS, I can unplug my lab and roll it someplace else as long as I do it within about 8 minutes before my UPS runs out of battery. Its very nice that its on wheels and not really tied to the wall, this way I can push it up against the wall and out of the way when I\u0026rsquo;m not working on it, and I can swing it out from the wall to get behind it and (re)cable it when I want. Hopefully this gives someone else an idea on how to customize their home lab and share their info. Thanks for reading.\n","permalink":"https://theithollow.com/2016/03/21/5916/","summary":"\u003cp\u003eI had some extra materials left over from a home improvement project I had been working on and decided to put them to use on a custom made rack for my lab. My requirements for the rack design were pretty simple.\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eHold my equipment\u003c/li\u003e\n\u003cli\u003eMake it somewhat portable\u003c/li\u003e\n\u003cli\u003eNeeded to be able to work on the equipment from both the front and the back side\u003c/li\u003e\n\u003cli\u003eAble to discretely hide cabling\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eHere is what I came up with. It\u0026rsquo;s a set of three shelves attached to four posts. The posts in the back are longer because I thought I might add some additional patch paneling in the back. The rack is built on top of casters so I can roll the lab to a different area of my basement if I need to move it\u0026rsquo;s location for some reason.\n\u003cimg alt=\"IMG_0268\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/12/IMG_0268-e1451621076829-768x1024.jpg\"\u003e\u003c/p\u003e","title":"Custom Made Computer Lab Rack"},{"content":"Amazon has a pretty cool service that allows you to create a template for an entire set of infrastructure. This isn\u0026rsquo;t a template for a virtual machine, or even a series of virtual machines, but a whole environment. You can create a template with servers, security groups, networks and even PaaS services like their relational database service (RDS). Hey, in today\u0026rsquo;s world, infrastructure as code is the direction things are going and AWS has a pretty good solution for that already.\nWhat about if you\u0026rsquo;re working with a Hybrid Cloud? You probably want a single portal that can deploy your on-premises infrastructure as well as utilize these cool things in AWS. If you\u0026rsquo;re using vRealize Automation, you can deploy EC2 instance natively, but some of the other services can be tricky. This post shows how you can leverage some of these services by using an XaaS Blueprint.\nInitial Setup First, we need to setup a Linux Proxy. This is a linux VM that has the AWS CLI installed on it. 
We\u0026rsquo;ll need a way to communicate with AWS and these tools will do the trick. On the Linux proxy perform the following steps.\ncurl \u0026quot;https://s3.amazonaws.com/aws-cli/awscli-bundle.zip\u0026quot; -o \u0026quot;awscli-bundle.zip\u0026quot;\nunzip awscli-bundle.zip\nsudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws\nOnce the tools have been installed, run the \u0026ldquo;aws configure\u0026rdquo; command and fill out some default configuration information. This includes access keys and secret access keys, as well as a default region to use and a default output format. Now, just to demo that we can create an AWS Cloud Formation Template stack, we\u0026rsquo;ll run a test. The picture below shows that a stack is created from the command that we\u0026rsquo;ve executed with the Amazon CLI tools.\nvRealize Orchestrator Workflow Here\u0026rsquo;s where the magic happens. We will build a vRO workflow that formats a string and then passes that command over to a \u0026ldquo;Run SSH Command\u0026rdquo; workflow. All we\u0026rsquo;re doing here is taking the command we ran from the CLI and putting it into a vRO workflow. Then vRO will try to execute this command on the Linux proxy.\nBelow, my Format Command script will gather some parameters that I want to be able to customize when sending the request to AWS to create my stack. You can see that I\u0026rsquo;ve added variables for stackName and KeyName in my string.\ncmd = \u0026quot;aws cloudformation create-stack --stack-name \u0026quot; + stackName + \u0026quot; --template-url https://s3.amazonaws.com/cf-templates-lmixyjh9tl6m-us-east-1/2016044zXm-template1kgn967yo2mqs38fr --parameters ParameterKey=KeyName,ParameterValue=\u0026quot; + KeyName\nSystem.log(cmd);\nIf you try to manually build the stack from the AWS Console, you\u0026rsquo;ll see that there are a few more variables that I could have used, but I kept this simple with just a stack name and a key name. The rest of the parameters must have a default setting if you don\u0026rsquo;t plan to pass them along with the script.\nPass the cmd variable over to the SSH workflow to have your Linux proxy server execute the command for you.\nXaaS Blueprint Now that you\u0026rsquo;ve got a vRO workflow that can build a stack from a Cloud Formation Template, log into your vRealize Automation instance and publish an XaaS Blueprint. (This was previously done from the Advanced Services Designer in version 6 or before.) Create the XaaS blueprint, publish it and then entitle it for your users. If you need more instructions on this, please see the vRealize Automation 7 Guide for instructions.\nCreate a Request Now that all the setup is done, let\u0026rsquo;s make a request. Click the AWS Cloud Formation Template catalog item that was published in vRA. Enter a description and a reason for the request.\nEnter any of the parameters that you\u0026rsquo;ve seen fit to require. Notice in the example below that even though my vRO workflow requires two parameters, I hard-coded the Amazon Key Name so that it cannot be changed. This means that I only need to enter a stack name to deploy it.\nIf we want to see what\u0026rsquo;s going on, check the \u0026ldquo;Requests\u0026rdquo; tab of vRA to ensure the request finished. 
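If you would rather watch from the command line, the same Linux proxy can report on the stack as well. Something like the following, with your own stack name substituted, returns the current status (CREATE_IN_PROGRESS, CREATE_COMPLETE, and so on):
aws cloudformation describe-stacks --stack-name MyDemoStack --query 'Stacks[0].StackStatus'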
Check vRO to ensure that the workflow ran successfully, and you can check in AWS to make sure that a stack is being created.\nAfter a few minutes (depending on how complex your template is) you should see a stack completed message with your stack name specified.\nA quick test reveals that I am serving up the AMI Test page through Apache. Summary Maybe this isn\u0026rsquo;t quite as slick as having a way to manage the entire lifecycle of your virtual machines through vRA, but it is a really cool way to have a single portal to deploy on-premises, EC2 and Cloud Formation Templates. The tricky piece is managing the lifecycle of the stack. Perhaps an additional element in your vRO workflow that spins down the stack hours, or days later, or a second XaaS Blueprint that lets users destroy a stack based on the name. This would all depend on the specific use case, but this post should get your brain moving. I hope you\u0026rsquo;ve found it useful in your cloud endeavors.\n","permalink":"https://theithollow.com/2016/03/14/aws-cloud-formation-templates-in-vrealize-automation/","summary":"\u003cp\u003eAmazon has a pretty cool service that allows you to create a template for an entire set of infrastructure. This isn\u0026rsquo;t a template for a virtual machine, or even a series of virtual machines, but a whole environment. You can create a template with servers, security groups, networks and even PaaS services like their relational database service (RDS). Hey, in today\u0026rsquo;s world, infrastructure as code is the direction things are going and AWS has a pretty good solution for that already.\u003c/p\u003e","title":"AWS Cloud Formation Templates in vRealize Automation"},{"content":"In the previous post we went over how to get the basics configured for NSX and vRealize Automation integration. In this post we\u0026rsquo;ll build a blueprint and deploy it! Let\u0026rsquo;s jump right in and get started.\nBlueprint Designer Login to your vRA tenant and click on the Design Tab. Create a new blueprint just like we have done in the past posts. This time when you are creating your blueprint, click the NSX Settings tab and select the Transport zone. I\u0026rsquo;ve also added a reservation policy that can help define with reservations are available for this blueprint.\nNow that you\u0026rsquo;ve got the designer open, you can drag and drop your blueprints into the grid just like you have always done. But now, once you\u0026rsquo;ve added your servers in, you can drag and drop in Network \u0026amp; Security components. I\u0026rsquo;ve decided to add three \u0026ldquo;On-Demand Routed Networks\u0026rdquo;\nOnce you\u0026rsquo;ve added your network to the grid, you\u0026rsquo;ll need to configure it. Give the network a name and then select the parent network profile that we created in the previous post. This should be a routed profile.\nOnce your networks have been configured, click on your blueprint and go to the network properties. Select the network in which to join the virtual machine.\nWhen I was all done with my three-tiered routed app the blueprint designer looked like this.\nNote: I do want to mention that you can not only add your networks into the designer, but can also add security configurations. Maybe your web server should be firewalled and only port 443 allowed. You can drag that security profile into the grid as well. 
Pretty neat!\nDeploy the blueprint I\u0026rsquo;m not going to go through the motions of requesting a new machine from blueprint, but publish your blueprint to the catalog and request a new one. When you\u0026rsquo;re done, you\u0026rsquo;ll see something similar to this in the items list.\nThe Distributed Logical Router will have three additional interfaces added to it.\nThere will be three more switches added as well that correspond with the additional interfaces on the DLR.\nThe three virtual machines created will be on different networks.\nSummary OK, so the new designer makes things really easy to deploy multiple networks and security settings with your servers. The visual way that servers and networks can be deployed should really make network deployments more popular. If you\u0026rsquo;re building out vRA 7 in your environment and you\u0026rsquo;ve been considering using NSX for a while, this may be the tipping point.\n","permalink":"https://theithollow.com/2016/03/09/vrealize-automation-7-deploy-nsx-blueprints/","summary":"\u003cp\u003eIn the \u003ca href=\"http://wp.me/p32uaN-1Cy\"\u003eprevious post\u003c/a\u003e we went over how to get the basics configured for NSX and vRealize Automation integration. In this post we\u0026rsquo;ll build a blueprint and deploy it! Let\u0026rsquo;s jump right in and get started.\u003c/p\u003e\n\u003ch2 id=\"blueprint-designer\"\u003eBlueprint Designer\u003c/h2\u003e\n\u003cp\u003eLogin to your vRA tenant and click on the Design Tab. Create a new blueprint just like we have done in the \u003ca href=\"/2016/01/28/vrealize-automation-7-blueprints/\"\u003epast posts\u003c/a\u003e. This time when you are creating your blueprint, click the NSX Settings tab and select the Transport zone. I\u0026rsquo;ve also added a reservation policy that can help define with reservations are available for this blueprint.\u003c/p\u003e","title":"vRealize Automation 7 - Deploy NSX Blueprints"},{"content":"Its time to think about deploying our networks through vRA. Deploying servers are cool, but deploying three tiered applications in different networks is cooler. So lets add VMware NSX to our cloud portal and get cracking.\nThe first step is to have NSX up and running in your vSphere environment. Once this simple task is complete, a Distributed Logical Router should be deployed with an Uplink interface configured. The diagram below explains what needs to be setup in vSphere prior to doing any configurations in vRealize Automation. A Distributed Logical Router with a single uplink to an Edge Services Gateway should be configured first, then any new networks will be built through the vRealize Automation integration. While the section of the diagram that is manual, will remain roughly the same throughout, the section handled by vRealize Automation will change often, based on the workloads that are deployed. Note: be sure to setup some routing between your Provider Edge and the DLR so that you can reach the new networks that vRA creates.\nBelow, you can also see my NSX DLR prior to any vRealize Automation configurations being done.\nNow, make sure that your vRealize Orchestrator endpoint is setup correctly and configured. Before we do anything with NSX we need to make sure that the NSX plugin is installed on your vRO endpoint. NSX will utilize this plugin to setup new networks, switches etc. 
Be sure to do this before continuing.\nEndpoint Setup The first configuration that needs to happen in vRealize Automation is to re-configure your vCenter endpoint to add your NSX connection. Find the vCenter endpoint and add a URL and set of credentials that connect to the NSX manager.\nNetwork Profiles Now we need to setup some network profiles. For the purposes of this demonstration, I\u0026rsquo;ve setup four network profiles. My Transit network profile which is external and three routed network profiles. The transit network profile will be used in the reservations to show which uplink is used to get to the physical network. In this case it goes through our DLR to our Edge Services Gateway.\nThe transit network setup looks something like the example below where my gateway is the next hop route to our Edge Services Gateway.\nIn the IP Ranges tab, I\u0026rsquo;ve added some IP Addresses that are available on my transit network\nNow if we look at the Routed Network Profiles, here we\u0026rsquo;re added some networking information that probably doesn\u0026rsquo;t even exist yet in your networks. These networks will be automatically created by vRA by leveraging NSX. There are a couple of important things to review here, The first is the external network profile. This profile should be the external Transit profile that we created just a moment ago. This tells vRA which uplink will be used as a gateway network to the rest of the environment. The next thing is to determine the subnet mask for the whole profile, and then a range subnet mask which is a segment of that range.\nOnce you\u0026rsquo;ve setup the details, click on the IP Ranges tab where you should be able to click \u0026ldquo;Generate Ranges.\u0026rdquo; This will create each of the subnets that can be used by vRA for your segmented applications.\nReservations Now that we\u0026rsquo;ve setup the network profiles, we can create or modify our vRA reservation. The first step here is to map the external network profile we created earlier, to the port group that it belongs with. Next, under the advanced settings section, select the transport zone that was created in NSX. Below this you can add security groups to the reservation automatically if you would like. Lastly, under routed gateways, select the Distributed Logical Router that as created and then in the drop downs, select the interface and the network profile that corresponds with your external network.\nIf your routed gateways don\u0026rsquo;t show up, make sure you\u0026rsquo;ve run a discovery on your compute cluster for \u0026ldquo;Networking and Security\u0026rdquo;. Also, make sure that you\u0026rsquo;ve created a Distributed Logical Router and not an Edge.\nSummary In this post, we setup our basic configurations in vRealize Automation and connected it to our NSX manager. The reservations and network profiles are now ready for us to build some blueprints with on-demand networks, which we\u0026rsquo;ll discuss in the next post.\n","permalink":"https://theithollow.com/2016/03/07/6234/","summary":"\u003cp\u003eIts time to think about deploying our networks through vRA. Deploying servers are cool, but deploying three tiered applications in different networks is cooler. So lets add VMware NSX to our cloud portal and get cracking.\u003c/p\u003e\n\u003cp\u003eThe first step is to have NSX up and running in your vSphere environment. Once this simple task is complete, a Distributed Logical Router should be deployed with an Uplink interface configured. 
The diagram below explains what needs to be setup in vSphere prior to doing any configurations in vRealize Automation. A Distributed Logical Router with a single uplink to an Edge Services Gateway should be configured first, then any new networks will be built through the vRealize Automation integration. While the section of the diagram that is manual, will remain roughly the same throughout, the section handled by vRealize Automation will change often, based on the workloads that are deployed. Note: be sure to setup some routing between your Provider Edge and the DLR so that you can reach the new networks that vRA creates.\u003c/p\u003e","title":"vRealize Automation 7 - NSX Initial Setup"},{"content":"XaaS isn\u0026rsquo;t a made up term, well maybe it is, but it supposed to stand for \u0026ldquo;Anything as a Service.\u0026rdquo; vRealize Automation will allow you to publish vRO workflows in the service catalog. This means that you can publish just about any thing you can think of, and not just server blueprints. If you have a workflow that can order your coffee and have it delivered to you, then you can publish it in your vRA service catalog. Side note, if you have that workflow, please share it with the rest of us.\nCreate a XaaS Blueprint Before you begin, make sure that the user who will be adding these new service blueprints is an XaaS Architect.\nTo create an XaaS Blueprint, go to the Design Tab \u0026ndash;\u0026gt; XaaS \u0026ndash;\u0026gt; XaaS Blueprints. Click the \u0026ldquo;New\u0026rdquo; button to add a new blueprint.\nSelect the vRO blueprint that should be added to the service catalog.\nGive the blueprint a name and description. Click Next.\nThe inputs from the vRO blueprint should be added to the main form. It is possible to customize how the form will look when published to end users. Rearrange details, add or remove fields and click next when you\u0026rsquo;re ready. On the provisioned resource tab, leave the field at \u0026ldquo;No provisioning\u0026rdquo;.\nWhen you\u0026rsquo;re done, you\u0026rsquo;ll see your XaaS blueprint in the list. Remember that before anything can be requested from that, it must be published.\nSummary An XaaS Blueprint is a great way to add functionality to your cloud portal. The cloud doesn\u0026rsquo;t need to be used for just server provisioning. Helpdesk requests, or any other types of automated services can also be made available to your users.\n","permalink":"https://theithollow.com/2016/02/29/vrealize-automation-7-xaas-blueprints/","summary":"\u003cp\u003eXaaS isn\u0026rsquo;t a made up term, well maybe it is, but it supposed to stand for \u0026ldquo;Anything as a Service.\u0026rdquo; vRealize Automation will allow you to publish vRO workflows in the service catalog. This means that you can publish just about any thing you can think of, and not just server blueprints. If you have a workflow that can order your coffee and have it delivered to you, then you can publish it in your vRA service catalog. \u003cem\u003eSide note, if you have that workflow, please share it with the rest of us.\u003c/em\u003e\u003c/p\u003e","title":"vRealize Automation 7 – XaaS Blueprints"},{"content":"In a previous post we went over installing an Enterprise Install of vRealize Automation behind a load balancer. This install required us to setup a Load Balancer with three VIPs but also required that we only had one active member in each VIP. 
A load balancer with a single member doesn\u0026rsquo;t really balance much load does it?\nAfter the installation is done, some modifications need to be made on the Load Balancer. The instructions on this can be found in the official vRealize Automation Load Balancing Configuration Guide if you want to learn more. There are several examples on how to setup load balancing on an F5 load balancer and NSX for example. This post will focus on a KEMP load balancer which is free for vExperts and it will all be shown through with GUI examples.\nIaaS Manager Service Let\u0026rsquo;s start with the Iaas Manager Service. Modify your IaaSmgmt service VIP and make sure the load balancing method is \u0026ldquo;Round Robin\u0026rdquo;. Then we want to add some health check parameters. the URL that we should check is located at /VMPSProvision and we\u0026rsquo;re going to run a \u0026ldquo;GET\u0026rdquo; command on this. The return response should be \u0026ldquo;ProvisionService\u0026rdquo; without the quotes.\nWeb Services Now we\u0026rsquo;ll move onto the web services VIP. Let\u0026rsquo;s modify our IaaSweb VIP and start with updating the persistence options. We want session persistence to be based on the Source Address and have a timeout of 30 minutes. The Load Balancing policy should again be Round Robin and now we can move on to the health check.\nRemember at the end of the Enterprise Install, we\u0026rsquo;re given a screen with Load Balancer Information that we should go back and check? Well, the Web Services health check has the wrong case. Notice that the Monitor for the IaaS Web component says \u0026ldquo;registered\u0026rdquo; but it should actually be \u0026ldquo;REGISTERED.\u0026rdquo;\nIf we check the URL we can see the monitor clearly.\nFinish adding your health check with a URL of /wapi/api/status/web with a GET method that is looking for a pattern named \u0026ldquo;REGISTERED.\u0026rdquo;\nvRealize Appliance Now we can do the vRealize Appliance URL. Edit the VIP and add an additional port of 8444 to the existing 443. Port 8444 is used for remote console access which is a useful access method that you might want. Change the persistence options to a source based method, with a timeout of 30 minutes just like we did for the web services. The load balancing method is going to be \u0026ldquo;Round Robin\u0026rdquo; like it has been for our other services, and then it\u0026rsquo;s time to do our health check again.\nThis time we want to look for a URL of /vcac/services/api/health with a GET method, and we\u0026rsquo;re only looking for a 200 or 204 response pattern back so nothing needs to be added in the \u0026ldquo;Reply Pattern\u0026rdquo; box.\nSummary Now we\u0026rsquo;ve added all of our load balancing rules and we can see our VIPs are all up and happy with our members. The enterprise environment now has some failover capabilities that a simple installation is lacking.\n","permalink":"https://theithollow.com/2016/02/24/vrealize-automation-7-load-balancer-rules/","summary":"\u003cp\u003eIn a previous post we went over installing an \u003ca href=\"/2016/02/22/vrealize-automation-7-enterprise-install/\"\u003eEnterprise Install of vRealize Automation\u003c/a\u003e behind a load balancer. This install required us to setup a Load Balancer with three VIPs but also required that we only had one active member in each VIP. 
A load balancer with a single member doesn\u0026rsquo;t really balance much load does it?\u003c/p\u003e\n\u003cp\u003eAfter the installation is done, some modifications need to be made on the Load Balancer. The instructions on this can be found in the official \u003ca href=\"http://pubs.vmware.com/vra-70/topic/com.vmware.ICbase/PDF/vrealize-automation-70-load-balancing.pdf\"\u003evRealize Automation Load Balancing Configuration Guide\u003c/a\u003e if you want to learn more. There are several examples on how to setup load balancing on an F5 load balancer and NSX for example. This post will focus on a KEMP load balancer which is free for vExperts and it will all be shown through with GUI examples.\u003c/p\u003e","title":"vRealize Automation 7 - Load Balancer Rules"},{"content":"OK, You\u0026rsquo;ve done a vRealize Automation 7 simple install and have the basics down. Now it\u0026rsquo;s time to put your grown up pants on, and get an enterprise install done. This is a pretty long process, so be ready, but trust me, this is much better in version 7 than in the past.\nLoad Balancer To start with, you will want to configure your load balancer. An enterprise install means that you\u0026rsquo;ll want at least two of each type of service so that you can protect yourself from a failure. There are three Virtual IPs (VIPs) that should be created prior to starting your install. The table below lists an example list of VIPs with their associated members and ports.\n[table id= 3 /]\nHere is the catch with the VIPs. During the installation, only one member of each VIP should be used. You can either setup your load balancer with a single member for each pool, or you could ensure that your load balancing method ensures that only a single member in the pool is used. For instance you could add all your members to a pool but weight them so that only the first member is getting any traffic through the load balancer.\nYou can see in my example below, I\u0026rsquo;ve setup my KEMP LoadMaster to round robin across a single master.\nNext you\u0026rsquo;ll deploy a pair of vRealize Automation Appliances, a pair of Windows Servers for the IaaS components, and some number Windows servers for DEM workers and DEM Agents. In my lab, I\u0026rsquo;ve decided to go with a pair of vRA appliances, a pair of IaaS Servers and a pair of DEM Servers that will also hold agents. This makes six servers in total, but depending on your workloads and design, you may need more or less of these servers. Once you\u0026rsquo;ve got all your servers deployed and ready to go, let\u0026rsquo;s login to our first vRA appliance by going to https://[vraIPAddressorFQDN]:5480.\nWhen we login to the appliance we\u0026rsquo;ll see that we are brought to an Installation Wizard. Here we want to select \u0026ldquo;Enterprise deployment\u0026rdquo; and make sure to select the \u0026ldquo;Install Infrastructure as a Service\u0026rdquo;. Then click Next.\nThe next screen we\u0026rsquo;ll see a link to the vCAC-IaaSManagementAgent-Setup.msi. Download this file and run it on every one of your IaaS Severs and DEM Workers and Agent Servers.\nWhen you\u0026rsquo;re installing the Management Agent on your Windows servers, enter the vRA appliance address that you want to connect with. This should be your primary vRA appliance that we were running our wizard from. Do not enter the Load Balancer VIP or the other vRA appliance or it will cause you issues. 
Then enter the root login and password for the appliance and click \u0026ldquo;Load\u0026rdquo; to get the SHA1 fingerprint. Select the checkbox to confirm that the fingerprint matches and then click next.\nEnter a login name that has administrative privileges on the Windows host, so it can make changes on behalf of the vRA wizard. Click Next and Finish.\nWhen we go back to the vRA installation wizard, we should now see that our Windows servers are listed on the screen now. Enter a time server in and click \u0026ldquo;Fix Times\u0026rdquo; Then click Next.\nThe next screen asks us for the name and login for the secondary vRealize Automation appliance. Enter the info and click Next.\nThe Server Roles screen will ask us where certain components should be installed. Select the servers that you want to install components on and click Next. Now we\u0026rsquo;re at the Prerequisite Checker screen and we need to click the \u0026ldquo;Run\u0026rdquo; button to ensure that all of the correct software and patches are installed prior to installing vRA components. Click Run.\nMy guess is it will come back and show you a screen like the one below where you have some things missing. Not to worry, Click the \u0026ldquo;Fix\u0026rdquo; button and the wizard will try to fix these settings in conjunction with your management agent.\nOnce you see a screen with all pretty little green checkmarks, you can click \u0026ldquo;Next.\u0026rdquo;\nNow we need to enter the fully qualified name for our vRealize Automation appliance Virtual IP Address that is on our load balancer. Enter in the name that you\u0026rsquo;ve put into the load balancer and click \u0026ldquo;Next.\u0026rdquo;\nEnter a Single Sign-On password and make sure that it meets the complexity requirements listed in the wizard. Click \u0026ldquo;Next.\u0026rdquo;\nNow we\u0026rsquo;ve come to the IaaS Host screen. Here we need to enter the Fully Qualified Domain Name of the VIPs for both the Web Service and the Manager Service. Also, enter a passphrase that will be used to encrypt contents of the database on this screen. Click \u0026ldquo;Next.\u0026rdquo;\nEnter the SQL Server information that will be used to house IaaS database information. You can have the wizard create the database for you if you have the appropriate permissions. Otherwise you can set this up prior and select the database during this step. Click \u0026ldquo;Next.\u0026rdquo;\nSelect the website name and port if they are to be changed from the default. Enter administrative credentials for each of the IaaS servers. Click \u0026ldquo;Validate.\u0026rdquo; Then click \u0026ldquo;Next.\u0026rdquo;\nFor the manager service we need to also enter administrative credentials to install this service. Click \u0026ldquo;Next.\u0026rdquo; Now we enter an instance name for the DEM Managers and some credentials for them. You can also enter an instance description if you would like. Click \u0026ldquo;Validate\u0026rdquo; then click \u0026ldquo;Next.\u0026rdquo;\nHere we enter information for our Agents. The agent is used to connect to an endpoint and our default agent should be named \u0026ldquo;vCenter\u0026rdquo; spelled just like this and case sensitive. Enter login credentials to install the agent and enter the FQDN of the vCenter Endpoint that you\u0026rsquo;re connecting to. Click \u0026ldquo;Next.\u0026rdquo;\nNow it\u0026rsquo;s time to add some SSL certificates to your vRealize Automation infrastructure. To start, we add a certificate for the vRealize Automation appliances URL. 
You can import a certificate or generate your own certificate. I\u0026rsquo;ve chosen to enter some information in and generate a self-signed certificate. This should be avoided for a production workload, but that is not covered in this post. Click \u0026ldquo;Next.\u0026rdquo;\nNow we need to repeat the same steps for the IaaS Web servers. Click \u0026ldquo;Next.\u0026rdquo;\nLastly, repeat the procedure for the Manager Services Certificates. Click \u0026ldquo;Next.\u0026rdquo;\nThe next screen gives you a recap of what VIPs should be setup based on the information that you\u0026rsquo;ve entered into the wizard so far. Double check at this point to make sure the DNS names are all entered correctly and the members are added to the Load balancer correctly. Click \u0026ldquo;Next.\u0026rdquo;\nOn the next screen you\u0026rsquo;ll click the \u0026ldquo;Validate\u0026rdquo; button to have the wizard check with all of the servers and make sure that everything looks good to go before running the installation. This takes about 10 minutes or more, so let it run and come back and check on it. Click \u0026ldquo;Next\u0026rdquo; once all of the commands show up as \u0026ldquo;Succeeded.\u0026rdquo;\nThe next screen is a nice reminder to take some snapshots of your infrastructure before you get going with the install. I would take the hint here and go take some snapshots.\nRun your install! That was a long wizard, but pretty straight forward. The installation will take quite a while to complete. Once you start it, go grab a cup of coffee, and then read a book about coffee, and then check on the install. This really does take some time but you\u0026rsquo;ll get updates when items have finished or if an item fails the installer will pause. If this happens, you can read the error message and are allowed to retry the failed components if necessary. Once you get that pretty screen with all green check marks, you\u0026rsquo;re done with the install. Click \u0026ldquo;Next.\u0026rdquo;\nIf you want to, you can look at your load balancer now and you should see that the VIPs are now up! Go back to your install wizard. The next screen is to add a license key. Add a valid vRA7 license key and click \u0026ldquo;Submit Key.\u0026rdquo; Click \u0026ldquo;Next.\u0026rdquo;\nThe next screen gives you the option to send telemetry data to VMware for them to better the project. This is completely optional. Click \u0026ldquo;Next.\u0026rdquo;\nLastly, you can add some initital content to your vRealize Automation instance. I usually skip this section because I want a nice clean vRA instance that doesn\u0026rsquo;t have any demo information in it already. This is again up to you. Click \u0026ldquo;Next.\u0026rdquo;\nYOU DID IT! That Congratulations message is a welcomed sight after a long wizard. This last screen will also give you some information about making some post install modifications to your load balancers as well. Be careful though, these instructions have a typo in them. Please see the post on Load Balancing Rules to give you more details on this.\nSummary A vRealize Enterprise install has a lot of pieces in it, but they are necessary in order to provide fault tolerance and to distribute load. This post has shown how to complete the installation.\n","permalink":"https://theithollow.com/2016/02/22/vrealize-automation-7-enterprise-install/","summary":"\u003cp\u003eOK, You\u0026rsquo;ve done a vRealize Automation 7 simple install and have the basics down. 
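One quick aside on the VIP recap step above: a mistyped DNS record is one of the easier ways to stall the installer, so it can be worth confirming from the wizard host that the three VIP names resolve and answer on 443 before clicking Validate. A small sketch, with placeholder names standing in for whatever FQDNs you gave your load balancer:

#!/usr/bin/env python3
# Pre-flight check before the enterprise install: confirm the three VIP FQDNs
# resolve in DNS and accept connections on 443. The names are placeholders.
import socket

VIP_FQDNS = ["vra.lab.local", "iaasweb.lab.local", "iaasmgr.lab.local"]

for fqdn in VIP_FQDNS:
    try:
        ip = socket.gethostbyname(fqdn)                     # does the name resolve?
        with socket.create_connection((ip, 443), timeout=5):
            print(f"{fqdn} -> {ip}: answering on 443")
    except OSError as exc:
        print(f"{fqdn}: FAILED ({exc})")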
Now it\u0026rsquo;s time to put your grown up pants on, and get an enterprise install done. This is a pretty long process, so be ready, but trust me, this is much better in version 7 than in the past.\u003c/p\u003e\n\u003ch1 id=\"load-balancer\"\u003eLoad Balancer\u003c/h1\u003e\n\u003cp\u003eTo start with, you will want to configure your load balancer. An enterprise install means that you\u0026rsquo;ll want at least two of each type of service so that you can protect yourself from a failure. There are three Virtual IPs (VIPs) that should be created prior to starting your install. The table below lists an example list of VIPs with their associated members and ports.\u003c/p\u003e","title":"vRealize Automation 7 – Enterprise Install"},{"content":"We\u0026rsquo;ve deployed a virtual machine from a vRA blueprint, but we still have to manage that machine. One of the cool things we can do with vRealize Automation 7 is to add a custom action. This takes the virtual machine object and runs a vRealize Orchestrator workflow against that input. We call these actions \u0026ldquo;Day 2 Operations\u0026rdquo; since they happen post provisioning.\nTo create a new custom resource action go to the Design Tab \u0026ndash;\u0026gt; Design \u0026ndash;\u0026gt; Resource Actions. Click the \u0026ldquo;New\u0026rdquo; button to add a new action.\nSelect the Orchestrator workflow from the list.\nThe vRO workflow should have an input parameter that can be passed from a server blueprint. I\u0026rsquo;m using a VC:VirtualMachine parameter because I know it will identify the virtual machine and is passed automatically.\nOn the Input Resource tab, select the IaaS VC Virtual Machine as the resource type; the Input Parameter should be filled in already.\nOn the details tab, enter the name and a description. The Type in my case is blank because I\u0026rsquo;m not using it for provisioning or deprovisioning. Change the form to match your requirements. I like to keep the form as empty as possible so that users are able to request the action from a blueprint and vRO attributes fill in the rest.\nWhen you\u0026rsquo;re done, be sure to \u0026ldquo;Publish\u0026rdquo; the action so that it can be used.\nNow we need to configure the action, much like we\u0026rsquo;ve configured our catalog items in a previous post.\nGive the action an icon and click Finish.\nNow, when we provision a virtual machine, we can see the Action that we created in our list. We can now run this action from the Items screen.\nSummary Custom Actions are a great way to allow our users to manage their own resources after they\u0026rsquo;ve provisioned them. Since it\u0026rsquo;s a vRealize Orchestrator workflow, we can use these actions to put guardrails on actions to protect users from themselves. For instance, maybe we replace the \u0026ldquo;Snapshot\u0026rdquo; action with a custom action that also deletes the snapshots after 3 days. It certainly can reduce helpdesk tickets that come in and ask for a snapshot to be taken.\n","permalink":"https://theithollow.com/2016/02/15/vrealize-automation-7-custom-actions/","summary":"\u003cp\u003eWe\u0026rsquo;ve deployed a virtual machine from a vRA blueprint, but we still have to manage that machine. One of the cool things we can do with vRealize Automation 7 is to add a custom action. This takes the virtual machine object and runs a vRealize Orchestrator workflow against that input. 
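The snapshot-cleanup idea mentioned in the summary above is a good picture of what these guardrail actions usually do. The post builds it as a vRealize Orchestrator workflow; purely to illustrate the logic (this is not the author's implementation), here is a standalone pyVmomi sketch of the same "remove snapshots older than three days" pass, with the vCenter address, credentials, and VM name as placeholders.

#!/usr/bin/env python3
# Illustration only: the 3-day snapshot cleanup a guardrail action might perform,
# written against pyVmomi. Connection details and the VM name are placeholders.
import ssl
from datetime import datetime, timedelta, timezone

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

MAX_AGE = timedelta(days=3)

si = SmartConnect(host="vcenter.lab.local", user="user@domain.local", pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "HollowAdmin0003")  # placeholder VM name

    def walk(tree):
        # Flatten the snapshot tree into a single iterable of SnapshotTree nodes.
        for node in tree:
            yield node
            yield from walk(node.childSnapshotList)

    if vm.snapshot:
        now = datetime.now(timezone.utc)
        for node in walk(vm.snapshot.rootSnapshotList):
            if now - node.createTime > MAX_AGE:
                print(f"Removing snapshot '{node.name}' taken {node.createTime}")
                node.snapshot.RemoveSnapshot_Task(removeChildren=False)
finally:
    Disconnect(si)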
We call these actions \u0026ldquo;Day 2 Operations\u0026rdquo; since they happen post provisioning.\u003c/p\u003e\n\u003cp\u003eTo create a new custom resource action go to the Design Tab \u0026ndash;\u0026gt; Design \u0026ndash;\u0026gt; Resource Actions. Click the \u0026ldquo;New\u0026rdquo; button to add a new action.\u003c/p\u003e","title":"vRealize Automation 7 – Custom Actions"},{"content":"Custom Properties are used to control aspects of machines that users are able to provision. For example, memory and CPU are required information that are necessary for users to deploy a VM from a blueprint. Custom properties can be assigned to a blueprint or reservation to control how memory and CPU should be configured.\nCustom properties are really powerful attributes that can vastly change how a machine behaves. I like to think of custom properties as the \u0026ldquo;Windows Registry\u0026rdquo; of vRealize Automation. Changing one property can have a huge effect on deployments.\nTo add a custom property to a blueprint, open the blueprint in the \u0026ldquo;Design\u0026rdquo; tab. Select the blueprint we\u0026rsquo;re working with and then click the vSphere Machine that is on the \u0026ldquo;Design Canvas\u0026rdquo;.\nNow click on the properties tab of the machine object and click the \u0026ldquo;Custom Properties\u0026rdquo; tab. Here we can click the \u0026ldquo;New\u0026rdquo; button to add a new property.\nFrom here, we need to enter a name and a value. What you enter here will really vary but a list of the custom properties can be found at the official VMware vRealize Automation Documentation. I do want to call out a few custom properties that you may find very valuable, especially if you\u0026rsquo;re just getting started.\nNot all of the virtual machine properties are passed to vRealize Orchestrator in vRA7 unless you add the following custom property to the blueprint. Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.[LIFECYCLESTATE]. Where Lifecycle state is the name of the lifecycle that the machine would be in.\nFor instance Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.BuildingMachine with a Value of __*,* will pass all hidden properties as well as all of the normal properties. If you didn\u0026rsquo;t figure it out, the __* (Double underscore, asterisk) denotes a hidden property. You can see the actual property that I\u0026rsquo;ve added to my blueprint in the screenshot below. Notice that I\u0026rsquo;ve added two custom properties so that all the attributes are passed during both the BuildingMachine and Requested Lifecycle states.\nWhen the custom properties below are passed over to vRealize Orchestrator we can list all the properties of the machine to do custom workflows. 
The screenshot below shows a list of properties that are passed over by default.\n[2016-01-05 11:42:40.016] [I] BlueprintName: CentOS [2016-01-05 11:42:40.021] [I] ComponentId: CentOS [2016-01-05 11:42:40.022] [I] ComponentTypeId: Infrastructure.CatalogItem.Machine.Virtual.vSphere [2016-01-05 11:42:40.022] [I] EndpointId: 12250e26-da94-4c0a-b19d-5c5d7c73ebcb [2016-01-05 11:42:40.023] [I] RequestId: 16d179cc-a1ce-4261-831e-cd54ed009c3f [2016-01-05 11:42:40.024] [I] VirtualMachineEvent: null [2016-01-05 11:42:40.025] [I] WorkflowNextState: null [2016-01-05 11:42:40.028] [I] State: VMPSMasterWorkflow32.Requested [2016-01-05 11:42:40.029] [I] Phase: PRE [2016-01-05 11:42:40.030] [I] Event: null [2016-01-05 11:42:40.030] [I] ID: 4e87d827-50b4-407a-b9b7-955db9d644af [2016-01-05 11:42:40.033] [I] Name: HollowAdmin0003 [2016-01-05 11:42:40.034] [I] ExternalReference: null [2016-01-05 11:42:40.034] [I] Owner: user@domain.local [2016-01-05 11:42:40.035] [I] Type: 0 [2016-01-05 11:42:40.036] [I] Properties: HashMap:1409151584 [2016-01-05 11:42:40.040] [I] vRA VM Properties : Cafe.Shim.VirtualMachine.TotalStorageSize : 16 Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.BuildingMachine : __*,* Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.Requested : __*,* VirtualMachine.Admin.TotalDiskUsage : 16384 VirtualMachine.CPU.Count : 1 VirtualMachine.Cafe.Blueprint.Component.Cluster.Index : 0 VirtualMachine.Cafe.Blueprint.Component.Id : CentOS VirtualMachine.Cafe.Blueprint.Component.TypeId : Infrastructure.CatalogItem.Machine.Virtual.vSphere VirtualMachine.Cafe.Blueprint.Id : CentOS VirtualMachine.Cafe.Blueprint.Name : CentOS VirtualMachine.Disk0.IsClone : true VirtualMachine.Disk0.Label : Hard disk 1 VirtualMachine.Disk0.Size : 16 VirtualMachine.Disk0.Storage : Synology02 VirtualMachine.Memory.Size : 2048 VirtualMachine.Network0.Name : VMs-VLAN VirtualMachine.Storage.Name : Synology02 __Cafe.Request.BlueprintType : 1 __Cafe.Request.VM.HostnamePrefix : Use group default __Cafe.Root.Request.Id : 276b4854-db3b-4cca-9a06-fc070c1081d1 __Clone_Type : CloneWorkflow __InterfaceType : vSphere __Legacy.Workflow.ImpersonatingUser : __Legacy.Workflow.User : user@domain.local __VirtualMachine.Allocation.InitialMachineState : SubmittingRequest __VirtualMachine.ProvisioningWorkflowName : CloneWorkflow __api.request.callback.service.id : 6da9b261-33ce-495e-b91a-4f50c202635d __api.request.id : 16d179cc-a1ce-4261-831e-cd54ed009c3f __clonefrom : CentOS-Template __clonefromid : 18ff68d0-5190-4b42-99b3-641145378a3a __clonespec : CentOS __request_reason : __trace_id : FhP8UgRD _number_of_instances : 1\nYou can see from that list, there are a bunch of properties assigned to a VM by default and you can makeup your own properties if you\u0026rsquo;d like. Maybe you want a variable passed named \u0026ldquo;DR Server\u0026rdquo; and the value is either Yes or No. vRealize Orchestrator could read that property, if you add it to your blueprint, and you can make a decision based on that. In another case, maybe you want to make a decision about what datastore in which the vm should be placed, and you use vRO to update that property so the machine is deployed in a different datastore.\nThere are three more things you need to add to a custom property besides \u0026ldquo;Name\u0026rdquo; and \u0026ldquo;Value.\u0026rdquo; These are Encrypted, Overridable and Show in Request. Lets take a look at these.\nEncrypted - Removes clear text from the vRealize Automation GUI. Hint: use this for passwords. 
Overridable - Allows the user to change the property during provisioning. Show in Request - Prompts the user to enter this property during provisioning. Summary Custom Properties are a must-have piece of your vRealize Automation instance if you plan to do any serious customization or decision making. These properties allow you to add variables and make the solution fit within your organizational structure. Learn them, love them and get used to dealing with them.\n","permalink":"https://theithollow.com/2016/02/10/vrealize-automation-7-custom-properties/","summary":"\u003cp\u003eCustom Properties are used to control aspects of machines that users are able to provision. For example, memory and CPU are required information that are necessary for users to deploy a VM from a blueprint. Custom properties can be assigned to a blueprint or reservation to control how memory and CPU should be configured.\u003c/p\u003e\n\u003cp\u003eCustom properties are really powerful attributes that can vastly change how a machine behaves. I like to think of custom properties as the \u0026ldquo;Windows Registry\u0026rdquo; of vRealize Automation. Changing one property can have a huge effect on deployments.\u003c/p\u003e","title":"vRealize Automation 7 - Custom Properties"},{"content":"In vRealize Automation 7 a new concept was introduced called a \u0026ldquo;Subscription.\u0026rdquo; A subscription is a way to allow you to execute a vRealize Orchestrator workflow based on some sort of event that has taken place in vRA. Simple idea, huh? Well, some of you might be thinking to yourself, \u0026ldquo;Yeah, this is called a stub, Duh!\u0026rdquo; The truth is that stubs are still available in vRealize Automation 7 but are clearly being phased out, and we should stop using them soon because they are not likely to be around in future versions. The idea of an event subscription is a lot like a stub in the context of machine provisioning, but there are a lot more events that can be triggered than the stubs that have been around in previous versions. Let\u0026rsquo;s take a look.\nTo begin, we will go to the Administration Tab \u0026ndash;\u0026gt; Events \u0026ndash;\u0026gt; Subscriptions. Here we can add a new subscription by clicking the \u0026ldquo;New\u0026rdquo; button with the plus sign.\nThe first screen that shows up is the \u0026ldquo;Event Topic\u0026rdquo; screen. An event topic describes the type of event that we\u0026rsquo;re going to watch for. You can see that we can trigger an action from a variety of different types of events. It\u0026rsquo;s important to note that the machine provisioning events are similar to stubs, but the other events would be new concepts and can be triggered by vRA reconfigurations like changing a blueprint. Maybe you trigger an email from vRO every time a business group is changed or something. For the purposes of this post, I\u0026rsquo;m using the \u0026ldquo;Machine Provisioning\u0026rdquo; event, which would be very similar to a \u0026ldquo;Stub.\u0026rdquo; If you\u0026rsquo;d like to see what the other Event Topics are for, please check the official VMware vRealize Automation Documentation for Event Topics.\nOnce you\u0026rsquo;ve selected an Event Topic, you\u0026rsquo;ll notice that the schema will be displayed on the right-hand side of the screen. This explains the data that will be passed to vRealize Orchestrator during the event. When you\u0026rsquo;ve reviewed it, click Next.\nNow we get to the conditions tab. 
By default, the \u0026ldquo;Run for all events\u0026rdquo; option is selected. I encourage you to leave this alone and run it one time with some really basic \u0026ldquo;Hello World\u0026rdquo; type workflow just to see what it does, but in the rest of this post we\u0026rsquo;re going to set some specific conditions.\nChange the radio option to \u0026ldquo;Run based on conditions\u0026rdquo; and then choose \u0026ldquo;All of the following.\u0026rdquo; This will allow us to enter a list of conditions, and every one of them must be met before the action is triggered.\nNext, click the dropdown and select \u0026ldquo;Lifecycle state name\u0026rdquo;.\nIn the next box choose \u0026ldquo;Equals\u0026rdquo;.\nAnd in the last box leave the radio button on \u0026ldquo;Constant\u0026rdquo; and then select \u0026ldquo;VMPSMasterWorkflow32.BuildingMachine.\u0026rdquo; This building machine lifecycle state should be familiar because it\u0026rsquo;s the same name as a stub in vRealize Automation 6, but if you\u0026rsquo;re new to this, it just means that this is the stage of the provisioning lifecycle where the machine is actually being built.\nTo recap what we\u0026rsquo;ve done in the past few steps, we said we only want our workflow to trigger when the lifecycle state = VMPSMasterWorkflow32.BuildingMachine.\nWe\u0026rsquo;re not finished yet; we\u0026rsquo;re going to add one more condition here. Click the \u0026ldquo;Add expression\u0026rdquo; link to add another condition.\nThis time select \u0026ldquo;state phase\u0026rdquo; and then \u0026ldquo;equals\u0026rdquo;. Then in the last dropdown leave the radio button on \u0026ldquo;Constant\u0026rdquo; and select \u0026ldquo;Event\u0026rdquo;. OK, so what is a state phase? Well, in version 7 of vRealize Automation, we don\u0026rsquo;t have just one \u0026ldquo;Building Machine\u0026rdquo; option, but rather three! A pre-building machine, a post-building machine and the actual building machine event. If you didn\u0026rsquo;t specify the state phase and you built a new virtual machine from a blueprint, the \u0026ldquo;Building Machine\u0026rdquo; event subscription would trigger 3 times! Once for each phase.\nIf you need more information about the lifecycle states, please check out the official VMware vRealize Automation Documentation on lifecycle states.\nOnce we\u0026rsquo;ve added all of our conditions, click Next to go to the \u0026ldquo;Workflow\u0026rdquo; tab. Select the vRealize Orchestrator workflow that you want to run when the event occurs. Then click Next.\nOn the details tab, enter a name for the subscription and a description.\nThere is also an option for blocking, which means that other workflows have to wait on this workflow to finish before running. If you don\u0026rsquo;t click the \u0026ldquo;blocking\u0026rdquo; checkbox then any other subscriptions may run simultaneously. To determine what order blocking tasks run in, you will then have to enter a priority. You\u0026rsquo;ll also be able to put in a timeout period to move on to the next workflow if your first one seems to have taken too long to execute.\nOnce you\u0026rsquo;ve finished setting up your event, be sure to click on the subscription in the list and then click \u0026ldquo;Publish.\u0026rdquo; I always forget to do this piece. :)\nSummary Subscription events should be a pretty quick concept to grasp if you\u0026rsquo;re familiar with stub workflows in previous versions. 
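To make the effect of that second condition concrete, here is a tiny standalone model of the filtering logic described above. This is not vRA's API, just an illustration; the state and phase strings are borrowed from the examples earlier in the post.

# A made-up event stream: the BuildingMachine state is reported in three phases.
events = [
    {"state": "VMPSMasterWorkflow32.Requested",       "phase": "PRE"},
    {"state": "VMPSMasterWorkflow32.BuildingMachine", "phase": "PRE"},
    {"state": "VMPSMasterWorkflow32.BuildingMachine", "phase": "EVENT"},
    {"state": "VMPSMasterWorkflow32.BuildingMachine", "phase": "POST"},
]

# Condition 1 only: lifecycle state name equals VMPSMasterWorkflow32.BuildingMachine.
state_only = [e for e in events if e["state"] == "VMPSMasterWorkflow32.BuildingMachine"]

# Conditions 1 and 2: also require the state phase to be the actual event.
state_and_phase = [e for e in state_only if e["phase"] == "EVENT"]

print(len(state_only))       # 3 -- the subscription would fire once per phase
print(len(state_and_phase))  # 1 -- only the real building-machine event

With both conditions in place, the workflow runs once per provisioned machine instead of three times.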
They can be very powerful and there are many more opportunities to jump out of vRA and execute a task via Orchestrator now and that is a very good thing. More options are better but we need to take a few minutes to learn the new lifecycle states and phases before we can use them effectively.\n","permalink":"https://theithollow.com/2016/02/08/vrealize-automation-7-subscription/","summary":"\u003cp\u003eIn vRealize Automation 7 a new concept was introduced called a \u0026ldquo;Subscription.\u0026rdquo; A subscription is a way to allow you to execute a vRealize Orchestrator workflow based on some sort of event that has taken place in vRA. Simple idea huh? Well some of you might be thinking to yourself, \u0026ldquo;Yeah, this is called a stub, Duh!\u0026rdquo; The truth is that stubs are still available in vRealize Automation 7 but are clearly being phased out and we should stop using them soon because they are likely to not be around in future versions. The idea of an event subscription is a lot like a stub when in the context of machine provisioning, but there are a lot more events that can be triggered than the stubs that have been around in previous versions. Let\u0026rsquo;s take a look.\u003c/p\u003e","title":"vRealize Automation 7 – Subscriptions"},{"content":"I\u0026rsquo;m often reminded of a scene from the movie \u0026ldquo;Boiler Room\u0026rdquo; when I see public spats between employees of competing technologies. Ben Affleck plays a young, wealthy and charismatic salesman who is trying to encourage the firm\u0026rsquo;s new employees to have a certain swagger about them. He says, \u0026ldquo;Act as if\u0026rdquo; and then gives some descriptions of things you can act like, for instance the President of the Firm. His point was that you should have a certain confidence about you that doesn\u0026rsquo;t need to be explained to people. It exists, it\u0026rsquo;s there, people know it, and you haven\u0026rsquo;t said anything to them about it.\nIts not uncommon these days to see companies with generally similar approaches to solving technology problems have a public argument over social media about why one product is better than another.\nOur product does 20% better at this thing than yours does!\nWell, Our product doesn\u0026rsquo;t cost as much as yours, to do the same thing!\nStop it! My message here is very simple.\nCustomers will be turned off by this type of talk from both sides. They\u0026rsquo;ll see both companies as lesser products that are fighting to gain whatever sales they can, just to stay afloat. This may not even be the reality of the situation, but it will be the perception. Here\u0026rsquo;s a question for you: Do you remember Apple explaining to people why the iPod was better than Microsoft\u0026rsquo;s Zune? Neither do I. Just be awesome and ignore the rest.\nNow, you may say that your customers won’t see these social media spats anyway, so it’s not a big deal, but it is. These types of conversations spread and become part of your swagger. Having reasons X, Y and Z why your product is far superior to another becomes your sales pitch and that\u0026rsquo;s the wrong message. This is not an ideal that you would want floating throughout your organization.\nBe the Lion I\u0026rsquo;m very lucky to work for a company filled with really gifted and passionate people who love the work that they do. But one of the best things about my job is I don\u0026rsquo;t see people in my company that are running around trying to figure out how to be better than our competitors. 
We are always striving to do what we do better, learn new things, offer new services, and just be awesome. Maybe it happens, but I don\u0026rsquo;t see anyone in our organization talking about our competitors while in customer sales meetings. When we walk into meetings with customers its about showing our methodologies and our passion for those services. If we do this, the customer will see our value and that\u0026rsquo;s all that matters.\nSummary If you have so much passion about your company that you take to social media to talk about it, this is a great thing. But if you\u0026rsquo;re using this time to fight FUD, or sling FUD, then you\u0026rsquo;re losing the war to win the battle. If you\u0026rsquo;ve got the best product, it will show in your confidence and ultimately your bottom line. Bickering over details just makes your solution seem less impressive.\n","permalink":"https://theithollow.com/2016/02/05/act-as-if/","summary":"\u003cp\u003eI\u0026rsquo;m often reminded of a scene from the movie \u0026ldquo;Boiler Room\u0026rdquo; when I see public spats between employees of competing technologies. Ben Affleck plays a young, wealthy and charismatic salesman who is trying to encourage the firm\u0026rsquo;s new employees to have a certain swagger about them. \u003cimg alt=\"download (1)\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2016/02/download-1.jpg\"\u003eHe says, \u0026ldquo;Act as if\u0026rdquo; and then gives some descriptions of things you can act like, for instance the President of the Firm. His point was that you should have a certain confidence about you that doesn\u0026rsquo;t need to be explained to people. It exists, it\u0026rsquo;s there, people know it, and you haven\u0026rsquo;t said anything to them about it.\u003c/p\u003e","title":"Act as If..."},{"content":"You\u0026rsquo;ve created your blueprints and entitled users to use them. How do we get them to show up in our service catalog? How do we make them look pretty and organized? For that, we need to look at managing catalog items.\nLog in as a tenant administrator and go to the Administration Tab \u0026ndash;\u0026gt; Catalog Management \u0026ndash;\u0026gt; Catalog Items. From here, we\u0026rsquo;ll need to look for the blueprint that we\u0026rsquo;ve previously published. Click on the blueprint. The configure catalog item screen will appear. Here, we can assign this catalog item an icon. If you\u0026rsquo;re looking for some great icons to use I would recommend starting with vmtocloud icon pack found here. Next, change the status to Active so that it will show up in the catalog, and lastly, select which service this catalog item should be listed under. Remember that a service is like a group of catalog items. Also, if you want, there is a check box to show the item as \u0026ldquo;New and Noteworthy.\u0026rdquo; This just highlights the catalog item in the service catalog.\nIf we click the entitlements tab, we\u0026rsquo;ll see who has been entitled to the item. Click Finish.\nWhen we go to the service catalog, we should see some nicely laid out items, with icons and grouped together by services. If you don\u0026rsquo;t see the correct things, check to make sure the user logged in has the correct entitlements.\nSummary A blueprint that is published has to get configured so that it shows up all nice and neat in the service catalog. 
Managing catalog items is the way to do this.\n","permalink":"https://theithollow.com/2016/02/02/vrealize-automation-7-manage-catalog-items/","summary":"\u003cp\u003eYou\u0026rsquo;ve created your blueprints and entitled users to use them. How do we get them to show up in our service catalog? How do we make them look pretty and organized? For that, we need to look at managing catalog items.\u003c/p\u003e\n\u003cp\u003eLog in as a tenant administrator and go to the Administration Tab \u0026ndash;\u0026gt; Catalog Management \u0026ndash;\u0026gt; Catalog Items. From here, we\u0026rsquo;ll need to look for the blueprint that we\u0026rsquo;ve previously published. Click on the blueprint.\n\u003cimg alt=\"vra7-catitem1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2016/01/vra7-catitem1.png\"\u003e\u003c/p\u003e","title":"vRealize Automation 7 – Manage Catalog Items"},{"content":"An entitlement is how we assign users a set of catalog items. Each of these entitlements can be managed by the business group manager or a tenant administrator can manage entitlements for all business groups in their tenant.\nTo create a new entitlement go to Administration tab \u0026ndash;\u0026gt; Catalog Management \u0026ndash;\u0026gt; Entitlements. Click the \u0026ldquo;New\u0026rdquo; button to add a new entitlement.\nUnder the General tab, enter a name for the entitlement and a description. Change the status to \u0026ldquo;Active\u0026rdquo; and select a Business Group. Note: If only a single business group has been created, this will not be selectable since it will default to the only available group. Then select the users who will be part of this entitlement.\nNext, under the \u0026ldquo;Items \u0026amp; Approvals\u0026rdquo; tab, we get to pick which things this user(s) will have access to. We do not need to fill out all of these types, but we can if we choose to do so.\nWe can entitle users to Services, Items, and/or Actions. I chose to entitle this user to my \u0026ldquo;Private Cloud\u0026rdquo; service that we created earlier. This will ensure that any catalog items I assign to that service will automatically be entitled. If I chose an item, I\u0026rsquo;d need to do it for each item but this may be preferable in your use case. Lastly, I selected every action because the user I\u0026rsquo;m entitling is an administrator. As you might guess, if this user should have restricted access then not all items should be checked. For example, if you want your users to be able to build their own servers, but not destroy them, then don\u0026rsquo;t entitle them to the \u0026ldquo;Destroy\u0026rdquo; action.\nSummary Now we\u0026rsquo;ve setup our cloud management portal to assign our users catalog items and actions that they can execute on those items. Entitlements are a key piece to making sure that your users have access to the stuff they need, but not too much access or items that might be confusing to them.\n","permalink":"https://theithollow.com/2016/02/01/vrealize-automation-7-entitlements/","summary":"\u003cp\u003eAn entitlement is how we assign users a set of catalog items. Each of these entitlements can be managed by the business group manager or a tenant administrator can manage entitlements for all business groups in their tenant.\u003c/p\u003e\n\u003cp\u003eTo create a new entitlement go to Administration tab \u0026ndash;\u0026gt; Catalog Management \u0026ndash;\u0026gt; Entitlements. 
Click the \u0026ldquo;New\u0026rdquo; button to add a new entitlement.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"vra7-Entitlements1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/12/vra7-Entitlements1-1024x449.png\"\u003e\u003c/p\u003e\n\u003cp\u003eUnder the General tab, enter a name for the entitlement and a description. Change the status to \u0026ldquo;Active\u0026rdquo; and select a Business Group. Note: If only a single business group has been created, this will not be selectable since it will default to the only available group. Then select the users who will be part of this entitlement.\u003cimg alt=\"vra7-Entitlements2\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/12/vra7-Entitlements2-1024x326.png\"\u003e\u003c/p\u003e","title":"vRealize Automation 7 – Entitlements"},{"content":"Blueprints are arguably the thing you\u0026rsquo;ll spend most of your operational time dealing with in vRealize Automation. We\u0026rsquo;ve finally gotten most of the setup done so that we can publish our vSphere templates in vRA.\nTo create a blueprint in vRealize Automation 7 go to the \u0026ldquo;Design\u0026rdquo; tab. Note: If you\u0026rsquo;re missing this tab, be sure you added yourself to the custom group with permissions like we did in a previous post, and that you\u0026rsquo;ve logged back into the portal after doing so.\nClick the \u0026ldquo;New\u0026rdquo; button to add a new blueprint.\nGive the new blueprint a name and a Unique ID. The ID can\u0026rsquo;t be changed later so be sure to make it a good one. Next, enter a description as well as the lifecycle information. Archive (days) determines how long an item will be kept after a lease expires. The lease is how long an item can be provisioned before it\u0026rsquo;s automatically removed. Click OK.\nNow we\u0026rsquo;ve given our blueprint some basic characteristics. The next step is to put all of our \u0026ldquo;stuff\u0026rdquo; into the blueprint. For my very basic example, I\u0026rsquo;m going to drag the \u0026ldquo;vSphere Machine\u0026rdquo; object onto our design canvas. This adds a vCenter template into our blueprint. As you can see we have a lot of options to be added to our blueprint, such as multiple machine types, networks, software and other services. A really neat change to version 7 over version 6 if you ask me.\nOnce we\u0026rsquo;ve added our components into the blueprint, we need to give each of them some characteristics. To start, we\u0026rsquo;re going to give the component an ID and description.\nOn the Build Information tab, I\u0026rsquo;m going to make sure the blueprint type is \u0026ldquo;Server\u0026rdquo; and I\u0026rsquo;m going to change the Action to \u0026ldquo;Clone\u0026rdquo;. Click the ellipsis and select one of your vSphere templates. And lastly on this tab enter a customization spec exactly how it is named in vSphere, including case sensitivity.\nThe next tab is the \u0026ldquo;Machine Resources\u0026rdquo; tab. Here we need to enter in the size of this virtual machine, or the max sizes that a user could request. Fill out your values and go to the next tab.\nThe storage tab will let us customize the sizes of our disks. I\u0026rsquo;ve left my disk sizes the same as my vSphere template, but you can change them if needed.\nThe network tab, I\u0026rsquo;ve also left blank. I\u0026rsquo;m letting the network in my vSphere template dictate what networks I\u0026rsquo;ll be deployed on. 
For a larger corporate installation, you\u0026rsquo;ll want to specify some network info here\nThe security tab is to be used specifically with NSX or vCNS. So fare we\u0026rsquo;re not using this so we\u0026rsquo;ll leave it blank for now. Custom properties deserve their own blog post or series of blog posts. They will allow us to do lots of cool things during provisioning, but they are not required to deploy a machine from blueprint. If you understand them, you can enter them here for the blueprint. When you\u0026rsquo;re all done fiddling with your settings click \u0026ldquo;Finish\u0026rdquo;. When you\u0026rsquo;re done, you\u0026rsquo;ll see your blueprint listed in the grid. Before it can be assigned to people though, it must be published. Click the blueprint in the grid and then select the \u0026ldquo;Publish\u0026rdquo; button.\nSummary In this post we created our very first blueprint. Don\u0026rsquo;t worry if we messed up a step, I\u0026rsquo;m sure we\u0026rsquo;ll be creating lots of these little guys! In future posts we\u0026rsquo;ll be assigning this blueprint to our users and services so that we can request a server.\n","permalink":"https://theithollow.com/2016/01/28/vrealize-automation-7-blueprints/","summary":"\u003cp\u003eBlueprints are arguably the thing you\u0026rsquo;ll spend most of your operational time dealing with in vRealize Automation. We\u0026rsquo;ve finally gotten most of the setup done so that we can publish our vSphere templates in vRA.\u003c/p\u003e\n\u003cp\u003eTo create a blueprint in vRealize Automation 7 go to the \u0026ldquo;Design\u0026rdquo; tab. Note: If you\u0026rsquo;re missing this tab, be sure you added yourself to the custom group with permissions like we did in a previous post, and that you\u0026rsquo;ve logged back into the portal after doing so.\u003c/p\u003e","title":"vRealize Automation 7 – Blueprints"},{"content":"If you\u0026rsquo;ve been reading the whole series of posts on vRealize Automation 7, then you\u0026rsquo;ll know that we\u0026rsquo;ve already been setting up roles in our cloud portal, but we\u0026rsquo;re not done yet. If you need any permissions besides just requesting a blueprint, you\u0026rsquo;ll need to be added to a custom group first.\nTo create a custom group, login as a tenant administrator and go to the Administration Tab \u0026ndash;\u0026gt; Users and Groups \u0026ndash;\u0026gt; Custom Groups. From there click the \u0026ldquo;New\u0026rdquo; button to add a new custom group.\nOnce the \u0026ldquo;New Group\u0026rdquo; screen appears give it a name and description. On the right hand side, select the built in roles that you\u0026rsquo;d like to assign to this group. In my case, this is a lab and I\u0026rsquo;m assigning all roles to the group and assuming that this group managed EVERYTHING in my vRA infrastructure. If you\u0026rsquo;re doing this for a corporation this information should be locked down by what tasks the group will be handing. To find out what each of the roles do, take a look in the bottom right hand corner of the \u0026ldquo;New Group\u0026rdquo; screen. The permissions will be listed as you click on each role. When you\u0026rsquo;re done, click \u0026ldquo;Next\u0026rdquo;.\nOn the following screen select your users. Again this is my home lab and all of my Domain Admins will manage this vRA7 portal.\nSummary We\u0026rsquo;ve created a lot of permissions and roles already, but the custom groups are important for us to build blueprints and manage catalogs. 
If you\u0026rsquo;re moving on to the next post in the series, be sure you log out of vRA7 and back in before continuing since some of your permissions probably just changed!\n","permalink":"https://theithollow.com/2016/01/28/vrealize-automation-7-custom-groups/","summary":"\u003cp\u003eIf you\u0026rsquo;ve been reading the whole series of posts on vRealize Automation 7, then you\u0026rsquo;ll know that we\u0026rsquo;ve already been setting up roles in our cloud portal, but we\u0026rsquo;re not done yet. If you need any permissions besides just requesting a blueprint, you\u0026rsquo;ll need to be added to a custom group first.\u003c/p\u003e\n\u003cp\u003eTo create a custom group, login as a tenant administrator and go to the Administration Tab \u0026ndash;\u0026gt; Users and Groups \u0026ndash;\u0026gt; Custom Groups. From there click the \u0026ldquo;New\u0026rdquo; button to add a new custom group.\u003c/p\u003e","title":"vRealize Automation 7 – Custom Groups"},{"content":"Services might be a poor name for this feature of vRealize Automation 7. When I think of a service, I think of some sort of activity that is being provided but in the case of vRA a service is little more than a category or type. For example, I could have a service called \u0026ldquo;Private Cloud\u0026rdquo; and put all of my vSphere blueprints in it and another one called \u0026ldquo;Public Cloud\u0026rdquo; and put all of my AWS blueprints in it. In the screenshot below you can see the services in a catalog. If you highlight the \u0026ldquo;All Services\u0026rdquo; service, it will show you all blueprints regardless of their service category. Otherwise, selecting a specific service will show you only the blueprints in that category.\nNOTE: if you only create a single service, the tab that is highlighted on the left side does not appear. Creating a second service forces this pane to be displayed.\nAll blueprints must be part of a service for it to be provisioned. To create a service go to Administration tab \u0026ndash;\u0026gt; Catalog Management \u0026ndash;\u0026gt; Services. Click the \u0026ldquo;New\u0026rdquo; button to add a new service. Give the service a name and description. Then click the browse button to add an icon for your service. If you\u0026rsquo;re looking for some standard icons to use, I recommend the vCAC icon pack from vmtocloud.com.\nChange the status to Active and then give it an owner. You can also set which hours the service is available to your users and the default change window for the service if you\u0026rsquo;d like. Again, I\u0026rsquo;m in a small lab so I\u0026rsquo;m not messing with this much. Click \u0026ldquo;OK\u0026rdquo;. Summary Services are pretty simple to setup and there isn\u0026rsquo;t much to them once you understand that they are just a grouping for blueprints and not something more complex. We\u0026rsquo;ll add our blueprints to a service so that we can group them better later on.\n","permalink":"https://theithollow.com/2016/01/26/vrealize-automation-7-services/","summary":"\u003cp\u003eServices might be a poor name for this feature of vRealize Automation 7. When I think of a service, I think of some sort of activity that is being provided but in the case of vRA a service is little more than a category or type. For example, I could have a service called \u0026ldquo;Private Cloud\u0026rdquo; and put all of my vSphere blueprints in it and another one called \u0026ldquo;Public Cloud\u0026rdquo; and put all of my AWS blueprints in it. 
In the screenshot below you can see the services in a catalog. If you highlight the \u0026ldquo;All Services\u0026rdquo; service, it will show you all blueprints regardless of their service category. Otherwise, selecting a specific service will show you only the blueprints in that category.\u003c/p\u003e","title":"vRealize Automation 7 – Services"},{"content":"vRealize Automation 7 uses the concept of reservations to grant a percentage of fabric group resources to a business group. To add a reservation go to Infrastructure \u0026ndash;\u0026gt; Reservations. Click the \u0026ldquo;New\u0026rdquo; button to add a reservation and then select the type of reservation to be added. Since I\u0026rsquo;m using a vSphere Cluster, I selected Virtual \u0026ndash;\u0026gt; vCenter. Depending on what kind of reservations you\u0026rsquo;ve selected, the next few screens may be different, but I\u0026rsquo;m assuming many people will use vSphere so I\u0026rsquo;ve chosen this for my example.\nEnter a Name for the reservation and the tenant (which should already be selected). Next, in the dropdown select your business group that will have access to the reservation. Leave reservation policy empty for now but enter a priority. If a business group has access to more than one reservation, the priority is used to determine which to use up first. Lastly, select \u0026ldquo;Enable this reservation\u0026rdquo;. Click \u0026ldquo;Next\u0026rdquo;.\nOn the resources tab, select the compute resource and then we need to add some quotas. Quotas limit how large the reservation will be, so we can limit it by a number of machines, the amount of memory or how much storage is being used. Be sure to enter a memory amount and at least one datastore to be used for deploying cloud resources. Click \u0026ldquo;Next\u0026rdquo;.\nOn the network tab, select the networks that can be used to deploy resources and for now leave the \u0026ldquo;Network Profile\u0026rdquo; blank.The bottom section is used with NSX or vCNS but we\u0026rsquo;ll leave that for another post.\nOn the properties tab, you can add custom properties that will be associated with all catalog items deployed through this reservation. For now we\u0026rsquo;ll leave this empty. Click \u0026ldquo;Next\u0026rdquo;.\nLastly, the alerts page we can set the thresholds on when to alert our administrators about resource usage.\nSummary Reservations are how we limit our business groups to a certain amount of resources in our cloud. They are necessary to prevent our vSphere environment from being over provisioned with virtual machines and can empower business group managers to handle their own resources instead of the IT Administrators.\n","permalink":"https://theithollow.com/2016/01/25/vrealize-automation-7-reservations/","summary":"\u003cp\u003evRealize Automation 7 uses the concept of reservations to grant a percentage of fabric group resources to a business group. To add a reservation go to Infrastructure \u0026ndash;\u0026gt; Reservations. Click the \u0026ldquo;New\u0026rdquo; button to add a reservation and then select the type of reservation to be added. Since I\u0026rsquo;m using a vSphere Cluster, I selected Virtual \u0026ndash;\u0026gt; vCenter. 
Depending on what kind of reservations you\u0026rsquo;ve selected, the next few screens may be different, but I\u0026rsquo;m assuming many people will use vSphere so I\u0026rsquo;ve chosen this for my example.\u003c/p\u003e","title":"vRealize Automation 7 – Reservations"},{"content":"The job of a business group is to associate a set of resources with a set of users. Think of it this way, your development team and your production managers likely need to deploy machines to different sets of servers. I should mention that a business group doesn\u0026rsquo;t do this by itself. Instead it is combined with a reservation which we\u0026rsquo;ll discuss in the next post. But before we can build those out, lets setup our business groups as well as machine prefixes.\nA machine prefix lets us take some sort of string and prepend it to some set of numbers to give us a new machine name. We want to make sure that our machines don\u0026rsquo;t have the same names so we\u0026rsquo;ll need a scheme to set them up in some sort of pool like we do with IP addresses. To setup a machine prefix go to Infrastructure \u0026ndash;\u0026gt; Administration \u0026ndash;\u0026gt; Machine Prefixes. Click the \u0026ldquo;New\u0026rdquo; button with the plus sign on it to add a new prefix. Enter a string to be used in the name that will always be added to a new machine name. Next add a number of digits to append to the end of that string, and lastly enter a number for the next machine to start with. In my example below, my next machine would be named \u0026ldquo;vra7-01\u0026rdquo; without the quotes.\nNOTE: Be sure to click the green check mark after adding this information. It\u0026rsquo;s easy to click OK at the bottom of the screen without saving the record.\nNow that we created the machine prefix, we can add our business group. Go to Administration \u0026ndash;\u0026gt; Users and Groups \u0026ndash;\u0026gt; Business Groups. Click the \u0026ldquo;New\u0026rdquo; button again to add a new group. When the first screen opens, Give the group a name, description and an email address in which to send business group activities. Click \u0026ldquo;Next\u0026rdquo;.\nNext, we\u0026rsquo;re presented with a screen to add users to three different roles. The group manager role will entitle the users to blueprints and will manage approval policies. The support role will be users that can provision resources on behalf of the users, and the users role will be a list of users who can request catalog items. Click \u0026ldquo;Next\u0026rdquo;.\nOn the Infrastructure screen, select a machine prefix from the drop down. You don\u0026rsquo;t have to have a prefix for the group but this is a best practice in case so that each of your blueprints don\u0026rsquo;t have to have their own assigned. The default prefix can be overridden by the blueprint.\nOptionally you can enter an Active Directory container which will house the computer objects if you\u0026rsquo;re using WIM provisioning. I\u0026rsquo;ve left this blank since we\u0026rsquo;re using VMware templates to deploy VMs.\nSummary The business groups are an important piece to deploying blueprints because if a user isn\u0026rsquo;t in a group, it can\u0026rsquo;t be entitled to a catalog item. These business groups will likely be your corporate teams that need to self-provision resources and their manager or team leads. 
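To make the machine prefix mechanics above concrete, here is how the three fields combine into a name. It is just an illustration of the naming scheme, assuming the example used a prefix of "vra7-", two digits, and a next number of 1; it is not vRA code.

# Combine the prefix, number of digits, and next machine number into a name.
def machine_name(prefix: str, digits: int, next_number: int) -> str:
    return f"{prefix}{next_number:0{digits}d}"

print(machine_name("vra7-", 2, 1))   # vra7-01
print(machine_name("vra7-", 2, 12))  # vra7-12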
In our next post, we\u0026rsquo;ll assign the business group with some resources, through the use of reservations.\n","permalink":"https://theithollow.com/2016/01/21/vrealize-automation-7-business-groups/","summary":"\u003cp\u003eThe job of a business group is to associate a set of resources with a set of users. Think of it this way, your development team and your production managers likely need to deploy machines to different sets of servers. I should mention that a business group doesn\u0026rsquo;t do this by itself. Instead it is combined with a reservation which we\u0026rsquo;ll discuss in the next post. But before we can build those out, lets setup our business groups as well as machine prefixes.\u003c/p\u003e","title":"vRealize Automation 7 – Business Groups"},{"content":"In the last post we setup an vCenter endpoint that defines how our vRealize Automation solution will talk to our vSphere environment. Now we must create a fabric group. Fabric Groups are a way of segmenting our endpoints into different types of resources or to separate them by intent. These groups are mandatory before you can build anything so don\u0026rsquo;t think that since you don\u0026rsquo;t need to segment your resources, that you can get away with not creating one.\nTo add a Fabric Group, login to your vRealize Automation tenant as a IaaS Administrator account which we setup in a previous post. Now go to the Infrastructure Tab \u0026ndash;\u0026gt; Endpoints \u0026ndash;\u0026gt; Fabric Groups. Click the \u0026ldquo;New Fabric Group\u0026rdquo; button to create a new group. Once the \u0026ldquo;New Fabric Group\u0026rdquo; screen opens, you should first check to see if there are any resources in the \u0026ldquo;Compute resources:\u0026rdquo; section. If there are no resources, check to make sure that all of your endpoint connections are correct and the credentials are working. If you need to dig into this more deeply, you can check the vRealize Automation logs to make sure the endpoints are being discovered properly.\nI should note here that if you just setup your endpoints, go grab a cup of coffee before setting up the Fabric Group. The resources take a little bit to discover, but trust me on this. Version 7\u0026rsquo;s discover works MUCH faster that in previous version. My lab vCenter was discovered in under 5 minutes.\nNow, once your compute resources have been discovered, enter a name for the fabric group, a description and some fabric administrators who will be able to modify the resources and reservations that we\u0026rsquo;ll create in our next post. Lastly, and most importantly, select the compute resources (Clusters in a vCenter) that will be used to deploy vRealize Automation workloads.\nClick OK\nSummary Fabric Groups are a necessary piece of a vRA7 installation and can be used to separate fabric administrators or simply to limit which compute resources in your endpoint can be used. In this post we added all of our vCenter resources, but we could just have easily selected only the \u0026ldquo;WorkloadCluster\u0026rdquo; and prevented vRA from ever deploying to the Management Cluster.\n","permalink":"https://theithollow.com/2016/01/19/vrealize-automation-7-fabric-groups/","summary":"\u003cp\u003eIn the last post we setup an vCenter endpoint that defines how our vRealize Automation solution will talk to our vSphere environment. Now we must create a fabric group. Fabric Groups are a way of segmenting our endpoints into different types of resources or to separate them by intent. 
These groups are mandatory before you can build anything so don\u0026rsquo;t think that since you don\u0026rsquo;t need to segment your resources, that you can get away with not creating one.\u003c/p\u003e","title":"vRealize Automation 7 – Fabric Groups"},{"content":"Now that we\u0026rsquo;ve setup our new tenant, lets login as an infrastructure admin and start assigning some resources that we can use. To do this we need to start by adding an endpoint. An endpoint is anything that vRA uses to complete it\u0026rsquo;s provisioning processes. This could be a public cloud resource such as Amazon Web Services, an external orchestrator appliance, or a private cloud hosted by Hyper-V or vSphere.\nIn the example below, we\u0026rsquo;ll add a vSphere endpoint. Go to the Infrastructure Tab \u0026ndash;\u0026gt; Credentials and then click the \u0026ldquo;New\u0026rdquo; button to add a login. Give it a name and description that will help you remember what the credentials are used for. I like to name my credentials the same as the endpoint in which they\u0026rsquo;re connecting. Enter a User Name and a password, which will be encrypted. When done, click the green check mark to save the credentials. DON\u0026rsquo;T FORGET TO DO THIS OR IT WON\u0026rsquo;T BE SAVED!\nNow that we\u0026rsquo;ve got some credentials to use, go to Infrastructure Tab \u0026ndash;\u0026gt; Endpoints and then click the \u0026ldquo;New\u0026rdquo; button again. Here I\u0026rsquo;m selecting Virtual \u0026ndash;\u0026gt; vSphere (vCenter) because thats the type of endpoint I\u0026rsquo;m connecting to. Your mileage may vary. Fill out the name which should match the agents that were created during the installation. If you kept all of the defaults during the install, the first vCenter agent is named \u0026ldquo;vCenter\u0026rdquo; spelled exactly like this with the capital \u0026ldquo;C\u0026rdquo;. Give it a description and then enter the address. The address for a vCenter should be https://vcenterFQDN/sdk. Now click the ellipsis next to credentials and select the username/password combination that we created earlier.\nOptional: If you\u0026rsquo;re using a product like VMware NSX or the older vCNS product, click the \u0026ldquo;Specify manager for network and security platform\u0026rdquo; and then enter an address and new set of credentials for this login.\nWhen you\u0026rsquo;re done click save.\nSummary In this post we connected vRealize Automation 7 to a vSphere environment and we added at least one set of credentials. This should allow us to start creating fabric groups and reservations in the next few posts, but first vRA will need to do a quick discovery on the endpoint.\n","permalink":"https://theithollow.com/2016/01/18/vrealize-automation-7-endpoints/","summary":"\u003cp\u003eNow that we\u0026rsquo;ve setup our new tenant, lets login as an infrastructure admin and start assigning some resources that we can use. To do this we need to start by adding an endpoint. An endpoint is anything that vRA uses to complete it\u0026rsquo;s provisioning processes. This could be a public cloud resource such as Amazon Web Services, an external orchestrator appliance, or a private cloud hosted by Hyper-V or vSphere.\u003c/p\u003e","title":"vRealize Automation 7 – Endpoints"},{"content":"Now it\u0026rsquo;s time to create a new tenant in our vRealize Automation portal. Let\u0026rsquo;s login to the portal as the system administrator account as we have before. 
Click the Tenants tab and then click the \u0026ldquo;New\u0026rdquo; button.\nGive the new tenant a name and a description. Then enter a URL name. This name will be appended to this string: https://[vraappliance.domain.name]/vcac/org/ and will be the URL that users will login to. In my example the url is https://vra7.hollow.local/vcac/org/labtenant. Click \u0026ldquo;Submit and Next\u0026rdquo;.\nEnter a local user account. I used the vraadmin account much like I did in the previous post about setting up authentication. Click Next.\nIn the administrators tab, I added the vraadmin account as both a tenant administrator and an IaaS Administrator. I will admit, I\u0026rsquo;m omitting some information here. After I added the vraadmin account, I logged into the tenant as this account. I setup directory services for this account the same way I did for the default tenant. Once this was done, I added the \u0026ldquo;Domain Admins\u0026rdquo; group as a Tenant Administrator and IaaS Administrator. This end result is seen in the screenshot below.\nThats it for setting up a new tenant. I did want to mention that in vRA7, when you login to a tenant with a directory configured, you\u0026rsquo;ll have the option to login to either the directory you setup or the default tenant domain. Be sure to select the right domain before trying to login. Summary There isn\u0026rsquo;t a ton of work to be done to setup a new tenant, but I\u0026rsquo;ve found that the most important lesson is to create a blank default tenant and then setup your new tenant under that where all your goodies will be. We\u0026rsquo;ll be discussing some of these goodies in the next few posts.\n","permalink":"https://theithollow.com/2016/01/14/vrealize-automation-7-create-tenants/","summary":"\u003cp\u003eNow it\u0026rsquo;s time to create a new tenant in our vRealize Automation portal. Let\u0026rsquo;s login to the portal as the system administrator account as we have before. Click the Tenants tab and then click the \u0026ldquo;New\u0026rdquo; button.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"vra7-base_1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/12/vra7-base_1-1.png\"\u003e\u003c/p\u003e\n\u003cp\u003eGive the new tenant a name and a description. Then enter a URL name. This name will be appended to this string: https://[vraappliance.domain.name]/vcac/org/ and will be the URL that users will login to. In my example the url is \u003ca href=\"https://vra7.hollow.local/vcac/org/labtenant\"\u003ehttps://vra7.hollow.local/vcac/org/labtenant\u003c/a\u003e. Click \u0026ldquo;Submit and Next\u0026rdquo;.\u003cimg alt=\"vra7-NewTenant1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/12/vra7-NewTenant1-1024x457.png\"\u003e\u003c/p\u003e","title":"vRealize Automation 7 – Create Tenants"},{"content":"In order to setup Active Directory Integrated Authentication, we must login to our default tenant again but this time as our \u0026ldquo;Tenant Administrator\u0026rdquo; (we setup in the previous post) instead of the system administrator account that is created during initial setup.\nOnce you\u0026rsquo;re logged in, click the Administration tab \u0026ndash;\u0026gt; Directories Management \u0026ndash;\u0026gt; Directories and then click the \u0026ldquo;Add Directory\u0026rdquo; button. Give the directory a descriptive name like the name of the ad domain for example. Then select the type of directory. 
I\u0026rsquo;ve chosen the \u0026ldquo;Active Directory (Integrated Windows Authentication)\u0026rdquo; option. This will add the vRA appliance to the AD Domain and use the computer account for authentication. Note: you must set up Active Directory in the default (vsphere.local) tenant before it can be used in the subtenants.\nNext choose the name of the vRA appliance for the \u0026ldquo;Sync Connector\u0026rdquo; and select \u0026ldquo;Yes\u0026rdquo; for the Authentication. I\u0026rsquo;ve chosen sAMAccountName for the Directory Search Attribute. After this, we need to enter an account with permissions to join the vRA appliance to the Active Directory Domain. Lastly, enter a Bind UPN that has permissions to search Active Directory for user accounts. Click \u0026ldquo;Save and Next\u0026rdquo;.\nNow, select the domain you just added. Click Next.\nNow we can map vIDM properties to your Active Directory properties. The properties I used are shown in the screenshot below. I tweaked the default values a bit, but for the most part, all of the properties were already mapped correctly to start with.\nNow we enter a Distinguished Name to search for groups to sync with. I chose the root DN for my domain, and selected all of the groups. Click Next.\nI repeated the process with user accounts. Click Next.\nThe next screen shows you details about the users and groups that will be synced. You can edit your settings or click \u0026ldquo;Sync Directory\u0026rdquo; to complete the setup.\nSummary In this post, we\u0026rsquo;ve added an external identity source to sync logins with. This is much preferable to adding local user accounts and making your users remember multiple accounts. In future posts, we\u0026rsquo;ll add these users to business groups, tenant administrators, fabric administrators and other custom groups.\n","permalink":"https://theithollow.com/2016/01/13/vrealize-automation-7/","summary":"\u003cp\u003eIn order to setup Active Directory Integrated Authentication, we must login to our default tenant again but this time as our \u0026ldquo;Tenant Administrator\u0026rdquo; (we setup in \u003ca href=\"/2016/01/12/vrealize-automation-7-base-setup/\"\u003ethe previous post\u003c/a\u003e) instead of the system administrator account that is created during initial setup.\u003c/p\u003e\n\u003cp\u003eOnce you\u0026rsquo;re logged in, click the Administration tab \u0026ndash;\u0026gt; Directories Management \u0026ndash;\u0026gt; Directories and then click the \u0026ldquo;Add Directory\u0026rdquo; button. Give the directory a descriptive name like the name of the ad domain for example. Then select the type of directory. I\u0026rsquo;ve chosen the \u0026ldquo;Active Directory (Integrated Windows Authentication)\u0026rdquo; option. This will add the vRA appliance to the AD Domain and use the computer account for authentication. \u003cstrong\u003eNote:\u003c/strong\u003e you must setup Active Directory in the default (vsphere.local) tenant before it can be used in the subtenants.\u003c/p\u003e","title":"vRealize Automation 7 - Authentication"},{"content":"We\u0026rsquo;ve got vRA installed and that\u0026rsquo;s a good start. Our next step is to log in to the portal and start doing some configuration. Go to https://vra-appliance-name-orIP and enter the administrator login that you specified during your install. Unlike prior versions of vRealize Automation, no vsphere.local domain suffix is required to log in.\nTo start, let\u0026rsquo;s add some local users to our vSphere.local tenant. 
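Side note before we start clicking around: if you ever want to confirm that the appliance and that administrator login are healthy without opening a browser, the same credentials work against the vRA REST API. The sketch below is only a rough smoke test in Python; the /identity/api/tokens path is what I recall the vRA 7.x identity service exposing, and the hostname and password are placeholders, so adjust everything for your own environment.

```python
# Rough smoke test of the vRA appliance and the default tenant login.
# Assumes the vRA 7.x identity service at /identity/api/tokens -- adjust
# the hostname and credentials for your own lab before trusting the result.
import requests

VRA = "https://vra7.hollow.local"          # your appliance FQDN
payload = {
    "username": "administrator",            # the account from the install wizard
    "password": "YourStrongPasswordHere",   # placeholder
    "tenant": "vsphere.local",              # the default tenant
}

# verify=False only because the appliance is still using a self-signed certificate
resp = requests.post(VRA + "/identity/api/tokens", json=payload, verify=False)
resp.raise_for_status()
print("Received bearer token:", resp.json()["id"][:20] + "...")
```

If that hands back a token, the portal login will work too, and anything odd you see later is configuration rather than connectivity. Anyway, back to the portal.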
Click on the vsphere.local tenant.\nClick on the \u0026ldquo;Local users\u0026rdquo; tab and then click the \u0026ldquo;New\u0026rdquo; button to add a local account. I\u0026rsquo;ve created a vraadmin account that will be a local account only used to manage the default tenant configurations.\nClick the Administrators Tab and add the account you just created to the Tenant Admins and IaaS Admins groups. Click Finish.\nClick on the Branding Tab. If you want to change any of the look and feel of your cloud management portal, uncheck the \u0026ldquo;Use default\u0026rdquo; check box and upload headers, change colors to fit your needs.\nClick on the Email Servers tab.\nClick the \u0026ldquo;New\u0026rdquo; button to add your mail server. I\u0026rsquo;m adding an outbound server only at this time.\nAdd the information for your mail server and click the \u0026ldquo;Test Connection\u0026rdquo; button to ensure it works.\nLog out of the portal and log back in as the new tenant administrator account.\nSummary We\u0026rsquo;ve provided some very basic information to vRealize Automation at this point in the series. Our next step will be to add a new tenant and to setup some authentication mechanisms other than local users.\n","permalink":"https://theithollow.com/2016/01/12/vrealize-automation-7-base-setup/","summary":"\u003cp\u003eWe\u0026rsquo;ve got vRA installed and thats a good start. Our next step is to login to the portal and start doing some configuration. Go to https://vra-appliance-name-orIP and enter the administrator login that you specified during your install. Unlike prior versions of vRealize Automation, no domain vsphere.local domain suffix is required to login.\u003cimg alt=\"vra7-base1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/12/vra7-base1.png\"\u003e\u003c/p\u003e\n\u003cp\u003eTo start, Lets add some local users to our vSphere.local tenant. Click on the vsphere.local tenant.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"vra7-base_1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/12/vra7-base_1.png\"\u003e\u003c/p\u003e\n\u003cp\u003eClick on the \u0026ldquo;Local users\u0026rdquo; tab and then click the \u0026ldquo;New\u0026rdquo; button to add a local account. I\u0026rsquo;ve created a vraadmin account that will be a local account only used to manage the default tenant configurations.\u003c/p\u003e","title":"vRealize Automation 7 - Base Setup"},{"content":"If following the posts in order, this guide should help you setup vRealize Automation 7 from start to finish. 
This is a getting started guide that will hopefully get you on the right path, answer any questions you might have, and give you tips on deploying your own cloud management portal.\nPart 1 - Simple Installation Part 2 -Base Setup Part 3 - Authentication Part 4 - Tenants Part 5 - Endpoints Part 6 - Fabric Groups Part 7 - Business Groups Part 8 - Reservations Part 9 - Services Part 10 - Custom Groups Part 11 - Blueprints Part 12 - Entitlements Part 13 - Manage Catalog Items Part 14 - Event Subscriptions Part 15 - Custom Properties Part 16 - XaaS Blueprints Part 17 - Resource Actions Part 18 - Enterprise Install Part 19 - Load Balancer Settings Part 20 - NSX Initial Setup Part 21 - NSX Blueprints Part 22 - Code Stream and Jenkins Setup Part 23 - Code Stream and Artifactory Setup Part 24 - Add Custom Items to vRA7 Part 25 - Upgrade vRA from 7.1 to 7.2 Part 26 - Adding an Azure Endpoint Part 27 - Installing vRealize Code Stream for IT DevOps Part 28 - Configuring Endpoints for vRealize Code Stream for IT DevOps Part 29 - Using vRealize Code Stream for IT DevOps Part 30 - Unit Testing with vRealize Code Stream for IT DevOps Part 31 - Containers on vRealize Automation Part 32 - vRA 7.3 Component Profiles Part 33 - vRA 7.5 Upgrade If you\u0026rsquo;re looking for a getting started video, check out this P luralsight course for a quick leg up on vRA 7.\nAnd if you\u0026rsquo;re looking for a bit more focused detail on using vRA 7, there is also a course for extending the basic capabilities of vRA 7 also from Pluralsight.\n\u0026amp;feature=youtu.be\nvRealize Automation 7 Official Links Release Notes - http://pubs.vmware.com/Release_Notes/en/vra/vrealize-automation-70-release-notes.html Support Matrix - https://www.vmware.com/pdf/vrealize-automation-70-support-matrix.pdf VMware Documentation - http://pubs.vmware.com/vra-70/index.jsp VRealize Automation Load Balancing Configuration Guide - http://pubs.vmware.com/vra-70/topic/com.vmware.ICbase/PDF/vrealize-automation-70-load-balancing.pdf Download vRealize Automation 7 from the VMware site - https://my.vmware.com/en/web/vmware/info/slug/infrastructure_operations_management/vmware_vrealize_automation/7_0 vRealize Automation SDK 7.0 - https://developercenter.vmware.com/web/sdk/7.0.0/vrealize-automation VMware vRealize Automation Cloud Client - https://developercenter.vmware.com/tool/cloudclient/4.0.0\nUnofficial Blogs that focus on vRealize Automation virtualjad.com - Jad El-Zein (Author)\ngrantorchard.com - Grant Orchard (Author)\nvmtocloud.com - Ryan Kelly (Author)\nopen902.com - Mike Rudloff\nHelpful Places for utilities related to vRealize Orchestrator and vRealize Automation Download the vRA / vCAC Icon Pack here: http://www.vmtocloud.com/vravcac-icon-pack/\nThe list below are people to follow on twitter for more vRealize Automation related goodies Jad El-Zein - @virtualjad\nGrant Orchard - @grantorchard\nVMware Cloud Mgmt - @vmwarecloudmgmt\nYves Sandfort - @yvessandfort\nRyan Kelly - @vmtocloud\nMike Rudloff - @michael_rudloff\n","permalink":"https://theithollow.com/2016/01/11/vrealize-automation-7-guide/","summary":"\u003cp\u003eIf following the posts in order, this guide should help you setup vRealize Automation 7 from start to finish. 
This is a getting started guide that will hopefully get you on the right path, answer any questions you might have, and give you tips on deploying your own cloud management portal.\u003c/p\u003e\n\u003cp\u003e\u003cimg alt=\"Setup vRealize Automation 7\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2016/01/vRA7Guide1-1024x610.png\"\u003e\u003c/p\u003e\n\u003ch1 id=\"part-1---simple-installation\"\u003e\u003ca href=\"http://wp.me/p32uaN-1uy\"\u003ePart 1 - Simple Installation\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-2--base-setup\"\u003e\u003ca href=\"http://wp.me/p32uaN-1vm\"\u003ePart 2 -Base Setup\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-3--authentication\"\u003e\u003ca href=\"http://wp.me/p32uaN-1vb\"\u003ePart 3 - Authentication\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-4---tenants\"\u003e\u003ca href=\"http://wp.me/p32uaN-1vK\"\u003ePart 4 - Tenants\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-5---endpoints\"\u003e\u003ca href=\"http://wp.me/p32uaN-1w0\"\u003ePart 5 - Endpoints\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-6---fabric-groups\"\u003e\u003ca href=\"http://wp.me/p32uaN-1w8\"\u003ePart 6 - Fabric Groups\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-7---business-groups\"\u003e\u003ca href=\"http://wp.me/p32uaN-1wq\"\u003ePart 7 - Business Groups\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-8---reservations\"\u003e\u003ca href=\"http://wp.me/p32uaN-1wf\"\u003ePart 8 - Reservations\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-9---services\"\u003e\u003ca href=\"http://wp.me/p32uaN-1x1\"\u003ePart 9 - Services\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-10---custom-groups\"\u003e\u003ca href=\"http://wp.me/p32uaN-1wT\"\u003ePart 10 - Custom Groups\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-11---blueprints\"\u003e\u003ca href=\"/2016/01/28/vrealize-automation-7-blueprints/\"\u003ePart 11 - Blueprints\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-12---entitlements\"\u003e\u003ca href=\"http://wp.me/p32uaN-1xa\"\u003ePart 12 - Entitlements\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-13---manage-catalog-items\"\u003e\u003ca href=\"http://wp.me/p32uaN-1zN\"\u003ePart 13 - Manage Catalog Items\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-14---event-subscriptions\"\u003e\u003ca href=\"http://wp.me/p32uaN-1xU\"\u003ePart 14 - Event Subscriptions\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-15---custom-properties\"\u003e\u003ca href=\"http://wp.me/p32uaN-1yi\"\u003ePart 15 - Custom Properties\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-16---xaas-blueprints\"\u003e\u003ca href=\"/2016/02/29/vrealize-automation-7-xaas-blueprints/\"\u003ePart 16 - XaaS Blueprints\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-17---resource-actions\"\u003e\u003ca href=\"/2016/02/15/vrealize-automation-7-custom-actions/\"\u003ePart 17 - Resource Actions\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-18---enterprise-install\"\u003e\u003ca href=\"/2016/02/22/vrealize-automation-7-enterprise-install/\"\u003ePart 18 - Enterprise Install\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-19---load-balancer-settings\"\u003e\u003ca href=\"/2016/02/24/vrealize-automation-7-load-balancer-rules/\"\u003ePart 19 - Load Balancer Settings\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-20--nsx-initial-setup\"\u003e\u003ca href=\"/2016/03/07/6234/\"\u003ePart 20 - NSX Initial Setup\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-21---nsx-blueprints\"\u003e\u003ca href=\"http://wp.me/p32uaN-1Db\"\u003ePart 21 - NSX 
Blueprints\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-22---code-stream-and-jenkins-setup\"\u003e\u003ca href=\"/2016/05/09/using-jenkins-vrealize-code-stream/\"\u003ePart 22 - Code Stream and Jenkins Setup\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-23---code-stream-and-artifactory-setup\"\u003e\u003ca href=\"/2016/05/23/code-stream-artifactory/\"\u003ePart 23 - Code Stream and Artifactory Setup\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-24---add-custom-items-to-vra7\"\u003e\u003ca href=\"http://wp.me/p32uaN-1G8\"\u003ePart 24 - Add Custom Items to vRA7\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-25---upgrade-vra-from-71-to-72\"\u003e\u003ca href=\"/?p=7311\u0026amp;preview=true\"\u003ePart 25 - Upgrade vRA from 7.1 to 7.2\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-26---adding-an-azure-endpoint\"\u003e\u003ca href=\"/2017/03/20/adding-azure-endpoint-vrealize-automation-7/\"\u003ePart 26 - Adding an Azure Endpoint\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-27---installing-vrealize-code-stream-for-it-devops\"\u003e\u003ca href=\"/2017/03/27/installing-code-stream-management-pack-devops/\"\u003ePart 27 - Installing vRealize Code Stream for IT DevOps\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-28---configuring-endpoints-for-vrealize-code-stream-for-it-devops\"\u003e\u003ca href=\"/2017/04/04/configuring-vrealize-code-stream-management-pack-devops-endpoints/\"\u003ePart 28 - Configuring Endpoints for vRealize Code Stream for IT DevOps\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-29---using-vrealize-code-stream-for-it-devops\"\u003e\u003ca href=\"/2017/04/10/using-vrealize-code-stream-management-pack-devops/\"\u003ePart 29 - Using vRealize Code Stream for IT DevOps\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-30---unit-testing-with-vrealize-code-stream-for-it-devops\"\u003e\u003ca href=\"/2017/04/18/vrealize-code-stream-management-pack-devops-unit-testing/\"\u003ePart 30 - Unit Testing with vRealize Code Stream for IT DevOps\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-31---containers-on-vrealize-automation\"\u003e\u003ca href=\"/2017/05/08/containers-vrealize-automation/\"\u003ePart 31 - Containers on vRealize Automation\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-32---vra-73-component-profiles\"\u003e\u003ca href=\"/2017/06/06/vra-7-3-component-profiles/\"\u003ePart 32 - vRA 7.3 Component Profiles\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"part-33---vra-75-upgrade\"\u003e\u003ca href=\"https://wp.me/p32uaN-2oA\"\u003ePart 33 - vRA 7.5 Upgrade\u003c/a\u003e\u003c/h1\u003e\n\u003cp\u003eIf you\u0026rsquo;re looking for a getting started video, check out this P \u003ca href=\"http://pluralsight.com\"\u003eluralsight\u003c/a\u003e course for a quick leg up on vRA 7.\u003c/p\u003e","title":"vRealize Automation 7 Guide"},{"content":"This is our first stop in our journey to install vRealize Automation 7 and all of it\u0026rsquo;s new features. This post starts with the setup of the environment and assumes that you\u0026rsquo;ve deployed a vRealize Automation appliance from an OVA and that you\u0026rsquo;ve also got a Windows Server deployed so that we can install the IAAS components on it.\nAfter you\u0026rsquo;ve deployed the vRA7 OVA, login to the appliance with the root login and password supplied during your OVA deployment.\nUnlink past versions of vRA, we now have a lovely wizard that pops up and of course asks us to acknowledge that we will adhere to the End User License Agreement. 
Click the check mark (after reading all of the EULA of course) and click next. Look at this! We get a deployment wizard and it even asks us what kind of deployment we\u0026rsquo;re going for. A minimal deployment is what we\u0026rsquo;re doing in this post, but a fully distributed, highly available, \u0026ldquo;Enterprise deployment\u0026rdquo; is now an option as well. I\u0026rsquo;M REALLY EXCITED ABOUT THIS IF YOU CAN\u0026rsquo;T TELL. For this post click the minimal deployment option and select the check box next to the \u0026ldquo;Install Infrastructure as a Service\u0026rdquo; option.\nSince we selected the IaaS option in the last screen, we\u0026rsquo;ll see there are some things we need to do on our IaaS servers. This is the Windows server we\u0026rsquo;ve stood up in addition to the vRA virtual appliance. You\u0026rsquo;ll notice there is a link on this page that will let you download an MSI file. Run this installer on your Windows server.\nThis installer will ask for the address and login information for your vRA appliance. Click the \u0026ldquo;Load\u0026rdquo; button to grab the SSL fingerprint and confirm the connection. Click Next.\nThe next screen will ask you for account information that has administrative rights on your IaaS Server. This account will be used to install services and additional pre-requisite software.\nOnce the installer finishes, go back to your wizard. Notice that at the bottom of the screen you were on, there is now an IaaS Server listed. Set your NTP settings (THIS IS VERY IMPORTANT!) and click next.\nNow we are met with a prerequisite check screen. Click \u0026ldquo;Run\u0026rdquo; to test to make sure you have all the prerequisites met. This step checks to see if you have things like IIS installed on the IaaS Server, and the appropriate .NET packages, etc.\nI\u0026rsquo;m pretty sure that your prerequisite check will fail, but that\u0026rsquo;s OK. Once the check is complete you can view the details and click the \u0026ldquo;fix\u0026rdquo; button to have the installer automatically take care of them for you.\nThe fix step could take some time. If you aren\u0026rsquo;t sure whether anything is really happening, go over to your IaaS Server and check task manager. You\u0026rsquo;ll probably find a \u0026ldquo;Windows Modules Installer Worker\u0026rdquo; running and taking up copious amounts of CPU. You\u0026rsquo;ll also see some progress status happening in the install wizard. Once the prerequisites have been installed and the server is rebooted (the reboot happens automatically too), click Next.\nNext we\u0026rsquo;re asked for the vRA host name. You could change the name here but I\u0026rsquo;ve chosen to grab my hostname from DNS since I already added my vRA appliance to my DNS zone.\nEnter a very strong password for your default tenant administrator account. Be sure to keep track of this since you\u0026rsquo;ll need it down the road. Click Next.\nNow we enter the IaaS Web server address. Since we\u0026rsquo;re not using a load balancer, we\u0026rsquo;ll just input the name of our Windows IaaS Host here. Select the IaaS host from the dropdown in order to install components on it and enter a login that has administrative credentials on the box. Also, enter a security passphrase for the SQL database to use for encryption. Click Next. Next enter the name of your SQL server and a name for the new database that will be created. Select the name of an IaaS Host on which to install the DEM Workers. 
This should be simple since there\u0026rsquo;s only one IaaS box in this simple environment of ours. Add login credentials to install the DEM service. Click Next.\nWe\u0026rsquo;ll perform the same operation with our DEM Agents except we need to specify an Endpoint here as well. I\u0026rsquo;ve named my Endpoint vCenter. The agent type is vSphere and I\u0026rsquo;ve provided login information. Click Next.\nNow we can import certificates that we created with our own certificate authority or we can generate self-signed certs. The first cert we need to generate is for the vRA appliance and web portal. Enter the certificate information and click Next.\nNow we need to enter the web certificate information for the IaaS Server. Enter the certificate information and click Next.\nThe last certificate we need to provide is the manager service certificate. Enter the certificate information and click Next.\nNow we\u0026rsquo;re ready to validate all of our settings. Lucky for us, the wizard will do that before installing anything. Click Validate and grab a cup of coffee. This will take a bit.\nOnce the validation has completed successfully, click Next.\nA nice reminder will be presented to you here to take a couple of snapshots before continuing. This sounds like a pretty good plan to me. Go take your snapshots and then click Next.\nYou\u0026rsquo;re ready to go. Click Install.\nHopefully, the install process will go flawlessly, and at the end you\u0026rsquo;ll have a nice screen with a bunch of green check marks. Click Next.\nAHHHHHHH! The license. Nothing is free, is it? Go grab your license key and enter it in the box. Click \u0026ldquo;Submit Key\u0026rdquo;. Then click \u0026ldquo;Next\u0026rdquo;.\nIt\u0026rsquo;s up to you here. If you want to submit telemetry data back to VMware so they can better the product, go for it. If you\u0026rsquo;re really concerned about your privacy, don\u0026rsquo;t check the box. It\u0026rsquo;s your call.\nSome content can be created for you if you\u0026rsquo;d like. If you want this to happen, give the configurationadmin account a login password and click \u0026ldquo;Create Initial Content\u0026rdquo;. Click \u0026ldquo;Next\u0026rdquo;.\nTA DA! vRealize Automation is now installed. Now the real fun begins. Click \u0026ldquo;Finish\u0026rdquo;.\nSummary We went through a lot of screens and steps there, but believe me, this is way easier than it was in version 6. Future posts in this series will take you through the configuration and setup of your vRealize Automation 7 cloud management solution.\n","permalink":"https://theithollow.com/2016/01/11/vrealize-automation-7-simple-installation/","summary":"\u003cp\u003eThis is our first stop in our journey to install vRealize Automation 7 and all of its new features. This post starts with the setup of the environment and assumes that you\u0026rsquo;ve deployed a vRealize Automation appliance from an OVA and that you\u0026rsquo;ve also got a Windows Server deployed so that we can install the IaaS components on it.\u003c/p\u003e\n\u003cp\u003eAfter you\u0026rsquo;ve deployed the vRA7 OVA, login to the appliance with the root login and password supplied during your OVA deployment.\u003c/p\u003e","title":"vRealize Automation 7 Simple Installation"},{"content":"Home Labs aren\u0026rsquo;t cheap. Depending on what you want to do with your lab, they can even be really expensive. If you\u0026rsquo;re looking at building one for yourself, you should take some time to determine what you want to get out of it. 
I\u0026rsquo;ve found that having a home lab is an incredibly valuable asset to my continuing education and I attribute much of my career success to having one. To me, it\u0026rsquo;s an essential tool for my career, but for others it\u0026rsquo;s a money pit.\nAbout the time I started writing this post, the gentlemen who do the Datanauts podcast were discussing these very subjects. Check them out if you\u0026rsquo;ve got some time.\nDatanauts Podcast\nI thought it would be useful for me to share some information about my lab expenses. To do this, I\u0026rsquo;m leveraging vRealize Business Standard and I\u0026rsquo;ve updated the costs to match my environment. If you want details about all the stuff in the lab, take a look here to see my hybrid cloud lab.\nEquipment When I was thinking about my home lab costs, the equipment was the first thing I started to calculate. I\u0026rsquo;ve got five servers, two Synology arrays, an HP Switch and a Cisco ASA. I estimated that they would have a useful life of five years. This helped me determine how much money I was spending per year on the equipment.\nMy breakdown on the equipment was like this:\nServers\n3 X Supermicro ESXi boxes - 5 years\n1 X HP Microserver - 5 years\n1 X Whitebox server - 5 years\nStorage\n1 X Synology DS1513+\n1 X Synology DS1815+\n5 X 1 TB SATA Disks\n4 X 4 TB SATA Disks\n2 X 256GB SSDs\nNetwork Networking could also include your Internet circuit but I\u0026rsquo;m not including mine in the costs. I\u0026rsquo;d be buying the same internet service whether I had a lab or I didn\u0026rsquo;t.\nCisco ASA 5505 - 5 years\nHP V1910 Switch - 5 years\nLicensing Licensing costs are going to vary quite a bit. In my case I have the Microsoft Action Pack, which is a yearly subscription that gets me all of my server licenses for lab use. I\u0026rsquo;m also a VMware vExpert, which gets me vSphere licenses. If you\u0026rsquo;re not a vExpert, I highly recommend paying for VMUG Advantage, which will get you VMware licenses in addition to other certification and VMworld benefits.\nMy breakdown of licenses was:\nMicrosoft Licenses\nVMware Licenses\nFacilities In vRealize Business there is a specific section for facilities but I\u0026rsquo;ve broken it down into a few parts for this post.\nMaintenance and Labor I\u0026rsquo;ve decided that my maintenance and labor are free. Time is valuable, but this is for my own learning and I\u0026rsquo;m not charging an amount for my work. I also don\u0026rsquo;t have any ongoing maintenance contracts to deal with, so this is a freebie.\nPower I have a UPS on my lab and that shows me how much power I\u0026rsquo;m drawing. I took this number and multiplied it by the amount per kilowatt-hour I\u0026rsquo;m charged by my electric company.\nBreakdown These were my numbers according to vRealize Business once I entered everything in. We can see that my main costs are in the networking space. This is largely due to my Internet service, which you could argue I\u0026rsquo;d be paying for whether I had a lab or not. I\u0026rsquo;d also be paying for some additional storage to store pictures and personal documents, so some of my storage costs would be spent anyway as well.\nIf you want to see how my costs went into vRealize Business, here is how I entered them.\nSummary A home lab isn\u0026rsquo;t cheap, so if you\u0026rsquo;re planning on building one for yourself, be sure to talk it over with your spouse or significant other because it is an investment that can be expensive. Especially if you build it and then never use it. 
If you\u0026rsquo;re worried about that, then try things out in your favorite public cloud provider for a while. That will let you dip your toes in the water without a big capital expenditure up front.\n","permalink":"https://theithollow.com/2016/01/04/home-lab-expenses/","summary":"\u003cp\u003eHome Labs aren\u0026rsquo;t cheap. Depending on what you want to do with your lab, they can even be really expensive. If you\u0026rsquo;re looking at building one for yourself, you should take some time to determine what you want to get out of it. I\u0026rsquo;ve found that having a home lab is an incredibly valuable asset to my continuing education and I attribute much of my career success to having one. To me, it\u0026rsquo;s as essential tool for my career, but for others its a money pit.\u003c/p\u003e","title":"Home Lab Expenses"},{"content":"\u0026ldquo;So, what do you really do for a living?\u0026rdquo; This is a pretty common question that I get asked these days. I\u0026rsquo;ve got a Bachelors degree in Management Information Systems. I also have a VCDX which is some sort of highly desired certification so I must be pretty skilled at whatever it is I do. So what exactly is it?\nThe truth of the matter is that I have a job in the computer industry and thats about all I can accurately describe to someone who isn\u0026rsquo;t also in this field. It\u0026rsquo;s tough to explain virtual servers, Git or VLANs to someone over the course of an elevator ride. You need a certain level of background to understand those concepts.\nWhat complicates things further is that I don\u0026rsquo;t even really do any of those things that I mentioned. I do all of them today. And I may do more or none of them tomorrow. My REAL job is to adapt to the environment. My REAL job is to be an expert at doing new things that I\u0026rsquo;ve never done before.\nIf I ever get into a longer conversation with someone who really is interested, I\u0026rsquo;ll explain some our technology concepts to them at a basic level and inevitably someone will ask me, \u0026ldquo;So you learned how to do all this stuff in college?\u0026rdquo; This makes me smile because the answer is both \u0026ldquo;Yes\u0026rdquo; and \u0026ldquo;No\u0026rdquo;. The way we build servers now is vastly different from how things were done when I was in college (I expect to see some Old Man jokes in the comments now) but I did learn how to LEARN in college. I figured out how to study so that I could remember things. I figured out how to manage my time so that I could get my work done. I figured out that I had to work really hard to be good at anything. College is not something you pay for and then it just pays for itself over the rest of your career. It\u0026rsquo;s a tool that has to be used to get what you want.\nOk, so how do I explain what I do in an elevator ride? Well, for right now I just say, \u0026ldquo;I build clouds\u0026rdquo;. But the truth is, I just try to keep myself from becoming this guy\u0026hellip;\nThanks for reading.\n","permalink":"https://theithollow.com/2015/12/14/what-would-you-say-ya-do-here/","summary":"\u003cp\u003e\u0026ldquo;So, what do you really do for a living?\u0026rdquo; This is a pretty common question that I get asked these days. I\u0026rsquo;ve got a Bachelors degree in Management Information Systems. I also have a VCDX which is some sort of highly desired certification so I must be pretty skilled at whatever it is I do. 
So what exactly is it?\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/11/what-i-think-i-do-it1.jpg\"\u003e\u003cimg alt=\"what-i-think-i-do-it1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/11/what-i-think-i-do-it1-1024x707.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThe truth of the matter is that I have a job in the computer industry and thats about all I can accurately describe to someone who isn\u0026rsquo;t also in this field. It\u0026rsquo;s tough to explain virtual servers, Git or VLANs to someone over the course of an elevator ride. You need a certain level of background to understand those concepts.\u003c/p\u003e","title":"What would you say, ya do here..."},{"content":"Veeam is a popular backup product for virtualized environments but who wants to spend their days adding and removing machines to backup jobs?\nNow available on github is a Veeam package for vRealize Orchestrator. This is my gift to you, just in time for the Hollow-days.\nAvailable Features The following features are available with the plugin for it\u0026rsquo;s initial release.\nAdd a VM to an existing backup job Remove a VM from a backup job Start a backup job immediately Add a Build Profile to vRealize Automation Add a VM to a backup job from vRA Remove a VM from a backup job from vRA Some additional functionality could easily be added to your environment using the existing worfklows such as start a backup as a Day 2 operation in vRA, or change backup jobs etc. The world is your oyster.\nDisclaimer - Veeam had no involvement with the creation of the vRO workflows and does not officially support their use. This software is to be used at your own risk and tested in your own environment.\nRequirements In order to use the package there are certain requirements that must be met before installation.\nA working Veeam Backup and Replication Server with at least one backup job configured - (Tested with v8) Veeam Backup Enterprise Manager (This provides the API) Working vRealize Orchestrator appliance Optional - vRealize Automation Solution Installation To begin, go to github.com and download the vRO Package. Once downloaded, open vRO and change to Design View. Select the Packages tab and then the import button. Select the package downloaded from github.\nWhen prompted to import the package click \u0026ldquo;Import\u0026rdquo;.\nNext, the list of elements will be shown. You may import any you wish. Each of the elements is necessary for the package to fully work, but if you have existing elements already in your vRO appliance, you may choose not to import some of them. By default only the missing elements will be imported. Select \u0026ldquo;Import selected elements\u0026rdquo;.\nBasic Configuration Now that the package has been installed, navigate to your workflows and find the \u0026ldquo;Veeam\u0026rdquo; folder. Run the \u0026ldquo;Add_VeeamRestHost\u0026rdquo; workflow to add a REST endpoint. This should be your Veeam Enterprise Backup Server.\nFill out the information for your environment. The default port is 9399 and authentication is basic, but your configuration may be different.\nOnce the Veeam REST Host has been added, go to your list of configuration elements. 
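(Quick aside: if the REST host misbehaves later, it helps to rule out Enterprise Manager itself before blaming the vRO package. A rough check from any machine with Python is sketched below; the host name and service account are made up, and the /api/sessionMngr/?v=latest session endpoint is how I remember the v8-era Enterprise Manager API working, so confirm it against your own server before relying on it.

```python
# Rough reachability check for the Veeam Backup Enterprise Manager REST API.
# Assumes the default port 9399 and basic authentication, as configured for the
# vRO REST host; the hostname, account, and password below are placeholders.
import requests

EM = "http://veeam-em.hollow.local:9399"       # hypothetical Enterprise Manager host
AUTH = ("HOLLOW\\veeam-svc", "ChangeMe123")    # placeholder service account

# Create a session; on success the session ID comes back in a response header.
resp = requests.post(EM + "/api/sessionMngr/?v=latest", auth=AUTH)
resp.raise_for_status()
session_id = resp.headers["X-RestSvcSessionId"]

# Reuse the session header to list backup jobs, which is enough to prove the API
# is alive and the account can see the jobs vRO will be manipulating.
jobs = requests.get(EM + "/api/jobs", headers={"X-RestSvcSessionId": session_id})
jobs.raise_for_status()
print("Enterprise Manager answered; job listing returned HTTP", jobs.status_code)
```

If that works, any remaining trouble is on the vRO side of the house.)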
Find the \u0026ldquo;Veeam-Login\u0026rdquo; config item and click the pencil icon to edit it.\nClick on the Attributes tab and then click the \u0026ldquo;Not found\u0026rdquo; entry.\nSelect the Veeam REST Host you just added.\nRun a Job It\u0026rsquo;s all configured now. You can run a job to test it out. If you run the Add a VM to Backup job workflow, you\u0026rsquo;ll be asked for the vCenter name, the job name and to select the VM. NOTE: the vCenter name must be identical to what is configured in Veeam Backup and Replication, including the case. As seen below from my lab.\nSetup for vRealize Automation\nIf you want to leverage vRealize Automation to have your VMs automatically added to a backup job after provisioning and removed during deprovisioning, then go to the vRA_Setup folder and run \u0026ldquo;Veeam_CreateBuildProfile\u0026rdquo;.\nProvide the vRA Host, the build profile name and description and then the vCenter Name and Backup job that you\u0026rsquo;d want to add VMs to. Feel free to add multiple profiles if you have more than one backup job, just be sure to change the profile name and job. Once the job has been run, you can go into vRealize Automation and see the build profile and the settings that have been configured. Just add the build profile to your blueprint to use it.\nUpdate for vRealize Automation 7 If you\u0026rsquo;re planning to use this package with vRealize Automation 7, there are two additional workflows available for use with \u0026ldquo;Property Groups\u0026rdquo; which are the replacement for the old vRA 6 \u0026ldquo;Build Profiles\u0026rdquo; and a workflow to be used with vRA 7 Event Subscriptions which are the replacement for the stubs found in vRA 6.\nOnce you run the \u0026ldquo;Veeam_CreatePropertyGroup\u0026rdquo; Workflow, you\u0026rsquo;ll have a new group listed in vRealize Automation 7. The screenshot below shows the results of the workflow.\nFrom there, you can add the property group to any of your virtual machines in the design canvas. Be sure to set your vRO properties so that vRA is passing properties over to vRO during the machine provisioning events.\nSummary I hope that someone gets use out of this package. Automating your backup process can save a lot of time in dealing with mis-configurations or just forgetting to set a backup on a new machine. This package coupled with vRealize Automation should really save time for system administrators. This package is free for anyone to use and modify at your own risk. I won\u0026rsquo;t be responsible for the awesome stuff or bad stuff you may choose to do with it!\n","permalink":"https://theithollow.com/2015/12/07/veeam-plugin-for-vrealize-orchestrator/","summary":"\u003cp\u003eVeeam is a popular backup product for virtualized environments but who wants to spend their days adding and removing machines to backup jobs?\u003c/p\u003e\n\u003cp\u003eNow available on \u003ca href=\"https://github.com/theITHollow/Veeam-vRO-Package\"\u003egithub\u003c/a\u003e is a Veeam package for vRealize Orchestrator. 
This is my gift to you, just in time for the Hollow-days.\u003c/p\u003e\n\u003ch1 id=\"available-features\"\u003eAvailable Features\u003c/h1\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/12/veeamlogo.png\"\u003e\u003cimg alt=\"veeamlogo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/12/veeamlogo.png\"\u003e\u003c/a\u003e The following features are available with the plugin for it\u0026rsquo;s initial release.\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eAdd a VM to an existing backup job\u003c/li\u003e\n\u003cli\u003eRemove a VM from a backup job\u003c/li\u003e\n\u003cli\u003eStart a backup job immediately\u003c/li\u003e\n\u003cli\u003eAdd a Build Profile to vRealize Automation\u003c/li\u003e\n\u003cli\u003eAdd a VM to a backup job from vRA\u003c/li\u003e\n\u003cli\u003eRemove a VM from a backup job from vRA\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eSome additional functionality could easily be added to your environment using the existing worfklows such as start a backup as a Day 2 operation in vRA, or change backup jobs etc. The world is your oyster.\u003c/p\u003e","title":"Veeam Package for vRealize Orchestrator"},{"content":"So far we\u0026rsquo;ve talked a lot about using our automation solution to automate network deployments with NSX. But one of the best features about NSX is how we can firewall everything! Lucky for us, we can automate the deployment of specific firewall rules for each of our blueprints as well as deploying brand new networks for them.\nUse Case: There are plenty of reasons to firewall your applications. It could be for compliance purposes or just a good practice to limit what traffic can access your apps.\nBuild It If you haven’t already gone through the process of setting up NSX and connecting it to vRealize Automation, you should do this first. Assuming you’ve already done this then lets begin.\nSecurity Groups Once the basic setups have been done, login to your vCenter web client and go to the Networking and Security tab. Specifically go to the Service Composer. We\u0026rsquo;ll start by adding a new security group which will group our machines together. In the working pane click the icon with the plus sign to add a new security group.\nGive the security group a name and a description. Next, we\u0026rsquo;ll setup some dynamic membership. This will allow us to deploy many virtual machines, but they will all automatically be added to the group based on some criteria. For this example I\u0026rsquo;m using a VM Name that contains \u0026ldquo;Hollow-\u0026rdquo; in it. This is the machine prefix listed in my vRealize Automation Business Group. That way any machines I build in vRA will be added to this group by default. After we set the dynamic memberships, we can also select other object that will be included. We can select other groups, or vCenter objects such as a virtual switch, or cluster if we choose. If we need more control we can also automatically select objects to exclude. Maybe certain VMs that are in the above groups should be whitelisted. We can add them here. Review the settings and click Finish. Security Policies Now it\u0026rsquo;s time to set the policy. Go to the \u0026ldquo;Security Policies\u0026rdquo; tab in the Service Composer.\nClick the icon with the \u0026ldquo;+\u0026rdquo; icon on it again to create a new security policy. 
Then give the new policy a name and description.\nThe next screen we\u0026rsquo;ll skip over because we\u0026rsquo;re not doing any guest introspection. If you\u0026rsquo;re using any Activity Monitoring or third party security solutions, be sure to complete this screen. Now we\u0026rsquo;re at the guts of the policy. Click the green \u0026ldquo;+\u0026rdquo; sign to add a new set of firewall rules. Fill out your rules for your own purposes. For a demo, I\u0026rsquo;ve setup a pair of rules to block SSH but allow pings. Add any network introspection services that you may have. Again for this setup, I\u0026rsquo;m not using any of these so I\u0026rsquo;ve skipped it. Review the settings and click \u0026ldquo;Finish\u0026rdquo;.\nWhen you\u0026rsquo;ve finished creating the security group and the policies, go to Actions and select \u0026ldquo;Apply Policy\u0026rdquo;.\nSelect the security groups in which the policy should apply.\nConfigure vRealize Automation Now we can go over to our vRealize Automation portal and edit our blueprints. I\u0026rsquo;ve selected a multi-machine blueprint that I already had created for a Routed network. If we look at the \u0026ldquo;Build Information\u0026rdquo; tab, we\u0026rsquo;ll see our virtual machines and we can click \u0026ldquo;edit\u0026rdquo; under the network tab.\nFrom here we can click on the \u0026ldquo;Security\u0026rdquo; tab and we can then manually select the Security Policies or Security Groups that we created earlier. However, in our case, we created a dynamic security group rule so we don\u0026rsquo;t need to manually apply these. As long as our virtual machines are deployed with a name that starts with \u0026ldquo;Hollow-\u0026rdquo; they\u0026rsquo;ll automatically be assigned correctly.\nDeploy VMs Now we can deploy some virtual machines from our blueprint. Once the build completes, we can look in NSX and see the security groups. We can see that there are now three virtual machines in the group. (The blueprint had three VMs in it). If we lookup our virtual machine in vCenter we find our IP Address and we are able to ping it. We are unable to connect to it through an SSH session however.\n","permalink":"https://theithollow.com/2015/11/30/vrealize-automation-6-with-nsx-firewall/","summary":"\u003cp\u003eSo far we\u0026rsquo;ve talked a lot about using our automation solution to automate network deployments with NSX. But one of the best features about NSX is how we can firewall everything! Lucky for us, we can automate the deployment of specific firewall rules for each of our blueprints as well as deploying brand new networks for them.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eUse Case:\u003c/strong\u003e There are plenty of reasons to firewall your applications. It could be for compliance purposes or just a good practice to limit what traffic can access your apps.\u003c/p\u003e","title":"vRealize Automation 6 with NSX – Firewall"},{"content":"Just deploying virtual machines in an automated fashion is probably the most important piece of a cloud management platform, but you still need to be able to manage the machines after they\u0026rsquo;ve been deployed. In order to add more functionality to the portal, we can create post deployment \u0026ldquo;actions\u0026rdquo; that act on our virtual machine. For instance an action that snapshots a virtual machine would be a good one. 
We refer to these actions that take place after the provisioning process as a \u0026ldquo;Day 2 Operation\u0026rdquo;, probably because they\u0026rsquo;re likely to happen on the second day or later. Clever, huh?\nBut often I run into a snag when trying to provision these types of actions. The examples below are actions I created to install the Puppet agent on a virtual machine. I already have a vRealize Orchestrator workflow that installs the Puppet agent on a virtual machine and the only input the workflow requires is a hostname. So I log into the vRealize Automation portal as a Service Architect and go to my Advanced Services to create a new resource action. I browse to the workflow I\u0026rsquo;ve created and select it from the list. Notice that the Input parameter shows only a single input named hostname and of type string.\nOnce I click next I\u0026rsquo;m asked for resource mapping information. The resource type and input parameter are blank and there is no option to select anything or move on. This is due to the fact that vRA needs to know how to pass these parameters to another system. vRA doesn\u0026rsquo;t know what \u0026ldquo;hostname\u0026rdquo; is or how to handle it. There are a couple of ways around this, but one method is to slightly tweak our vRealize Orchestrator Workflow to add a \u0026ldquo;wrapper\u0026rdquo;. If we log in to vRO we can modify our Day 2 Action to require a Virtual Machine object instead of a string with the hostname.\nTo get a feel for what I did, you can take a look at the schema of the workflow itself. You can see I\u0026rsquo;ve added a single script element at the beginning of the workflow and its only purpose is to read a VC:VirtualMachine object and output a hostname. From there we pass the hostname over to the original workflow to be executed.\nWhen we look at the visual binding of the wrapper script, we see the input and output types.\nSo now, let\u0026rsquo;s try to add the Day 2 action again in vRA. We go back to our Advanced Services and try to add the new workflow. You can see that now the input parameters are different from when we first tried to do this. Now it\u0026rsquo;s looking for a VC:VirtualMachine object, which vRealize Automation understands. When we click next, we\u0026rsquo;re then presented with our resource mapping screen again, but this time the information is already filled out for us and looks to be working! I move along to the details screen and finish filling out the information to create the action. Once the action is created I publish it.\nJust like any server blueprint that we might create, we then have to configure the item with an icon and assign users with an entitlement. The end result is that I have a workflow to deploy the Puppet Agent to a virtual machine after it\u0026rsquo;s been provisioned. Final Thoughts In case you\u0026rsquo;re wondering why I don\u0026rsquo;t just install Puppet during the MachineProvisioned Stub in vRA, it\u0026rsquo;s because I\u0026rsquo;m running this in my home lab with only 10 Puppet licenses. Adding this as a day 2 operation gives me the flexibility to only install Puppet on nodes when I want to go test out some Puppet code. 
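If the wrapper trick above still feels a little abstract, strip away the vRO specifics and it is really just an adapter: accept the object type vRA knows how to hand you, pull out the one field the existing workflow actually needs, and delegate. The real script element is a few lines of JavaScript inside vRO; the sketch below is Python purely to show the shape of the idea, with stand-in types and a stand-in VM name.

```python
# Language-agnostic sketch of the Day 2 "wrapper" idea. In vRO the wrapper is a
# small JavaScript scriptable task; this only illustrates the adapter pattern.
from dataclasses import dataclass


@dataclass
class VirtualMachine:
    """Stand-in for the VC:VirtualMachine inventory object that vRA can map."""
    name: str


def install_puppet_agent(hostname: str) -> None:
    """Stand-in for the original workflow, which only understands a hostname."""
    print("installing the Puppet agent on " + hostname)


def install_puppet_agent_day2(vm: VirtualMachine) -> None:
    """The wrapper: take the VM object, extract the hostname, then hand off to
    the unchanged original workflow."""
    install_puppet_agent(vm.name)


# The catalog passes a VM object; the wrapper quietly turns it into the string
# the original workflow wanted all along.
install_puppet_agent_day2(VirtualMachine(name="Hollow-32"))
```

That one extra hop is the entire difference between vRA refusing to map the resource action and the mapping screen filling itself in.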
There are certainly many actions that could be used for a corporate environment though, like allowing users to storage vMotion their machine or snapshot\\restore VMs at their leisure.\n","permalink":"https://theithollow.com/2015/11/16/create-a-day-2-operations-wrapper/","summary":"\u003cp\u003eJust deploying virtual machines in an automated fashion is probably the most important piece of a cloud management platform, but you still need to be able to manage the machines after they\u0026rsquo;ve been deployed.  In order to add more functionality to the portal, we can create post deployment \u0026ldquo;actions\u0026rdquo; that act on our virtual machine. For instance an action that snapshots a virtual machine would be a good one. We refer to these actions that take place after the provisioning process a \u0026ldquo;Day 2 Operation\u0026rdquo;, probably because it\u0026rsquo;s likely to happen on the second day or later. Clever huh?\u003c/p\u003e","title":"Create a Day 2 Operations Wrapper"},{"content":"If you\u0026rsquo;re building a multi-machine blueprint or multi-tiered app, there is a high likelihood that at least some of those machines will want to be load balanced. Many apps require multiple web servers in order to provide additional availability or to scale out. vRealize Automation 6 coupled with NSX will allow you to put some load balancing right into your server blueprints.\nJust to set the stage here, we\u0026rsquo;re going to deploy an NSX Edge appliance with our multi-machine blueprint and this will load balance both HTTPs and HTTP traffic between a pair of servers.\nUse Case: Load balancing can be used for plenty of different reasons. Providing scale out capacity and additional availability are two of the primary use cases.\nBuild It If you haven\u0026rsquo;t already gone through the process of setting up NSX and connecting it to vRealize Automation, you should do this first. In addition, you\u0026rsquo;ll need to already have a multi-machine blueprint ready to go. If you haven\u0026rsquo;t gone through this setup yet, please look at one of these posts before continuing:\nvRealize Automation 6 with NSX - NAT vRealize Automation 6 with NSX - Private Networks vRealize Automation 6 with NSX - Routed Networks I\u0026rsquo;ve chosen to copy my already existing NAT profile that we built in a previous post. Feel free to use any type of multi-machine blueprint that you want.\nOnce you\u0026rsquo;ve setup the basics, go to your blueprint and go to the Build Information tab. From here, you\u0026rsquo;ll probably want to make sure that a minimum of two servers are listed. We then select \u0026ldquo;edit\u0026rdquo; under the network view.\nYou\u0026rsquo;ll notice that we already have a network and network profile setup. If you haven\u0026rsquo;t done this already please see one of the previous posts about setting up network adapters.\nNow go to the Load Balancer tab. Here you can select the service that you want to load balance. I\u0026rsquo;ve decided to do both HTTP (port 80) and HTTPS (port 443) to demonstrate what happens when multiple protocols are load balanced. Save your changes. Make sure that you publish your blueprint and assign it to your users.\nRequest Item Request your new load balancing blueprint from your catalog.\nFrom here, your mileage may vary based on the number of VMs in your blueprint and the type of blueprint you\u0026rsquo;ve deployed. In my case I\u0026rsquo;ve used a NAT network profile. 
When my machines started to deploy, I got a new NSX Edge appliance and there was one vNIC set up as an uplink to my transit network and another link used as my internal network.\nIf I look at the NAT tab I\u0026rsquo;ll see that I\u0026rsquo;ve got two entries for NAT and each of them has the same original IP Address. The difference between the two NAT entries is the port. Remember that I load balanced two ports (HTTPS/HTTP), so the two NAT entries correspond to those two ports.\nNow if I look at the load balancer tab of my new edge appliance we\u0026rsquo;ll see some cool stuff. Looking at the Pools, we\u0026rsquo;ll see that there are two pools created. If we look at the first pool, we can see that there are two machines associated with this pool and they are on port 443 (HTTPS). If we go look at the second pool, we\u0026rsquo;ll see that the same two IP Addresses are listed in the pool, but now on port 80 (HTTP). So we can see that a pool is created for each port that is going to be load balanced. Now we can look at the Virtual Servers. Here we see that there are two virtual servers. This is the IP Address that users would browse to in order to retrieve our web page. Notice that two virtual servers are created, each with the same IP Address. The difference again is the port that is used.\nSummary Load balancing is a pretty commonplace thing for multi-tiered apps, so it\u0026rsquo;s pretty crucial that your automation solution also be able to provide this ability. Using vRealize Automation and NSX, it is possible to provide these abilities to your developers and end users so that they can test and build new services as they would in a production environment.\n","permalink":"https://theithollow.com/2015/11/09/vrealize-automation-6-with-nsx-load-balancing/","summary":"\u003cp\u003eIf you\u0026rsquo;re building a multi-machine blueprint or multi-tiered app, there is a high likelihood that at least some of those machines will want to be load balanced. Many apps require multiple web servers in order to provide additional availability or to scale out. vRealize Automation 6 coupled with NSX will allow you to put some load balancing right into your server blueprints.\u003c/p\u003e\n\u003cp\u003eJust to set the stage here, we\u0026rsquo;re going to deploy an NSX Edge appliance with our multi-machine blueprint and this will load balance both HTTPS and HTTP traffic between a pair of servers.\u003c/p\u003e","title":"vRealize Automation 6 with NSX – Load Balancing"},{"content":"Your network isn\u0026rsquo;t fully on IPv6 yet? Ah, well, don\u0026rsquo;t worry; you\u0026rsquo;re certainly not alone. In fact, you\u0026rsquo;re in the majority. Knowing this, you\u0026rsquo;re probably using some sort of network address translation (NAT). Luckily, vRealize Automation can help you deploy translated networks as well as routed and private networks with a little help from NSX.\nA quick refresher here: a translated network is a network that remaps an IP Address space from one range to another. The quickest way to explain this is with a public and a private IP Address. Your computer likely sits behind a firewall and has a private address like 192.168.1.50, but when you send traffic to the internet, the firewall translates it into a public IP Address like 143.95.32.129. This translation can be used to do things like keeping two servers on a network with the exact same IP Address.\nUse Case: A NAT\u0026rsquo;d network can be used to deploy an application multiple times with the same local IP Addressing. 
The application could still communicate on the network even though it has the same IP Address as another application.\nBuild It If you haven\u0026rsquo;t already gone through the process of setting up NSX and connecting it to vRealize Automation, you should do this first. Assuming you\u0026rsquo;ve already done this then lets begin.\nLogin to vRA and add a new Multi-Machine Blueprint. Fill out the Blueprint information tab and then skip over to the Network tab where the magic starts.\nSelect the Transport Zone from the dropdown list. Then select the New Network Profile dropdown list and then select NAT. Give the new profile a name and description. Then select an external network profile. This should be your transit network that connects the newly created NSX Edge routers to your NSX edge that was created earlier. The NAT type I chose \u0026ldquo;One-to-Many\u0026rdquo; but you could do a One-to-One mapping if you needed every address to be translated to separate addresses.\nNext select the IP Ranges tab. Create a new network range and a set of starting and ending IP Addresses. These will be assigned to the virtual machines when they\u0026rsquo;re provisioned in this new network.\nNow we go to back to the Build Information tab. Select the Add Blueprints option to select a blueprint that you\u0026rsquo;ve already created. You can do this several times to add many virtual machines to your multi-machine blueprint. In my case, I\u0026rsquo;ve added only a pair of servers to simulate a multi-tiered app.\nOnce you\u0026rsquo;ve added your machines, click the hyperlink under network. Select your network profile that we created in the previous steps. Once this is done you can finish up your blueprint setup with any additional configurations you might want to add such as post deployment scripts, Lease settings, Build Profiles or actions. Save the blueprint and publish it. Assign the blueprint to a service and grant entitlements. All of this is outside the scope of this article but is a standard process for deploying blueprints in vRealize Automation 6.\nRequest Item What are you waiting for? Go request your new catalog item. For this test I\u0026rsquo;m using the \u0026ldquo;Tiered NAT\u0026rdquo; item we just created.\nI\u0026rsquo;ve requested a pair of multi-tiered apps to show how the IP Addressing can be reused for each multi-tiered app.\nWait for the VMs to provision and then go check your vCenter to see what happened! The first thing you might notice that that in your NSX UI there are new NSX edges that has been created automatically.\nIf we take a look at the first NSX edge interfaces, we\u0026rsquo;ll see that there is an internal interface and an uplink. The uplink will have at least two IP addresses. One of which will be used for the translation of the internal addresses.\nIf we move over to the NAT tab, we\u0026rsquo;ll see that the internal IP Address range is being translated to one of the uplink interface\u0026rsquo;s IP Addresses. If we were to look at the second edge interface, you\u0026rsquo;ll notice that the internal IP Addresses are identical, but the uplink IP Addresses are different. The same range will be translated to a different IP Address.\nNow that you\u0026rsquo;ve seen some of the coolness, you probably aren\u0026rsquo;t surprised that a new logical switches have been created. 
This logical switch is connected to the internal interface of the new NSX edges.\nTesting In my vRA workloads cluster, I\u0026rsquo;ve got my new pair of NSX edges and a two pair of virtual machines that are part of my NAT test. If we console into our first vm (Hollow-32) we notice the IP Address and subnet mask of 10.10.71.1 and we can ping a public IP address confirming that we have network connectivity.\nIf I then console into the first VM created in the second multi-tier app, we see that it has the same IP Address and can also communicate on the network. This is possible without issue, because we\u0026rsquo;ve translated the addresses so that other machines don\u0026rsquo;t see these addresses. They instead will see the 10.10.80.X address.\nSummary If I had a nickel for every time I heard a customer say that they wanted to have a test lab or application that had Identical settings including network, I\u0026rsquo;d be able to spend a fun day at the arcade. (Qbert would be my game of choice.) vRealize Automation 6 combined with NSX and NAT will allow you to do this.\n","permalink":"https://theithollow.com/2015/11/02/vrealize-automation-6-with-nsx-nat/","summary":"\u003cp\u003eYou\u0026rsquo;re network isn\u0026rsquo;t fully on IPv6 yet? Ah, well don\u0026rsquo;t worry you\u0026rsquo;re certainly not alone, in fact you\u0026rsquo;re for sure in the majority. Knowing this, you\u0026rsquo;re probably using some sort of network address translation (NAT). Luckily, vRealize Automation can help you deploy translated networks as well as routed and private networks with a little help from NSX.\u003c/p\u003e\n\u003cp\u003eA quick refresher here, a translated network is a network that remaps an IP Address space from one to another. The quickest way to explain this is a public and a private IP Address. Your computer likely sits behind a firewall and has a private address like 192.168.1.50 but when you send traffic to the internet, the firewall translates it into a public IP Address like 143.95.32.129. This translation can be used to do things like keeping two servers on a network with the exact same IP Address.\u003c/p\u003e","title":"vRealize Automation 6 with NSX - NAT"},{"content":"Any corporate network thats larger than a very small business is likely going to have a routed network already. Segmenting networks improves performance and more importantly used for security purposes. Many compliance regulations such as PCI-DSS state that machines need to be segmented from each other unless there is a specific reason for them to be on the same network. For instance your corporate file server doesn\u0026rsquo;t need to communicate directly with your CRM database full of credit card numbers. The quickest way to fix this is to put these systems on different networks but this can be difficult to manage in a highly automated environment. Developers might need to spin up new applications which may need to be on different network segments from the rest of the environment. Its not very feasible to assume we can now spin up test and delete hundred of machines each day, but need the network team to manually create new network segments and tear them down each day. That wouldn\u0026rsquo;t be a nice thing to do to your network team.\nLuckily NSX has the ability to create routed networks and vRealize Automation can leverage this, to automatically setup a new network when we deploy blueprints. The initial setup requires setting up an NSX edge and a transit network. 
This is done manually to get the environment prepared for the automation piece. The diagram below shows the pieces that are manually created vs the routed networks which are automatically created through vRealize Automation.\nUse Case: A compliance regulation or pod design where services or applications are to be build in their own networks but are still able to communicate with the corporate network.\nBuild It If you haven\u0026rsquo;t already gone through the process of setting up NSX and connecting it to vRealize Automation, you should do this first. Assuming you\u0026rsquo;ve already done this then lets begin.\nLogin to vRA and add a new Multi-Machine Blueprint. Fill out the Blueprint information tab and then skip over to the Network tab where the magic starts.\nSelect the Transport Zone from the dropdown list. Then select the New Network Profile dropdown list and then select Routed. From here we create a network profile for our virtual machines to use. For my setup, I created three separate network profiles for this setup (Web, App, Database) which meant repeating this process three times. If you\u0026rsquo;re trying to figure out how this will work, look at one side of the diagram above with a single router and three logical switches.\nGive the network profile a name and description. Then be sure to select the Transit network as the external network profile. The external network profile is how the logical router you\u0026rsquo;ll be spinning up, will connect to the rest of the infrastructure. In our case we\u0026rsquo;re connecting it to the network hanging off of our NSX edge.\nEnter a subnet mask, range subnet mask and Base IP. The range subnet mask made my brain hurt for a second until I looked at the next tab (IP Ranges).\nIf you look at the IP Ranges tab, you can see how the subnet mask and range subnet mask affect the network settings. Below we can see that I\u0026rsquo;m starting with the 10.10.72.1 (base IP) and creating multiple /29 subnets inside of my /24 subnet mask.\nOnce I was done with this process, I created two more network profiles so that I could have a different profile for App, Database and Web tiers. You don\u0026rsquo;t need to do this, but I wanted to show what was possible here. If I\u0026rsquo;m only spinning up 3 VMs in a new routed network, I have enough IP Addresses in my range, but I\u0026rsquo;ve chosen to use three separate networks.\nNow we can go back to the Build Information tab and add our virtual machines into the blueprint. Add as many as makes sense to you for your environment. After you add them click the \u0026ldquo;edit\u0026rdquo; hyperlink in the network column.\nHere you would select the network profile that you created earlier. I assigned my \u0026ldquo;App\u0026rdquo; vm to my \u0026ldquo;App\u0026rdquo; network profile, but you could assign all three VMs to the same network profile, it would just mean that they are all on the same network segment.\nRepeat this process with the rest of your virtual machines in the blueprint, and then finish adding the customization settings such as post-deployment scripts and actions. After this you should be able to publish the blueprint and add it to a catalog for your users to consume.\nRequest Item What are you waiting for? Go request your new catalog item. For this test I\u0026rsquo;m using the \u0026ldquo;Tiered Routed\u0026rdquo; item we just created.\nWait for the VMs to provision and then go check your vCenter to see what happened! 
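As an aside, if the range subnet mask idea from the network profile setup still feels fuzzy, here is a quick illustration of the arithmetic. This is not vRA code, just a sketch in vRO-style JavaScript (hence System.log) showing how /29 ranges get carved out of the 10.10.72.0/24 network from my example above.

```javascript
// Illustration only: carving /29 ranges out of a /24, mirroring the base IP
// (10.10.72.1) and range subnet mask (255.255.255.248) used in the example.
var baseNetwork = "10.10.72";                  // first three octets of the /24
var rangePrefix = 29;                          // the range subnet mask as a prefix length
var blockSize = Math.pow(2, 32 - rangePrefix); // 8 addresses per /29 block

for (var start = 0; start < 256; start += blockSize) {
    // skip the network and broadcast addresses of each /29
    var firstUsable = baseNetwork + "." + (start + 1);
    var lastUsable = baseNetwork + "." + (start + blockSize - 2);
    System.log("Range: " + firstUsable + " - " + lastUsable);
}
```

That produces 10.10.72.1-6, 10.10.72.9-14, and so on, which is what shows up on the IP Ranges tab.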
The first thing you might notice that that in your NSX UI there is a new NSX edge that has been created automatically.\nIf we dive into that NSX edge, we will see several new interfaces. One of the interfaces is from the external network profile which is the NSX uplink interface. The rest of the internal interfaces should match up to the network profiles that you added in your multi-machine blueprint. I created three network profiles (App,Database,Web) so I see three of these. You may only see a single internal interface. Now if we go look at our NSX logical switches, we can see that three new switches were created. Each of those switches are connected to the internal vNICs of the Distributed Logical Router that we just spun up.\nLet\u0026rsquo;s move over and take a look at our virtual machines that were created. I\u0026rsquo;ve got a new edge appliance and my three VMs.\nIf I look at the IP Addresses of the new VMs, they should match up to three separate /29 networks. When we look at the network configs of one of the VMs, we can see that we\u0026rsquo;ve got a /29 IP Address and a gateway that matches the interface on our distributed logical router. We can also ping the other VMs that were created in this blueprint as well as a public DNS Server to show that we have connectivity throughout. COOL HUH!?\nSummary vRealize Automation gives us some cool ways to deploy virtual machines but combining it with a solution like NSX give us even more power to deploy multi-tier apps. Micro-segmentation is too difficult to manage on a one by one basis, but with an automation solution in place these operations become feasible. Routed networks give us the ability to deploy tiered applications in their own networks to cut down on the amount of manual effort required by the networking team.\n","permalink":"https://theithollow.com/2015/10/26/vrealize-automation-6-with-nsx-routed-networks/","summary":"\u003cp\u003eAny corporate network thats larger than a very small business is likely going to have a routed network already. Segmenting networks improves performance and more importantly used for security purposes. Many compliance regulations such as PCI-DSS state that machines need to be segmented from each other unless there is a specific reason for them to be on the same network. For instance your corporate file server doesn\u0026rsquo;t need to communicate directly with your CRM database full of credit card numbers. The quickest way to fix this is to put these systems on different networks but this can be difficult to manage in a highly automated environment. Developers might need to spin up new applications which may need to be on different network segments from the rest of the environment. Its not very feasible to assume we can now spin up test and delete hundred of machines each day, but need the network team to manually create new network segments and tear them down each day. That wouldn\u0026rsquo;t be a nice thing to do to your network team.\u003c/p\u003e","title":"vRealize Automation 6 with NSX - Routed Networks"},{"content":"Of the types of networks available through NSX, private networks are the easiest to get going because they don\u0026rsquo;t require any NSX edge routers to be in place. Think about it, the NSX edge appliance is used to allow communication with the physical network which we won\u0026rsquo;t need for a private network.\nA quick refresher here, a private network is a network that is not connected to the rest of the environment. 
Machines that are on the private network can communicate with each other, but nothing else in the environment. Its simple, think of some machines connected to a switch and the switch isn\u0026rsquo;t connected to any routers. The machines connected to the switch can talk to each other, but thats it.\nUse Case: A private network could be used to test an application in isolated environment so as to not affect a production environment. The machines in the private network could have the same IP Addresses as the production network but its ok because they can\u0026rsquo;t talk to each other. Using a private network with vRA would allow teams to deploy multi-tier applications over an over without affecting the production environment.\nBuild It If you haven\u0026rsquo;t already gone through the process of setting up NSX and connecting it to vRealize Automation, you should do this first. Assuming you\u0026rsquo;ve already done this then lets begin.\nLogin to vRA and add a new Multi-Machine Blueprint. Fill out the Blueprint information tab and then skip over to the Network tab where the magic starts.\nSelect the Transport Zone from the dropdown list. Then select the New Network Profile dropdown list and then select Private. Give the new profile a name, subnet mask, gateway, and if you\u0026rsquo;re using DHCP or not.\nNext select the IP Ranges tab. Create a new network range and a set of starting and ending IP Addresses. These will be assigned to the virtual machines when they\u0026rsquo;re provisioned in this new network.\nNow we go to back to the Build Information tab. Select the Add Blueprints option to select a blueprint that you\u0026rsquo;ve already created. You can do this several times to add many virtual machines to your multi-machine blueprint.\nOnce you\u0026rsquo;ve added the blueprints to this new multi-machine blueprint, select the edit hyperlink under the network column. Here you\u0026rsquo;ll select the network profile we just created in the previous step. Remember to do this for each of the blueprints. In my case, I\u0026rsquo;ve added each of the blueprints to the same network profile but you could add many network profiles if you wished.\nOnce this is done you can finish up your blueprint setup with any additional configurations you might want to add such as post deployment scripts, Lease settings, Build Profiles or actions. Save the blueprint and publish it. Assign the blueprint to a service and grant entitlements. All of this is outside the scope of this article but is a standard process for deploying blueprints in vRealize Automation 6.\nRequest Item What are you waiting for? Go request your new catalog item. For this test I\u0026rsquo;m using the \u0026ldquo;Tiered Private\u0026rdquo; item we just created.\nWait for the VMs to provision and then go check your vCenter to see what happened! The first thing you might notice that that in your NSX UI there is a new NSX edge that has been created automatically.\nIf we look at the interfaces of this edge we\u0026rsquo;ll notice that a single interface is added (internal) and that the IP Address is the default gateway that you used in your network profile in vRA. Pretty neat huh?\nNow that you\u0026rsquo;ve seen some of the coolness, you probably aren\u0026rsquo;t surprised that a new logical switch has been created. This logical switch is connected to the internal interface of the new NSX edge.\nIn my vRA workloads cluster, I\u0026rsquo;ve got my new NSX edge and a pair of virtual machines that are part of my private network test. 
Notice that the IP Addresses are aligned with the network profile we created. If we console into our first vm (Hollow-02) we notice the IP Address and subnet mask. If we ping the other virtual machine (hollow-03) we get replies from the server. On the other hand if we try to ping anything else such as a public DNS Server (4.2.2.1) the destination is not reachable.\nSummary vRealize Automation gives us some cool ways to deploy virtual machines but combining it with a solution like NSX give us even more power to deploy mutli-tier apps. Micro-segmentation is too difficult to manage on a one by one basis, but with an automation solution in place these operations become feasible. Private networks give us the ability to test solutions over and over without affecting existing workloads and have their place in a new devops world.\n","permalink":"https://theithollow.com/2015/10/19/vrealize-automation-6-with-nsx-private-networks/","summary":"\u003cp\u003eOf the types of networks available through NSX, private networks are the easiest to get going because they don\u0026rsquo;t require any NSX edge routers to be in place. Think about it, the NSX edge appliance is used to allow communication with the physical network which we won\u0026rsquo;t need for a private network.\u003c/p\u003e\n\u003cp\u003eA quick refresher here, a private network is a network that is not connected to the rest of the environment. Machines that are on the private network can communicate with each other, but nothing else in the environment. Its simple, think of some machines connected to a switch and the switch isn\u0026rsquo;t connected to any routers. The machines connected to the switch can talk to each other, but thats it.\u003c/p\u003e","title":"vRealize Automation 6 with NSX - Private Networks"},{"content":"This is a series of posts helping you get familiarized with how VMware\u0026rsquo;s vRealize Automation 6 can leverage VMware\u0026rsquo;s NSX product to provide software defined networking. The series will show you how to do some basic setup of NSX as well as how to use Private, Routed and NAT networks all from within vRA.\nvRealize Automation 6 with NSX - NSX Setup vRealize Automation 6 with NSX - Private Networks vRealize Automation 6 with NSX - Routed Networks vRealize Automation 6 with NSX - NAT vRealize Automation 6 with NSX - Load Balancing vRealize Automation 6 with NSX - Firewall ","permalink":"https://theithollow.com/2015/10/12/software-defined-networking-with-vrealize-automation-and-nsx/","summary":"\u003cp\u003eThis is a series of posts helping you get familiarized with how VMware\u0026rsquo;s vRealize Automation 6 can leverage VMware\u0026rsquo;s NSX product to provide software defined networking. 
The series will show you how to do some basic setup of NSX as well as how to use Private, Routed and NAT networks all from within vRA.\u003c/p\u003e\n\u003ch2 id=\"vrealize-automation-6-with-nsx---nsx-setup\"\u003e\u003ca href=\"http://wp.me/p32uaN-1lT\"\u003evRealize Automation 6 with NSX - NSX Setup\u003c/a\u003e\u003c/h2\u003e\n\u003ch2 id=\"vrealize-automation-6-with-nsx---private-networks\"\u003e\u003ca href=\"http://wp.me/p32uaN-1lR\"\u003evRealize Automation 6 with NSX - Private Networks\u003c/a\u003e\u003c/h2\u003e\n\u003ch2 id=\"vrealize-automation-6-with-nsx---routed-networks\"\u003e\u003ca href=\"/2015/10/26/vrealize-automation-6-with-nsx-routed-networks/\"\u003evRealize Automation 6 with NSX - Routed Networks\u003c/a\u003e\u003c/h2\u003e\n\u003ch1 id=\"vrealize-automation-6-with-nsx---nat\"\u003e\u003ca href=\"http://wp.me/p32uaN-1qS\"\u003evRealize Automation 6 with NSX - NAT\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"vrealize-automation-6-with-nsx---load-balancing\"\u003e\u003ca href=\"http://wp.me/p32uaN-1s2\"\u003evRealize Automation 6 with NSX - Load Balancing\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"vrealize-automation-6-with-nsx---firewall\"\u003e\u003ca href=\"http://wp.me/p32uaN-1tu\"\u003evRealize Automation 6 with NSX - Firewall\u003c/a\u003e\u003c/h1\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/10/GuideLogo.jpg\"\u003e\u003cimg alt=\"GuideLogo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/10/GuideLogo-1024x543.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"Software Defined Networking with vRealize Automation and NSX"},{"content":"Before we can start deploying environments with automated network segments, we need to do some basic setup of the NSX environment.\nNSX Manager Setup It should be obvious that you need to setup NSX Manager, deploy controllers and do some host preparation. These are basic setup procedures just to use NSX even without vRealize Automation in the middle of things, but just as a quick review:\nInstall NSX Manager and deploy NSX Controller Nodes NSX Manager setup can be deployed from an OVA and then you must register the NSX Manager with vCenter. After this is complete, deploy three NSX Controller nodes to configure your logical constructs. Next Prepare your ESXi hosts which will install a VIB on them.\nThe next step isn\u0026rsquo;t specific to every NSX install. We want to create an NSX Edge so that our newly created distributed logical routers will be able to communicate with the rest of the network. To do this create a new NSX Edge with an Uplink that is connected to a vSphere portgroup and create another Internal interface that will service as a transit network.\nOnce you\u0026rsquo;ve got the NSX edge created, create a logical switch on the Transit network. This will be where the rest of your Distributed Logical Routers will connect when they are spun up.\nIf you\u0026rsquo;re having trouble visualizing the process think about this diagram where we\u0026rsquo;ve got an NSX edge connected to a transit switch and then the Distributed Logical Routers will be created from vRA and attached to the transit switch.\nvRealize Automation Endpoints Before you can use any of the automated networking features you have to discover them with vRealize Automation. 
To do this we need to make sure our vCenter endpoint is aware of the NSX manager.\nTo do this go to Infrastructure \u0026ndash;\u0026gt; Endpoints \u0026ndash;\u0026gt; Endpoints and modify (or create from scratch if you don\u0026rsquo;t have a vCenter endpoint already) the vCenter endpoint. Select the \u0026ldquo;Specify manager for network and security platform\u0026rdquo; checkbox. Then enter the URL for your NSX Manager appliance and add some credentials to connect to it.\nWhen you\u0026rsquo;re finished setting up the endpoint be sure to do a data collection to inventory all of the NSX components. Until you do this, you won\u0026rsquo;t be able to deploy any new networks. Once you\u0026rsquo;ve done tis we can go about setting up some network profiles. Go to Infrastructure \u0026ndash;\u0026gt; Reservations \u0026ndash;\u0026gt; Network Profiles and add a new External Network Profile. I\u0026rsquo;ve called mine transit because it\u0026rsquo;s going to be what I use to connect to my transit logical switch that we created in NSX.\nEnter all of the information for the transit network. Enter a name and description, as well as the subnet mask and default gateway that matches the transit network you created in NSX. Also fill out DNS information and suffixes. Last but not least, be sure to enter your WINS Servers if you\u0026rsquo;ve gotten lost in some sort of wormhole for the past decade. Just kidding. Click on the IP Ranges tab and enter a new network range. The IP Addresses you setup here will be assigned to the distributed routers that get created by vRealize Automation. Ensure that you have enough IP Addresses here to handle all of the new routers you\u0026rsquo;ll be spinning up.\nSummary Now your NSX and vRealize Automation environment is ready to start creating blueprints to leverage private networks, routed networks and NAT\u0026rsquo;d networks. There is still the matter of creating network profiles for your applications but this can be done as part of the blueprint build in vRealize Automation. If you\u0026rsquo;ve gotten this far, you\u0026rsquo;re well on your way to deploying multi-tier applications with their own networks.\n","permalink":"https://theithollow.com/2015/10/12/vrealize-automation-6-with-nsx-initial-setup-of-nsx/","summary":"\u003cp\u003eBefore we can start deploying environments with automated network segments, we need to do some basic setup of the NSX environment.\u003c/p\u003e\n\u003ch2 id=\"nsx-manager-setup\"\u003eNSX Manager Setup\u003c/h2\u003e\n\u003cp\u003eIt should be obvious that you need to setup NSX Manager, deploy controllers and do some host preparation. These are basic setup procedures just to use NSX even without vRealize Automation in the middle of things, but just as a quick review:\u003c/p\u003e\n\u003ch3 id=\"install-nsx-manager-and-deploy-nsx-controller-nodes\"\u003eInstall NSX Manager and deploy NSX Controller Nodes\u003c/h3\u003e\n\u003cp\u003eNSX Manager setup can be deployed from an OVA and then you must register the NSX Manager with vCenter. 
After this is complete, deploy three NSX Controller nodes to configure your logical constructs.\n\u003cimg alt=\"NSXSetupManagementSetup\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/09/NSXSetupManagementSetup-1024x452.png\"\u003e\u003c/p\u003e","title":"vRealize Automation 6 with NSX - Initial Setup of NSX"},{"content":"A common task that comes up during an automation engagement relates to passing values from vRealize Automation blueprints over to vRealize Orchestrator. There is a workflow that I use quite frequently that will list the properties available for further programming and you can download the plugin at github.com if you\u0026rsquo;d like to use it as well.\nHow it works The workflow takes several inputs that are provided by vRealize Automation during a stub like Building Machine, Machine Provisioned or Machine Disposing. These inputs include the vRA Virtual Machine instance, the vCenter Virtual Machine ID, the vRealize Automation Host, the stubs used and most importantly the vRealize Automation VM properties.\nThe workflow itself is a single scriptable task that could be converted into an action of you so prefer.\nThe main goal for this task is to log the property information of the virtual machine as it\u0026rsquo;s being provisioned or destroyed. You can see below there are properties such as VirtualMachine.Memory.Size property value and its set to 1024.\nOne of the additional attributes that can be set on the workflow is the \u0026ldquo;OutputProperty\u0026rdquo;. This can be set prior to any stubs being executed and will output whatever property you want as an output variable that can be passed to further workflow actions. For instance the screenshot below will pass the VirtualMachine.Network0.Name as an output so that you can update an IPAM solution or something. You can change this setting to match any property retrieved from the plugin.\nIn vRealize Automation you will need to add a stub property and update the value to match the workflow in vRealize Orchestrator. This will ensure that the workflow gets called during the stub process.\nIt is also possible to run the workflow manually, but you\u0026rsquo;ll need to fill in the input parameters yourself.\nSummary This workflow doesn\u0026rsquo;t accomplish any specific task, but it will allow you to get the properties of the virtual machine you\u0026rsquo;re provisioning through vRealize Automation. Once you get those properties, your imagination is the limit about what kind of workflows that can be run by using them. I hope this plugin helps someone else that is struggling with getting this information over to vRO.\nDownload the Plugin from Github ","permalink":"https://theithollow.com/2015/10/05/vrealize-automation-entity-properties/","summary":"\u003cp\u003eA common task that comes up during an automation engagement relates to passing values from vRealize Automation blueprints over to vRealize Orchestrator. There is a workflow that I use quite frequently that will list the properties available for further programming and you can download the plugin at \u003ca href=\"https://github.com/theITHollow/vRA6-PropertyEntities\"\u003egithub.com\u003c/a\u003e if you\u0026rsquo;d like to use it as well.\u003c/p\u003e\n\u003ch1 id=\"how-it-works\"\u003eHow it works\u003c/h1\u003e\n\u003cp\u003eThe workflow takes several inputs that are provided by vRealize Automation during a stub like Building Machine, Machine Provisioned or Machine Disposing. 
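If you want a rough idea of what that property-logging scriptable task boils down to before importing the package, here is a condensed sketch. The variable names (vmProperties for the vRA VM properties input and outputPropertyName for the OutputProperty attribute) are placeholders for the workflow's own bindings, so treat this as an outline rather than the exact code in the download.

```javascript
// Sketch of a property-logging scriptable task: walk the vRA VM properties,
// log each key/value pair, and surface one chosen property as an output.
var outputProperty = "";

for each (var key in vmProperties.keys) {
    System.log(key + " = " + vmProperties.get(key));
    if (key == outputPropertyName) {
        // e.g. outputPropertyName = "VirtualMachine.Network0.Name"
        outputProperty = vmProperties.get(key);
    }
}

System.log("OutputProperty (" + outputPropertyName + "): " + outputProperty);
```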
These inputs include the vRA Virtual Machine instance, the vCenter Virtual Machine ID, the vRealize Automation Host, the stubs used and most importantly the vRealize Automation VM properties.\u003c/p\u003e","title":"vRealize Automation Entity Properties"},{"content":"To download packages please visit the Github repositories. https://github.com/theITHollow\n","permalink":"https://theithollow.com/downloads/","summary":"\u003cp\u003eTo download packages please visit the Github repositories. https://github.com/theITHollow\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/theITHollow\"\u003e\u003cimg alt=\"github-logo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/09/github-logo-1024x340.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"Downloads"},{"content":"I found some conflicting information about setting up load balancers for vRealize Automation in a Distributed installation, specifically around Health Checks. The following health checks were found to work for a fully distributed installation of vRA 6.2.2.\nvRealize Automation Appliances This is the pair of vRealize Automation Linux appliances that are deployed via OVA file.\nType: HTTPS\nInterval: 5 seconds\nTimeout: 9 seconds\nSend String: GET /vcac/services/api/status\r\n\nLoad Balancing Method: Round Robin\nvRA Node 1 port 443\nvRA Node 2 port 443\nvRealize Automation IaaS Servers Web Components These are the load balancer settings used on the Web components. This would not be necessary for the DEM Agents and DEM Workers.\nType: HTTPS\nInterval: 5 seconds\nTimeout: 9 seconds\nSend String: GET /WAPI/api/status\r\n\nIaaS Server 1 port 443\nIaaS Server 2 port 443\nvRealize Automation IaaS Model Managers These are the load balancer settings used on the Model Manager component. This would not be necessary for the DEM Agents and DEM Workers.\nType: HTTPS\nInterval: 5 seconds\nTimeout: 9 seconds\nSend String: GET /VMPSProvision\r\n\nReceive String: name=”ProvisionService”\nIaaS Server 1 port 443\nIaaS Server 2 port 443\nvRealize Orchestrator This is for a vRealize Orchestrator Cluster being deployed from the VMware OVA.\nType: HTTPS\nInterval: 3 seconds\nTimeout: 9 seconds\nSend String: GET /vco/api/docs/index.html HTTP/1.1\r\nHost:\r\n\nConnection: close\r\n\r\n\nReceive String: 200 OK\nLoad Balancing Method: Round Robin\nvRO Node 1 Service port 8281\nvRO Node 2 Service port 8281\nvRealize Automation vPostgres Database This assumes that you\u0026rsquo;ve setup a pair of vRA appliances and they are replicating the vPostgres database between the two instances.\nNOTE: In order for the send and receive strings to be valid, a script must be run on each of the vRA appliances after vPostgres replication has been completed. This script allows for the receive string to be visible to the load balancer. Without this it won’t work. VMware KB Article: Type: HTTPS\nInterval: 5 seconds\nTimeout: 9 seconds\nSend String: GET /vPostgresService.py\r\n\nReceive String: Postgres.Master=true\nUsername: {the load balancer uses the root username and password to check the receive string}\nPassword: Alias Service Port: 5480 {the load balancer checks the receive string on a different port from where the service runs}\nvRA Node 1 port 5432\nvRA Node 2 port 5432\n","permalink":"https://theithollow.com/2015/09/28/vrealize-automation-load-balancer-settings/","summary":"\u003cp\u003eI found some conflicting information about setting up load balancers for vRealize Automation in a Distributed installation, specifically around Health Checks. 
The following health checks were found to work for a fully distributed installation of vRA 6.2.2.\u003c/p\u003e\n\u003ch2 id=\"vrealize-automation-appliances\"\u003e\u003cstrong\u003evRealize Automation Appliances\u003c/strong\u003e\u003c/h2\u003e\n\u003cp\u003eThis is the pair of vRealize Automation Linux appliances that are deployed via OVA file.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eType:\u003c/strong\u003e HTTPS\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eInterval:\u003c/strong\u003e 5 seconds\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eTimeout:\u003c/strong\u003e 9 seconds\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eSend String:\u003c/strong\u003e GET /vcac/services/api/statusrn\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eLoad Balancing Method:\u003c/strong\u003e Round Robin\u003c/p\u003e","title":"vRealize Automation Load Balancer Settings"},{"content":"vRealize Automation is at its best when it can leverage multiple infrastructures to provide a hybrid cloud infrastructure. One of the things we might want to do is to set up VMware vCloud Air integration with your vRA instance.\nTo start, we need to have a vCloud Air account which you can currently sign up for with some initial credits to get you started for free. Once you\u0026rsquo;ve got an account you\u0026rsquo;ll be able to setup a VDC and will have some catalogs that you can build VMs from. If you\u0026rsquo;re concerned about these steps, don\u0026rsquo;t worry a default VDC including some storage and a network will be there for you by default.\nEndpoints and Fabric Groups Now we move over to the vRealize Automation console. The first thing we need to do is to create an endpoint. We go to Infrastructure \u0026ndash;\u0026gt; Endpoints \u0026ndash;\u0026gt; Endpoints and add a new endpoint of type vApp (Cloud) under the Cloud header.\nWe\u0026rsquo;ll be asked to enter information specific to our vCloud Air VPC. The first two items aren\u0026rsquo;t anything crucial, just a name and a description. You can make these anything that makes sense to you. The next entry is the address though. In order to find the Address information, go back to your VPC in vCloud Air and look at the URL. The URL will show something like https://REGIONNAME.vchs.vmware.com/api.compute. This is your Address!\nThe next line we have to enter information in for is the credentials item. This needs to be an account administrator so that it has permissions to spin up VMs, destroy them and connect the VM console. This may be the email address and password you setup your vCloud Air instance with, but you could add a new user in the vCloud Air portal if you wish.\nNext we have to add the Organization. This is again found in the URL for your VDC. Go back to the VDC in your web browser and look for the \u0026ldquo;orgName\u0026rdquo; part of the URL. Copy everything after the \u0026ldquo;=\u0026rdquo; and up to the \u0026ldquo;\u0026amp;serviceInstanceId\u0026rdquo;.\nWhen you\u0026rsquo;ve filled out the endpoint information it should look similar to the below screenshot. Click OK to finish your setup.\nNow we run the data collection. Click the arrow next to your new endpoint and run the \u0026ldquo;Data Collection\u0026rdquo; in order to pull in all the information from the public cloud catalogs. Once the data collection has finished we can add a new Fabric Group. Go to Infrastructure \u0026ndash;\u0026gt; Groups \u0026ndash;\u0026gt; Fabric Groups and Add a new one. 
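One small aside on the endpoint details from earlier: if you would rather script the lookup than squint at the browser bar, the string handling is trivial. The URL below is made up to match the pattern, so substitute your own VDC URL.

```javascript
// Hypothetical example of pulling the Organization out of a vCloud Air VDC URL,
// per the "copy everything after the = and up to &serviceInstanceId" step above.
var vdcUrl = "https://regionname.vchs.vmware.com/api/compute/api/org/?orgName=MyOrg-1234&serviceInstanceId=abcd";

var orgName = vdcUrl.split("orgName=")[1].split("&serviceInstanceId")[0];
var regionHost = vdcUrl.split("/")[2];        // e.g. regionname.vchs.vmware.com

System.log("Organization: " + orgName);       // -> MyOrg-1234
System.log("Region host: " + regionHost);     // the host portion of your Address
```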
Fill out the name, description and the fabric administrators and then select the resources from your new Endpoint.\nReservations Now we add a reservation. To do this we go to Infrastructure \u0026ndash;\u0026gt; Reservations \u0026ndash;\u0026gt; Reservations and add a new vApp (vCloud) reservation. You\u0026rsquo;ll need to select the compute resource, a name for the reservation, the tenant to assign it to and a business group. You\u0026rsquo;ll also need to add a priority to this reservation in order for blueprints to decide which reservation to use first if multiple reservations exist.\nOn the resources tab you\u0026rsquo;ll need to select a storage path and an amount of storage assigned. You shouldn\u0026rsquo;t need to select a Memory reservation since it\u0026rsquo;s on demand and could be unlimited.\nNext on the network tab, we need to select a network. If you didn\u0026rsquo;t create any networks there should only be one network available. Click OK to finish the Reservation Setup.\nBlueprint This part differs a bit from a traditional blueprint in vRealize Automation. Normally, a single blueprint is created an published. In this case, we need to create one blueprint that includes the component and another blueprint that includes this group of components.\nGo to Infrastructure \u0026ndash;\u0026gt; Blueprint \u0026ndash;\u0026gt; Blueprints and add a new one by clicking \u0026ldquo;New Blueprint\u0026rdquo; and then selecting Cloud \u0026ndash;\u0026gt; vApp Component (Cloud). I\u0026rsquo;ve skipped the first page and am assuming you can fill out the name and description stuff. On the \u0026ldquo;Build Information\u0026rdquo; screen we\u0026rsquo;ll fill information about our virtual machine template. Clone it and select the vAppCloneWorkflow for the provisioning workflow. Select the elipses to select which catalog to clone from. Don\u0026rsquo;t worry if you haven\u0026rsquo;t uploaded a custom catalog since there are public catalogs already. Enter the rest of the provisioning information and save the blueprint.\nWhen you\u0026rsquo;re done, I found that I didn\u0026rsquo;t need to publish this blueprint or anything. In fact, even if it is published, it won\u0026rsquo;t show up in your vRA catalog items. Now we add a second blueprint by going to \u0026ldquo;New Blueprint\u0026rdquo; and then selecting Cloud \u0026ndash;\u0026gt; vApp (Cloud).\nOn the \u0026ldquo;Build Information\u0026rdquo; tab choose the \u0026ldquo;clone from\u0026rdquo; and select a catalog from vCloud Air. This will be the vApp stored in vCloud Air. Next we have to select the component blueprint that maps to this blueprint. In the Blueprint column of the Components table, select the vApp component we just created. This is done so that the vCloud Air vApp can be customized with additional settings like the number of Nics, storage devices etc. For further information please see the official documentation.\nWhen the blueprint is done, you\u0026rsquo;ll need to publish it and then entitle it to your business group owners which is covered in a previous post.\nBUILD VMs! Now you should be able to deploy your new workload and they\u0026rsquo;ll show up in vCloud Air and be managed by vRealize Automation.\nSummary It\u0026rsquo;s not too difficult to setup but the way it currently works with setting up multiple blueprints doesn\u0026rsquo;t seem very intuitive. I would like to see VMware make some modifications to the way these are provisioned but it does work and its certainly usable. 
Try it out yourself and see what you think.\n","permalink":"https://theithollow.com/2015/09/21/vrealize-automation-and-vcloud-air-integration/","summary":"\u003cp\u003evRealize Automation is at its best when it can leverage multiple infrastructures to provide a hybrid cloud infrastructure. One of the things we might want to do is to set up VMware vCloud Air integration with your vRA instance.\u003c/p\u003e\n\u003cp\u003eTo start, we need to have a \u003ca href=\"http://vcloud.vmware.com/\"\u003evCloud Air\u003c/a\u003e account which you can currently sign up for with some initial credits to get you started for free. Once you\u0026rsquo;ve got an account you\u0026rsquo;ll be able to setup a VDC and will have some catalogs that you can build VMs from. If you\u0026rsquo;re concerned about these steps, don\u0026rsquo;t worry a default VDC including some storage and a network will be there for you by default.\u003c/p\u003e","title":"vRealize Automation and vCloud Air Integration"},{"content":"This last post in the series shows you how Nick Colyer and I to tie everything together. If you want to just download the plugins and get started, please visit Github.com and import the plugins into your own vRealize Orchestrator environment.\nDownload the Plugin from Github\nNOTE: The first version of this code has been refactored and migrated to Github in Rubrik\u0026rsquo;s Repository since the time of this initial writing\nTo recap where we\u0026rsquo;ve been, we:\nLearned some basics about REST Added Rest Hosts and Operations to vRO Used REST to get Authentication Tokens for Rubrik Used the Authentication Tokens to query for VMs in Rubrik Now in this post we\u0026rsquo;ll take the VM we just queried for and assign it to an slaDomain (Backup Policy) for Rubrik. Finally, all of our workflows will be able to be used for a useful purpose!\nBefore we dive into the workflow we should setup our REST operations. In this case we\u0026rsquo;re using a PATCH operation on the /vm/{id} resource. Here we need to supply the Rubrik Virtual Machine ID that we found in our previous post.\nThe first step should look pretty familiar to you after reading the previous posts. We\u0026rsquo;re calling the Rubrik Authentication workflow to get us login to the Rubrik appliance. This should be old hat to you by now so I won\u0026rsquo;t repeat it in this post. The guts of this post are in the scripting piece which is where we will assign the virtual machine to the correct slaDomain.\nLet\u0026rsquo;s take a quick look at the visual binding tab of the \u0026ldquo;AssignVMtoSLADomain\u0026rdquo; element. We\u0026rsquo;ve got two input parameters that we\u0026rsquo;ll either input manually each time we run the workflow, or pass from another workflow. These are the RubrikVM to which we\u0026rsquo;ll assign a policy, and the slaDomain which is the policy to be applied. The rest of the parameters that we pass to our REST call are things like the REST Operations, and the authentication token. As far as outputs are concerned, we are outputting the status of the REST call. The magic piece shows our REST call. Open the scripting tab and we find that we\u0026rsquo;re using the authentication token to do our REST call. We then do some formatting of the slaDomainID to put some quotes around it and then log our settings.\nIn the payload section we format the content of the REST call. We set the sladomain and then log more information. YOU CAN\u0026rsquo;T LOG TOO MUCH INFORMATION! 
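To make the next few steps easier to follow, here is a condensed sketch of what that scriptable task is doing. It is an outline based on the description in this post rather than a copy of the packaged workflow; in particular, the payload field name and the Authorization header are assumptions you should verify against the Rubrik swagger-ui.

```javascript
// Condensed sketch of the AssignVMtoSLADomain element. restOperation (the
// PATCH /vm/{id} operation), rubrikVMID, slaDomainID and base64Token are
// assumed to be wired up as inputs/attributes, as described in this post.
var content = '{"slaDomainId":"' + slaDomainID + '"}';   // field name is an assumption
System.log("PATCH payload: " + content);

// The {id} placeholder in the operation URL is filled from the in-parameter
// values; the payload rides along as the request content.
var request = restOperation.createRequest([rubrikVMID], content);
request.contentType = "application/json";
request.setHeader("Authorization", "Basic " + base64Token);   // header name assumed

var response = request.execute();
System.log("Status code: " + response.statusCode);
System.log("Response: " + response.contentAsString);
```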
After this we create a request by adding a new variable called request and we put the inParamaters values and the content. This will create our request with the RubrikVMID in the inParamatersValues, and the slaDomainID in the content parameter.\nWe modify the request types to \u0026ldquo;application/json\u0026rdquo; as the type of header\u0026rsquo;s we\u0026rsquo;re passing and then an additional header with our authentication token in it. The last section will then execute the request.\nWhen we actually execute this request we\u0026rsquo;ll need to select the virtual machine and an slaDomain. The slaDomain is a GUID so you\u0026rsquo;d need to lookup this information ahead of time, or run another workflow to find the slaDomain. Nick Colyer explains how to do this on his site SystemsGame.com if you\u0026rsquo;re interested. Otherwise you can find this information by looking at the Rubrik swagger-ui.\nSummary When you execute this workflow you\u0026rsquo;ll have to pass a few parameters such as Virtual Machine and slaDomain but when it executes it will tell the Rubrik appliance to start protecting a virtual machine under the policies of the slaDomain. This workflow is the first time in this series that an actual operation is performed that would be beneficial to a Systems Administrator that is managing an environment.\nThis series is here to explain how these pieces were put together and how they can be customized. Putting many of these workflows together can be incredibly useful to an automated environment. Think about the possibilities of deploy a new VM from vRealize Automation. After provisioning we can then pass the virtual machine parameters over to vRO and then automatically call the Rubrik REST api and protect this virtual machine by combining a few of the pieces we\u0026rsquo;ve just built. There are plenty of new possibilities like adding a day 2 operation to allow an end user to perform a Rubrik snapshot or restore from the vRA user portal, or through the vRO console.\nIf you want to use the plugins that Nick Colyer and I created, feel free to download them at Github and use them in your own environment.\nDownload the Plugin ","permalink":"https://theithollow.com/2015/09/14/assign-a-vm-to-a-rubrik-sladomain/","summary":"\u003cp\u003eThis last post in the series shows you how \u003ca href=\"https://twitter.com/vnickC\"\u003eNick Colyer\u003c/a\u003e and I to tie everything together. If you want to just download the plugins and get started, please visit Github.com and import the plugins into your own vRealize Orchestrator environment.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/rubrikinc/vRO-Workflow\"\u003eDownload the Plugin from Github\u003c/a\u003e\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eNOTE: The first version of this code has been refactored and migrated to Github in Rubrik\u0026rsquo;s Repository since the time of this initial writing\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cp\u003eTo recap where we\u0026rsquo;ve been, we:\u003c/p\u003e","title":"Assign a VM to a Rubrik slaDomain"},{"content":"Part four of this series will show you how to lookup a VM in the Rubrik Hybrid Cloud appliance through the REST API by using vRealize Orchestrator. 
If you\u0026rsquo;d rather just download the plugin and get using it, check out the link to Github to get the plugin and don\u0026rsquo;t forget to check out Nick Colyer\u0026rsquo;s post over at systemsgame.com about how to use it.\nDownload the Plugin from Github\nNOTE: The first version of this code has been refactored and migrated to Github in Rubrik\u0026rsquo;s Repository since the time of this initial writing\nWhy do we need this workflow: Rubrik assigns a unique identifier to each virtual machine that it discovers from your vCenter inventory. If we want to perform any operations on this VM such as, adding it to an sladomain (backup policy), we need to be able pass over an Identifier to Rubrik that it understands. This workflow will take a vCenter object and retrieve the Rubrik VM ID that corresponds to it\nINPUTS: VM in the VC:VirtualMachine format. This is a Virtual Machine Object from vCenter.\nOUTPUTS: Status Codes and the RubrikVMID in a string (text) format.\nWorkflow Overview If we take a quick glance at the vRealize Orchestrator Schema we\u0026rsquo;ll get a good idea about what\u0026rsquo;s about to happen. The first thing we do is to get the VMware Managed Object Reference (MOR) from vCenter. Next, we\u0026rsquo;re going obtain a Rubrik base64 token so that we can authenticate with the Hybrid Cloud Appliance and then eventually execute our REST call against the Rubrik appliance which is where we get our VM ID.\nPrior to us jumping into the workflow, I want to take second to look at the REST Operation that we setup earlier. Just as a quick refresher, we previously added a REST operation for Get VM. If you want a quick reminder, we setup a GET operations on the /vm resource. We\u0026rsquo;ll use this operation in our workflow.\nLet\u0026rsquo;s start at the beginning of our workflow. The first element in our schema is going to get the MOR from vCenter. We\u0026rsquo;re going to take the Virtual Machine as an input and our output is going to be just the MOR ID.\nI wish that I could tell you this is an incredibly difficult operation that Nick and I spend days working just so that you didn\u0026rsquo;t have to, but really we\u0026rsquo;re just looking for the vm.id and saving it as a variable. We also log it if that makes you feel better about this element? Next, we are going to call the workflow that we created earlier to obtain our Rubrik Token. I\u0026rsquo;m not going to walk through this whole piece again, but if you want a refresher, please take a look at the previous post. The main thing to remember here is that we\u0026rsquo;re sending a REST request to Rubrik to obtain a token or cookie so that we can issue further commands.\nNow that we\u0026rsquo;ve got our token we can try to execute our desired command against the Rubrik appliance. In the \u0026ldquo;Get VMs from Rubrik\u0026rdquo; element, we are going to pass our RESTHost parameter to the next scripting element. The rest of the element inputs come from the previous elements in our workflow. Specifically, we need to pass the Rubrik Token and our MOR ID so that we can use it in our REST Calls. Here is the magic piece to this workflow. In the scripting tab we specify all of our headers and payload for the REST Call. Notice that again, we set the request content type to \u0026ldquo;application/json\u0026rdquo; and our REST URL will be passed over from the REST operation we configured earlier.\nUnder the \u0026ldquo;Customize request\u0026rdquo; we create a new variable for our authentication token. 
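Before we finish wiring up that header, here is roughly what the element looks like with the pieces put together. Field names in the Rubrik response, and the Authorization header, are assumptions on my part; check the swagger-ui for the exact schema.

```javascript
// Rough sketch of the "Get VMs from Rubrik" element: GET the /vm resource with
// the base64 token, then match the vCenter MOR ID (morId) against the results.
var request = restOperation.createRequest([], null);
request.contentType = "application/json";
request.setHeader("Authorization", "Basic " + base64Token);   // header name assumed

var response = request.execute();
System.log("Status code: " + response.statusCode);

var vms = JSON.parse(response.contentAsString);
var rubrikVMID = "";
for (var i = 0; i < vms.length; i++) {
    if (vms[i].moid == morId) {    // "moid" field name is an assumption
        rubrikVMID = vms[i].id;
    }
}
System.log("Rubrik VM ID: " + rubrikVMID);
```

With that picture in mind, back to the token variable.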
This is the Token we retrieved earlier with the \u0026ldquo;Basic \u0026quot; in front of it. We log the token in case we need to do further troubleshooting. Lastly, we make our requests and pass the header information over.\nThe result is returning a single VM ID from Rubrik that matches the Virtual Machine we selected in vCenter.\nThis workflow would likely make more sense to come from a cloud management platform such as vRealize Automation but if you want to test it out, just run the workflow from vRO. You\u0026rsquo;l be asked for the VM and in the logs will see the VM ID.\nSummary This is another intermediary step in getting our workflow to be something usable. We have not configured REST hosts and operations, retrieved Rubrik tokens for authentication purposes, and now can retrieve a VM ID. Our next step would be to retrieve the slaDomains.\n","permalink":"https://theithollow.com/2015/09/10/get-rubrik-vm-through-vrealize-orchestrator/","summary":"\u003cp\u003ePart four of this series will show you how to lookup a VM in the \u003ca href=\"http://rubrik.com\"\u003eRubrik\u003c/a\u003e Hybrid Cloud appliance through the REST API by using vRealize Orchestrator. If you\u0026rsquo;d rather just download the plugin and get using it, check out the link to \u003ca href=\"https://github.com/rubrikinc/vRO-Workflow\"\u003eGithub\u003c/a\u003e to get the plugin and don\u0026rsquo;t forget to check out \u003ca href=\"http://twitter.com/vnickc\"\u003eNick Colyer\u0026rsquo;s\u003c/a\u003e post over at \u003ca href=\"http://systemsgame.com\"\u003esystemsgame.com\u003c/a\u003e about how to use it.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/rubrikinc/vRO-Workflow\"\u003eDownload the Plugin from Github\u003c/a\u003e\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eNOTE: The first version of this code has been refactored and migrated to Github in Rubrik\u0026rsquo;s Repository since the time of this initial writing\u003c/p\u003e","title":"Get Rubrik VM through vRealize Orchestrator"},{"content":"Part three of this series focuses on how Nick Colyer and I built the authentication piece of the plugin so that we could then pass commands to the Rubrik appliance. An API requires a login just like any other portal would. Since this is a a REST API, we actually need to do a \u0026ldquo;POST\u0026rdquo; on the login resource to get ourselves an authentication token.\nDownload the Plugin from Github\nNOTE: The first version of this code has been refactored and migrated to Github in Rubrik\u0026rsquo;s Repository since the time of this initial writing\nTo begin, lets look at the whole workflow.\nINPUTS: Username of type String, Password of type SecureString\nOUTPUTS: Token of type String\nWe\u0026rsquo;ll be performing the following operations:\nFormat Login Request - Here we want to get our username and password formatted correctly to pass to the REST host. Retrieve Rubrik Token - This is where we pass our login information over to the REST host with a POST on the /login resource Output Token - We pass the token to this element and log it. base64Encode - convert the token into base64 Log Base64Token - Log the base64 Token The first element in the workflow attempts to format our username and password so that it can be passed in a REST call. 
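In rough terms, that first formatting element is only a couple of lines. The username and password variables here stand in for the workflow's input parameters.

```javascript
// Minimal sketch of the Format Login Request element: build the JSON body that
// POST /login expects and log it so we can eyeball the result.
var postText = '{"userId":"' + username + '","password":"' + password + '"}';
System.log("Login request body: " + postText);
```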
We see from the visual binding that we have inputs of Username and Password and an output of \u0026ldquo;postText\u0026rdquo;, which will be the string we pass to our POST REST call.\nThe scripting tab is where we\u0026rsquo;ll do the hard work of formatting the request. The string should look something like {\u0026quot;userId\u0026quot;:\u0026quot;username\u0026quot;,\u0026quot;password\u0026quot;:\u0026quot;password\u0026quot;} when we\u0026rsquo;re done (assuming we have a userId of username and a password of password). When we\u0026rsquo;ve got our string formatted, we log it so that we can be sure it\u0026rsquo;s being created correctly.\nNote: If you\u0026rsquo;re wondering why the string has extra \\\u0026quot; sequences in it, that is because the code vRO uses is javascript, and in javascript the \\ is an escape character. It means I can add quotes to the string without javascript treating them as anything other than text characters.\nThe next step is to call a workflow that will do our REST call to Rubrik. We pass the postText parameter to the new workflow. Inside this workflow we have a scriptable task to make our REST call, and we pass it the postText variable, which has been named \u0026ldquo;content\u0026rdquo; in the new workflow. The scriptable task also needs to know the REST operation we are going to perform; this item has been statically assigned as an attribute. We know any time that this is run, we are asking for a POST on the https://Rubrik.local/login resource.\nNow we move on to the scripting tab. This is where the magic happens. We need to modify the code under the \u0026ldquo;//set the request content type\u0026rdquo; section. We added a line to set the contentType = \u0026ldquo;application/json\u0026rdquo; and then we log the URL so that we know it\u0026rsquo;s correct. The next piece that needed to be added was the \u0026ldquo;request.setHeader\u0026rdquo; line, where we tell it to accept the application/json format. The rest of the code will execute the REST call.\nNotice the \u0026ldquo;content\u0026rdquo; variable is part of the request. This is where we pass the username and password to the Rubrik appliance. From here, we\u0026rsquo;ve executed a POST to the appliance and gotten back a token (or cookie, if you prefer that term). We pass this token to our next element where we log and format it. You can see from the visual binding tab that we take the output token from the previous element and will output a RubrikToken.\nThe scripting tab shows how we parsed out the token information. The first thing we did was create a new variable and parse the content since it was in JSON format. Once we have that information we append a \u0026ldquo;:\u0026rdquo; to the end of it, because the base64 version of the token requires a username:password format. The password in this case is empty, so we only add a colon to the end of the string. We take this content and output it as our RubrikToken attribute.\nThe next piece is one that Nick and I take no credit for. This is some code we got directly from the Crypto-JS project. This piece of code converts our Rubrik Token into a Base64 token that can be passed for authentication purposes. See the below disclaimer for more information. A BIG thanks to these guys for writing this nugget of goodness.\n(c) 2009-2013 by Jeff Mott. 
All rights reserved.\nRedistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:\nRedistributions of source code must retain the above copyright notice, this list of conditions, and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the following disclaimer in the documentation or other materials provided with the distribution. Neither the name CryptoJS nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \u0026ldquo;AS IS,\u0026rdquo; AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\nThe last step we do is to log the base64 Token before then adding it to a workflow output. This is done so that the entire workflow requires a username and password, and outputs just the base64Token so that we can use it over and over.\nIf you\u0026rsquo;re worried that this is too complicated to deal with, don\u0026rsquo;t worry. The plugin we created is neatly packaged into a single action so that all the pieces you saw above look like a single element requiring a username and a password and they output a base64Token. If you want more information about how to use this action from the plugin, head on over to SystemsGame.com for the details.\nBelow is an example of the logs generated when running the workflow. This should help to understand the process a little better.\nSummary You\u0026rsquo;ve made it this far, we now have a token that we can use for authentication purposes. Check out the next post in the series to see how we can actually use this code to do something useful with the Rubrik Hybrid Cloud Appliance.\n","permalink":"https://theithollow.com/2015/09/08/rubrik-api-logins-through-vrealize-orchestrator/","summary":"\u003cp\u003ePart three of this series focuses on how \u003ca href=\"http://twitter.com/vnickc\"\u003eNick Colyer\u003c/a\u003e and I built the authentication piece of the plugin so that we could then pass commands to the \u003ca href=\"http://rubrik.com\"\u003eRubrik\u003c/a\u003e appliance. An API requires a login just like any other portal would. 
Since this is a a REST API, we actually need to do a \u0026ldquo;POST\u0026rdquo; on the login resource to get ourselves an authentication token.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://github.com/rubrikinc/vRO-Workflow\"\u003eDownload the Plugin from Github\u003c/a\u003e\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eNOTE: The first version of this code has been refactored and migrated to Github in Rubrik\u0026rsquo;s Repository since the time of this initial writing\u003c/p\u003e","title":"Rubrik API Logins through vRealize Orchestrator"},{"content":"VMware announced Site Recovery Manager version 6.1 this week at VMworld in San Francisco California. Several new features were unveiled for VMware’s flagship Disaster Recovery product.\nStorage Profile Protection Groups Remember back in the old days (prior to today), when deploying a new virtual machine we had to ensure the datastore we were putting the virtual machine on was replicated? Not only that, but if this new VM was part of a group of similar VMs that needed to fail over together, we needed to make sure it was in the same protection group? Well VMware decided this was a cumbersome process and added “Storage Profile Protection Groups”.\nIn SRM 6.1 we will use storage profiles to map datastores with protection groups. Now we’ll be able to deploy a VM and select a storage profile to automatically place the VM in the correct datastore and even better, configure protection for the virtual machine.\nOrchestrated vMotion in Active-Active Datacenters Yeah, you kind of expected something like this right? VMware announced long distance vMotion and cross vCenter vMotions with vSphere 6.0 last VMworld. We can now start doing live migrations between physical locations so why not add this to the disaster recovery orchestration engine? I think this new feature might be very useful for some companies that routinely deal with disasters where there is some warning, like a hurricane. Prior to SRM 6.1 you would have been able to do a planned failover through a previous version of SRM, but it would have required a small amount of downtime. You might also have been able to do a long distance vMotion but this would have been some manual or scripted work. With SRM 6.1 the planned failover could be done in an orchestrated fashion with zero downtime!\nOK, you’ve probably got some questions about this, lets see if I can knock out a few of them.\nQuestion 1: What if my virtual machine has a lot of RAM and vMotions could take a very long time? Do I have to vMotion them for planned migrations?\nAnswer 1: Nope! If you have certain VMs that you know you never want to vMotion during your planned migration, you’ll have the option to select the VM and disable the vMotion option during protection.\nQuestion 2: What about the network?\nAnswer 2: Yeah, the network needs to be the same on both vCenters or your VM won’t be able to communicate with the rest of the network anymore. This is the same as a normal vMotion. SRM will be able to change IP Addresses like it always has, but this requires a small amount of downtime as you might guess.\nQuestion 3: Do I have two different planned recovery options then?\nAnswer 3: There is one planned recovery still, but now there is an option to enable the vMotion of eligible VMs.\nvCenter Spanned NSX Integration The last main feature of the product is its integration with the NSX product. You used to have to explicitly map each VM with a recovery network. 
Now in SRM 6.1 if you’re using NSX on both vCenters and the NSX networks are the same on each, SRM will map these networks for you. (yes, you can override this mapping if you need).\nOther Notes SRM 6.1 has also done some rearranging of the recovery plans so that you can get better visibility into what is going on during a failure. If you’ve ever had to troubleshoot a failover this is a great addition to help narrow down the problem. It also provides more places to but scripts into your failover, which is welcomed.\n","permalink":"https://theithollow.com/2015/08/31/vmware-site-recovery-manager-6-1-annouced/","summary":"\u003cp\u003eVMware announced Site Recovery Manager version 6.1 this week at VMworld in San Francisco California. Several new features were unveiled for VMware’s flagship Disaster Recovery product.\u003c/p\u003e\n\u003ch1 id=\"storage-profile-protection-groups\"\u003eStorage Profile Protection Groups\u003c/h1\u003e\n\u003cp\u003eRemember back in the old days (prior to today), when deploying a new virtual machine we had to ensure the datastore we were putting the virtual machine on was replicated? Not only that, but if this new VM was part of a group of similar VMs that needed to fail over together, we needed to make sure it was in the same protection group? Well VMware decided this was a cumbersome process and added “Storage Profile Protection Groups”.\u003c/p\u003e","title":"VMware Site Recovery Manager 6.1 Announced"},{"content":"In part one of this series, we went over some basics about what REST is and the methods involved in it. In this post, we\u0026rsquo;ll add a REST host and show you how to add some REST Operations. To begin, we need to add a REST host. In plain terms, this is simply a host that will be accepting an API call. In this case, we\u0026rsquo;re adding the Rubrik Hybrid Cloud Appliance as our REST host.\nAdd a REST Host We open vRealize Orchestrator and go to the design view. From there we navigate to Library \u0026ndash;\u0026gt; HTTP-REST \u0026ndash;\u0026gt; Configuration \u0026ndash;\u0026gt; Add a REST Host. Right click to run the workflow.\nThe workflow dialog opens and we need to enter some information about our REST host. We\u0026rsquo;ll give it a descriptive name so that we can differentiate it from all of the other REST hosts we\u0026rsquo;ll be adding with our newly acquired skills!\nNext, we need to add the URL of the REST host. This is easy for rubrik since it\u0026rsquo;s just the hostname on port 443. In other cases this may be something like https://hostname/api:443 or some other resource. We are also able to change the timeout settings and whether or not to accept the certificate.\nIf you\u0026rsquo;re paying close attention to the screens, you\u0026rsquo;ll notice I skipped the proxy settings because I don\u0026rsquo;t need a proxy. If you are using a proxy server, enter those settings. Next, we need to add information about the authentication. Rubrik is using Basic Authentication, but depending on the host you\u0026rsquo;re connecting to you may have a different type such as OAuth.\nSince we selected Basic for the authentication type, we will enter the username and a password to connect to the host. Also, notice the \u0026ldquo;Session Mode\u0026rdquo; which is set to shared session. This means that these credentials will be used for all connections regardless of who runs the workflows.\nLastly, we choose whether we\u0026rsquo;re going to check to see if the SSL certificate matches the hostname. 
For a production server we should say \u0026ldquo;Yes\u0026rdquo; but this is a lab and we can get away with a slightly less secure connection.\nThats the basics of adding a REST host. The REST host will have several operations tied to it which we\u0026rsquo;ll handle in the next section.\nREST Operations The REST operations listed below, map to methods that are available on the Rubrik appliance. Remember the: GET,PUT,DELETE,PATCH methods we talked about in the previous post? To add a REST operation run the \u0026ldquo;Add a REST operation\u0026rdquo; workflow. The first operation that we add is a POST method to get a login token from the appliance. We\u0026rsquo;ll get to more of this in the next blog post, but for now we just add the operation. We select the \u0026ldquo;Parent host\u0026rdquo; which is the REST host that we added earlier. Next we give the Operation a descriptive name so that we\u0026rsquo;ll know what commands we\u0026rsquo;re running. Next is the Template URL which is the REST Resource. The resource that you\u0026rsquo;ll need to use can be found in the API documentation or in my case, I found it in the Swagger-UI provided by Rubrik. The Template URL for getting a login token from Rubrik is \u0026ldquo;/login\u0026rdquo;. The HTTP Method is a POST and our content type is \u0026ldquo;application/json\u0026rdquo;. Again, I found the method and content type from the Swagger-UI provided by Rubrik, buty ou may need to check the API documentation for other vendors.\nWe can run the workflow again and add any other REST operations that we might want to use. The next operation I added was a GET command on the \u0026ldquo;/vm\u0026rdquo; resource. Guess what it does? Yep, it lists the VMs.\nI\u0026rsquo;ve also added a PATCH method on the \u0026ldquo;/vm/{id}\u0026rdquo; Lets take a second to look at this one. PATCH is used to modify an entry so we can safely assume I\u0026rsquo;m going to make a modification to a virtual machine. The {id} means that it requires a VM ID as part of the input. Since we only want to modify a single VM we need to ensure that we only return a single entry in the resource. A resource with \u0026ldquo;/VM\u0026rdquo; might return all the VMs whereas the \u0026ldquo;/VM/{id}\u0026rdquo; resource would only return a specific VM.\nAnd the last operation that we added was a GET on the /slaDomain resource. Rubrik calls their backup policies an slaDomain.\nSummary We\u0026rsquo;re still doing some of the setup here and haven\u0026rsquo;t run any workflows against the Rubrik appliance yet, but we\u0026rsquo;ll be getting to that in the next post. By now though, you should know how to setup a new REST host and setup the REST operations. Remember, that you really won\u0026rsquo;t know how to use the API unless you have proper documentation or swagger or something to view the API schema. Don\u0026rsquo;t feel bad if you didn\u0026rsquo;t know that there was an \u0026ldquo;slaDomain\u0026rdquo; or other resource. You just have to look some of this up.\nIf you want more information about using the REST hosts, check out Nick Colyer\u0026rsquo;s blog over at SystemsGame.com for more information on how we built the Rubrik plugins. OH, and if you haven\u0026rsquo;t downloaded the plugin from Github yet, what are you waiting for? 
Here\u0026rsquo;s a link.\n","permalink":"https://theithollow.com/2015/08/27/vrealize-orchestrator-rest-hosts-and-operations-for-rubrik/","summary":"\u003cp\u003eIn \u003ca href=\"/2015/08/getting-started-with-vrealize-orchestrator-and-rubriks-rest-api/\"\u003epart one of this series\u003c/a\u003e, we went over some basics about what REST is and the methods involved in it. In this post, we\u0026rsquo;ll add a REST host and show you how to add some REST Operations. To begin, we need to add a REST host. In plain terms, this is simply a host that will be accepting an API call. In this case, we\u0026rsquo;re adding the \u003ca href=\"http://rubrik.com\"\u003eRubrik\u003c/a\u003e Hybrid Cloud Appliance as our REST host.\u003c/p\u003e","title":"vRealize Orchestrator REST Hosts and Operations for Rubrik"},{"content":"What\u0026rsquo;s this REST thing everyone keeps talking about?\n\u0026ldquo;Oh, don\u0026rsquo;t worry, we have a REST API.\u0026rdquo;\nor\n\u0026ldquo;It\u0026rsquo;s just a simple REST call.\u0026rdquo;\nAt one point I was hearing these phrases and would get very frustrated. If REST is so commonplace or so simple to use, then why did I not know how to do it? If this sounds like you, then keep reading. I work for a company called \u0026ldquo;Ahead\u0026rdquo; as a consultant and they recently got a Rubrik Hybrid Cloud Appliance in their lab but my colleague Nick Colyer and I noticed that they didn\u0026rsquo;t have any vRealize Orchestrator Plugins for it. We decided to build these on our own, with the help of Chris Wahl and publish them for the community to use.\nThis post is the beginning to a series about building vRealize Orchestrator (vRO) plugins to connect to a Rubrik Hybrid Cloud Appliance via their built in API. Hopefully after reading the series you can see how Nick and I wrote the vRO plugins that you can freely download from Github.com.\nDownload the Plugin from Github\nNOTE: The first version of this code has been refactored and migrated to Github in Rubrik’s Repository since the time of this initial writing\nREST OK, lets start with the basics. REST stands for \u0026ldquo;Representational State Transfer\u0026rdquo; and is a way to pass information between systems. A RESTful API consists of a host, a resource, a method, headers and the payload.\nHost This one is easy to understand. It\u0026rsquo;s a server that requests will be sent to. REST isn\u0026rsquo;t magic and needs to know what server in which to send its commands, just like any other type of client-server communications. Generally this is done over HTTPS connections and https://rubrik.local:443 might be a valid host if your rubrik appliance is named \u0026ldquo;rubrik.local\u0026rdquo;.\nResource The resource identifies a \u0026ldquo;thing\u0026rdquo; you\u0026rsquo;re trying to access information from, or action you\u0026rsquo;re trying to perform. For instance, with Rubrik, we can view the virtual machines by accessing the /vm resource. If we combine the resource with the host, we will get \u0026ldquo;https://rubrik.local:443/vm.\u0026quot;\nMethod Methods are the actions that can be performed. 
There are several well defined methods that can be used such as:\nGET - Read information PUT - Write Information POST - Perform an action once PATCH - Modify Information DELETE -Delete Information For example, if we wanted to list all of the VMs in the Rubrik array, we would use a \u0026ldquo;GET\u0026rdquo; on the resource https://rubrik.local:443/vm.\nHeaders Headers are used to format the requests and the responses. For example, the header commonly includes authentication information as well as a format that the communication will be presented. Many times data is passed as a JSON object but it could be passed as XML or a string. The header will identity what format this information will be passed.\nPayload The payload is a generic term for \u0026ldquo;the data\u0026rdquo; that you\u0026rsquo;re trying to pass between systems. This could be, \u0026ldquo;nothing\u0026rdquo; all the way over to a large array of values. In the \u0026ldquo;GET\u0026rdquo; example earlier there isn\u0026rsquo;t a payload, but if we did a \u0026ldquo;PATCH\u0026rdquo; operation we would have to pass some information about what settings we plan to change.\nUsing Rubrik\u0026rsquo;s Swagger-UI Luckily, if you\u0026rsquo;re new to APIs, Rubrik uses a framework called \u0026quot; Swagger-UI\u0026rdquo; which is open source and shows you how to execute all of the APIs. There are examples for all of the operations. For instance, below, you can see the \u0026ldquo;Login\u0026rdquo; resource and the \u0026ldquo;POST\u0026rdquo; operation. From the screenshot, you can probably also tell that it requires a \u0026ldquo;userID\u0026rdquo; and \u0026ldquo;password\u0026rdquo; in the request and the header will request responses in the type of \u0026ldquo;application/json\u0026rdquo;. The swagger UI is a really nice way to visually see the API and really helps to get started.\nSummary The REST operation will be a crucial piece to our communications between vRealize Orchestrator and the Rubrik appliance so its important to understand the basics of REST before we dive into the plugins that Nick and I created. If you want to just get the plugin and work with it, I urge you to check out Nick\u0026rsquo;s posts over at SystemsGame.com for details on how to use the plugins. The \u0026ldquo;rest\u0026rdquo; of this series will focus on how we built the plugins and the REST operations needed to make the plugins work.\nThe next post will focus on setting up our REST hosts and operations for the Rubrik appliance so that we can run some commands.\n","permalink":"https://theithollow.com/2015/08/25/getting-started-with-vrealize-orchestrator-and-rubriks-rest-api-2/","summary":"\u003cp\u003eWhat\u0026rsquo;s this REST thing everyone keeps talking about?\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003e\u0026ldquo;Oh, don\u0026rsquo;t worry, we have a REST API.\u0026rdquo;\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cp\u003eor\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003e\u0026ldquo;It\u0026rsquo;s just a simple REST call.\u0026rdquo;\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cp\u003eAt one point I was hearing these phrases and would get very frustrated. If REST is so commonplace or so simple to use, then why did I not know how to do it? If this sounds like you, then keep reading. 
I work for a company called \u0026ldquo;Ahead\u0026rdquo; as a consultant and they recently got a Rubrik Hybrid Cloud Appliance in their lab but my colleague \u003ca href=\"http://twiter.com/vnickc\"\u003eNick Colyer\u003c/a\u003e and I noticed that they didn\u0026rsquo;t have any vRealize Orchestrator Plugins for it. We decided to build these on our own, with the help of \u003ca href=\"http://twitter.com/chriswahl\"\u003eChris Wahl\u003c/a\u003e and publish them for the community to use.\u003c/p\u003e","title":"Getting Started with vRealize Orchestrator and Rubrik's REST API"},{"content":"What\u0026rsquo;s this REST thing everyone keeps talking about?\n\u0026ldquo;Oh, don\u0026rsquo;t worry, we have a REST API.\u0026rdquo;\nor\n\u0026ldquo;It\u0026rsquo;s just a simple REST call.\u0026rdquo;\nAt one point I was hearing these phrases and would get very frustrated. If REST is so commonplace or so simple to use, then why did I not know how to do it? If this sounds like you, then keep reading. I work for a company called \u0026ldquo;Ahead\u0026rdquo; as a consultant and they recently got a Rubrik Hybrid Cloud Appliance in their lab but my colleague Nick Colyer and I noticed that they didn\u0026rsquo;t have any vRealize Orchestrator Plugins for it. We decided to build these on our own, with the help of Chris Wahl and publish them for the community to use.\nThis post is the beginning to a series about building vRealize Orchestrator (vRO) plugins to connect to a Rubrik Hybrid Cloud Appliance via their built in API. Hopefully after reading the series you can see how Nick and I wrote the vRO plugins that you can freely download from Github.com\nREST OK, lets start with the basics. REST stands for \u0026ldquo;Representational State Transfer\u0026rdquo; and is a way to pass information between systems. A RESTful API consists of a host, a resource, a method, headers and the payload.\nHost This one is easy to understand. It\u0026rsquo;s a server that requests will be sent to. REST isn\u0026rsquo;t magic and needs to know what server in which to send its commands, just like any other type of client-server communications. Generally this is done over HTTPS connections and https://rubrik.local:443 might be a valid host if your rubrik appliance is named \u0026ldquo;rubrik.local\u0026rdquo;.\nResource The resource identifies a \u0026ldquo;thing\u0026rdquo; you\u0026rsquo;re trying to access information from, or action you\u0026rsquo;re trying to perform. For instance, with Rubrik, we can view the virtual machines by accessing the /vm resource. If we combine the resource with the host, we will get \u0026ldquo;https://rubrik.local:443/vm.\u0026quot;\nMethod Methods are the actions that can be performed. There are several well defined methods that can be used such as:\nGET - Read information PUT - Write Information POST - Perform an action once PATCH - Modify Information DELETE -Delete Information For example, if we wanted to list all of the VMs in the Rubrik array, we would use a \u0026ldquo;GET\u0026rdquo; on the resource https://rubrik.local:443/vm.\nHeaders Headers are used to format the requests and the responses. For example, the header commonly includes authentication information as well as a format that the communication will be presented. Many times data is passed as a JSON object but it could be passed as XML or a string. 
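To make the headers concrete, here is a rough sketch of how the Rubrik calls built later in this series set them from a vRO scriptable task. The setHeader and contentType pieces are described in part three of the series; the createRequest and execute calls are the HTTP-REST plugin calls typically used to build and run the request, and the getVmsOperation and base64Token variables are assumed workflow attributes used purely for illustration.

// getVmsOperation is assumed to be a REST:RESTOperation attribute (a GET on the /vm resource)
// base64Token is the Basic authentication token produced by the login workflow
var request = getVmsOperation.createRequest([], null); // no URL template parameters and no body for a GET
request.contentType = "application/json"; // format of anything we send
request.setHeader("Accept", "application/json"); // ask the appliance to respond with JSON
request.setHeader("Authorization", "Basic " + base64Token); // authentication information rides in the header
var response = request.execute();
System.log("Status code: " + response.statusCode);
System.log("Response body: " + response.contentAsString);

The method and resource live in the REST operation itself, while the headers carry the authentication information and the agreed-upon format.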
The header will identity what format this information will be passed.\nPayload The payload is a generic term for \u0026ldquo;the data\u0026rdquo; that you\u0026rsquo;re trying to pass between systems. This could be, \u0026ldquo;nothing\u0026rdquo; all the way over to a large array of values. In the \u0026ldquo;GET\u0026rdquo; example earlier there isn\u0026rsquo;t a payload, but if we did a \u0026ldquo;PATCH\u0026rdquo; operation we would have to pass some information about what settings we plan to change.\nUsing Rubrik\u0026rsquo;s Swagger-UI Luckily, if you\u0026rsquo;re new to APIs, Rubrik uses a framework called \u0026quot; Swagger-UI\u0026rdquo; which is open source and shows you how to execute all of the APIs. There are examples for all of the operations. For instance, below, you can see the \u0026ldquo;Login\u0026rdquo; resource and the \u0026ldquo;POST\u0026rdquo; operation. From the screenshot, you can probably also tell that it requires a \u0026ldquo;userID\u0026rdquo; and \u0026ldquo;password\u0026rdquo; in the request and the header will request responses in the type of \u0026ldquo;application/json\u0026rdquo;. The swagger UI is a really nice way to visually see the API and really helps to get started.\nSummary The REST operation will be a crucial piece to our communications between vRealize Orchestrator and the Rubrik appliance so its important to understand the basics of REST before we dive into the plugins that Nick and I created. If you want to just get the plugin and work with it, I urge you to check out Nick\u0026rsquo;s posts over at SystemsGame.com for details on how to use the plugins. The \u0026ldquo;rest\u0026rdquo; of this series will focus on how we built the plugins and the REST operations needed to make the plugins work.\nThe next post will focus on setting up our REST hosts and operations for the Rubrik appliance so that we can run some commands.\n","permalink":"https://theithollow.com/2015/08/25/getting-started-with-vrealize-orchestrator-and-rubriks-rest-api/","summary":"\u003cp\u003eWhat\u0026rsquo;s this REST thing everyone keeps talking about?\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003e\u0026ldquo;Oh, don\u0026rsquo;t worry, we have a REST API.\u0026rdquo;\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cp\u003eor\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003e\u0026ldquo;It\u0026rsquo;s just a simple REST call.\u0026rdquo;\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cp\u003eAt one point I was hearing these phrases and would get very frustrated. If REST is so commonplace or so simple to use, then why did I not know how to do it? If this sounds like you, then keep reading. I work for a company called \u0026ldquo;Ahead\u0026rdquo; as a consultant and they recently got a Rubrik Hybrid Cloud Appliance in their lab but my colleague \u003ca href=\"http://twiter.com/vnickc\"\u003eNick Colyer\u003c/a\u003e and I noticed that they didn\u0026rsquo;t have any vRealize Orchestrator Plugins for it. We decided to build these on our own, with the help of \u003ca href=\"http://twitter.com/chriswahl\"\u003eChris Wahl\u003c/a\u003e and publish them for the community to use.\u003c/p\u003e","title":"Getting Started with vRealize Orchestrator and Rubrik's REST API"},{"content":"","permalink":"https://theithollow.com/sample-page/","summary":"","title":"Home"},{"content":"\nFellow Ahead employee Nick Colyer and I built some vRealize Orchestrator plugins for automating the configuration of virtual machine backups with the Rubrik Converged Data Management solution. 
The plugin will allow you to add virtual machines to existing service level domains. In the future, this plugin will be updated to include the ability to create snapshots on demand, create new SLA Domains and restore VMs all from vRealize Orchestrator. These plugins can be extended to solutions like vRealize Automation so that backup configuration can become of your company\u0026rsquo;s standard virtual machine build process.\nDownload the Plugin from Github\nNOTE: The first version of this code has been refactored and migrated to Github in Rubrik’s Repository since the time of this initial writing\nNick Colyer has written a series of blog posts all detailing how to use the vRO plugin and get it working. Please check it out over at SystemsGame.com for instructions\nI\u0026rsquo;ve written a series of posts outlining the process of creating these plugins. The series explains REST, Rubrik and how to use vRO to extend to third party systems. You\u0026rsquo;ll learn how to build workflows and create your own REST operations.\nPart 1 - Learning REST with vRealize Orchestrator Part 2 - Adding REST Hosts and REST Operations Part 3 - Building Authentication Workflow for Rubrik Part 4 - Get VM Information from Rubrik Part 5 - Assign a VM to a Rubrik slaDomain ","permalink":"https://theithollow.com/vro-plugins-for-rubrik/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/08/vROPluginRubrikLogo.png\"\u003e\u003cimg alt=\"vROPluginRubrikLogo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/08/vROPluginRubrikLogo-1024x309.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eFellow Ahead employee \u003ca href=\"https://twitter.com/vnickc\"\u003eNick Colyer\u003c/a\u003e and I built some vRealize Orchestrator plugins for automating the configuration of virtual machine backups with the \u003ca href=\"http://www.rubrik.com\"\u003eRubrik\u003c/a\u003e Converged Data Management solution. The plugin will allow you to add virtual machines to existing service level domains. In the future, this plugin will be updated to include the ability to create snapshots on demand, create new SLA Domains and restore VMs all from vRealize Orchestrator. These plugins can be extended to solutions like vRealize Automation so that backup configuration can become of your company\u0026rsquo;s standard virtual machine build process.\u003c/p\u003e","title":"vRO Plugins for Rubrik"},{"content":"I ran into that funny problem where if you have so many wireless devices you\u0026rsquo;re overloading your tiny wireless router that you\u0026rsquo;ve had for 5 years. After looking around a bit I settled on the AC3200 Triband Router from Linksys. I wanted something that would be really powerful to handle all of my devices and something with a cool factor.\nThe device arrived and had some simple instructions to configure it. Connect to the default SSID via a wireless device and open up your web browser to myrouter.local to get connected. The setup had a \u0026ldquo;Quick Setup\u0026rdquo; mode to get everything running quickly but I found that the quick setup would not work for my environment. The quick setup expects that you\u0026rsquo;ve connected your Wireless Router directly to a cable modem and in my case I\u0026rsquo;m connected to a layer three switch behind an ASA firewall. 
After resetting the router and doing the manual setup though, everything was good.\nI don\u0026rsquo;t get too excited about user interfaces but this one seemed really sharp to me.\nThe network MAP seemed really neat to me. Maybe because my old router was ancient but being able to see all my devices graphically was neat to me. I do want to note that the device names show up on the map but I\u0026rsquo;ve removed them from my screenshot.\nQOS was made really simple to do. You can drag and drop devices into the prioritization screen and set your priorities. Likewise you can also select whichever device you want and set parental restrictions only for that device.\nOne thing I didn\u0026rsquo;t care for with my last router is that it couldn\u0026rsquo;t do VLAN tagging. The Linksys AC3200 has a really simple interface to allow you to add VLAN tags to any of the wired ports.\nIn addition to providing both 5GHz and 2GHz bands, the router also provides the ability to handle Guest Wireless and a DMZ. This should enable about any combination of connections you might need for a home network.\nIt might not be the most useful feature but the router provides an app for both Apple and Android. I don\u0026rsquo;t usually need to configure my router from my phone, but I don\u0026rsquo;t usually need to check my smoke detector either, and I can do that. The performance of this thing is maybe the best part. The transfer rate I was getting on my Mac was a little over 100 Mbps with my old 802.11n router. The new 802.11ac router is transferring around 700Mbps. You might not notice this browsing the web, but will for sure notice when you try to transfer files.\nNegatives There was one major issue I found with this router. Any time I would configure settings in the router, I would lose my connections. Simple things like changing the password or enabling guest wifi access would make me lose my private network wireless and I\u0026rsquo;d have to reboot the router. This is a big deal, but only when initially configuring the router. Once it was setup, its been pretty solid. I hope that Linksys will fix this with a firmware update down the road.\nSummary Overall I\u0026rsquo;m pretty happy with this router. If the dropped wireless continues to be a problem then it will be a serious let down, but as long as I know that it only happens when configuring settings on the router I\u0026rsquo;ll live with it. Performance is great but so far the stability seems a bit lacking.\n","permalink":"https://theithollow.com/2015/08/17/linksys-ac3200-review/","summary":"\u003cp\u003e\u003cimg alt=\"LinksysRouter6\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/08/LinksysRouter6-264x300.png\"\u003e\u003cimg alt=\"LinksysRouter7\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/08/LinksysRouter7-239x300.png\"\u003eI ran into that funny problem where if you have so many wireless devices you\u0026rsquo;re overloading your tiny wireless router that you\u0026rsquo;ve had for 5 years. After looking around a bit I settled on the \u003ca href=\"http://amzn.to/1Wbs7tc\"\u003eAC3200 Triband Router from Linksys\u003c/a\u003e. I wanted something that would be really powerful to handle all of my devices and something with a cool factor.\u003c/p\u003e\n\u003cp\u003eThe device arrived and had some simple instructions to configure it. Connect to the default SSID via a wireless device and open up your web browser to myrouter.local to get connected. 
The setup had a \u0026ldquo;Quick Setup\u0026rdquo; mode to get everything running quickly but I found that the quick setup would not work for my environment. The quick setup expects that you\u0026rsquo;ve connected your Wireless Router directly to a cable modem and in my case I\u0026rsquo;m connected to a layer three switch behind an ASA firewall. After resetting the router and doing the manual setup though, everything was good.\u003c/p\u003e","title":"Linksys AC3200 Review"},{"content":"When you execute a Cisco UCS Director workflow you\u0026rsquo;re usually prompted to enter in some information. Usually this is something like a virtual machine name, or an IP Address, even some credentials possibly. The values that you enter can be formatted so that they come from a list and the user just has to select the right value. This helps immensely in the amount of troubleshooting you have to do because only specific verified values can be displayed.\nSo what happens when you have a list of values (LOV) that is dependent upon another selection? To do this in UCS Director we rely on the filter criteria. Lets assume that I\u0026rsquo;ve got a workflow that wants to get a food order from a popular fast food chain. Depending on which fast food restaurant is selected, the menu items are different. For instance, only McDonalds has a Big Mac.\nGo to Policies \u0026ndash;\u0026gt; Orchestration \u0026ndash;\u0026gt; Custom Workflow Inputs tab and create a custom workflow input. In the example below I\u0026rsquo;ve created a list of three popular fast food chains. Now we create a second list of values for food items. In the example below, I\u0026rsquo;ve created fast food items that might exist in each of the fast food chains, but notice the label has the name of the chain in it. This is an important step. The label must contain the name of the referenced item from above and it must be spelled the same way!\nNow we create a workflow. Go to Policies \u0026ndash;\u0026gt; Orchestration \u0026ndash;\u0026gt; Workflows and add a new workflow. In the User Inputs section of the workflow, add the two List of Values you\u0026rsquo;ve created earlier. I added both the Restaurant and the FoodItem Lists I created earlier. On the second list of values, enter an \u0026ldquo;Admin Input Criteria.\u0026rdquo; This criteria should look like \u0026ldquo;CONTAINS ${InputlabelofList1}.\u0026rdquo; The filter criteria I used was \u0026ldquo;CONTAINS ${Restaurant}.\u0026rdquo; This means that the FoodItem must contain the same value as the Restaurant item selected earlier.\nTo demonstrate this, I can execute the workflow. If I select the Restaurant \u0026ldquo;In-N-Out\u0026rdquo; the only food items that I\u0026rsquo;m able to select also have \u0026ldquo;In-N-Out\u0026rdquo; in the title.\nConversely, if I select \u0026ldquo;McDonalds\u0026rdquo; as the Restaurant, I am only able to pick the \u0026ldquo;McDonalds\u0026rdquo; food items.\nThere are plenty of other cool things to do with UCS Director and some publicly available workflows that you can download right from the Cisco Communities site. Please take a look at this site if you\u0026rsquo;re looking for examples or pre-build workflows for your environment. A big thanks to Orf Gelbrich for his time and efforts with the site.\n","permalink":"https://theithollow.com/2015/08/10/ucs-director-dynamic-list-of-values/","summary":"\u003cp\u003eWhen you execute a Cisco UCS Director workflow you\u0026rsquo;re usually prompted to enter in some information. 
Usually this is something like a virtual machine name, or an IP Address, even some credentials possibly. The values that you enter can be formatted so that they come from a list and the user just has to select the right value. This helps immensely in the amount of troubleshooting you have to do because only specific verified values can be displayed.\u003c/p\u003e","title":"UCS Director Dynamic List of Values"},{"content":"Infoblox is a pretty popular IP Address Management (IPAM) solution for many shops. Wouldn\u0026rsquo;t it be nice to integrate your automation solution such as vRealize Automation, with your existing IPAM system? Well, don\u0026rsquo;t worry. You can!\nInfoblox Setup This post isn\u0026rsquo;t going to go into great detail about the setup of the Infoblox appliance but we do need to make sure that we\u0026rsquo;re licensed for API usage correctly. Ensure that the infoblox appliance has the \u0026ldquo;Cloud Network Automation\u0026rdquo; license applied to it. This is an easy thing to check. If your appliance has the \u0026ldquo;Cloud\u0026rdquo; tab, then the license is enabled.\nAnother piece of info to keep in mind is to create an API administrator. The API admin can update the appliance via the API only. This is a good way to keep your appliance secure and normal administrators don\u0026rsquo;t have access to the API by default so be sure to check it.\nvRO Plugins All of the magic that happens comes from the vRealize Orchestrator plugins. vRealize Automation will leverage Orchestrator to do all of the heavy lifting. Download the latest version version of the Infoblox plugin. You can get this directly from Infoblox if you\u0026rsquo;d like. : https://www.infoblox.com/downloads/software/vmware-cloud-adapter\nI also found that I needed to install the Prop.Toolkit package for some additional helper plugins. http://www.infoblox.com/sites/infobloxcom/files/downloads/Infoblox_VCO_Plugin_Release_v_3_0_0_1.zip\nYou should also have the latest vRA plugins installed to connect Orchestrator with vRealize Automation. I\u0026rsquo;m assuming this is done, but if not, please download install and configure the vRA plugins for vRO.\nvRO Configuration Once you\u0026rsquo;ve installed the plugins we need to configure the plugin to point to our Infoblox appliance. To do this, login to the vRO configuration page and go to the Network tab. Enter the URL of your infoblox appliance and click import to download the SSL Certificate.\nNext up, go to the Infoblox plugin and add a new IPAM connection.\nNow we can open our vRO client and run a few workflows to setup the IPAM plugin. Go to the IPAM Directory and run the following workflows:\nAdd an IaaS host Wrapper Add a vCAC host Wrapper Install vCO customization Wrapper Once the wrapper workflows have been run, we\u0026rsquo;ve connected vRA all the way through vRO and over to the Infoblox appliance. The next step is to use the plugin to assign an IP Address.\nBuild Profiles vRealize Automation will use build profiles to store variable that are passed to vRO to do additional configuration. We want to add a new build profile for Infoblox so that we can attach it to our blueprints.\nRun the “Create Build Profile for Reserve an IP for vCAC VM in Network” workflow. This will ask us for the vRA Host and what to name the profile.\nIf we go over to our vRealize Automation portal, we should see a build profile under Infrastructure \u0026ndash;\u0026gt; Blueprints \u0026ndash;\u0026gt; Build Profiles. 
Open up the build profile we just created through vRealize Orchestrator. Update the values for the properties for which I\u0026rsquo;ve added the arrows. Then save the blueprint.\nGo into your blueprint and click the properties tab. Select the build profile we just modified and save the blueprint.\nCaveats One thing to be aware of is that if you\u0026rsquo;re using a Network Profile, it will break the IPAM configuration. Check your Network settings in your reservations and if you have a network profile assigned, you\u0026rsquo;ll need to remove it.\nRequest a blueprint and marvel in your IPAM skills! If you want to check on the status of your workflow check the \u0026ldquo;Main\u0026rdquo; section in the IPAM plugin in vRO. You\u0026rsquo;ll see the workflows kick off during the deployment process.\nThe Infoblox Cloud tab will show your Tenants and your deployed VMs.\n","permalink":"https://theithollow.com/2015/08/03/vrealize-automation-infoblox-integration/","summary":"\u003cp\u003eInfoblox is a pretty popular IP Address Management (IPAM) solution for many shops. Wouldn\u0026rsquo;t it be nice to integrate your automation solution such as vRealize Automation, with your existing IPAM system? Well, don\u0026rsquo;t worry. You can!\u003c/p\u003e\n\u003ch2 id=\"infoblox-setup\"\u003eInfoblox Setup\u003c/h2\u003e\n\u003cp\u003eThis post isn\u0026rsquo;t going to go into great detail about the setup of the Infoblox appliance but we do need to make sure that we\u0026rsquo;re licensed for API usage correctly. Ensure that the infoblox appliance has the \u0026ldquo;Cloud Network Automation\u0026rdquo; license applied to it. This is an easy thing to check. If your appliance has the \u0026ldquo;Cloud\u0026rdquo; tab, then the license is enabled.\u003c/p\u003e","title":"vRealize Automation Infoblox Integration"},{"content":"Cisco UCS Director gives us some great automation and orchestration capabilities in the product. One thing I\u0026rsquo;ve noticed though is the need to customize the actions that can performed on virtual machines after deployment (Sometimes called Day 2 Operations). This post explains how to make some custom buttons for end users to manage their workloads more effectively.\nCreate Workflow Creating a workflow is out of the scope of this post, but we need to have a workflow to use for our examples. I\u0026rsquo;ve created a very simple workflow to create a VM Snapshot and email the user when it happens. To create your own workflow Go to Policies \u0026ndash;\u0026gt; Orchestration \u0026ndash;\u0026gt; Workflows to get started.\nRemove Default Buttons There is a specific set of default buttons that are available out of the box. We can enable or disable these buttons in the \u0026ldquo;End User Policy.\u0026rdquo;\nGo to Policies \u0026ndash;\u0026gt; Virtual/Hypervisor Policies \u0026ndash;\u0026gt; Service Delivery \u0026ndash;\u0026gt; End User Self-Service Policy. Select or unselect the actions you need. In this case I\u0026rsquo;ve removed them to clean up the GUI.\nCreate Custom User Action Now we\u0026rsquo;re going to create the button. To do this we create a User Action Policy.\nGo to Policies \u0026ndash;\u0026gt; Orchestration \u0026ndash;\u0026gt; User VM Action Policy and create a new policy. Give the policy a name and select the number of actions that will be part of the policy. A single policy can have multiple actions (or buttons) associate with it. In this case I\u0026rsquo;ve only picked one action.\nUnder the VM Actions, give it a name. 
The name will be the name of the button that is displayed to the end user. Select the workflow we created earlier.\nModify VDC Now we modify our VDC to include the new User Action Policy.\nGo to Policies \u0026ndash;\u0026gt; Virtual/HyperVisor Policies \u0026ndash;\u0026gt; Virtual Data Centers\nUser Catalog Once all of the configuration settings have been done we can login to the UCS Director Portal as an end user. Select a VM that we\u0026rsquo;ve provisioned and our new button should show up. Notice that there is a \u0026ldquo;*\u0026rdquo; next to the button to show that its a custom button. If you\u0026rsquo;re wondering where the icon comes from well that is a bit of a mystery. Depending on the name you use, the button will look differently. If the name has \u0026ldquo;Add\u0026rdquo; in it then the icon is a plus sign, if it\u0026rsquo;s got \u0026ldquo;Delete\u0026rdquo; in it there is a red X for an icon.\n","permalink":"https://theithollow.com/2015/07/27/create-a-custom-button-in-ucs-director/","summary":"\u003cp\u003eCisco UCS Director gives us some great automation and orchestration capabilities in the product. One thing I\u0026rsquo;ve noticed though is the need to customize the actions that can performed on virtual machines after deployment (Sometimes called Day 2 Operations). This post explains how to make some custom buttons for end users to manage their workloads more effectively.\u003c/p\u003e\n\u003ch2 id=\"create-workflow\"\u003eCreate Workflow\u003c/h2\u003e\n\u003cp\u003eCreating a workflow is out of the scope of this post, but we need to have a workflow to use for our examples. I\u0026rsquo;ve created a very simple workflow to create a VM Snapshot and email the user when it happens. To create your own workflow Go to Policies \u0026ndash;\u0026gt; Orchestration \u0026ndash;\u0026gt; Workflows to get started.\u003c/p\u003e","title":"Create a Custom Button in UCS Director"},{"content":" Long is the way and hard, that out of hell leads up to light - Milton\nApparently, Milton has been through the VCDX process. It is a challenge that will test your resolve and you will probably learn a lot along the way. You\u0026rsquo;ll also be glad when its over.\nI\u0026rsquo;ve been good at many things in my life, but never felt like I was great at anything. I\u0026rsquo;ve succeeded at most things I\u0026rsquo;ve attempted, but the VCDX was a goal I truly didn\u0026rsquo;t think I was capable of achieving. Chris Colotti mentioned in one of his posts that you need to decide why you\u0026rsquo;re going for the VCDX in the first place. In my case, I was doing it to prove to myself that I could do it. The process really taught me something about myself that I didn\u0026rsquo;t know. It was my own personal Vision Quest. (Queue Lunatic Fringe them song here)\nMy Journey really started when I joined Ahead. I was in a company that kept me doing a little less tech and I knew that if I wanted to really make a stab for the VCDX, I needed to be in a place that nurtured that type of endeavor. Working at Ahead is like drinking from a firehose. Every single day I spend working there I learn something new. I submitted my first VCDX design in December of 2014 with the hopes of defending in February 2015. Unfortunately, I was not asked to defend and here is why this post is labeled \u0026ldquo;Mea Culpa\u0026rdquo;. I was embarrassed about my design being rejected and didn\u0026rsquo;t talk too much about it publicly, which is why I\u0026rsquo;m apologizing. 
I\u0026rsquo;ve seen some very intelligent people have their designs rejected or not pass the defense stage, and I will tell you now, that if this happens to you, do not be ashamed. I work for a company that is \u0026ldquo;practically\u0026rdquo; packed full of VCDXs as well as other geniuses and failing to meet this goal on the first pass made me feel like I didn\u0026rsquo;t even fit in. If this happens to you, ignore it. I will tell you that I was very disheartened and it really destroyed my confidence not only about being able to finish the VCDX, but really question how much I knew about my day job. This was very tough for me, but I knew the only way to move past this was to finish the VCDX program. I couldn\u0026rsquo;t spend the rest of my career with the feeling that I just came up short. The failure made me even more determined to hit my goal.\nI cursed at the feedback I got for a week or so before finally accepting that all of the comments had merit and that I needed to make (some/major) changes to my design. I went back to the drawing board, completely wiped out some sections and started fresh. I submitted the new design in April and this time I was invited to defend in June.\nInitially I thought that preparing for the defense was going to be the downhill part of the work. I thought that preparing a PowerPoint presentation with key features of my design would be easy. It turns out that while this isn\u0026rsquo;t incredibly difficult, it was very time consuming and I went over it many times, deleting and replacing things on several occasions. In addition, I assumed that the knowledge that I currently had about vSphere should be sufficient to mount a defense. Again, I was wrong. (Are you sensing a theme?)\nI went through a mock defense at Ahead with three existing VCDXs and as you might assume, they asked some tough questions. I felt that I did a pretty good job of defending my design but I wasn\u0026rsquo;t able to show a deep level of understanding of some areas. Specifically being able to answer questions quickly and succinctly . This is when I realized that I needed to study things that I thought that I already understood, just at a deeper level.\nDefense Day Show up. Be Awesome - Some Smart Guy\nAfter going over some homemade flash cards and a lot of white papers, I felt confident that I could defend my design. I flew to Palo Alto the day before the defense and got some dinner that I didn\u0026rsquo;t think would upset my stomach the next day. I spent the evening studying my design some more and then made sure I got a good nights sleep. (This is easier said than done. It was hard to sleep the night before and was even hard for me to eat anything.)\nI showed up at the VMware campus almost an hour early for fear that I wouldn\u0026rsquo;t be able to find my way around. When I got there I found out that I was in the wrong place and couldn\u0026rsquo;t find any notifications about which building I was supposed to be in once I got on campus. (VMware\u0026rsquo;s campus is a big place!) Long story short, I was 20 minutes late for my defense but the panelists were VERY gracious and didn\u0026rsquo;t give me a hard time about it. I felt terrible about the miscommunication but they didn\u0026rsquo;t hold any of this against me. The process of running across campus and showing up out of breath and 20 minutes late did not help my nerves though. Strangely, once the design defense started my nerves were gone. 
I was prepared and I knew my material.\nWaiting The defense was over and I felt a subtle calm wash over me. I was pretty sure that I didn\u0026rsquo;t do a very good job of the design session which really bothered me because its the thing I do every day. How could I stumble over myself on that section? I was confident that I passed, and then sure that I didn\u0026rsquo;t, and then sure that I didn\u0026rsquo;t and finally confident that I\u0026rsquo;d passed. This flip flopping of certainty went on constantly for the week I waited for my results.\nI hopped out of bed one morning a week after my defense and found my results. \u0026ldquo;Congratulations you\u0026rsquo;re VCDX 195\u0026rdquo;\nSuggestions The world meets nobody halfway. When you want something, you\u0026rsquo;ve gotta take it. - Sylvester Stallone\nIf you are considering a path to the VCDX, here are some quick tips based on my experience.\nTry to find a support group to help you - Coworkers, friends or a mentor can be a great way to keep you motivated and its good to have someone to bounce an idea off. Over prepare - Know your design, inside and out. Understand how and why all of the pieces work and why you made the decisions. Be prepared for failure - It happens. This is a difficult task and you should not feel bad about missing the mark. If you\u0026rsquo;re not prepared that this could happen to you, a failure could be devastating. Jump in over your head - If you\u0026rsquo;re going to do it. DO IT! Its an intimidating goal, but if you\u0026rsquo;re going to get it done you\u0026rsquo;ve got to dedicate yourself to it. Don\u0026rsquo;t dip your toes into the VCDX process, jump in head first. ","permalink":"https://theithollow.com/2015/07/20/vcdx-vision-quest-and-mea-culpa/","summary":"\u003cblockquote\u003e\n\u003cp\u003eLong is the way and hard, that out of hell leads up to light - Milton\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003cp\u003eApparently, Milton has been through the VCDX process. It is a challenge that will test your resolve and you will probably learn a lot along the way. You\u0026rsquo;ll also be glad when its over.\u003c/p\u003e\n\u003cp\u003eI\u0026rsquo;ve been good at many things in my life, but never felt like I was great at anything. I\u0026rsquo;ve succeeded  at most things I\u0026rsquo;ve attempted, but the VCDX was a goal I truly didn\u0026rsquo;t think I was capable of achieving. \u003ca href=\"https://twitter.com/ccolotti\"\u003eChris Colotti\u003c/a\u003e mentioned in one of his posts that you need to decide why you\u0026rsquo;re going for the VCDX in the first place. In my case, I was doing it to prove to myself that I could do it. The process really taught me something about myself that I didn\u0026rsquo;t know. It was my own personal Vision Quest. (Queue Lunatic Fringe them song here)\u003c/p\u003e","title":"VCDX Vision Quest and Mea Culpa"},{"content":"I have to be honest here, I\u0026rsquo;d heard of Scale Computing before but never really paid too much attention to them. That is, until I got to see them present at Virtualization Field Day 5 in Boston Massachusetts this year.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtualization Field Day 5. This was the only compensation given and did not influence the content of this article.\nDon\u0026rsquo;t forget about these guys if you\u0026rsquo;ve bought into the Hyper-Converged Infrastructure idea. Scale Computing targets small and medium businesses specifically, due to their low cost solution. 
The Scale nodes come in three models currently and the best part is that you don\u0026rsquo;t need to talk to a sales rep. to find out how much they list for.\nHC1000 = $25,499\nHC2000 = $37,499\nHC4000 = $67,499\nThese prices come with a few caveats though. The existing version of the Scale Computing platform only supports a max of 8 nodes. You could have multiple groups of nodes but you would have to manage each set of 8 independently from one another as of right now. Also, if you really want to run Hyper-V or vSphere as your virtualization solution, you\u0026rsquo;re out of luck. The Scale solution leverages KVM for a hypervisor and this is likely whey the prices is so competitive. Also, there is not a lot of customization available for how availability such as number of failed nodes the cluster tolerates or making sure that VMs are housed on separate nodes (anti-affinity).\nThe hyper-converged infrastructure solution touts an easy to administer HTML5 GUI that you could run from a smartphone if necessary. The clustering setup takes a pretty complicated set of Linux instructions and places them into an easy to configure GUI. The solution also provides asynchronous remote replication if you are in the need for a disaster recovery solution.\nDesign Decisions\nIf you\u0026rsquo;re in the market for a new solution, take a second and consider your requirements though. Do you need any of the feature that are only available with vSphere or Hyper-V? Do you need a single cluster with more than 8 nodes? Is your budget constrained so that you can\u0026rsquo;t afford a more expensive play? Do you need to specify specific availability requirements, or do you want the system to be available after losing a node with minimal downtime and don\u0026rsquo;t want to think about messing with the configurations?\nScale Computing definitely has a place in the industry, even if it\u0026rsquo;s a small niche. They currently support over 1100 customers and are doing so with a support staff of eight. \u0026ldquo;The entire company is only sixty-five people\u0026rdquo; CTO Jason Collier noted during the VFD5 session. It is obviously a point of pride with the company that they are doing so much with so little.\nSummary\nI won\u0026rsquo;t sit here and tell you that I think this is an enterprise ready solution but for a small to medium business with very few IT Administrators, this may be a pretty good play. The ability to purchase your compute, hypervisor, and storage all in one shot and getting some enterprise capabilities (replications, clustering, etc.) is pretty nice. Combine this with a pretty nifty price tag and easy setup and you\u0026rsquo;re well on your way. I\u0026rsquo;ll also throw out there that it seems like a risk that Scale Computing only has eight tech support engineers, but it\u0026rsquo;s also pretty likely that you\u0026rsquo;ll have a good relationship with them. If you call for support more than a few times, you might know these guys by name!\n","permalink":"https://theithollow.com/2015/07/13/straight-forward-convergence-with-scale/","summary":"\u003cp\u003eI have to be honest here, I\u0026rsquo;d heard of \u003ca href=\"http://www.scalecomputing.com/\"\u003eScale Computing\u003c/a\u003e before but never really paid too much attention to them. 
That is, until I got to see them present at Virtualization Field Day 5 in Boston Massachusetts this year.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtualization Field Day 5. This was the only compensation given and did not influence the content of this article.\u003c/p\u003e","title":"Straight Forward Convergence with Scale"},{"content":"What can I say? We started building servers on top of servers and it temporarily blew people\u0026rsquo;s minds. The next logical step is to build a cloud inside a cloud. Ravello Systems is trying to make this process simple and easy. Ravello Systems was kind enough to present at Virtualization Field Day 5 in Boston at the end of June and I\u0026rsquo;m happy that I was able to participate at a delegate. They presented some really fun technology.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtualization Field Day 5. This was the only compensation given and did not influence the content of this article.\nSo What is it? The short answer is Ravello Systems allows you to deploy your private cloud and deploy it on either Google Cloud or Amazon EC2. Big Whoop right? We can spin up virtual machines on Google Cloud or AWS by ourselves, why do we need Ravello Systems in the middle? Well, Ravello makes all of this provisioning simple and easy to do and can create cloud templates right from your vCenter. Ravello allows you to import a VM or template right from your vCenter server and add it to a catalog. From there, you can add more VMs or appliances (like load balancers) to build a cloud blueprint via Ravello\u0026rsquo;s \u0026ldquo;Visio\u0026rdquo; style drag and drop interface. These blueprints can be published to either Google Cloud or AWS to be run and the great thing is that you can redeploy your multi-tier apps over and over again from the Ravello user interface.\nHow does it work? In order to provide these repeatable cloud blueprints, Ravello lays down their own hypervisor (HVX) on top of EC2 or Google Cloud. When you deploy a cloud blueprint, Ravello\u0026rsquo;s decision engine kicks in to decide how many resources need to be provisioned and will deploy their HVX on top of the existing public cloud\u0026rsquo;s hypervisor, and then lays down your blueprint on top of that. This obviously creates some technical challenges such as how network routing will work etc.\nOne question you might have is how much of an effect on the price or performance does that extra hypervisor cost you? Ravello claims that a single vCPU on their hypervisor is 98%-99% as good as running directly on the cloud provider\u0026rsquo;s hypervsior. Memory drops to 80% as good, and the network is 80% as well with one exception. If the VM to VM traffic is all on a single HVX instance, the network utilization is up to 2x faster because it never has to leave the host.\nUse Cases There are a few use cases for their service and just like virtualization was strictly for Test/Dev in the beginning, this is a good use case for Ravello to start as well. You can recreate your entire vSphere environment on top of a public cloud if you want to. Sales demos are another one since an entire solution can be rebuilt from anywhere an internet connection exists.\ninceptiontop\nTo me, the best use case is for training. How many times have you been to a training class only to find out that the lab they have you working in is really underpowered due to resource constraints. 
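If you are wondering whether a nested lab would actually be big enough, the overhead figures quoted above make a quick back-of-the-envelope check easy. Here is a rough sketch of that math; the efficiency factors are the rough numbers quoted in the session and the lab totals are made up purely for illustration.

# Hedged illustration only: factors are the rough figures quoted, lab size is invented
CPU_EFFICIENCY = 0.98   # ~98-99% of native vCPU performance on HVX
MEM_EFFICIENCY = 0.80   # ~80% effective memory when nested

def effective_lab_capacity(total_vcpus, total_ram_gb):
    """Back-of-the-envelope view of what a nested lab really gets to use."""
    return {
        "effective_vcpus": round(total_vcpus * CPU_EFFICIENCY, 1),
        "effective_ram_gb": round(total_ram_gb * MEM_EFFICIENCY, 1),
    }

# Example: cloud instances totalling 32 vCPU / 128 GB RAM for a class lab
print(effective_lab_capacity(32, 128))
# {'effective_vcpus': 31.4, 'effective_ram_gb': 102.4}

Even after the nesting tax, that is usually far more horsepower than the shared lab gear most training classes get.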
Now the entire lab can be built on a public cloud provider and destroyed at the end of the week.\nHome Labs would fit into this training category as well. ESXi is another image that can be deployed on the Ravello solution. This allows you to test new VMware features without having any new hardware. Fellow VFD5 delegate Mike Preston used Ravello to test out long distance vMotion from his home lab to a Public Cloud nested version of vSphere 6. If you\u0026rsquo;re trying to understand how this works, just think ESXi on top of HVX on top of (public cloud virtualization layer).\nSummary Ravello has a pretty neat solution and some cool technology behind it. I love the idea of using this to build a lab in the Public cloud to cut down on capital expenditures and if you\u0026rsquo;re a vExpert, you can get started too for free. Ravello is offering 1000 hours for free to test out the solution. Go get your evaluation today.\n","permalink":"https://theithollow.com/2015/07/08/a-dream-within-a-dream/","summary":"\u003cp\u003eWhat can I say? We started building servers on top of servers and it temporarily blew people\u0026rsquo;s minds. The next logical step is to build a cloud inside a cloud. \u003ca href=\"http://ravellosystems.com\"\u003eRavello Systems\u003c/a\u003e is trying to make this process simple and easy. Ravello Systems was kind enough to present at Virtualization Field Day 5 in Boston at the end of June and I\u0026rsquo;m happy that I was able to participate at a delegate. They presented some really fun technology.\u003c/p\u003e","title":"A Dream within a Dream"},{"content":"Orchestrating a disaster recovery scenario is no simple task. It involves setting up an entirely different data center, figuring out how to manage IP Addresses after a failover, having procedures for users in an outage event and figuring out how to fail back after the disaster is over. Handling orchestrated DR has gotten much easier in the last ten years thanks to virtualization but it\u0026rsquo;s still not a walk in the park. VMware\u0026rsquo;s Site Recovery Manager, Zerto and Veeam have dominated this market over the past several years but there is a new kid in town. I got to see OneCloud at Virtualization Field Day 5 and I think they\u0026rsquo;ve got something worth a first look.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtualization Field Day 5. This was the only compensation given and did not influence the content of this article.\nOneCloud Overview OneCloud software protects your private cloud VMware infrastructure by configuring and replicating to Amazon EC2. This isn\u0026rsquo;t a new concept, but OneCloud has taken automation a step further by configuring your AWS EC2 environment for you.\nThe OneCloud software installation is touted to only take thirty minutes to get initialized and protecting your environment. The install takes you through a wizard based configuration that will use your AWS EC2 credentials and builds an environment in the cloud for you. Its a pretty cool idea to use the Amazon EC2 API to provision your DR site in the cloud totally automated. This includes setting up a secure VPN tunnel so that traffic can be replicated between the sites.\nThe cool feature set doesn\u0026rsquo;t end there though. Much like SRM and Zerto, the software can provide you with a test failover option as well as a live failover, but since OneCloud is using the AWS API they can do some other cool stuff like resizing the VM during a test operation. 
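To make the resize idea concrete, here is a rough boto3 sketch of shrinking an instance type before a test failover. This is my own illustration of the public EC2 API, not OneCloud's actual code, and the region, instance ID and instance types are hypothetical.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def resize_for_test(instance_id, test_type="t2.small"):
    """Stop the instance, swap in a cheaper type for the DR test, and start it again."""
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  InstanceType={"Value": test_type})
    ec2.start_instances(InstanceIds=[instance_id])

# Hypothetical instance ID used for a non-disruptive failover test
resize_for_test("i-0abc1234def567890", test_type="t2.small")

Shrinking during the test and restoring the original type afterward keeps the hourly bill down while still proving the application comes up.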
Think about it this way, the public cloud is pretty cheap if you\u0026rsquo;re not using it, but a test operation will need to spin up some VMs to get a real test right? Why not spin up the VMs slightly undersized for your test scenarios? The VMs shouldn\u0026rsquo;t have much load on them during your tests so they don\u0026rsquo;t need to be full size. Why pay full freight for them? With the EC2 price in mind, they\u0026rsquo;ve also optimized their failback procedure to only replicate changed data back to your production site after a failure. This is a must if you\u0026rsquo;re failing over to the public cloud because you pay for data being sent outbound from EC2.\nAnother thing that I liked about OneCloud was that during a failure, they would also reprotect the data to another EC2 region so that your data is always protected. You never know, whatever the reason for your private cloud outage might reach the AWS cloud as well, so having it protected in a second EC2 region is pretty neat.\nLimitations Let me preface this section by stating that this is a 1.0 product. I think there is plenty of merit to what OneCloud is doing and they\u0026rsquo;ve tackled some cool stuff. Knowing all of this though, there are a few limitations that I think will hold these guys back until they have fixed them.\nOneCloud can currently only backup and replicate VMs once per hour. A one hour RPO will probably work great for some small businesses but this isn\u0026rsquo;t going to be acceptable for an enterprise customer. Then again, this product is likely being marketed heavily to the small business crowd anyway. Small businesses would be clamoring for an automated DR solution that doesn\u0026rsquo;t require the expense of an entirely new DR site to use.\nAlso, if you already have a secondary site, OneCloud is probably not useful to your company. At the present time, OneCloud only works with Amazon EC2. This means that it won\u0026rsquo;t work between vCenters, or with Azure or Google\u0026rsquo;s Cloud. I\u0026rsquo;m sure these are being targeted for a roadmap item but for right now its EC2 or bust.\nSummary OncCloud has some new stuff that other vendors aren\u0026rsquo;t doing and they have a good idea. I\u0026rsquo;m a little afraid that their solution only has a few features that are outside of the SRM, Zerto and Veeam feature set and can be easily copied. Keep an eye on these guys and see what happens in the next 18 months. If they keep building on their platform they might be one of the big players in the market.\n","permalink":"https://theithollow.com/2015/07/06/onecloud-to-rule-them-all/","summary":"\u003cp\u003eOrchestrating a disaster recovery scenario is no simple task. It involves setting up an entirely different data center, figuring out how to manage IP Addresses after a failover, having procedures for users in an outage event and figuring out how to fail back after the disaster is over. Handling orchestrated DR has gotten much easier in the last ten years thanks to virtualization but it\u0026rsquo;s still not a walk in the park. VMware\u0026rsquo;s \u003ca href=\"/2015/01/srm-5-8-architecture/\"\u003eSite Recovery Manager\u003c/a\u003e, \u003ca href=\"http://zerto.com\"\u003eZerto\u003c/a\u003e and \u003ca href=\"http://veeam.com\"\u003eVeeam\u003c/a\u003e have dominated this market over the past several years but there is a new kid in town. 
I got to see \u003ca href=\"http://onecloudsoftware.com\"\u003eOneCloud\u003c/a\u003e at  \u003ca href=\"http://techfieldday.com/event/vfd5/\"\u003eVirtualization Field Day 5\u003c/a\u003e and I think they\u0026rsquo;ve got something worth a first look.\u003c/p\u003e","title":"OneCloud to Rule Them All..."},{"content":" It\u0026rsquo;s pretty weird to get excited about backups, but I\u0026rsquo;ve found myself thinking how cool the new technology that Rubrik\u0026rsquo;s designing. If you haven\u0026rsquo;t heard of these guys yet, you will. They presented at Virtualization Field Day 5 in Boston and had some new announcements that will blow your socks right off your feet.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtualization Field Day 5. This was the only compensation given and did not influence the content of this article.\nThe first of these announcements really had nothing to do with technology, but rather with people. Rubrik announced to the world that my good friend Chris Wahl was joining their team as Technical Evangelist. Thats a pretty big \u0026ldquo;get\u0026rdquo; for Rubrik, signing a top 10 Virtualization blogger and two times over VCDX to their team. I think it also lends some credence to their legitimacy, as I don\u0026rsquo;t suspect Mr. Wahl would have joined a team that didn\u0026rsquo;t already have something going for it.\nWhy is Rubrik Cool? The Rubrik CEO Bipul Sinha explained to us that the main objective of Rubrik was to make backups useful and easy to manage. His example was using Apple\u0026rsquo;s Time Machine technology. How come my laptop backup is so easy to manage, but a corporate server backup is so difficult. Obviously servers are backed up more often with more stringent SLA etc, but his point was well received. It should be easy to do because no one wants to spend their time managing backups.\nRubrik\u0026rsquo;s model uses a hardware appliance with a scale out architecture similar to Nutanix. (This shouldn\u0026rsquo;t be a coincidence since Bipul is a founding investor in Nutanix) Rubrik\u0026rsquo;s appliance is sold as a brick (chassis) with 4 nodes (servers) in it. Once the appliance is racked, cabled and IP\u0026rsquo;d the next step is to connect it to your vCenter server(s). The Rubrik software scans the vCenter for a list of virtual machines and from there, you can select VMs to backup to the Rubrik appliance. Rubrik\u0026rsquo;s team is touting this process that only takes 15 minutes to get up and running. If you\u0026rsquo;d like to see more info on this, check out this post from Brian Suhr.\nNote: right now this is a 1.0 product and only supports VMware, but the roadmap is to support Hyper-V and Physical machines as well. It is not meant to be only for VMware environments for ever.\nBackup Process The process of backing up virtual machines to the Rubrik device consists of selecting a virtual machine and selecting a SLA. There are some default SLAs that come with the appliance but the backup admin is allowed to create as many as he/she needs in order to meet the organizations retention periods and backup windows.\nOnce the backups start, the virtual machine is snapshotted and the bits are shipped over to the Rubrik appliance where they are inline deduplicated and stored on flash temporarily. Depending on the SLA, these backups will be stored on disk as well as possibly shipped off to an S3 storage endpoint, most likely Amazon S3. This is neat right? 
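To make the SLA idea a bit more concrete, here is a toy sketch of how a policy engine might split snapshots between the local tier and an S3 archive. The retention numbers and the policy shape are my own invention for illustration; they are not Rubrik's actual schema.

from datetime import datetime, timedelta

# Invented example SLA: 30 days on the appliance, up to 7 years archived in S3
SLA = {"local_days": 30, "archive_years": 7}

def place_snapshots(snapshot_dates, now=None):
    """Decide, per snapshot, whether it stays local, gets archived to S3, or expires."""
    now = now or datetime.now()
    placement = {}
    for taken in snapshot_dates:
        age = now - taken
        if age <= timedelta(days=SLA["local_days"]):
            placement[taken.date().isoformat()] = "local flash/disk"
        elif age <= timedelta(days=365 * SLA["archive_years"]):
            placement[taken.date().isoformat()] = "archive to S3"
        else:
            placement[taken.date().isoformat()] = "expire"
    return placement

print(place_snapshots([datetime(2015, 6, 20), datetime(2014, 6, 20), datetime(2007, 6, 20)]))

Recent snapshots stay on the fast local tier for quick restores, older ones move to cheap object storage, and anything beyond the SLA ages out.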
How many times have you heard corporations state that they want to keep all data for seven years right up until they hear how much storage they are going to need to buy to accomplish that? Now Rubrik can keep the most recently backed up information locally on disk but ship off some of the bits to a cloud storage device.\nRecovery Process OK, the backup process is pretty slick. Pick a VM and a policy and let Rubrik do its thing. But everyone knows that the backup is only as good as the recovery process. Rubrik\u0026rsquo;s recovery model is great! From the Rubrik HTML5 web portal, pick the VM and a backup date to restore, or for a single file restore pick the vm, the file and the file date to restore. Simple process and the search process is VERY fast. This is because all of the backup metadata is stored on the Rubrik flash drives for quick recalls.\nNow that we\u0026rsquo;ve found the files, we can perform either a recovery or an instant mount. The recovery process will power down the existing virtual machine and recover the backup in its place. Nothing new there, but if we need a recovery to take place faster, we can mount the backup directly on the Rubrik flash tier and mount it to vCenter over an NFS mount point.\nSummary I don\u0026rsquo;t know what these appliances are going to cost, but Bipul assured us all that they won\u0026rsquo;t disappoint. With an easy backup process, simple and fast recovery process, ability to scale out and still keep a single deduplication domain and a fast storage appliance, all I can think of to say is to \u0026ldquo;shut up and take my money.\u0026rdquo; We\u0026rsquo;ll see how this product does on the market , but I have a feeling that this is going to be the new gold standard for backup solutions.\n","permalink":"https://theithollow.com/2015/06/29/a-new-standard-for-backups-rubrik/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/06/download.png\"\u003e\u003cimg alt=\"download\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/06/download.png\"\u003e\u003c/a\u003e It\u0026rsquo;s pretty weird to get excited about backups, but I\u0026rsquo;ve found myself thinking how cool the new technology that \u003ca href=\"http://rubrik.com/\"\u003eRubrik\u003c/a\u003e\u0026rsquo;s designing.  If you haven\u0026rsquo;t heard of these guys yet, you will. They presented at Virtualization Field Day 5 in Boston and had some new announcements that will blow your socks right off your feet.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtualization Field Day 5. This was the only compensation given and did not influence the content of this article.\u003c/p\u003e","title":"A New Standard for Backups - Rubrik"},{"content":" CTO Satyam Vaghani was kind enough to announce several new products and features relating to the future of PernixData at Virtualization Field Day 5. If you\u0026rsquo;re not familiar with PernixData, they got their start with their FVP product which provided server side flash for both a read cache or a write-through cache. I\u0026rsquo;ve used the product several times and it really does some amazing things to smooth out latency and can give your storage array some serious umph!\nPernixData presented on June 24th at VFD5 where I was lucky enough to be a delegate. Satyam announced four new updates.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtualization Field Day 5. 
This was the only compensation given and did not influence the content of this article.\nPernixData Architect PernixData Architect is a new product that provides insights and deep analytics about workloads running in your infrastructure. The sales pitch here is that its used to help design new infrastructure implementations, and then monitor and optimize the design after its deployed. Utilizing deep level statistics will better allow customers to decide how much flash should be bought, where to put it etc. Once deployed, what-if scenarios can be run to help optimize the existing infrastructure and determine how to make things run better. PernixData Architect is doing this by adding an agent to each ESXi host and collecting its own data. Satyam was adamant that any statistical metrics gathered should be gathered by PernixData\u0026rsquo;s solution and not a third party system. He went on to explain that its dangerous to use some other system\u0026rsquo;s information repository (such as vCenter statistics) because they either might not have the statistics necessary to make recommendations or collect data slightly differently causing a butterfly affect of misinformation. Its refreshing to see someone take this approach as I\u0026rsquo;ve seen issues trying to marry array statistics and vCenter statistics together.\nPernixData Cloud This is also a new product but works in conjunction with the PernixData Architect product. The Architect software is collecting data about your environment in order to make recommendations on design and optimization. PernixData Cloud allows you to anonymously upload this data to a public repository to then compare against all of the other installations of the product. This would allow you to compare your environment with a global pool, or some categories such as vertical, or company size etc.\nI can see why it might be nice to see what the consolidation ratios are of other customers in your space, but are you really getting the information you\u0026rsquo;re looking for? It seems to me that unless the product really takes off, you\u0026rsquo;ll only be comparing your infrastructure with the infrastructure of other customers who would need PernixData FVP. I would suspect that any customers who have an aging storage array and need additional IOPS or lower latency can buy PernixData FVP to fix the problem. This is great, but if you look at the other customers data, will all of them have aging storage arrays that need additional IOPS or lower latency? This means that in order for the PernixData Cloud solution to be a real value, customers will want to buy and install this even if they don\u0026rsquo;t need FVP.\nFreedom I brought up the question about not having enough data to make an accurate comparison with an industry. Think about it, \u0026ldquo;Big Data\u0026rdquo; isn\u0026rsquo;t really big until you get a lot of it. A little data isn\u0026rsquo;t a very good representation of the collective industry. To combat this problem PernixData is announcing \u0026ldquo;Freedom\u0026rdquo; which is a new program that allows you to use the FVP product for free. The caveats are that you can\u0026rsquo;t use the local flash devices and are forced to use the Distributed Fault Tolerant Memory (DFTM) version. This allows you to carve out ESXi host memory as a flash cache. This is also limited to 128GB per cluster for write-through acceleration, but thats still really cool.\nOK, there is one last catch with \u0026ldquo;Freedom\u0026rdquo;. 
If you use \u0026ldquo;Freedom\u0026rdquo; you also have to provide your anonymized analytics information to the global repository to be shared with other customers. This is a pretty small price to pay for some really useful software, but it might be a sticking point for some companies.\nDoes this \u0026ldquo;Freedom\u0026rdquo; solution fix the issue of getting a large amount of \u0026ldquo;Big Data\u0026rdquo; for analytical purposes? You could argue that the customers that are going to use this free software aren\u0026rsquo;t enterprise customers anyway. Will this software only be installed in home labs or development environments? If so, then comparing your own infrastructure with others isn\u0026rsquo;t really getting you what you want. You\u0026rsquo;d be comparing your home labs against each other right? Will \u0026ldquo;Freedom\u0026rdquo; determine that a high percentage of storage arrays are manufactured by Synology? Only time will tell.\nFVP Updates Last but not least, there are going to be updates to the existing version of FVP including things like vSphere 6 support, a new User Interface and pulling it out of the vSphere web client.\nSummary I applaud PernixData for trying to branch out and do something more with some of the data they\u0026rsquo;ve already been great at collecting. I think this could be a useful product and they really do seem to show some VERY useful data about storage fingerprints. I\u0026rsquo;m skeptical about whether or not the PernixData Cloud / Freedom messaging will work out, especially since its only focusing on storage for now. If the industry takes to the software and adopts it with several large enterprise customers I think it\u0026rsquo;s got some very useful information that IT departments would love to see, but if not then I don\u0026rsquo;t know that we get the comparisons that we\u0026rsquo;re hoping to get.\nIf any company can make this work, then PernixData can. The initial analytics show great storage fingerprint information and the FVP product is a rock solid solution. Lets hope the new products do as well as their flagship product.\n","permalink":"https://theithollow.com/2015/06/25/will-you-put-the-data-in-pernixdata/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/06/Satyam2.png\"\u003e\u003cimg alt=\"Satyam2\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/06/Satyam2-241x300.png\"\u003e\u003c/a\u003e CTO \u003ca href=\"https://twitter.com/satyamvaghani\"\u003eSatyam Vaghani\u003c/a\u003e was kind enough to announce several new products and features relating to the future of \u003ca href=\"http://pernixdata.com\"\u003ePernixData\u003c/a\u003e at Virtualization Field Day 5. If you\u0026rsquo;re not familiar with PernixData, they got their start with their FVP product which provided server side flash for both a read cache or a write-through cache. I\u0026rsquo;ve used the product several times and it really does some amazing things to smooth out latency and can give your storage array some serious umph!\u003c/p\u003e","title":"Will You Put the Data in PernixData?"},{"content":" Last year I wrote a post on VMTurbo and its method of using the idea of a market economy to manage your infrastructure. If you need a refresher (or because you didn\u0026rsquo;t read my blog, shame) take a look here. 
If you aren\u0026rsquo;t going to read it, the gist is that VMTurbo monitors your virtual environment and uses the hardware as though it is a supply, and the workloads that run on it as the demand. Based on the demand of a workload and supply of a resource there is a cost associated with the workload, and VMTurbo uses these metrics to determine the most cost effective way to balance these.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtualization Field Day 5. This was the only compensation given and did not influence the content of this article.\nFast forward 18 months and I\u0026rsquo;m back at Virtualization Field Day 5 where VMTurbo is expanding on this idea. Hybrid Clouds are picking up steam from customers and users are actually using both at the same time in their datacenters. This affects the \u0026ldquo;Global Economy\u0026rdquo; that VMTurbo has been putting together. To simplify this concept, assume that we have a regional market like the United States (Private Cloud example) and we\u0026rsquo;re moving workloads around within that country\u0026rsquo;s borders to make sure our workloads are cost effective. Now we\u0026rsquo;ve just got a trade agreement with France (Public Cloud Provider example) and they\u0026rsquo;ve got some unlimited cheap resources! VMTurbo would now manage both of these clouds and determine where the resources should run in the most cost effective manner.\nWell, I just said that France (Our Public Cloud Provider) has unlimited cheap resources, so VMTurbo would obviously want us to run all of our workloads there right? Not necessarily! VMTurbo has baked into the platform an additional cost for transit of the goods and services. Think about it this way. If France has lots of cheap resources but we have to ship them back to the US to use them and that cost is more expensive than getting resources from the US, would we do that? (The answer is no here.) To handle some of this, VMTurbo can monitor the networking stack using Netflow and analyze where workloads should be placed. VMTurbo is calling groups of related applications a vPOD. VMTurbo\u0026rsquo;s software will make a recommendation to keep these resources close together in order to better utilize the transit networks.\nVMTurbo is also introducing QOS for applications as well. The idea behind this is the applications that have a SLA set on them will be treated as a customer that provides a higher revenue to the company. That application would get some preferential treatment since it\u0026rsquo;s so important to the economy.\nSummary I really love VMTurbos basic premise of using resources as a supply and workloads as a demand. This accurately sums up what is actually happening. Much like our economy changes quickly and drastically, so does our infrastructure. VMTurbo is making modifications to handle these changes and I hope they continue to build on this economy idea.\n","permalink":"https://theithollow.com/2015/06/24/vmturbos-market-economy-got-a-free-trade-agreement/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/06/VMTurboLogo.png\"\u003e\u003cimg alt=\"VMTurboLogo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/06/VMTurboLogo.png\"\u003e\u003c/a\u003e Last year I wrote a post on \u003ca href=\"http://vmturbo.com\"\u003eVMTurbo\u003c/a\u003e and its method of using the idea of a market economy to manage your infrastructure. 
If you need a refresher (or because you didn\u0026rsquo;t read my blog, shame) take a look \u003ca href=\"/2014/03/vmturbo-market-economy/\"\u003ehere\u003c/a\u003e. If you aren\u0026rsquo;t going to read it, the gist is that VMTurbo monitors your virtual environment and uses the hardware as though it is a supply, and the workloads that run on it as the demand. Based on the demand of a workload and supply of a resource there is a cost associated with the workload, and VMTurbo uses these metrics to determine the most cost effective way to balance these.\u003c/p\u003e","title":"VMTurbo\u0026#039;s Market Economy Got a Free Trade Agreement."},{"content":"I got up this morning to receive news that I had completed the qualifications for the VMware Certified Design Expert certification. This is a group of around 200ish individuals who have completed this exhaustive process which included three exams, submitting an enterprise design and then defending that design in front of a panel of other VCDXs. From the VMware education site:\nVMware Certified Design Expert (VCDX) is the highest level of VMware certification. This elite group is comprised of design architects highly-skilled in VMware enterprise deployments and the program is designed for veteran professionals who want to validate and demonstrate their expertise in VMware technology.\nVCDX5-DCV certification is achieved through the unique design defense process, where all candidates must submit and successfully defend a production-ready VMware Solution before a panel of veteran VCDX-DCV holders. This process ensures that those who achieve VCDX status are peer-vetted and ready to join an elite group of world-class consulting architects.\nI wanted to use this time to thank some very important people who have helped me along this journey. A VCDX is earned by an individual, but I don\u0026rsquo;t think that it can be achieved without some help. To start with, I have to thank my family for being so supportive. It\u0026rsquo;s often joked that before qualifying to defend a VCDX design, you must first get approval from your spouse because of the incredibly long hours and additional work that must be performed after coming home from work. This can certainly eat into family time, and my Wife and Son were VERY supportive about desire to pursue this endeavor. I can\u0026rsquo;t thank them enough. Love you guys.\nNext, I have to thank my employer. I started working at Ahead a little over a year ago and it has been a great place to work. There are always new challenges, we\u0026rsquo;re pushing into new areas of technology and my coworkers are some of the brightest and most driven people I\u0026rsquo;ve ever met. I really can\u0026rsquo;t say enough about this whole organization but would be remiss if I didn\u0026rsquo;t single out a few individual who really helped. Ahead already boasted three VCDXs so they already familiar with what I was about to go through. Chris Wahl, Brian Suhr and Tim Curless were very helpful in providing guidance along my way, and providing both negative and positive encouragement throughout the process. Steve Pantol (BOSS MAN) was incredibly encouraging throughout the whole thing and kept me level headed when I would freak out, which was about every other day for three months. 
Not only that, but being one of the smartest VMware people that I know, Steve was a keystone of my journey and I could not have done it without him.\nFellow Co-worker Tim Carr was my partner in crime since he was also pursuing the VCDX during the same time that I was. Having a buddy that was going through the exact same stresses and trials that you were, was a great relief to me. We did a mock defense together and would constantly bounce ideas off of one another. While we did completely separate designs, I felt like we were in it together.\nThis has been a really long journey (it feels like) but I\u0026rsquo;ve never learned more in my entire life about my craft. Thank you to Mom and Dad, family members, coworkers and perfect strangers that I may have cutoff in traffic because my mind was preoccupied with this thing. Now on to something else. Maybe a bar-b-que.\nNOTE: at the time of this writing I was notified by VMware Education that my VCDX number was 196. A few hours later I received another email stating that I was given the wrong number and that I was actually number 195. This has led to a little confusion, hense the URL for this post has 196 in it.\n","permalink":"https://theithollow.com/2015/06/18/thank-you-vcdx-196/","summary":"\u003cp\u003eI got up this morning to receive news that I had completed the qualifications for the VMware Certified Design Expert certification. This is a group of around 200ish individuals who have completed this exhaustive process which included three exams, submitting an enterprise design and then defending that design in front of a panel of other VCDXs. From the VMware education site:\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eVMware Certified Design Expert (VCDX) is the highest level of VMware certification. This elite group is comprised of design architects highly-skilled in VMware enterprise deployments and the program is designed for veteran professionals who want to validate and demonstrate their expertise in VMware technology.\u003c/p\u003e","title":"Thank you - VCDX 195"},{"content":"In order to deploy a fully provisioned automated deployment of a server we have to look past just deploying a virtual machine OS and configuring an IP Address. In order to get something usable we also need to configure the server with some applications or make post provisioning changes. For instance we might want to install Apache after deploying a Linux machine. In vRealize Automation deployments invoke a post-provisioning stub to call vRealize Orchestrator workflows to make additional changes. This works very well on a vSphere environment since we can leverage VMtools to access the guest OS. But if you\u0026rsquo;ve ever deployed an instance in Amazon EC2 you\u0026rsquo;ll know that this isn\u0026rsquo;t quite as easy. EC2 instances don\u0026rsquo;t have VMTools to allow us into the guest OS. To make matters worse, the current version of vRealize Automation doesn\u0026rsquo;t pass the IP address of the guest Operating System to vRO. See this KB article from VMware for more information.\nThis post goes into more details about how to use a post-provisioning workflow on an Amazon EC2 instance, specifically customizing a Red Hat Linux Guest OS.\nOverview In order to achieve a fully automated deployment, we use vRealize Automation (vRA) to as our user portal and life-cycle management. Layered on top of this, we utilize vRealize Orchestration to make the vRA servers, Amazon EC2 endpoints and Guest OS all work together. 
Here is our high level process of what needs to happen.\nvRA will deploy the server from a blueprint. Once this is finished, the post-provisioning stub will make a call to run an orchestrator workflow. The first thing that this workflow will do is to make a SQL call to the vRA database. This call retrieves the IP Address of the EC2 instance. Next, we pass the IP Address to another workflow and run an SSH command on the Linux Appliance.\nBlueprint Setup I won\u0026rsquo;t go into the blueprint setup in too much detail. The important piece of this is to ensure that the \u0026ldquo;ExternalWFSubs.MachineProvisioned\u0026rdquo; custom property is added to the blueprint. The value that corresponds to this should be the workflow ID from vRealize Orchestrator.\nThe value can be seen from the vRO workflow screen as seen below.\nvRealize Orchestrator Workflow Now the real work begins. We build a workflow to take the vRA information and then connect to the EC2 instance to run an SSH command. The picture below lays out the workflow and order.\nThe first element simply logs the inputs to the workflow. This is a script that I commonly use to grab any of the information that was passed from vRealize Automation, over to vRealize Orchestrator. It will log it to the screen so you can see what information is available to tie back to the original blueprint. This script isn\u0026rsquo;t necessary but is a nice thing to have. The javascript for this element is listed below.\nSystem.log(\u0026#34;Workflow started from workflow stub \u0026#34; + externalWFStub + \u0026#34; on vCAC host \u0026#34; + vCACHost.displayName); System.log(\u0026#34;Got vCAC virtual machine \u0026#34; + vCACVm.virtualMachineName); System.log(\u0026#34;Matching virtual machine entity \u0026#34; + virtualMachineEntity.keyString); vmName = vCACVm.virtualMachineName; System.log(\u0026#34;vmName is: \u0026#34; + vmName); var virtualMachine = virtualMachineEntity.getInventoryObject(); if (virtualMachine != null) { var virtualMachinePropertyEntities = virtualMachineEntity.getLink(vCACHost, \u0026#34;VirtualMachineProperties\u0026#34;); var virtualMachineProperties = new Properties(); //Loop through all of the VM Properties and log them for reference. for each (var virtualMachinePropertyEntity in virtualMachinePropertyEntities) { var propertyName = virtualMachinePropertyEntity.getProperty(\u0026#34;PropertyName\u0026#34;); var propertyValue = virtualMachinePropertyEntity.getProperty(\u0026#34;PropertyValue\u0026#34;); virtualMachineProperties.put(propertyName, propertyValue); System.log(\u0026#34;INFO: \u0026#34; + \u0026#34; PropertyName \u0026#34; + propertyName + \u0026#34; propertyValue \u0026#34; + propertyValue); } // Enter the var name to output, and the property field with the value you\u0026#39;re looking to export var vmIP = virtualMachineProperties.get(\u0026#34;VirtualMachine.Network0.Address\u0026#34;); //Log the value for troubleshooting purposes System.log (vmIP); } The next piece of the puzzle is a SQL Query. I used another script element to build the SQL Query that I plan to use. This query will be passed along to the next element in the workflow which will actually execute this script. The vCAC:Entity is passed to the script element.\nThe script takes the Entity name and merges it with our SQL Query. 
The script also removes the \u0026ldquo;guid\u0026rdquo; part of the vCAC:Entity string so that it matches the format that it\u0026rsquo;s stored in the database.\nvar guid guid = (virtualMachineEntity.keyString); vmString = guid.replace(\u0026#34;guid\u0026#34;, \u0026#34;\u0026#34;); System.log(vmString) SQLQuery = \u0026#34;select PrivateIPAddress from [DynamicOps.AmazonWSModel].Instances where VirtualMachineID = \u0026#34; + vmString System.log(SQLQuery) Now we pass the query that we just created, over to the element that will execute the query. This is a standard SQL Read Query that can be found in the default list of workflows. We map the result of the query to a new attribute called \u0026ldquo;IPAddress\u0026rdquo;. Thats what we\u0026rsquo;re after!\nNOTE: the workflow I modified is called \u0026ldquo;Read a custom query from a database\u0026rdquo; workflow. Also, before this workflow can be used, the \u0026ldquo;Add a database\u0026rdquo; query needs to be run to identify which database can be used for the queries.\nNow that we\u0026rsquo;ve returned the IP Address from the SQL database, we take Array record and convert it to a string to be used in our next element. I know there is likely a more clean method to cleanup this data, or stringify the array value, but this is what I was able to quickly get working.\nSystem.log(\u0026#34;IPAddress = \u0026#34; + IPAddress); ipString = IPAddress.toString(); System.log(\u0026#34;ipString = \u0026#34; + ipString); var tempIP1 = ipString.replace(\u0026#34;DynamicWrapper (Instance) : [SQLActiveRecord]-[class com.vmware.o11n.plugin.database.ActiveRecord] -- VALUE : ActiveRecord: {PrivateIPAddress=\u0026#34;, \u0026#34;\u0026#34;); var tempIP2 = tempIP1.replace(\u0026#34;}\u0026#34;,\u0026#34;\u0026#34;); System.log(\u0026#34;New IP = \u0026#34; + tempIP2); ipString = tempIP2 We can now pass the ipString variable over to our SSH workflow. This workflow logs into the EC2 instance and runs the query we chose. In this case I\u0026rsquo;ve built an SSH Workflow to install Apache on RHEL. This operation does require the SSH Keys to be available by vRealize Orchestrator. This operation can be found in another post.\nSummary OK, this doesn\u0026rsquo;t seem like the most straight forward way to run a command after provisioning a server, but this is the only way that I know of to do this with the current version of vRealize Automation. I\u0026rsquo;m sure that the integration between vRA and AWS will get much tighter but for now, this is how the operation can be achieved.\n","permalink":"https://theithollow.com/2015/06/15/vrealize-automation-6-post-provisioning-workflows-on-aws/","summary":"\u003cp\u003eIn order to deploy a fully provisioned automated deployment of a server we have to look past just deploying a virtual machine OS and configuring an IP Address. In order to get something usable we also need to configure the server with some applications or make post provisioning changes. For instance we might want to install Apache after deploying a Linux machine. In vRealize Automation deployments invoke a post-provisioning stub to call vRealize Orchestrator workflows to make additional changes. This works very well on a vSphere environment since we can leverage VMtools to access the guest OS. But if you\u0026rsquo;ve ever deployed an instance in Amazon EC2 you\u0026rsquo;ll know that this isn\u0026rsquo;t quite as easy. EC2 instances don\u0026rsquo;t have VMTools to allow us into the guest OS. 
To make matters worse, the current version of vRealize Automation doesn\u0026rsquo;t pass the IP address of the guest Operating System to vRO. See this \u003ca href=\"http://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=2075186\"\u003eKB article\u003c/a\u003e from VMware for more information.\u003c/p\u003e","title":"vRealize Automation 6 - Post Provisioning Workflows on AWS"},{"content":"It may be necessary to connect to a Linux guest that\u0026rsquo;s been provisioned in Amazon Web Services so that you can perform additional operations on it. One of the ways you might want to configure your instances is through vRealize Orchestrator. One of the hang-ups with using vRealize Orchestrator to connect to your Linux EC2 instances is that you\u0026rsquo;ll need an SSH key to connect. This post shows you how you can do this.\nFirst, go get the SSH private key used when creating the virtual machine. This should come as a .pem file from Amazon EC2 and you\u0026rsquo;ll only have one chance to download it. This key will allow access to the guest OS, so it will be needed. Next you\u0026rsquo;ll need to convert that key to a .ppk file. To do this we\u0026rsquo;ll use a program called puttygen. Run the executable and click the \u0026ldquo;Load\u0026rdquo; button to load the existing private key.\nNext, click the Save private key button to save that key. Give it the same name as the original .pem file, but change the extension to .ppk. At this point, if you want to test the key, you can load it into PuTTY and connect to the instance to try it out. If we want to use vRealize Orchestrator (vRO) to connect to this instance later on, we can save the .ppk file on the vRO appliance. I used WinSCP to copy the file from my Windows machine over to the vRO server.\nOnce the .ppk file is copied over, make sure that the vRO server has permissions to read the file. I set read permissions on the file after copying it over. Sidebar - OK, I know what you\u0026rsquo;re thinking. If I plan to use vRO to connect to this instance later, that means I\u0026rsquo;m doing some automation in my environment. So far, all of the steps that I\u0026rsquo;ve outlined are pretty manual and not all that useful on their own. While this might be true, if you\u0026rsquo;re using a product like vRealize Automation (vRA) you can set a key pair per reservation. So if you use the same key on a reservation, all of the machines provisioned on it will use the same key. This means you only need to add one key to the vRO server for each reservation.\nNow we can create a workflow in vRealize Orchestrator that uses the \u0026ldquo;Run SSH Command\u0026rdquo; workflow. I\u0026rsquo;ve used a slightly modified version of that workflow to connect to one of my instances in EC2. The main things to add, though, are the Hostname or IP Address of the EC2 instance, the SSH command to run and the location of the private key, which in my case is in the / directory.\nNotice also that for my modified workflow, I\u0026rsquo;ve added the username \u0026ldquo;ec2-user\u0026rdquo; as an attribute since I know it won\u0026rsquo;t change between runs.\nWhen I run the command, you can see the output in the logs.\nSummary Running SSH commands on your EC2 instances might not sound all that fun, but since there are no VMware Tools installed on the machine, it\u0026rsquo;s a little more difficult to access the guest OS to install programs or make configuration changes. 
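As an aside, if you ever need to drive the same kind of post-provisioning step from outside of vRO, any SSH library can do it with the same key. Here is a minimal Python sketch using paramiko; the host address, key path and command are hypothetical, and note that paramiko reads the original .pem file directly, so no .ppk conversion is needed for this route.

import paramiko

def run_remote_command(host, key_path, command, user="ec2-user"):
    """Connect to an EC2 Linux guest with its key pair and run a single command."""
    key = paramiko.RSAKey.from_private_key_file(key_path)
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=user, pkey=key)
    _, stdout, stderr = client.exec_command(command)
    output = stdout.read().decode()
    client.close()
    return output

# Hypothetical values: the reservation's key and a freshly provisioned instance
print(run_remote_command("10.0.1.25", "/keys/lab-reservation.pem",
                         "sudo yum -y install httpd"))

Either way, key-based SSH is the only practical door into these instances, which is exactly why the .pem file is worth guarding.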
At least by using this method, we can reliably access the guest OS from Orchestrator.\n","permalink":"https://theithollow.com/2015/06/08/aws-linux-guest-access-via-vrealize-orchestrator/","summary":"\u003cp\u003eIt may be necessary to connect to a Linux Guest thats that been provisioned in Amazon Web Services so that you can perform additional operations on it. One of the ways you might want to configure your instances is through vRealize Orchestrator. One of the hang ups with using vRealize Orchestrator to connect to your Linux EC2 instances is that you\u0026rsquo;ll need an SSH key to connect. This post shows you how you can do this.\u003c/p\u003e","title":"AWS Linux Guest Access via vRealize Orchestrator"},{"content":"OK, I know that most of the known world is all of a sudden working on making everything scriptable so that it can be automated or just run from a command line, but come on we still use the GUI for some quick tasks don\u0026rsquo;t we?\nOne of the cool things about the vSphere Web Client is its ability to create a custom search based on a set of criteria. Just to recap, I said there was a cool feature of the vSphere Web Client.\nON the top right hand side of the web client, click the down arrow in the search window. Click the \u0026ldquo;Create a new search\u0026rdquo; link.\nEnter some custom criteria. I\u0026rsquo;ve decided to create a search to show me any virtual machines that need a VMtools upgrade in my home lab. (don\u0026rsquo;t judge me for having old VMtools versions, I\u0026rsquo;m trying to help here). Once you have your search working the way you want, click Save.\nGive the saved search a name that you\u0026rsquo;ll easily recognize again later.\nNow the next time you click the down arrow in the search box, the saved search will be available. This is an easy way to display only the machines you\u0026rsquo;re looking for, but remember that this can be used for datastores, hosts or a whole list of other items.\n","permalink":"https://theithollow.com/2015/06/01/vmware-custom-searches/","summary":"\u003cp\u003eOK, I know that most of the known world is all of a sudden working on making everything scriptable so that it can be automated or just run from a command line, but come on we still use the GUI for some quick tasks don\u0026rsquo;t we?\u003c/p\u003e\n\u003cp\u003eOne of the cool things about the vSphere Web Client is its ability to create a custom search based on a set of criteria. Just to recap, I said there was a cool feature of the vSphere Web Client.\u003c/p\u003e","title":"VMware Custom Searches"},{"content":"Have you ever tried logging into vRealize Automation and gotten an Incorrect username/password but you\u0026rsquo;re positive you typed everything in correctly? You try again and find out that if you put the User Principal Name suffix everything works fine. If you\u0026rsquo;re using a solution like vRealize Automation and notice the login doesn\u0026rsquo;t work unless you specify a a User Principle Name (UPN) in the form of username@domain.name, try the following correction.\nFind the SSO server being used by vRA and login to with the administrator@vsphere.local account. Go to the Administration \u0026ndash;\u0026gt; Single Sign-On \u0026ndash;\u0026gt; configuration menu. From there, click on the identity sources tab. 
You should notice that by clicking on the domain name and clicking the \u0026ldquo;Set as Default Domain\u0026rdquo; button.\nOnce you do that, a popup message will show up notifying you that your current default domain will be changed. Click Yes to proceed.\nOnce the change is made, try logging in again and there should no longer be a need for the UPN suffix.\n","permalink":"https://theithollow.com/2015/05/26/vrealize-automation-remove-upn-suffix/","summary":"\u003cp\u003eHave you ever tried logging into vRealize Automation and gotten an Incorrect username/password but you\u0026rsquo;re positive you typed everything in correctly?  You try again and find out that if you put the User Principal Name suffix everything works fine. If you\u0026rsquo;re using a solution like vRealize Automation and notice the login doesn\u0026rsquo;t work unless you specify a a User Principle Name (UPN) in the form of \u003ca href=\"mailto:username@domain.name\"\u003eusername@domain.name\u003c/a\u003e, try the following correction.\u003c/p\u003e","title":"vRealize Automation - Remove UPN Suffix"},{"content":"Last week was packed full of announcements since Microsoft Ignite and EMC World were two large trade shows happening simultaneously. One announcement that excited me was a free virtual storage appliance from EMC. The appliance has the same look and feel of a VNXe but is completely virtual. You know what that means? If you\u0026rsquo;re like me, you like to tinker with things in a lab environment so as to not destroy a critical production network. Also, if you happen to write posts on things like VMware Site Recovery Manager, you\u0026rsquo;ll want to have a storage array that can replicate to a second one. That leaves you with the Netapp Simulator or the HP StoreVirtual appliance unless you want to actually buy two storage arrays. (That ain\u0026rsquo;t cheap). Well, now we have the EMC vVNX.\nInstallation Download the Appliance from the EMC Site. The installation is a very simple OVA deployment for your vSphere environment. Be cautious though, this appliance wants 2 vCPU and 12 GB of memory which can be taxing on a home lab. You\u0026rsquo;ll also map a management NIC to a portgroup and two data ports to two additional port groups. Once you\u0026rsquo;ve deployed the OVA, open a web browser and go to the IP Address of the appliance.\nLogin with the default username and password.\nUsername: admin , Password: Password123# Once you login, there is a nifty little wizard to get you started. Click Next to get going. Read and accept the End User License Agreement. (Pop Quizzes happen sometimes, Be ready to answer questions about the EULA) Change the default password. Always a good idea. Click Next. Go to the EMC Support site and enter the System UUID in the portal. This will generate a license file to allow for full functionality of the storage appliance. Add DNS Servers and click next. Add a time server and click next. Stop! When you get to the disk configuration take a second and add a new virtual disk to the vVNX array in vSphere. The disk you add will be the disk used to create a storage pool. If you don\u0026rsquo;t add a virtual disk to the appliance, no pools can be created. Once you\u0026rsquo;ve added the disk, click the \u0026ldquo;Create Storage Pools\u0026rdquo; button. Give the pool a name. This is my Disaster Recovery appliance so I called it DRStoragePool. (Clever don\u0026rsquo;t you think) Select the disk that you added earlier. You\u0026rsquo;ll also select a tier. 
This could be important if you\u0026rsquo;ve got both SSD virtual disks assigned, as well as spinning disks. In my case all of the virtual disks I\u0026rsquo;ve added are tiered on my Synology Storage Array so I just picked capacity. Review the summary screen and click Finish. Add any iSCSI interfaces that you want to use. I plan on using NFS so I just clicked next. Add a NAS Server if you plan to use NFS. (Like I am) Give the NFS Server a name and point it to a storage pool created earlier. Enter an IP Address for the NAS appliance and click next. Select NFS and I\u0026rsquo;ve added NFSv4. (Hey, a kid can dream right?) Choose wheter to use the Unix Directory Service or not. Enable DNS or skip this step. Click next. Review the summary screen and click Finish. Back to the original wizard, we can choose to setup replication interfaces. I skipped this step and will revisit it later if I need it. Review the Summary screen and click Finish. vSphere Configuration Now that the initial setup we can add some VMware Datastores. Click on the Hosts tab, and then the VMware Hosts section. Click the \u0026ldquo;Find ESX Hosts\u0026rdquo; button. Add either an IP Address or DNS name of either individual ESXi hosts or vCenter. Then, click \u0026ldquo;Find\u0026rdquo;.\nEnter credentials for the host or vCenter. Click OK. Afterwards review the summary screen and click Finish. Click the Storage Tab and choose \u0026ldquo;VMware Datastores\u0026rdquo;. A new wizard opens. We\u0026rsquo;ll select the type of datastore (VMFS for iSCSI or NFS for NAS). I\u0026rsquo;ve also enabled Deduplication and Compression. Click Next. Give the datastore a name and description. Click Next. Select the storage pool for this datastore to live on. Configure a snapshot schedule if it tickles your fancy. I skipped this. Select the hosts that will have access to this datastore and change the access to ReadWrite, allow Root. Click OK. After the configuration finishes, it will create the NFS mount point and login to the hosts specified to add the datastore for you automatically. Now you\u0026rsquo;re off and running with a new virtual storage appliance in your lab. ","permalink":"https://theithollow.com/2015/05/12/emc-vvnx-for-your-home-lab/","summary":"\u003cp\u003eLast week was packed full of announcements since Microsoft Ignite and EMC World were two large trade shows happening simultaneously.  One announcement that excited me was a free virtual storage appliance from EMC.  The appliance has the same look and feel of a VNXe but is completely virtual. You know what that means?  \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/05/HomeLab.png\"\u003e\u003cimg alt=\"HomeLab\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/05/HomeLab.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eIf you\u0026rsquo;re like me, you like to tinker with things in a lab environment so as to not destroy a critical production network.  Also, if you happen to write posts on things like VMware Site Recovery Manager, you\u0026rsquo;ll want to have a storage array that can replicate to a second one.  That leaves you with the Netapp Simulator or the HP StoreVirtual appliance unless you want to actually buy two storage arrays.  (That ain\u0026rsquo;t cheap).  Well, now we have the EMC vVNX.\u003c/p\u003e","title":"EMC vVNX for your Home Lab"},{"content":"Zerto has been a great product for companies looking to deploy an easy to use disaster recovery solution. 
One of the limitations of the product was that it only worked with VMware vSphere, but not any more. Version 4 just dropped and it\u0026rsquo;s got a myriad of new goodies.\nNew User Interface Cross-Replication from vSphere to Microsoft SCVMM and Amazon Web Services (AWS) Sizing improvements More Secure Virtual Replication Appliances vSphere 6 support The most appealing new capability was the ability to fail over a vSphere environment to Amazon Web Services (AWS). This could save small businesses A LOT of money. Small businesses that have a disaster recovery requirement no longer need to have a dedicated co-lo and spend money on equipment when they may never use it. AWS provides compute, storage and network on an as-needed basis and most of the time, the disaster recovery site is not needed which correlates to savings.\nZerto - Amazon Web Services Installation Lets take a look at the Zerto architecture for vSphere to AWS. It requires a Zerto Virtual Manager (ZVM) at each site which manages the environment. The vSphere side also requires a Virtual Replication Appliance (VRA) for each ESXi host that will have virtual machines to replicate. The AWS side does not require a VRA.\nOne thing to be aware of is that the vSphere side and the AWS Side will have two separate installers.\nAWS Site The AWS site requires the Zerto Cloud Appliance installer. This can be installed on a Windows-based host inside an EC2 instance. Most of the installation screens here are a basic information and the opportunity to change ports etc so I\u0026rsquo;ve left them out. The screen below however is some of the meat and potatoes of the installation. You\u0026rsquo;re asked for an IP/Hostname of the Cloud Appliance which it will populate for you. If you have multiple NICs on your EC2 instance, you could change it. The second part of the screenshot below is the Access Key ID which is a unique ID for an AWS owner. You can find these in the Identity and Access Management Section (IAM) in the AWS portal.\nOnce you click next, the installer will check to ensure windows firewall rules are open and the AWS Access Keys are valid.\nvSphere Site The vSphere site hasn\u0026rsquo;t changed much from the previous versions. The Zerto Virtual Manager needs to be installed on a Windows server.\nOnce the ZVMs have been installed, we need to pair the local vSphere site with the Amazon site.. To do this we can login to the ZVM by using a web browser and navigating to https://ZVMFQDN:9669 . Here we see that we still need to install VRAs and pair to another site. Click on the \u0026ldquo;Sites\u0026rdquo; tab at the top of the screen to pair the vSphere site, with the AWS Site. Enter the IP Address of the Cloud ZVM and the port and click \u0026ldquo;PAIR\u0026rdquo;. Note: for this to work properly, network connectivity must already exist to the Amazon Networks. In my case a Site-Site VPN tunnel was created.\nNow you can see that a site is listed in the \u0026ldquo;Sites\u0026rdquo; section and that we still need to install VRAs. Click the \u0026ldquo;Setup\u0026rdquo; tab at the top to install the VRAs.\nSelect all of the ESXi hosts that will need virtual machines replicated and enter information to install the VRAs. Each of the VRAs is a small virtual machine that will reside on the ESXi host. Enter the root password for the ESXi host, a datastore to house the virtual machine, a network that has access to the AWS Site and the amount of VRA RAM needed. 
You will also need to enter the network information for the VRA so that it can communicate with the ZVM and the remote site.\nWhen done, your \u0026ldquo;Setup\u0026rdquo; tab should look similar to the one below.\nCreate a VPG Now we need to set up our Virtual Protection Groups (VPGs). This is the group of virtual machines that you are protecting. Click the \u0026ldquo;VPGs\u0026rdquo; tab at the top of the menu and add a VPG. A wizard will walk you through this as seen below.\nI created a simple VPG called AmazonVPG.\nSelect one or more virtual machines to protect. You can define which order they should boot in if necessary.\nDecide where the protected VMs should be replicated. I\u0026rsquo;ve only set up one other site, so it was automatically selected. Journal history determines how far back in time you can go to restore a virtual machine and \u0026ldquo;Test Reminder\u0026rdquo; just sends you an email if you haven\u0026rsquo;t tested the recovery in a while. The target RPO alert is only for alerting purposes. Zerto tries to replicate as fast as possible, so this is not a desired RPO setting, but rather an alarm to let you know that your RPO is not being met, probably due to too much replication traffic, or possibly a down WAN link.\nThe recovery menu allows you to define a failover network and a test network. The test network will allow you to have a completely separate environment for testing the failovers of virtual machines without affecting the production machine. These two networks can be the same or different depending on your preference.\nWhen you\u0026rsquo;re finished with the wizard, you\u0026rsquo;ll notice that the VPG shows initializing and the Initial sync is taking place. Go grab a cup of coffee, the sync could take a while.\nNotice that when the sync takes place, Zerto is utilizing an Amazon S3 bucket to house the virtual machine files. This should be cheap storage that can be used to dump the files until you need them.\nFailover You\u0026rsquo;ve done all the hard work. Our VPG is set up and it\u0026rsquo;s meeting its SLA. Now let\u0026rsquo;s fail that server over to AWS. Click the \u0026ldquo;FAILOVER\u0026rdquo; button at the bottom right hand corner of the ZVM screen. NOTE: there is a toggle to change from a real failover which is disruptive to the protected virtual machine, and a test failover which is not disruptive.\nSelect the VPG to be failed over.\nOn the execution parameters screen you can change the checkpoint to which you fail over. Click Next.\nWhen you\u0026rsquo;re ready, click \u0026ldquo;Start Failover Test\u0026rdquo;.\nYou\u0026rsquo;ll see the ZVM will have an action item taking place. When it\u0026rsquo;s finished you\u0026rsquo;ll notice that your EC2 screen has an additional virtual machine listed. Note: The failover process could take some time so be sure to test your RTO. The Cloud ZVM performs an import from the S3 bucket into EC2 and this process can take time.\nWhen you\u0026rsquo;re finished with a \u0026ldquo;Test Failover\u0026rdquo; you can click the Stop button and you\u0026rsquo;ll be prompted with a window to enter a note about the test for record keeping. If this is a real failover scenario, there is no current failback built into Zerto 4 at the time of release. Failing back from AWS to your vSphere environment can be accomplished by exporting the VM and importing it into vSphere. Look for this to change in future updates from Zerto. Summary I\u0026rsquo;m a big fan of Zerto and even more so now that they can replicate to Amazon. 
This product is very easy to use and administer and doesn\u0026rsquo;t require any sort of hardware appliance to handle replication traffic. It even does WAN optimization to cut down on the amount of bandwidth needed. If you\u0026rsquo;re looking for an orchestration tool for disaster recovery, you should check them out.\nFull Disclosure: Zerto has been a sponsor of theITHollow for a long time. This has not in any way affected my views towards the product and I was not paid or even asked to write this post.\n","permalink":"https://theithollow.com/2015/05/05/zerto-4-to-amazon-and-beyond/","summary":"\u003cp\u003eZerto has been a great product for companies looking to deploy an easy to use disaster recovery solution.  One of the limitations of the product was that it only worked with VMware vSphere, but not any more.  Version 4 just dropped and it\u0026rsquo;s got a myriad of new goodies.\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eNew User Interface\u003c/li\u003e\n\u003cli\u003eCross-Replication from vSphere to Microsoft SCVMM and Amazon Web Services (AWS)\u003c/li\u003e\n\u003cli\u003eSizing improvements\u003c/li\u003e\n\u003cli\u003eMore Secure Virtual Replication Appliances\u003c/li\u003e\n\u003cli\u003evSphere 6 support\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eThe most appealing new capability was the ability to fail over a vSphere environment to Amazon Web Services (AWS).  This could save small businesses A LOT of money.  Small businesses that have a disaster recovery requirement no longer need to have a dedicated co-lo and spend money on equipment when they may never use it.  AWS provides compute, storage and network on an as-needed basis and most of the time, the disaster recovery site is not needed which correlates to savings.\u003c/p\u003e","title":"Zerto 4 - To Amazon and Beyond"},{"content":"Setting up a DHCP relay is a pretty common task that is performed by network administrators when setting up a new LAN. If you\u0026rsquo;re not familiar with a DHCP Relay, take a look at the example below.\nIn order for a client to get an IP Address from a DHCP Server, it sends out a broadcast once it\u0026rsquo;s plugged into the network. The broadcast is asking for any servers that are DHCP servers to reply. Remember that a broadcast is a frame that is forwarded to all hosts on a Local Area Network. The DHCP Server will reply and the client will get its IP Address.\nNow assume that you have a DHCP server set up, but it\u0026rsquo;s on a different network. What happens then? Broadcasts don\u0026rsquo;t traverse the local network so how is this accomplished? The answer is that a DHCP relay agent is used on an appliance on the client\u0026rsquo;s LAN that can pass that traffic to the DHCP server. Usually this relay agent is a Layer 3 Switch or Router since it has access to both networks.\nThe broadcast still happens from the client and then the relay agent forwards a unicast frame to the DHCP server on behalf of the client.\nSetup To set this up on an HP v1910-24G switch like the one in my home lab we can do the following. In the Menu, go to the Network Section \u0026ndash;\u0026gt; DHCP. From here we first enable the DHCP Service. This doesn\u0026rsquo;t mean that the switch will start handing out IP Addresses, it just starts the service.\nNow we add the IP Address of our DHCP Server to a server group. This is to allow the agent to know where to send the relay frames. 
Click Add, and enter a server group ID and an IP Address of the DHCP Server.\nNext, take a look at the Interfaces and their Relay State. My Clients are on VLAN170 and my DHCP server is on VLAN 150. Notice that I only had to enable the DHCP Relay State on VLAN170, where the clients live. Click the [looks like email] icon in the \u0026ldquo;Operation\u0026rdquo; column to enable this.\nNow we enable the DHCP Relay. You can add the \u0026ldquo;Address Match Check\u0026rdquo; as well, but beware. If any of your clients on this network don\u0026rsquo;t match a DHCP lease, they won\u0026rsquo;t be able to communicate. Meaning that even if you set a static IP Address on your client, the switch won\u0026rsquo;t respond to it because there is no matching address. Also enter a server group ID. This ID should match the server group that was added when adding the DHCP Server IP Address earlier.\nNext, go to the DHCP Snooping screen to \u0026ldquo;trust\u0026rdquo; the physical interfaces that might have a DHCP Server on them. Notice that I have multiple physical ports trusted, because my DHCP Server might vMotion between ports on different hosts since it\u0026rsquo;s a virtual machine. Don\u0026rsquo;t forget to add all of them or you\u0026rsquo;ll have a fun time hunting down issues. Again, click the icon in the operations column to configure the port.\nChange the interface state to \u0026ldquo;trust\u0026rdquo; and click Apply. Your clients should now be able to request IP Addresses from your DHCP Server.\n","permalink":"https://theithollow.com/2015/04/27/setup-dhcp-relay-on-hp-v1910/","summary":"\u003cp\u003eSetting up a DHCP relay is a pretty common task that is performed by network administrators when setting up a new LAN.  If you\u0026rsquo;re not familiar with a DHCP Relay, take a look at the example below.\u003c/p\u003e\n\u003cp\u003eIn order for a client to get an IP Address from a DHCP Server, it sends out a broadcast once it\u0026rsquo;s plugged into the network.  The broadcast is asking for any servers that are DHCP servers to reply.  Remember that a broadcast is a frame that is forwarded to all hosts on a Local Area Network.  The DHCP Server will reply and the client will get its IP Address.\u003c/p\u003e","title":"Setup DHCP Relay on HP V1910"},{"content":"If you\u0026rsquo;re planning on doing a full distributed installation of vRealize Automation, you\u0026rsquo;ll likely want to have some protection for the vPostgres database. Having a single point of failure defeats the purpose of doing a full distributed install. I\u0026rsquo;ve been doing a bunch of work on this lately and wanted to warn people of a gotcha if you\u0026rsquo;re using a load balancer.\nNon-Distributed Install To give us a better understanding, take a look at a pair of vRealize Automation Appliances that aren\u0026rsquo;t in a high availability solution. In the picture below, there are two vRealize Automation Appliances and each of them is communicating with their own embedded vPostgres Database. This is the default configuration when deployed from VMware and works just fine.\nTaking a look at the configuration inside the cafe appliance, we can see that the local host address is being used for the database.\nHigh Availability vPostgres The version of vPostgres that ships with vRealize Automation will allow you to set up a one-way replication but does require a manual failover in the event of an outage. 
The setup requires you to make several changes to allow vPostgres to listen on additional addresses, adding some replication parameters, etc. Once the configurations are done, the solution would look much like the picture below, with a load balancer in the middle.\nOne embedded vPostgres database is active, the other is passive and is receiving replication traffic from the primary. Each of the appliances will then point their database connection to a virtual IP (VIP) on the load balancer and the load balancer will point to the active vPostgres database.\nThe Gotcha When you go to the vRealize Automation appliance and change the database to use a network address that is not 127.0.0.1 or the IP Address of the appliance, the vRealize Automation appliance will SHUT DOWN THE DATABASE.\nBefore the database is changed to use the load balancer, we can see the vpostgres service is running on the appliance.\nWe then make the change on the appliances to use the VIP. When you click save, you\u0026rsquo;ll see that the database configuration change happened, but the connection status shows an error.\nAnother check on the service and we notice that the vPostgres database is no longer running.\nTo fix this, start the service.\nAfterwards you\u0026rsquo;ll see that the connection is working. Be aware though, that anytime these appliances reboot, the service may need to be manually started as noted in the KB Article from VMware.\n","permalink":"https://theithollow.com/2015/04/21/vpostgres-for-vrealize-automatin-gotcha/","summary":"\u003cp\u003eIf you\u0026rsquo;re planning on doing a full distributed installation of vRealize Automation, you\u0026rsquo;ll likely want to have some protection for the vPostgres database.  Having a single point of failure defeats the purpose of doing a full distributed install.  I\u0026rsquo;ve been doing a bunch of work on this lately and wanted to warn people of a gotcha if you\u0026rsquo;re using a load balancer.\u003c/p\u003e\n\u003ch2 id=\"non-distributed-install\"\u003eNon-Distributed Install\u003c/h2\u003e\n\u003cp\u003eTo give us a better understanding, take a look at a pair of vRealize Automation Appliances that aren\u0026rsquo;t in a high availability solution.  In the picture below, there are two vRealize Automation Appliances and each of them is communicating with their own embedded vPostgres Database.  This is the default configuration when deployed from VMware and works just fine.\u003c/p\u003e","title":"vPostgres for vRealize Automation Gotcha"},{"content":"Clustering the vPostgres database is an important part of a fully distributed vRealize Automation install. The simple install only requires a single vRealize Appliance and an IaaS Server, but the fully distributed install requires many additional pieces including load balancers to ensure both high availability as well as handling extra load placed by users. The vPostgres database is included with the vRealize Automation appliances, but for a full distributed install, these must be modified so that there is an active and standby vPostgres database running on them. The primary vPostgres database will replicate to a standby read-only database.\nVMware recently published a KB article to set up the vPostgres replication but didn\u0026rsquo;t show how to fail over the databases.\nThe Setup Just to level set, here is my environment. 
I\u0026rsquo;ve got a pair of vRealize Appliances, a pair of IaaS Servers running the manager, web services and Dems, a pair of Windows servers running the agents and last but not least, a pair of vPostgres appliances. These are all managed with a Kemp virtual load balancer.\nI\u0026rsquo;m using the virtual Kemp Load Balancer in my lab and as you can see I\u0026rsquo;ve set up a virtual IP (VIP) for the vPostgres database with a name of \u0026ldquo;vPostgres\u0026rdquo; and my two vPostgres servers are listed as \u0026ldquo;Real Servers\u0026rdquo; on port 5480. The load balancing scheduler was set to a fixed weighting, meaning that all requests will go to the highest weighted server unless it is unavailable, at which point the second appliance will receive the requests. This is important to have configured correctly because the vPostgres servers will be Active/Passive. Sending requests to the Passive database will result in errors.\nFailover the Database I used a highly complicated and sophisticated procedure to fail my Active vPostgres appliance\u0026hellip; I powered it off. I went into my vRealize Automation Appliance to check the database status. To my surprise it shows a connection status of \u0026ldquo;Connected\u0026rdquo; as you can see from the screenshot below. I\u0026rsquo;m not sure what the vRA appliance uses to determine if the database connection status is connected or not, but if it\u0026rsquo;s just using a simple ping, then this makes sense.\nI went to my vRealize Automation portal and still received my login screen. However when I tried to log in it took a long period of time to log me in, and then finally displayed an error page.\nNow that we\u0026rsquo;ve proven that things are breaking, let\u0026rsquo;s fail over that database. SSH into the backup vPostgres appliance and change to the postgres user by running:\nsu - postgres Next, run the commands to make the database read/write. Be sure to change the directory to /opt/vmware/vpostgres/current/share since that\u0026rsquo;s where the scripts reside.\n./promote_replica_to_primary You\u0026rsquo;ll notice that the server shows \u0026ldquo;Server Promoting\u0026rdquo;.\nOnce the promotion is complete, the vRealize Automation portal works again as expected. Please note though that the load balancer started sending the requests to the backup appliance automatically because it was the only appliance available when the primary failed. If the primary vPostgres appliance were to become available again, the load balancer would start sending traffic to that one again, even though it is no longer the primary database. If the original primary appliance is fixed and powered back on, be sure to modify your load balancer rules before it comes back online.\nReset Now that your backup vPostgres database is the primary, you need to make the original appliance the backup. To do this, you can re-run:\n./run_as_replica -h -b -W -U This will make the appliance a read-only copy of the new primary vPostgres appliance.\n","permalink":"https://theithollow.com/2015/04/13/vrealize-automation-vpostgres-failover/","summary":"\u003cp\u003eClustering the vPostgres database is an important part of a fully distributed vRealize Automation install.  The simple install only requires a single vRealize Appliance and an IaaS Server, but the fully distributed install requires many additional pieces including load balancers to ensure both high availability as well as handling extra load placed by users.  
The vPostgres database is included with the vRealize Automation appliances, but for a full distributed install, these must be modified so that there is an active and standby vPostgres database running on them.  The primary vPostgres database will replicate to a standby read-only database.\u003c/p\u003e","title":"vRealize Automation vPostgres Failover"},{"content":"The vSphere-land top virtualization blog voting is now over and theITHollow.com was voted #35 which is up from number forty last year. I wanted to take this opportunity to thank everyone who voted for the site. There are very few rewards for all the time that is spent posting content, but seeing that the hard work is benefiting others, and is useful to the community, are among the top for sure.\nI started blogging because I thought I owed the community some of my experiences since I\u0026rsquo;d gotten so much value out of other bloggers posts over the years. I\u0026rsquo;ve found that sharing information has helped me stay current on new offerings, as well as gaining a deeper understanding of technologies. Continuing to put out content on a regular basis is time consuming, but it\u0026rsquo;s all worth it when someone posts a thank you for an article that\u0026rsquo;s helped them.\nIn addition I\u0026rsquo;d like to take the opportunity to thank all of the sponsors that have contributed to theITHollow over the past year. You help pay the bills and keep the blog posts coming along. I certainly hope you find value in the investments as well.\n","permalink":"https://theithollow.com/2015/04/07/thank-you/","summary":"\u003cp\u003eThe vSphere-land top virtualization blog voting is now over and theITHollow.com was voted #35 which is up from number forty last year.  I wanted to take this opportunity to thank everyone who voted for the site.  There are very few rewards for all the time that is spent posting content, but seeing that the hard work is benefiting others, and is useful to the community, are among the top for sure.\u003c/p\u003e","title":"Thank You"},{"content":"vRealize Automation is a great way to allow teams to deploy virtual machines and manage them throughout their entire lifecycle. You can control exactly where you want the machines deployed and the processes that must happen in order to meet company guidelines. Sometimes, you\u0026rsquo;d like to give some additional options to the end user when they deploy a machine. To do this, we can use a custom property.\nBuild a Property in the Property Dictionary To start, lets build a new property in the property dictionary. To do this, go to the Infrastructure Tab \u0026ndash;\u0026gt; Blueprints \u0026ndash;\u0026gt; Property Dictionary. From there, we can add a \u0026ldquo;New Property Definition\u0026rdquo;. In the example below I\u0026rsquo;ve created a very generic \u0026ldquo;HollowTestProperty\u0026rdquo; and left the display name the same. A description is always a good idea and the Control Type I changed to \u0026ldquo;DropDownList\u0026rdquo;. This will mean that we can enter a series of values to be selected by the end user at the time of the request. Be sure to click the green check mark to save the entry. Once you\u0026rsquo;ve added the property definition. Click the \u0026ldquo;Edit\u0026rdquo; hyperlink under property attributes. This will allow you to add the list of values that should show up in the drop down list. Select Value list as the type, and give the attributes a name. 
Lastly, enter the values that should show up in the drop down list, separated by a comma. Again, be sure to click the green check mark to save the entry.\nNext, add a new property layout. Enter a name for the layout and a description. Click the green check mark to save the layout.\nOnce you\u0026rsquo;ve saved the layout, click the \u0026ldquo;Edit\u0026rdquo; hyperlink under property instances. Select the property definitions that are part of the layout.\nCreate a Build Profile Now that we\u0026rsquo;ve got the property created, we can build a new build profile. Go to Infrastructure \u0026ndash;\u0026gt; Blueprints \u0026ndash;\u0026gt; Build Profiles. Give the profile a name and then under properties add a new custom property. The name should match the new custom property that you created earlier. Leave the value blank as it will be selected by the end user, but be sure to click the check mark under \u0026ldquo;Prompt User\u0026rdquo;. And yes, click the green check mark to save the custom property.\nCreate or Modify the Server Blueprint Edit any of your server blueprints now and select the Build profile that we\u0026rsquo;ve created. Save the blueprint andor publish and assign the blueprint to your users as you normally would.\nWhen the end user requests a server from the blueprint, they\u0026rsquo;ll be prompted to enter a value from the drop down for your new custom property.\nRequest a Catalog Item Maybe this doesn\u0026rsquo;t seem super useful to you, but this value will be passed to vRealize Orchestrator if your blueprint calls a workflow on provisioning. To illustrate, I\u0026rsquo;ve created a workflow that was called during provisioning that logs all the variable values passed to Orchestrator. You can see that a variable called \u0026ldquo;HollowTestProperty\u0026rdquo; has a value of \u0026ldquo;Maybe\u0026rdquo;. Perhaps you want to allow the user to select a Gold, Silver or Bronze tier during provisioning and your Orchestrator workflow reads the variable and modifies the virtual machine settings accordingly. This is much more elegant than having to have three separate blueprints just because there are three different places to deploy them to.\nSummary A custom property and a build profile is a great way to customize your server blueprints. If nothing else, it\u0026rsquo;s a good way to pass additional information over to the vRealize Orchestrator so that additional customizations can be performed.\n","permalink":"https://theithollow.com/2015/03/30/custom-options-for-vrealize-automation-server-requests/","summary":"\u003cp\u003evRealize Automation is a great way to allow teams to deploy virtual machines and manage them throughout their entire lifecycle.  You can control exactly where you want the machines deployed and the processes that must happen in order to meet company guidelines.  Sometimes, you\u0026rsquo;d like to give some additional options to the end user when they deploy a machine.  To do this, we can use a custom property.\u003c/p\u003e\n\u003ch1 id=\"build-a-property-in-the-property-dictionary\"\u003eBuild a Property in the Property Dictionary\u003c/h1\u003e\n\u003cp\u003eTo start, lets build a new property in the property dictionary.  To do this, go to the Infrastructure Tab \u0026ndash;\u0026gt; Blueprints \u0026ndash;\u0026gt; Property Dictionary.  From there, we can add a \u0026ldquo;New Property Definition\u0026rdquo;.  In the example below I\u0026rsquo;ve created a very generic \u0026ldquo;HollowTestProperty\u0026rdquo; and left the display name the same.  
A description is always a good idea and the Control Type I changed to \u0026ldquo;DropDownList\u0026rdquo;.  This will mean that we can enter a series of values to be selected by the end user at the time of the request.  Be sure to click the green check mark to save the entry.\n\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/03/vRA-CustomProperty1.png\"\u003e\u003cimg alt=\"vRA-CustomProperty1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/03/vRA-CustomProperty1-1024x143.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"Custom Options for vRealize Automation Server Requests"},{"content":"VMware Tools gives you the option to synchronize the time of the guest OS with the ESXi host. Many times this isn\u0026rsquo;t necessary because the guest itself is using Network Time Protocol (NTP) or uses the Active Directory domain time.\nWhy would proper time synchronization be a problem, you might ask? Well, in a virtual environment, the CPU isn\u0026rsquo;t constantly keeping track of time like it does in a physical machine. For a more detailed explanation look at tick counting in the Timekeeping Guide.\nIn some circumstances NTP or domain timekeeping isn\u0026rsquo;t possible and you need a way to make sure the time on the guest OS is set and the guest keeps time correctly. For these situations you can use the VMware Tools Time Sync.\nThere are a couple of things to keep in mind when using the \u0026ldquo;Synchronize guest time with host\u0026rdquo; option.\nTime Zones If your virtual machine is in a different time zone from the ESXi host it resides on, this is ok. The virtual machine will keep time with the ESXi host but will respect the timezone. For instance, if the ESXi host time is 8:00 pm Central Time, but the guest virtual machine is set to Pacific Time, the guest OS will synchronize and show 6:00 pm.\nTime Changing If the virtual machine time drifts and needs to be reset, VMtools periodically checks the time once per minute to ensure the time hasn\u0026rsquo;t drifted. If the time is determined to be wrong, then there are a couple of options.\nVM Time is behind the host time: If the virtual machine is running behind the host time, it is immediately updated to match the ESXi host time.\nVM Time is ahead of the host time: If the virtual machine is running ahead of the host, the guest clock is slowed down until the host matches the guest. It does not immediately change the time.\nImmediate Time Sync Besides the periodic time sync checks, there are also some circumstances where a time sync occurs immediately.\nAnytime the VMware Tools Daemon is started. For instance restarting the Windows service, or rebooting the guest OS\nReverting to a snapshot causes the time to automatically sync.\nShrinking a virtual machine disk\nCheck VMtools Sync You can check to see if VMware tools is set to sync, by looking in the \u0026ldquo;Edit Settings\u0026rdquo; menu of the virtual machine and looking for the \u0026ldquo;Synchronize guest time with host\u0026rdquo; setting. If you don\u0026rsquo;t have access to vCenter, you can also look on the guest OS by running the VMwareToolboxCmd.exe timesync status command on a Windows guest. For a Linux guest, try running the vmware-toolbox-cmd timesync status command. 
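If you would rather audit a whole environment from vCenter instead of checking guests one at a time, the same setting is exposed through the vSphere API, so a short PowerCLI snippet can report it for every VM. This is a minimal sketch; the vCenter name is a placeholder and it assumes the VMware PowerCLI module is already installed.
# Connect to vCenter (placeholder name) and report the tools time sync setting per VM
Connect-VIServer vcenter.lab.local
Get-VM | Select-Object Name, @{Name='SyncTimeWithHost'; Expression={$_.ExtensionData.Config.Tools.SyncTimeWithHost}}
Any VM reporting True is following the host clock as described above, while False means the guest is relying on NTP or domain time on its own.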
The guest commands above should be run from the VMware Tools install directory.\n","permalink":"https://theithollow.com/2015/03/24/vmware-tools-time-syncronization/","summary":"\u003cp\u003eVMware Tools gives you the option to synchronize the time of the guest OS with the ESXi host.  Many times this isn\u0026rsquo;t necessary because the guest itself is using Network Time Protocol (NTP) or uses the Active Directory domain time.\u003c/p\u003e\n\u003cp\u003eWhy would proper time synchronization be a problem, you might ask?  Well, in a virtual environment, the CPU isn\u0026rsquo;t constantly keeping track of time like it does in a physical machine.  For a more detailed explanation look at tick counting in the \u003ca href=\"http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf\"\u003eTimekeeping Guide\u003c/a\u003e.\u003c/p\u003e","title":"VMware Tools Time Syncronization"},{"content":"If you\u0026rsquo;re in the market for a Load Balancer and don\u0026rsquo;t mind that it\u0026rsquo;s virtual, check out Kemp\u0026rsquo;s Virtual Load Balancer. Even better is if you want to try stuff out in your lab because you can get the Virtual Load Balancer for free! There are some limitations, I mean everyone has to make money somehow and there\u0026rsquo;s no reason to buy the cow if you get the milk for free, am I right?\nMany times I want to check out some stuff in my lab but a load balancer is usually a luxury I don\u0026rsquo;t have so a very simple version of a common deployment scenario is usually what I settle for. Not any longer.\nDeployment The appliance is super simple to deploy and configure. Download an OVF from freeloadbalancer.com and import it into your vSphere environment. From there, configure some basic settings like IP Address through the console.\nOnce done, you\u0026rsquo;ll log in to the load balancer from a web page and select the license type. You\u0026rsquo;ll need to register it with your online account to activate it.\nMain Menu\nLoad Balance Setup Let\u0026rsquo;s set up a load balanced webpage. I deployed two web servers each with a very simple web page that displays the name of the server. Next we set up the load balancer to round robin the connections.\nClick on \u0026ldquo;Add New\u0026rdquo; under the Virtual Services. Add a new IP Address. This is the VIP for our web servers.\nNext we can give the VIP a name if we wish, but we need to select a service type. For this we\u0026rsquo;ve picked HTTP/HTTPS since it\u0026rsquo;s a web server. You can modify the scheduling method as well, but I left it at Round Robin for simplicity\u0026rsquo;s sake.\nNext we scroll to the bottom of the page to add the \u0026ldquo;Real Servers\u0026rdquo;. These are the actual web servers that will be load balanced. Literally they are \u0026ldquo;Real Servers\u0026rdquo; (of course in my lab they are Real Virtual Machines). I\u0026rsquo;ve added the two machines.\nOpen up a web browser and navigate to our VIP. You can see from the screenshot below that I can navigate to the VIP and get either of the two Web Servers depending on how the load balancer re-directs us. Summary There are definitely some limitations with the free version of the Kemp Load Balancer, but it\u0026rsquo;s great for a home lab and may work in a pinch for you in other situations. I could envision this being available at a DR site for a \u0026ldquo;just in case\u0026rdquo; solution. 
It would be pretty quick to update the licenses to increase the connection count and bandwidth limitations while already having your configurations set.\n","permalink":"https://theithollow.com/2015/03/16/kemp-virtual-load-balancer-for-free/","summary":"\u003cp\u003eIf you\u0026rsquo;re in the market for a Load Balancer and don\u0026rsquo;t mind that it\u0026rsquo;s virtual, check out Kemp\u0026rsquo;s Virtual Load Balancer.  Even better is if you want to try stuff out in your lab because you can get the Virtual Load Balancer for free!  There are some limitations, I mean everyone has to make money some how and theres no reason to buy the cow if you get the milk for free, am I right?\u003cimg alt=\"DSC02088\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/03/DSC02088-225x300.jpg\"\u003e\u003c/p\u003e","title":"Kemp Virtual Load Balancer for Free!"},{"content":"Many of my daily activities at work now revolve around the idea of a Hybrid Cloud so some of my home lab activities have also followed suit. I realized it had been a while since I wrote up the particulars of my home lab and I\u0026rsquo;ve added some equipment so this gives me a good opportunity to show some of the upgrades.\nConfiguration The environment consists of four physical ESXi hosts that run most of my virtual machines. These servers have three nics that handle all of the virtual machine traffic and the NFS Storage traffic to a pair of Synology NAS devices.\nA fourth host runs Microsoft Windows on bare metal with 32 GB of RAM, and inside that are three ESXi hosts running inside of VMware Workstation. These ESXi hosts are used for testing things without affecting my management servers. For instance, since ESXi is virtual, a quick clone and host profile can build a host very quickly for testing VSAN, spinning up a DR Site or just adding hosts for an endpoint for vRealize Automation.\nA Synology DS1815+ is my primary shared storage device with four 4TB spinning disks and a pair of 240GB SSDs as cache. A second Synology DS1513+ is used for ISOs and virtual machine backups and photos.\nA Cisco 5505 ASA is my firewall and it provides Anyconnect VPN Access as well as a Site-Site VPN to Amazon Web Services for a vRealize Automation Endpoint.\nLastly, an HP v1910-24G switch ties everything together with some layer3, and LACP capabilities.\nHybrid Cloud It\u0026rsquo;s not a Hybrid Cloud without a Private Cloud, a Public Cloud some Automation and a Catalog. Also, if you want some nice instructions on setting up vRealize Automation with Amazon Web Services, check out Kendrick Coleman\u0026rsquo;s blog. 
He\u0026rsquo;s got a great series on the whole setup.\nParts List ESXi Servers – Quantity 2 Case: Lian Li PC-V351B MicroATX PSU: SeaSonic Platinum SS-400FL2 Fanless 400W RAM: Kingston 16GB (4 X 8GB) 240-Pinn DDR3 Unbufferred ECC Motherboard: Supermicro MBD-X9SCM-F-O CPU: Intel Xeon E3-1230 V2 Ivy Bridge 3.3GHz NICs: Intel EXPI9301CTBLK 1000Mbps PCI-Express, SuperMicro Dual Port Gigabit Card Boot: Kingston DataTraveler 101 G2 8GB USB 2.0 Local SSD: 64 GB Intel SSD ESXi Server - Quantity 1 Case: Antec P280 ATX Mid Tower Case RAM: Crucial 32GB (2 X 16GB) DDR4 CPU: Intex Xeon E5-2603V3 Motherboard: Supermicro X10SRL-F-O PSU: EVGA 500W-1 80+ ESXi Server - Quantity 1 Case: HP Gen8 Microserver Storage: Dual 480GB SSD’s from OCZ Nested ESXi Host – Quantity 1 (Used for a DR Site, or vRealize Automation Endpoint Cluster) Motherboard: Asus M5A97 CPU: AMD 6200 FX Six Core Storage Arrays Synology Array: 1- Synology DS1513+\nHard Drives: Five - 1 TB Wester Digital Blue 7200 3.5 inch hard drives\nSynology Array: 1 - Synology DS1815+\nHard Drives:\nFour - 4 TB Western Digital Purple 7200 3.5 inch hard drives Two - 240 GB Kingston SSD Now drives Networking Equipment Layer 3 Switch: Cisco WS03750G-24T Switch Firewall: Cisco ASA Wireless Router: Dlink Wireless N+ Router ","permalink":"https://theithollow.com/2015/03/09/hollow-lab-2015-baby-dragon-hybrid-cloud/","summary":"\u003cp\u003eMany of my daily activities at work now revolve around the idea of a Hybrid Cloud so some of my home lab activities have also followed suit.  I realized it had been a while since I wrote up the particulars of my home lab and I\u0026rsquo;ve added some equipment so this gives me a good opportunity to show some of the upgrades.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/03/HollowLab1.jpg\"\u003e\u003cimg alt=\"HollowLab1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/03/HollowLab1-1024x658.jpg\"\u003e\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/03/20150215_124454.jpg\"\u003e\u003cimg alt=\"20150215_124454\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/03/20150215_124454-1024x576.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"configuration\"\u003eConfiguration\u003c/h1\u003e\n\u003cp\u003eThe environment consists of four physical ESXi hosts that run most of my virtual machines.  These servers have three nics that handle all of the virtual machine traffic and the NFS Storage traffic to a pair of Synology NAS devices.\u003c/p\u003e","title":"Hollow Lab 2015 - Baby Dragon Hybrid Cloud"},{"content":"There is a new kid in town when it comes to infrastructure monitoring. Opvizor is a pretty neat little solution for identifying issues in your environment before they become a problem. The install was simple enough, only requiring me to run an installer and connect it to my vCenter Server. Once that was done, it was a matter of clicking a button to scan and upload my data back to Opvizor\u0026rsquo;s servers (anonymized data of course). These uploads are scheduled to prevent all this manual nonsense, but I couldn\u0026rsquo;t wait to see what my environment looked like so I uploaded it right away.\nHere\u0026rsquo;s what you could expect to see after an upload. NOTE: Of course I wouldn\u0026rsquo;t have had any errors in my lab, so I had to conjure some up just for a test. :)\nThe graphical user interface is pretty slick and very responsive. 
From the main screen, you can get a quick view of your environment and what might be wrong with it. If you drill down into individual issues, you can get a really detailed description of why it\u0026rsquo;s marked as an issue. From there, it also explains how to modify the environment to mitigate the threat, or you can mark that threat as not important, maybe because you have a darn good reason for the settings being the way they are.\nMy favorite part of the tool, was not only did it show you how to make a configuration change, but it would also show you the powershell code to fix the issue across your entire environment if you wanted to do it! This makes these types of reconfigurations a snap to resolve.\nYou could also check out some of the reporting features. There are reports for compute, networking, storage, issues, memory and others in either PDF, HTML or CSV format.\nThe last thing I noticed was the capability to see how your environment stacks up against the average ones.\nSummary Overall, I like the idea of using big data analytics to resolve issues, and it\u0026rsquo;s nice to be able to see some averages, but you must always be careful with that data. Just because everyone is doing it, doesn\u0026rsquo;t make it right.\nThe tool is very easy to use and can give some very quick insight into an environment as well as ongoing configuration management. If you\u0026rsquo;re interested, check it out for yourself. There is a free trial available and some other tools like the recently released \u0026quot; Snapwatcher\u0026quot;.\n","permalink":"https://theithollow.com/2015/03/02/opvizor/","summary":"\u003cp\u003eThere is a new kid in town when it comes to infrastructure monitoring.  \u003ca href=\"http://www.opvizor.com/\"\u003eOpvizor\u003c/a\u003e is a pretty neat little solution for identifying issues in your environment before they become a problem.  The install was simple enough, only requiring me to run an installer and connect it to my vCenter Server.  Once that was done, it was a matter of clicking a button to scan and upload my data back to Opvizor\u0026rsquo;s servers (anonymized data of course).  These uploads are scheduled to prevent all this manual nonsense, but I couldn\u0026rsquo;t wait to see what my environment looked like so I uploaded it right away.\u003c/p\u003e","title":"Opvizor"},{"content":"I recently made some changes to my home lab and had to create a new Cluster because of my EVC mode when I was faced with migrating my vC Ops vApp to the new cluster. I moved the hosts over, but the vApp wouldn\u0026rsquo;t go with them like the virtual machines did.\nvAppMigrate1\nMy first attempt was to export the vApp to an OVF file and reimport it to the new cluster which failed with an error.\nAfter doing some looking and finding this very useful knowledge base article, it was obvious that I needed to create a new virtual machine as a place holder.\nExport your vApp to an OVF The first thing you should do is check the timezone, IP Addresses and virtual networks that your VMs inside of the vApp. You\u0026rsquo;ll need to supply this information again later.\nNext, Create a virtual machine inside of the vApp and use the same network that your previous Virtual Machines were using. (I should note, that if you\u0026rsquo;re doing this before you\u0026rsquo;ve moved your VMs to the new cluster, it might not be necessary to create this temporary virtual machine. 
For my temp VM you can see that I set my memory and hard disk levels very low since this is a fake VM with no operating system. I also made sure to configure the Nic to use the same virtual network that my VMs from the original vApp were on. This is how we eliminate the error I had above.\nWhen you\u0026rsquo;re all done you should see the new VM under your vApp.\nNow, we can export the vApp to an OVF by right clicking on the powered off vApp and choosing \u0026ldquo;export OVF Template\u0026rdquo;. Save it to someplace on your hard drive.\nWhen the vApp has been exported, you can delete the vApp from the old cluster.\nImport OVF Template Now we will navigate to our new cluster, right click and choose \u0026ldquo;Deploy OVF Template\u0026rdquo;. Choose the file location that you exported the vApp to.\nYou\u0026rsquo;ll be handed off to a wizard that is straight forward enough that I won\u0026rsquo;t go through all of it, but two screens do need to be modified from the defaults. The first of which is the \u0026ldquo;Setup Networks\u0026rdquo; section. Of course, be sure to select the correct destination network, but the important part is to change the IP allocation Policy from \u0026ldquo;DHCP\u0026rdquo; to \u0026ldquo;Static-Manual\u0026rdquo;.\nThe very next screen you come to will need you to change the timezone, and set the IP addresses of the UI VM and the Analytics VM. We collected all of this information at the very beginning of this post.\nClick finish to deploy the vApp.\nCleanup Now that we\u0026rsquo;ve imported the vApp to the new cluster, you\u0026rsquo;ll have your temp virtual machine still in the vApp. We need to move the UI VM and Analytics VM back to this new vApp.\nOnce that\u0026rsquo;s done, you can go ahead and delete the \u0026ldquo;Temp\u0026rdquo; VM that we used during the migration. Power on your vApp and you\u0026rsquo;re back in business!\n","permalink":"https://theithollow.com/2015/02/23/move-vapp-clusters/","summary":"\u003cp\u003eI recently made some changes to my home lab and had to create a new Cluster because of my EVC mode when I was faced with migrating my vC Ops vApp to the new cluster.  I moved the hosts over, but the vApp wouldn\u0026rsquo;t go with them like the virtual machines did.\u003c/p\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/06/vAppMigrate1.png\"\n         alt=\"vAppMigrate1\" width=\"348\"/\u003e \u003cfigcaption\u003e\n            \u003cp\u003evAppMigrate1\u003c/p\u003e\n        \u003c/figcaption\u003e\n\u003c/figure\u003e\n\n\u003cp\u003eMy first attempt was to export the vApp to an OVF file and reimport it to the new cluster which failed with an error.\u003c/p\u003e","title":"Move a vApp Between Clusters"},{"content":"You\u0026rsquo;ve got to be a serious geek to want to install your own SSL Certificates on your home NAS. I mean come on, who really has their own certificate authority sitting around at home and is so annoyed with a little warning page when they access the GUI? Well, since you\u0026rsquo;ve landed on this page, I assume that I\u0026rsquo;m in some similar company :)\nLogin to your Synology NAS and open the control panel. Click Security and then the \u0026ldquo;Certificates\u0026rdquo; tab at the top. You\u0026rsquo;ll notice the subtle \u0026ldquo;Self-signed certificate\u0026rdquo; status blazoned in red lettering. Don\u0026rsquo;t worry, thats what we\u0026rsquo;re going to fix. Click the create certificate button to open a certificate wizard. 
Once the wizard opens, click the radio button next to \u0026ldquo;Create certificate signing request (CSR). Click Next.\nEnter some information about your new certificate. Private key length (the larger the more secure of course), a common name which should match the NAS DNS name, and then some additional information about your organization and locality. Click Next.\nOnce done, click the download button to download the CSR and the private key.\nYou\u0026rsquo;ll need to extract the files to a folder that you can access.\nGo to your certificate server to request a certificate. In my case I\u0026rsquo;m using a Microsoft Certificate Authority so I can go to https://[CASERVER]/certsrv/ . If you need help setting up your home lab certificate authority, check out one of my previous series to get you going.\nClick on Request a certificate.\nChoose \u0026ldquo;advanced certificate request\u0026rdquo;\nOpen the CSR that you downloaded from your Synology array and paste the contents into the certificate request field. Click Submit.\nClick Base 64 encoded and then choose \u0026ldquo;Download certificate\u0026rdquo;.\nGo back to the Synology administration page and click the \u0026ldquo;Import certificate\u0026rdquo; button. Here, you\u0026rsquo;ll need to locate three files.\nThe Private Key - This file will be one of the two files that were originally downloaded from the Synology and was in the compressed file with the CSR. The Certificate - This file will be the file we just downloaded from the certificate authority. Intermediate certificate - This file will be the Root Cert or an Intermediate Cert. You can get this at the https://[CASERVER]/certsrv/ on the main page. Click the \u0026ldquo;Download a CA Certificate, certificate chain or CRL\u0026rdquo;. Click OK\nLook at that! Now we\u0026rsquo;ve got some great looking green statuses! Cool!\nTo ensure that you\u0026rsquo;re using the new certificates to connect to your NAS, go to the Network section in control panel and then the \u0026ldquo;DSM Settings\u0026rdquo; tab. Click the \u0026ldquo;Enable HTTPS connection. If you\u0026rsquo;re like me, I didn\u0026rsquo;t want to type in that pesky \u0026ldquo;S\u0026rdquo; on HTTP\u0026quot;S\u0026rdquo; every time so I checked the box to automatically redirect http connections.\nGo Access your NAS and look for the sweet https:// valid certificate indicator in your favorite browser.\n","permalink":"https://theithollow.com/2015/02/17/add-ssl-certificates-to-your-synology-nas/","summary":"\u003cp\u003eYou\u0026rsquo;ve got to be a serious geek to want to install your own SSL Certificates on your home NAS.  I mean come on, who really has their own certificate authority sitting around at home and is so annoyed with a little warning page when they access the GUI?  Well, since you\u0026rsquo;ve landed on this page, I assume that I\u0026rsquo;m in some similar company :)\u003c/p\u003e\n\u003cp\u003eLogin to your Synology NAS and open the control panel.  Click Security and then the \u0026ldquo;Certificates\u0026rdquo; tab at the top.  You\u0026rsquo;ll notice the subtle \u0026ldquo;Self-signed certificate\u0026rdquo; status blazoned in red lettering.  Don\u0026rsquo;t worry, thats what we\u0026rsquo;re going to fix. 
\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/01/Synology-SSL1.png\"\u003e\u003cimg alt=\"Synology-SSL1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/01/Synology-SSL1.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"Add SSL Certificates to your Synology NAS"},{"content":"Amazing news that the latest version of vSphere has been announced. vSphere 6 is now just around the corner from being generally available, but one can assume that the VCP 6 exam is still several weeks away from being available. If you\u0026rsquo;re like me, your VCP 5 is scheduled to expire in March. AHHHHHHHHHHH!!!!!!!!!!!!!!!!!!!! Quick, study for an exam that you\u0026rsquo;ve already taken and passed in the past so that you can keep your certification and then take the VCP 6 which is likely just around the corner.\nWell, yeah, I know this is kind of a bummer, but the good news is that you can take the VCP 5 Delta exam online from the comfort of your own laptop, and if you do it before March (I assume that\u0026rsquo;s about the only reason to still take the version 5 exam) then you can do it for half price. Be sure to enter the promotion code VCP550D and request authorization. http://mylearn.vmware.com/mgrReg/plan.cfm?plan=51919\u0026amp;ui=www_cert\nIf you\u0026rsquo;re wondering about the exam, I can tell you that just because you\u0026rsquo;re already a VCP and that you work with vSphere on a regular basis, and you may be able to look up some things since it\u0026rsquo;s an open book exam, you should still not take the exam lightly. I would take some time to study for it because it\u0026rsquo;s not a formality. The test questions can take you by surprise.\nI was expecting to have a lot of questions about changes between versions and some basics just to re-validate the skillset, but I found the test to be fairly challenging. It\u0026rsquo;s not something that will likely require hours and hours of studying but you should be prepared for it.\nIn any case, take advantage of the discounted price and the fact that you can take it at home, before your VCP expires. The requirements to certify from scratch are far more painful than just sitting a 90 minute open book exam.\n","permalink":"https://theithollow.com/2015/02/09/renew-your-vcp-5-certification-now/","summary":"\u003cp\u003eAmazing news that the latest version of vSphere has been announced.  vSphere 6 is now just around the corner from being generally available, but one can assume that the VCP 6 exam is still several weeks away from being available.  If you\u0026rsquo;re like me, your VCP 5 is scheduled to expire in March.  AHHHHHHHHHHH!!!!!!!!!!!!!!!!!!!!   Quick, study for an exam that you\u0026rsquo;ve already taken and passed in the past so that you can keep your certification and then take the VCP 6 which is likely just around the corner.\u003c/p\u003e","title":"Renew your VCP 5 Certification Now"},{"content":"vSphere 6.0 is now available and there are some great new enhancements with the new version. Here are some of the many highlights from today\u0026rsquo;s announcement.\nSpeeds and Feeds As with the new version of anything, things are bigger and faster. vSphere 6.0 is no exception.\n64 hosts per cluster, up from 32\n8000 Virtual Machines per Cluster, up from 4000\n480 CPUs, up from 320 CPUs\n12 TB RAM, up from 4 TB (if someone has 12 TB of RAM in a box, please let me know how long it takes to do a memory check. vSphere 7 might be out by then.) 
1000 Virtual Machines per host, up from 512 Virtual Machines\nVirtual Machine Hardware version 11 allows for:\n128 vCPUs\nHot-add RAM now is vNUMA aware\nSerial Ports now have a maximum of 32 (I KNOW CAN YOU BELIEVE HOW AWESOME THIS IS!!!!!)\nvMotion is supported for Cluster Across Boxes with physical compatibility mode RDMs\nvCenter 6 now has parity between the Windows version and the appliance version. If you use the appliance though, your database options are still limited to vPostgres or Oracle.\n1000 VMs per vCenter\n10,000 Powered-On VMs per vCenter\n64 Hosts per Cluster\n8,000 VMs per Cluster\nWeb Client is drastically improved. There have been plenty of grumblings about the vSphere web client in the past, but it looks like we should give the new client a go. Mainly because it will be a necessary piece of the infrastructure for some time to come. In any case, try using it with an open mind. Performance has been dramatically improved, and the layout has been adjusted to look more like the C# client. I think we\u0026rsquo;ll like it.\nFault Tolerance Fault Tolerance is nothing new, but there has been a big wait for vSMP for FT. I mean if you\u0026rsquo;ve got a VM that\u0026rsquo;s so important that you need two of them in lockstep, what are the odds that this VM only has one processor? Usually, really important VMs also have high performance characteristics. Well, relax, you can now protect up to 4 vCPUs in a single VM, but pay close attention to the caveats.\nFault Tolerance is now able to be hot configured\nProtect 4 vCPUs\nAllows snapshots\nNow allows paravirtual devices\nWorks with SRM, and VDS\nCaveats\nRequires an additional vmdk (This makes storage fault tolerant as well which requires twice as much storage, but also adds to availability)\nRequires 10 GbE\nMax of 4 protected virtual machines\nIntroducing the Platform Service Controller There is a new construct in town and it\u0026rsquo;s the Platform Services Controller (PSC). This controller manages Single Sign-On (SSO), Licensing and Certificate Authority. The PSC will link all of your vCenters together automatically if they are in the same SSO domain. You can imagine this controller being the new way to manage highly available vCenters in the future.\nAlso, the PSC is a certificate authority. All of those pesky warnings about untrusted certificates are soon a thing of the past. And no more slogging through updating certificates on your hosts and services. Now certificates will be as easy as right clicking and asking for a new one. You can make the PSC an intermediate CA if you\u0026rsquo;ve got an existing PKI.\nBig Changes for vMotion There are quite a few changes to vMotion now and I think we\u0026rsquo;ll be seeing a lot of new uses for the technology.\nvMotion between switches. (This requires Layer 2 connectivity. It will not re-IP the machine, but you can now migrate between switches if you need)\nMigrating between vDS switches keeps the metadata for historical purposes.\nYou can now vMotion between vCenters. This may be great for migrations in the future.\nLong Distance vMotion as long as RTT is less than 100ms\nvMotion over Layer 3. Now we don\u0026rsquo;t even need to have layer 2 adjacent hosts to vMotion. We can now route to do a vMotion. ","permalink":"https://theithollow.com/2015/02/02/vsphere-6-0-announced/","summary":"\u003cp\u003evSphere 6.0 is now available and there are some great new enhancements with the new version.  
Here are some of the many highlights from today\u0026rsquo;s announcement.\u003c/p\u003e\n\u003ch2 id=\"fast-speeds-and-feeds\"\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/02/FAST.png\"\u003e\u003cimg alt=\"FAST\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/02/FAST-300x190.png\"\u003e\u003c/a\u003e Speeds and Feeds\u003c/h2\u003e\n\u003cp\u003eAs with the new version of anything, things are bigger and faster.  vSphere 6.0 is no exception.\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e64 hosts per cluster, up from 32\u003c/li\u003e\n\u003cli\u003e8000 Virtual Machines per Cluster, up from 4000\u003c/li\u003e\n\u003cli\u003e480 CPUs, up from 320 CPUs\u003c/li\u003e\n\u003cli\u003e12 TB RAM, up from 4 TB (if someone has 12 TB of RAM in a box, please let me know how long it takes to do a memory check.  vSphere 7 might be out by then.)\u003c/li\u003e\n\u003cli\u003e1000 Virtual Machines per host, up from 512 Virtual Machines\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eVirtual Machine Hardware version 11 allows for:\u003c/p\u003e","title":"vSphere 6.0 Announced"},{"content":"Unfortunately, not all software is perfect and from time to time I\u0026rsquo;ve run into issues with SRM as well. This post is a list of items I often see during SRM deployments and some information to troubleshoot issues.\nLog File Locations SRM Logs: c:\ProgramData\VMware\VMware vCenter Site Recovery Manager\Logs\nInstallation logs: %USERPROFILE%\Application Data\VMware\VMware Site Recovery Manager\Logs\nStorage Replication Adapter Logs: This depends on the SRA Vendor, but try Program Files\SRANAME to start with\nChange the default SRM Install Directory It\u0026rsquo;s not really that uncommon for Windows server teams to install applications in a drive other than the C: drive, but SRM might give you a hurdle to cross there. SRM uses Perl for some of its install, and the Perl default installation needs both a long and a short name for directory listings. In order to allow SRM to be installed into a drive other than C:, you modify the following registry key and restart the SRM Server before the install.\nHKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\NtfsDisable8dot3NameCreation\nChange the value to 1 and click OK.\nFull information about the issue can be found at the following VMware KB Article.\nDatabase Permissions A constant headache I see during SRM installations is getting SRM to create the SQL database. This is either a connection issue such as the Windows firewall blocking ports between the SRM server and the SQL server, or most likely the database account does not have enough permissions on the database.\nThe account used to connect to the SQL Server should have the following permissions in SQL:\nDatabase permissions\ndb_owner\npublic\nServer Roles\nBulkadmin\nDBCreator\nPublic\nMany times, I see that the database permissions are set correctly, but the account is missing the server roles. This will prevent SRM from creating all of the tables in the database. 
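If you want to confirm the server roles before kicking off the installer, you can check them from the SQL side as well. Here is a rough sketch using the Invoke-Sqlcmd cmdlet; the instance name SQL01 and the login DOMAIN\svc_srm are placeholders for your own environment, and the cmdlet comes from the SqlServer (or older SQLPS) PowerShell module rather than anything SRM-specific.
# Placeholder names: adjust SQL01 and DOMAIN\svc_srm to match your SQL server and SRM service account
Import-Module SqlServer
$query = @"
SELECT r.name AS server_role
FROM sys.server_role_members rm
JOIN sys.server_principals r ON rm.role_principal_id = r.principal_id
JOIN sys.server_principals m ON rm.member_principal_id = m.principal_id
WHERE m.name = 'DOMAIN\svc_srm';
"@
Invoke-Sqlcmd -ServerInstance "SQL01" -Query $query
If bulkadmin or dbcreator is missing from the results, add them before re-running the SRM installer.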
Check this VMware KB article out for more information.\nDatabase Schema Another issue with the database could be that there are multiple schemas on the database.\nHere are three rules to follow when using multiple schemas on the SQL server.\nThe SRM database schema must have the same name as the database user account.\nSRM must be the owner of the SRM database schema.\nThe SRM database schema must be the default schema for the SRM user account.\nThis sort of issue happens on occasion, but in most of the databases I see there is only one schema, so this isn\u0026rsquo;t an issue.\nThe VMware Site Recovery Manager Service Won\u0026rsquo;t Start After a default installation, the SRM Service is created and the default login is \u0026ldquo;Local System\u0026rdquo;. This is OK if you are using SQL Server Authentication, but if the database authentication is set to \u0026ldquo;Windows Authentication\u0026rdquo; then the service needs to run as a domain user account with access to the database.\nChange Service Account Passwords This one might surprise you, but changing some of the account information means that you\u0026rsquo;ll need to re-run the installer and perform a modify action on the installation. There is likely another way to do this that would involve updating fields in the database or some XML files as well as updating the service passwords, but the easiest way to make these modifications is to re-run the installer by going to \u0026ldquo;Programs and Features\u0026rdquo; on the server and clicking modify on the SRM service. Re-run the install process and only modify the pieces that need to be changed. This is in the official documentation found here.\nFailover Issues A common issue I see during \u0026ldquo;test\u0026rdquo; failovers is a problem mounting the snapshotted datastores at the recovery site. A typical error message may be \u0026ldquo;Failed to Create Snapshots of Replica Devices\u0026rdquo; or \u0026ldquo;Timed out (300 seconds) while waiting for SRA to complete \u0026lsquo;discoverDevices\u0026rsquo; command\u0026rdquo;. Many times this is related to the SRA, and some tweaks can be made to the timeout settings.\nIn the Advanced Settings of the site I typically end up modifying the following:\nstorageProvider.hostRescanRepeatCnt to 2\nstorageProvider.hostRescanTimeoutSec to something higher than 300. It all depends on the environment. Sorry I can\u0026rsquo;t be more specific than that, but it depends.\nstorageProvider.waitForAccessibleDatastoreTimeoutSec to something higher than 30, and again it depends.\nThere are plenty of advanced settings to tweak here, but most of the time the settings I have to change are in the \u0026ldquo;Storage Provider\u0026rdquo; section, and in SRM 5.8 they give you a nice little summary about each setting to help you out.\nUn-Replicated Devices Errors Sometimes you\u0026rsquo;ll notice that some of your virtual machines will be in a warning state in the protection group and the Protection Status will say something like \u0026ldquo;Device Not Found\u0026rdquo;. This is usually because there are one or more disks that aren\u0026rsquo;t being replicated to the recovery site. YES, this can include attached CD-ROM devices. It\u0026rsquo;s an easy fix though. Open up the VM Protection Settings and go to the device that is throwing the errors. Click the \u0026ldquo;Detach\u0026rdquo; button. This tells SRM to detach the device for failover purposes and don\u0026rsquo;t worry about it. If you want a quick way to spot these devices across the environment first, see the short PowerCLI sketch below. 
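To find these devices across the whole environment before you start clicking through each VM, a quick PowerCLI query can help. This is only a sketch: it assumes you are already connected with Connect-VIServer and that your replicated datastore names start with SRM-Replicated, which is a made-up naming pattern to adjust for your own environment.

# Rough PowerCLI sketch: list disks and CD-ROM devices that do not live on replicated datastores.
# The '^\[SRM-Replicated' pattern is a placeholder for your replicated datastore naming convention.
Get-VM | Get-HardDisk |
    Where-Object { $_.Filename -notmatch '^\[SRM-Replicated' } |
    Select-Object Parent, Name, Filename

# Attached CD-ROM/ISO devices can trigger the same warning, so check those too.
Get-VM | Get-CDDrive |
    Where-Object { $_.IsoPath } |
    Select-Object Parent, IsoPath

Once you know which devices are the culprits, the Detach option above takes care of the rest.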
This may be especially useful for virtual machines that have their own paging disk that you\u0026rsquo;d like to separate from the replicated data to conserve on bandwidth. Just be sure to detach that disk so everything runs smoothly.\n","permalink":"https://theithollow.com/2015/01/27/srm-troubleshooting/","summary":"\u003cp\u003eUnfortunately, not all software is perfect and from time to time I\u0026rsquo;ve run into issues with SRM as well.  This post is a list of items I often see during SRM deployments and some information to troubleshoot issues.\u003c/p\u003e\n\u003ch2 id=\"log-file-locations\"\u003eLog File Locations\u003c/h2\u003e\n\u003cp\u003eSRM Logs:  c:programDataVMwareVMware vCenter Site Recovery ManagerLogs\u003c/p\u003e\n\u003cp\u003eInstallation logs:  %USERPROFILE%Application DataVMwareVMware Site Recovery ManagerLogs\u003c/p\u003e\n\u003cp\u003eStorage Replication Adapater Logs: This depends on the SRA Vendor, but try program filesSRANAME to start with\u003c/p\u003e","title":"SRM Troubleshooting"},{"content":"If you\u0026rsquo;re looking advertise on theITHollow.com please see the available options provided by BuySellAds.com below. theITHollow.com is an independent website and blog posts on this site are not paid for. Purchasing advertising space does not mean that posts will be written on the products being advertised.\n","permalink":"https://theithollow.com/about/sponsors/","summary":"\u003cp\u003eIf you\u0026rsquo;re looking advertise on theITHollow.com please see the available options provided by BuySellAds.com below. theITHollow.com is an independent website and blog posts on this site are not paid for. Purchasing advertising space does not mean that posts will be written on the products being advertised.\u003c/p\u003e","title":"Sponsors"},{"content":"I got a chance to take a quick peak at the vCloud Air beta during the Early Access Program and wanted to share some of the experience.\nI found the solution very simple to use and straight forward, even without needing to look at any install guides or user manuals. The interfaces were very intuitive. Right off the bat, you get to select your virtual Private Cloud region and then create some virtual machines from pre-defined templates. The templates are based on bare bones operating systems templates such as CentOS, Windows 2012 etc and depending on the type of template you choose, a different pricing methodology will be applied. Let\u0026rsquo;s face it, open source OS\u0026rsquo;s are free and Windows isn\u0026rsquo;t. Gotta pay the bills right?\nYou can see from the below screenshot that it\u0026rsquo;s pretty intuitive on how to create a new VM.\nOnce I got a VM created, I started poking around to find some additional goodies. When I went to the \u0026ldquo;Gateways\u0026rdquo; tab i found the option to manage my VPC in vCloud Director. It\u0026rsquo;s nice that this option is still available so that advanced users can still pull out the old vCD configurations and get started with some advanced configurations.\nEditing a network configuration was pretty simple, even without using vCloud Director. Just click the hanger on the network and choose \u0026ldquo;Edit Network\u0026rdquo;\nFrom there, it\u0026rsquo;s simple to change the name to something descriptive, as well as modifying the range. The gateway and subnets aren\u0026rsquo;t configurable since that would depend on a gateway change.\nDigging a little further I was able to add a Public IP address to my gateway. 
It\u0026rsquo;s one thing to add a virtual machine to a public cloud, but most likely you need to access that through the internet. My plan was to assign a public IP Address and configure some Network Address Translation as a test. You also have the options to setup a Virtual Private Network (VPN) if that makes sense for you, and I assume that it would for most people.\nOnce the public IP Address was added, a firewall rule got added to allow for ICMP (ping) to be allowed.\nLastly, a Destination NAT rule was added and that completed the configuration.\nSummary My first real experience with vCloud Air was pretty positive. Competitors such as Amazon and Microsoft might be ahead of the game here, but the user interfaces for vCloud Air impressed me about how easy it was to use. vCloud Air also has me very intrigued by some of their services which I wasn\u0026rsquo;t able to get access to, such as their VMware SRM endpoint. Being able to connect my local datacenter to vCloud Air for a disaster recovery endpoint seems like a great idea for a smaller company that can\u0026rsquo;t afford a second site, but needs a DR plan for compliance reasons.\nIf you want to try out the VMware OnDemand Portal check it out for yourself here.\n","permalink":"https://theithollow.com/2015/01/20/vcloud-air-2014-beta-impressions/","summary":"\u003cp\u003eI got a chance to take a quick peak at the vCloud Air beta during the Early Access Program and wanted to share some of the experience.\u003c/p\u003e\n\u003cp\u003eI found the solution very simple to use and straight forward, even without needing to look at any install guides or user manuals.  The interfaces were very intuitive.  Right off the bat, you get to select your virtual Private Cloud region and then create some virtual machines from pre-defined templates.  The templates are based on bare bones operating systems templates such as CentOS, Windows 2012 etc and depending on the type of template you choose, a different pricing methodology will be applied.  Let\u0026rsquo;s face it, open source OS\u0026rsquo;s are free and Windows isn\u0026rsquo;t.  Gotta pay the bills right?\u003c/p\u003e","title":"vCloud Air 2014 Beta Impressions"},{"content":"SRM version 5.8 now is now extensible with vRealize Orchestrator (formerly vCenter Orchestrator). This new functionality was expected since the vRealize Suite is all about automation and disaster recovery certainly needs to be taken into consideration.\nOne pain point I\u0026rsquo;ve seen with SRM has been the ongoing administration of protection groups. Every time a virtual machine is deployed to a protected datastore, the VM also has to be configured for protection. This usually only consists of right clicking the virtual machine and choosing \u0026ldquo;configure protection\u0026rdquo; but is also another thing that administrators have too keep track of.\nSRM58-unprotectedVM\nThanks to some new vRealize Orchestrator Plugins, we can automate this process so we don\u0026rsquo;t have to think about it anymore. First, be sure to import the SRM plugins for vRealize Orchestrator. Download the plugins here. Then import the plugin to vRO. The vCenter Orchestrator Plug-In Release Notes has all the details about importing the plugin.\nSchedule a vRealize Orchestrator Job The first option is to schedule a vRealize Orchestrator (vRO) job to go out and protect virtual machines. 
The \u0026ldquo;Protect All Unprotected Virtual Machines Associated with Protection Group\u0026rdquo; workflow should take care of your issue and vRO is free with vSphere so this might work great for you.\nTo schedule a workflow, right click on the workflow and choose \u0026ldquo;Schedule workflow\u0026hellip;\u0026rdquo;\nFill out the information and set your recurrence.\nvRealize Automation Server Provisioning The other method, you might see is enabling vRealize Automation (Formerly vCloud Automation Center) to automatically protect a virtual machine as a new request is made. To do this, I created a virtual machine blueprint in vRealize Automation (vRA) that calls an Orchestrator workflow to then protect the VM.\nOnce the resource is requested, it calls my custom vRO workflow as part of the ExternalWFStubs.MachineProvisioned workflow. If you\u0026rsquo;re not familiar with vRA stubs, this allows you to call a workflow after a virtual machine has been provisioned.\nSRM58-Stubs\nThe workflow looks like this, and will only protect the single virtual machine that was just requested from the vRA portal. You can obviously do much more with this, such as have the virtual machine put into a custom recovery plan(s) or create new protection groups and recovery plans at the time of provisioning. The workflow below is a simple way to configure a virtual machine that has already been deployed on replicated storage.\nThe deployment options could get very fancy, by allowing a user to select \u0026ldquo;protected\u0026rdquo; or \u0026ldquo;unprotected\u0026rdquo; and things if you want to spend the time to give these options to the users.\nThere is also a default workflow added to vRealize Orchestrator when you add the SRM Plugins that you could use as well, so don\u0026rsquo;t think you need to be a vRO expert to do these things.\nSummary It seems almost everything we do with our infrastructure these days needs to have some automation tools to go along with it. I\u0026rsquo;m very excited that VMware Site Recovery Manager 5.8 has some additional plugins for the vRealize Suite to manage deployments as well. The ways you can manage your disaster recovery solution are now greatly augmented so that you can build the solution that you need.\n","permalink":"https://theithollow.com/2015/01/19/srm-5-8-now-with-automation/","summary":"\u003cp\u003eSRM version 5.8 now is now extensible with vRealize Orchestrator (formerly vCenter Orchestrator).  This new functionality was expected since the vRealize Suite is all about automation and disaster recovery certainly needs to be taken into consideration.\u003c/p\u003e\n\u003cp\u003eOne pain point I\u0026rsquo;ve seen with SRM has been the ongoing administration of protection groups.  Every time a virtual machine is deployed to a protected datastore, the VM also has to be configured for protection.  This usually only consists of right clicking the virtual machine and choosing \u0026ldquo;configure protection\u0026rdquo; but is also another thing that administrators have too keep track of.\u003c/p\u003e","title":"SRM 5.8 now with Automation!"},{"content":"If you\u0026rsquo;ve got SRM 5.5 installed and you want to get the new SRM 5.8 code into your environment, don\u0026rsquo;t worry. The upgrade process is pretty easy to manage. The important thing to note is the upgrade order and of course your compatibility matrix. 
Remember that you need vCenter 5.5 U2 or higher to get SRM 5.8 working.\nUpgrade Order 1. Ensure your vCenter Server and Web Client are on 5.5 U2 or higher in the protected site. If not, upgrade them!\n2. Upgrade vSphere Replication on the protected site to 5.8 if you\u0026rsquo;re using one.\n3. Upgrade SRM on the protected site to 5.8.\n4. Upgrade your SRA on the protected site if you\u0026rsquo;re using one.\n5. Ensure your vCenter Server and Web Client are on 5.5 U2 or higher in the recovery site. If not, upgrade them!\n6. Upgrade vSphere Replication on the recovery site to 5.8 if you\u0026rsquo;re using one.\n7. Upgrade SRM on the recovery site.\n8. Upgrade your SRA on the recovery site if you\u0026rsquo;re using one.\n9. Verify the connection between your sites is valid, protection groups still exist and recovery plans are intact.\n10. Upgrade ESXi servers on the recovery site.\n11. Upgrade ESXi servers on the protected site.\n12. Upgrade VMware Tools.\nInstall Run the installer on the SRM servers. The installer should detect that SRM is already installed and that an upgrade will be performed.\nJust like a full install, we\u0026rsquo;ll get some legal patent information.\nAgree to the terms of use.\nJust like with a full install, you\u0026rsquo;ll be given the opportunity to look at the prerequisites again.\nUpdate the install location if you wish. Click Next.\nEnter the vCenter credentials to register with the vCenter server.\nUpdate the Administrator email, local host and any ports if you wish. The local site name will not be editable. Click Next. Choose whether to use the automatically generated certificates or load a custom certificate. Remember that if your vCenter uses a custom certificate, you\u0026rsquo;ll need to use a custom certificate in your install of SRM as well.\nEnter some information to generate the certificate, or browse to your certificate if you\u0026rsquo;ve got a custom one.\nYou may change the database connection count if you wish. The DSN will be hard coded.\nImportant!!!! Leave this setting at \u0026ldquo;Use existing data\u0026rdquo; or you\u0026rsquo;ll erase your current SRM config.\nEnter your service account information. Click Next.\nEnjoy the upgrade process by clicking Install.\nSummary The upgrade is pretty easy as long as you\u0026rsquo;ve got your certificates in place and the prerequisites done. Upgrading vCenter and the replication appliances will probably be the bottleneck.\n","permalink":"https://theithollow.com/2015/01/14/srm-5-5-to-5-8-upgrade/","summary":"\u003cp\u003eIf you\u0026rsquo;ve got SRM 5.5 installed and you want to get the new SRM 5.8 code into your environment, don\u0026rsquo;t worry.  The upgrade process is pretty easy to manage.  The important thing to note is the upgrade order and of course your compatibility matrix.  Remember that you need vCenter 5.5 U2 or higher to get SRM 5.8 working.\u003c/p\u003e\n\u003ch1 id=\"upgrade-order\"\u003eUpgrade Order\u003c/h1\u003e\n\u003col\u003e\n\u003cli\u003eEnsure your vCenter Server and Web Client are on 5.5 U2 or higher in the protected site.  If not, upgrade them!\u003c/li\u003e\n\u003cli\u003eUpgrade vSphere Replication on the protected site to 5.8 if you\u0026rsquo;re using one.\u003c/li\u003e\n\u003cli\u003eUpgrade SRM on the protected site to 5.8\u003c/li\u003e\n\u003cli\u003eUpgrade your SRA on the protected site if you\u0026rsquo;re using one.\u003c/li\u003e\n\u003cli\u003eEnsure your vCenter Server and Web Client are on 5.5 U2 or higher in the recovery site.  
If not, upgrade them!\u003c/li\u003e\n\u003cli\u003eUpgrade vSphere Replication on the recovery site to 5.8 if you\u0026rsquo;re using one.\u003c/li\u003e\n\u003cli\u003eUpgrade SRM on the recovery site.\u003c/li\u003e\n\u003cli\u003eUpgrade your SRA on the recovery site if you\u0026rsquo;re using one.\u003c/li\u003e\n\u003cli\u003eVerify the conneciton between your sites is valid, protection groups still exist and recovery plans are in tact.\u003c/li\u003e\n\u003cli\u003eUpgrade ESXi servers on the recovery site\u003c/li\u003e\n\u003cli\u003eUpgrade ESXi servers on the protected site\u003c/li\u003e\n\u003cli\u003eUpgrade VMware Tools.\u003c/li\u003e\n\u003c/ol\u003e\n\u003ch1 id=\"install\"\u003eInstall\u003c/h1\u003e\n\u003cp\u003eRun the installer on the SRM servers.  The installer should detect that SRM is already installed an that an upgrade will be performed.\u003c/p\u003e","title":"SRM 5.5 to 5.8 Upgrade"},{"content":"\nI had one of those serious first world problems where I was intermittently getting poor wireless connectivity from my upstairs bedroom at night. My wireless router is downstairs in my office on the opposite side of the house, and my neighbors\u0026rsquo; wireless was also causing some interference. So I was about to get out my chainsaw to start taking out a wall and part of my upstairs floor, when I thought \u0026ldquo;Maybe a wireless extender would work for me?\u0026rdquo;\nSo, I went out [went downstairs where I had wireless] and purchased a Netgear AC1200 from Amazon.\nOne of my requirements was that whatever extender I got, had to be something that would work with 802.11ac in the event that I upgrade my Dlink DIR-825 router. Seriously, 802.11n is so last year.\nSetup Setup of the extender was a snap. Unbox the device, plug in the antennas and the power cable. Take your wireless device and connect to the NETGEAR_EXT SSID. Once you open a web browser, the setup screen pops open automatically.\nThe first thing you do is to select your current wireless network. In my case I have both a 2.4GHz and 5GHz wireless network, but you select the 2.4GHz only first. You can skip this band if you only want 5GHz.\nNext, you select your 5GHz wireless network if applicable. Once that\u0026rsquo;s done, you\u0026rsquo;ll need to enter your WPA password.\nNow that you\u0026rsquo;ve entered your existing network settings, you\u0026rsquo;ll want to setup you extended network. The Admin guide recommends setting a different SSID for the network extender and connect to it when you need it.\nThat\u0026rsquo;s it! Super simple setup to get you up and running.\nOptional Settings A few of the bells and whistles include things like \u0026ldquo;FastLane\u0026rdquo; which allows you to customize the bands to ensure better performance. Since I am currently using both bands, I left this default, but you could change the settings to fit your needs.\nThe extender also has 5 wired ports if you\u0026rsquo;ve still got those things that need wires to get to the internet, and there is also a USB 3.0 port on the back. 
The extender could be used as a media server as well as a backup device for things like \u0026ldquo;Time Machine\u0026rdquo; for Mac OS.\nPerformance I wouldn\u0026rsquo;t get too concerned with the metrics below since the mileage will vary depending on distance, obstacles and interference in your environment.\nBaseline This is my baseline traffic to my Dlink router without using the extender.\nAll measurements are in Mbps\nNetgear Extender While using the Netgear Extender, you\u0026rsquo;ll notice that my throughput is lower which is a bummer, but expected. The good news is that I no longer have any dropped connections to worry about.\nHollowPoints Clearly your performance may vary, depending on your wireless signal, distance and barriers so don\u0026rsquo;t take my performance results as gospel. This extender has solved my issue and I would say just get\u0026rsquo;s the job done. It\u0026rsquo;s easy to use and maintain, has a few extras that you can take advantage of and just works. I think someone looking to extend their existing network would be happy with this device.\n","permalink":"https://theithollow.com/2015/01/12/netgear-ac1200-review/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2015/01/netgearAC1200-diagram.png\"\u003e\u003cimg alt=\"netgearAC1200-diagram\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/01/netgearAC1200-diagram-300x168.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eI had one of those serious first world problems where I was intermittently getting poor wireless connectivity from my upstairs bedroom at night.  My wireless router is downstairs in my office on the opposite side of the house, and my neighbors\u0026rsquo; wireless was also causing some interference.   So I was about to get out my chainsaw to start taking out a wall and part of my upstairs floor, when I thought \u0026ldquo;Maybe a wireless extender would work for me?\u0026rdquo;\u003c/p\u003e","title":"Netgear AC1200 Review"},{"content":"A customized recovery plan means the difference between hoursdays of reconfiguration of your environment in the event of a failure. VMware SRM allows for lots of opportunity to customize your recovery plans with scripts and modifications along the way to ease the management of your disaster recovery plans.\nRun Scripts from SRM Server If we open any given recovery plan we can click on a step we\u0026rsquo;d like to modify and then right-click to \u0026ldquo;Add Step\u0026hellip;\u0026rdquo;\nOnce we\u0026rsquo;ve added a step, we\u0026rsquo;ll be given the option to run a script from the SRM Server at the recovery site. The name is just what will show up in the recovery plan so make it something descriptive. The content would need to be the command to run from the SRM server at the recovery site. Remember that this command is from windows and not a powershell command, so if you want to run powershell run the c:windowssystem32windowspowershellv1.0powershell.exe -file \u0026ldquo;path and name to script.ps1\u0026rdquo;. In my crude example below, I\u0026rsquo;ve called a powershell script that writes some output to a file. Notice from the SRM server, I\u0026rsquo;ve got my \u0026ldquo;hollowOutput.ps1\u0026rdquo; script that is ready to be run.\nPrompt In addition to creating steps to call scripts, we can also pause the recovery plan if we need to perform some manual operations. Add another step and choose prompt. 
This will pause the recovery plan until the SRM admin clicks a button in SRM to continue the recovery plan.\nVirtual Machine Options If we click on an individual virtual machine in a recovery plan, we can also modify it\u0026rsquo;s recovery properties as well. We can modify the priority group, which is the group of machines that will start in order of priority 1 (first) to 5 (last). We can also set dependencies so that a VM will not start unless another VM has already been started. We can change Startup actions so we can failover a VM but leave it off, and we can run pre power-on scripts and post power-on scripts.\nNote: We also have the ability to modify IP Settings for each individual VM but this is covered in another post.\nNotice that if we run a post power-on script that it can be run on the SRM server as we showed before, but we can also run the command on the virtual machine. Perhaps some additional changes inside the guest OS need to be made to some VMs after they fail over. SRM allows us to do that from here as well.\nExamples In my test recovery plan, I\u0026rsquo;ve created a script to run on the SRM Server and it successfully create a file seen below.\nAlso, notice that when I ran my recovery plan, it paused at a prompt and waited for me to click the \u0026ldquo;Dismiss\u0026rdquo; hyperlink to continue the failover. These scripts will run during both test and failover scenarios.\n","permalink":"https://theithollow.com/2015/01/08/srm-5-8-customizing-your-recovery-plan/","summary":"\u003cp\u003eA customized recovery plan means the difference between hoursdays of reconfiguration of your environment in the event of a failure.  VMware SRM allows for lots of opportunity to customize your recovery plans with scripts and modifications along the way to ease the management of your disaster recovery plans.\u003c/p\u003e\n\u003ch1 id=\"run-scripts-fromsrm-server\"\u003eRun Scripts from SRM Server\u003c/h1\u003e\n\u003cp\u003eIf we open any given recovery plan we can click on a step we\u0026rsquo;d like to modify and then right-click to \u0026ldquo;Add Step\u0026hellip;\u0026rdquo;\u003c/p\u003e","title":"SRM 5.8 Customizing Your Recovery Plan"},{"content":"Setting up some alerting is a good idea once you\u0026rsquo;ve setup your disaster recovery solution. Let\u0026rsquo;s face it once you\u0026rsquo;ve tested your DR solution, you might not look at it again until your next test, which in some cases is yearly!\nTo setup alarms for SRM in version 5.8 navigate to the vCenter object and click the Manage tab. From there click the Alarm Definitions sub-tab. Click the add (green plus sign) to add a new alarm.\nGive the alarm a name and description that will let you know what it is. You\u0026rsquo;ll need to click the \u0026ldquo;Monitor:\u0026rdquo; dropdown and select vCenter Server. Click Next.\nAdd a trigger (Green Plus Sign) and scroll through the drop down list. Select the event type that you\u0026rsquo;re trying to monitor. Hint: The SRM alarms will have the name of the SRM server in parenthesis. Click Next. Add an action to take place when the event status is triggered. Click the green plus button and choose an alarm action such as send an email. The configuration should be related to the action, such as a notification email requires an email address. \u0026ldquo;Run a command\u0026rdquo; requires a command to run. 
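For example, the command could point at a small PowerShell script that records which alarm fired. The sketch below is hypothetical (the script path and log file are made up), and it leans on the environment variables vCenter sets for script alarm actions, such as VMWARE_ALARM_NAME; check the alarm documentation for your vCenter version for the exact list before relying on them.

# Log-SrmAlarm.ps1 -- hypothetical script for a vCenter "Run a command" alarm action.
# The alarm action command would be something like:
#   C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File C:\Scripts\Log-SrmAlarm.ps1
$logFile = 'C:\Scripts\srm-alarms.log'   # placeholder path
$entry = '{0}  {1}  {2}' -f (Get-Date), $env:VMWARE_ALARM_NAME, $env:VMWARE_ALARM_TRIGGERINGSUMMARY
Add-Content -Path $logFile -Value $entry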
Also, you can select how often the action should be done and when the alarm went from normal to a warning, warning to alarm, etc.\nSummary Alarm actions are a small thing, but might really save you in the event of a disaster. It\u0026rsquo;s much easier to deal with an alert right as something in the environment changes, versus not knowing about it and trying to track it down while you\u0026rsquo;re under pressure.\n","permalink":"https://theithollow.com/2015/01/06/srm-5-8-alarms/","summary":"\u003cp\u003eSetting up some alerting is a good idea once you\u0026rsquo;ve setup your disaster recovery solution.  Let\u0026rsquo;s face it once you\u0026rsquo;ve tested your DR solution, you might not look at it again until your next test, which in some cases is yearly!\u003c/p\u003e\n\u003cp\u003eTo setup alarms for SRM in version 5.8 navigate to the vCenter object and click the Manage tab.  From there click the Alarm Definitions sub-tab.  Click the add (green plus sign) to add a new alarm.\u003c/p\u003e","title":"SRM 5.8 Alarms"},{"content":" This is a Site Recovery Manager 5.8 Guide to help understand the design, installation, operation and architecture of setting up VMware SRM 5.8\nSRM 5.8 Architecture SRM 5.8 Installation SRM 5.8 Upgrade from SRM 5.5 SRM 5.8 Site Setup SRM 5.8 Array Based Replication SRM 5.8 Protection Groups SRM 5.8 Recovery Plan SRM 5.8 IP Customization SRM 5.8 Customizing your Recovery Plan SRM 5.8 Test Recovery SRM 5.8 Failover Recovery SRM 5.8 with Automation SRM 5.8 Alarms SRM 5.8 Troubleshooting Official Documentation Links Site Recovery Manager 5.8 Documentation Center Site Recovery Manager 5.8 Compatibility Matrix Site Recovery Manager 5.8 Release Notes Site Recovery Manager 5.8 vCO Plug-in Download Site Recovery Manager 5.8 Download Site Recovery Manager 5.8 Storage Replication Adapters Additional Resources If you want some great resources to continue learning VMware Site Recovery Manager, I suggest checking out these resources:\n@Mike_Laverick Author of “ Administering VMware Site Recovery Manager” BLOG: mikelaverick.com @vmKen Senior Technical Marketing Director for VMware DRBC products BLOG: http://blogs.vmware.com/vsphere/author/ken_werneburg @benmeadowcraft Senior Product Manager for VMware Site Recovery Manager BLOG: http://www.benmeadowcroft.com/ @gurusimran Senior Technical Marketing Manager at VMware focusing on availability solutions BLOG: http://blogs.vmware.com/vsphere/author/gkhalsa VMware Communities SRM\nHome Lab Its one thing to read how to do something, but another to get your hands on the technology. 
I have a post on this site dedicated to showing how you can build an SRM site all within a single host with nested ESXi hosts.\nPoor Mans SRM Lab\n","permalink":"https://theithollow.com/2015/01/05/srm-5-8-guide/","summary":"\u003ch4 id=\"58guide\"\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/12/5.8Guide.png\"\u003e\u003cimg alt=\"5.8Guide\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/12/5.8Guide.png\"\u003e\u003c/a\u003e\u003c/h4\u003e\n\u003cp\u003eThis is a Site Recovery Manager 5.8 Guide to help understand the design, installation, operation and architecture of setting up VMware SRM 5.8\u003c/p\u003e\n\u003ch3 id=\"srm-58-architecture\"\u003e\u003ca href=\"http://wp.me/p32uaN-180\"\u003eSRM 5.8 Architecture\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-installation\"\u003e\u003ca href=\"http://wp.me/p32uaN-16I\"\u003eSRM 5.8 Installation\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-upgrade-from-srm-55\"\u003e\u003ca href=\"http://wp.me/p32uaN-18q\"\u003eSRM 5.8 Upgrade from SRM 5.5\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-site-setup\"\u003e\u003ca href=\"http://wp.me/p32uaN-176\"\u003eSRM 5.8 Site Setup\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-array-based-replication\"\u003e\u003ca href=\"http://wp.me/p32uaN-18J\"\u003eSRM 5.8 Array Based Replication\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-protection-groups\"\u003e\u003ca href=\"http://wp.me/p32uaN-17n\"\u003eSRM 5.8 Protection Groups\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-recovery-plan\"\u003e\u003ca href=\"http://wp.me/p32uaN-17v\"\u003eSRM 5.8 Recovery Plan\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-ip-customization\"\u003e\u003ca href=\"http://wp.me/p32uaN-181\"\u003eSRM 5.8 IP Customization\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-customizing-your-recovery-plan\"\u003e\u003ca href=\"http://wp.me/p32uaN-18d\"\u003eSRM 5.8 Customizing your Recovery Plan\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-test-recovery\"\u003e\u003ca href=\"http://wp.me/p32uaN-17H\"\u003eSRM 5.8 Test Recovery\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-failover-recovery\"\u003e\u003ca href=\"http://wp.me/p32uaN-17O\"\u003eSRM 5.8 Failover Recovery\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-with-automation\"\u003e\u003ca href=\"http://wp.me/p32uaN-19e\"\u003eSRM 5.8 with Automation\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-alarms\"\u003e\u003ca href=\"/2015/01/srm-5-8-alarms/\"\u003eSRM 5.8 Alarms\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"srm-58-troubleshooting\"\u003e\u003ca href=\"/2015/01/srm-troubleshooting/\"\u003eSRM 5.8 Troubleshooting\u003c/a\u003e\u003c/h3\u003e\n\u003ch1 id=\"official-documentation-links\"\u003eOfficial Documentation Links\u003c/h1\u003e\n\u003ch3 id=\"site-recovery-manager-58-documentation-center\"\u003e\u003ca href=\"http://pubs.vmware.com/srm-58/index.jsp?topic=%2Fcom.vmware.srm.install_config.doc%2FGUID-B3A49FFF-E3B9-45E3-AD35-093D896596A0.html\"\u003eSite Recovery Manager 5.8 Documentation Center\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"site-recovery-manager-58-compatibility-matrix\"\u003e\u003ca href=\"https://www.vmware.com/support/srm/srm-compat-matrix-5-8.html\"\u003eSite Recovery Manager 5.8 Compatibility Matrix\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"site-recovery-manager-58-release-notes\"\u003e\u003ca href=\"https://www.vmware.com/support/srm/srm-releasenotes-5-8-0.html\"\u003eSite Recovery Manager 5.8 Release 
Notes\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"site-recovery-manager-58-vco-plug-in-download\"\u003e\u003ca href=\"https://my.vmware.com/group/vmware/info?slug=infrastructure_operations_management/vmware_vcenter_site_recovery_manager/5_8#drivers_tools\"\u003eSite Recovery Manager 5.8 vCO Plug-in Download\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"site-recovery-manager-58-download\"\u003e\u003ca href=\"https://my.vmware.com/group/vmware/info?slug=infrastructure_operations_management/vmware_vcenter_site_recovery_manager/5_8#product_downloads\"\u003eSite Recovery Manager 5.8 Download\u003c/a\u003e\u003c/h3\u003e\n\u003ch3 id=\"site-recovery-manager-58-storage-replication-adapters\"\u003e\u003ca href=\"https://my.vmware.com/group/vmware/info?slug=infrastructure_operations_management/vmware_vcenter_site_recovery_manager/5_8#drivers_tools\"\u003eSite Recovery Manager 5.8 Storage Replication Adapters\u003c/a\u003e\u003c/h3\u003e\n\u003ch1 id=\"additional-resources\"\u003eAdditional Resources\u003c/h1\u003e\n\u003cp\u003eIf you want some great resources to continue learning VMware Site Recovery Manager, I suggest checking out these resources:\u003c/p\u003e","title":"SRM 5.8 Guide"},{"content":"\u0026ldquo;Your disaster recovery plan is only as good as it\u0026rsquo;s last test.\u0026rdquo;\n\u0026ldquo;If you haven\u0026rsquo;t tested your DR plan, then you don\u0026rsquo;t have a DR plan.\u0026rdquo;\nThese are all statements I\u0026rsquo;ve heard in the industry from CIOs and directors, and lucky for us VMware Site Recovery Manager has a test functionality built in for us to leverage without fear of affecting our production workloads.\nRun a Test Open up one of your recovery plans and click the monitor tab. Here you\u0026rsquo;ll have several buttons to choose from as well as seeing the list of recovery steps. To run a \u0026ldquo;Test\u0026rdquo; recovery click the green arrow button. Once you\u0026rsquo;ve begun the test process, you\u0026rsquo;ll be prompted about whether or not you want to run one additional replication to the DR site. You\u0026rsquo;ll have to decide what you\u0026rsquo;re testing here. If it\u0026rsquo;s a disaster test, then you probably don\u0026rsquo;t want to run an additional replication because you can\u0026rsquo;t hold off your disaster until you replicate one more time. If you\u0026rsquo;re test is for a planned datacenter migration then maybe this is applicable to you. Review your test settings and click Finish. Once the test starts, it will create a snapshot of the storage at the DR site so that replication can continue in the background while the test is run. It may also create some new virtual switches if you\u0026rsquo;re running an isolated test.\nDuring the test, you\u0026rsquo;ll be able to monitor the recovery plan every step of the way. If you encounter a failure, you\u0026rsquo;ll know what step failed and you\u0026rsquo;ll be able to fix it and try again. Assuming everything goes as planed, you\u0026rsquo;ll get a \u0026ldquo;Test Complete\u0026rdquo; message with a check mark. Once the test is complete you can login to some of your virtual machines to ensure things are how you expect them to be after a failover. When you\u0026rsquo;re ready to finish the test, click the broom icon in the recovery plan menu.\nWhen you click the cleanup button, you\u0026rsquo;ll get a confirmation much like you did when you ran the test. Click Next.\nReview the cleanup settings and click Finish. 
When you click Finish, the snapshots created at the recovery site will be deleted, any isolated virtual switches used for the test will be destroyed, and the placeholder VMs will be ready for another failover.\nSummary We don\u0026rsquo;t have to wait for a long test window to try our DR plan any longer. We can test during the middle of the day, test once a month, week, day or hour if you really wanted to. Now we have some semblance of certainty that our DR plan will work successfully if the time arises.\n","permalink":"https://theithollow.com/2015/01/05/srm-5-8-test-recovery/","summary":"\u003cp\u003e\u0026ldquo;Your disaster recovery plan is only as good as it\u0026rsquo;s last test.\u0026rdquo;\u003c/p\u003e\n\u003cp\u003e\u0026ldquo;If you haven\u0026rsquo;t tested your DR plan, then you don\u0026rsquo;t have a DR plan.\u0026rdquo;\u003c/p\u003e\n\u003cp\u003eThese are all statements I\u0026rsquo;ve heard in the industry from CIOs and directors, and lucky for us VMware Site Recovery Manager has a test functionality built in for us to leverage without fear of affecting our production workloads.\u003c/p\u003e\n\u003ch1 id=\"run-a-test\"\u003eRun a Test\u003c/h1\u003e\n\u003cp\u003eOpen up one of your recovery plans and click the monitor tab.  Here you\u0026rsquo;ll have several buttons to choose from as well as seeing the list of recovery steps.   To run a \u0026ldquo;Test\u0026rdquo; recovery click the green arrow button. \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/12/srm58-test0.png\"\u003e\u003cimg alt=\"srm58-test0\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/12/srm58-test0.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"SRM 5.8 Test Recovery"},{"content":"A terrible thing has happened and it\u0026rsquo;s time to failover your datacenter to your disaster recovery site. Well, maybe you\u0026rsquo;re just migrating your datacenter to a new one, but this is always a bit of a tense situation. Luckily we\u0026rsquo;ve had the opportunity to test the failovers many, many times before so we can be confident in our process.\nGo to the Recovery Plan and click the monitor tab. Click the \u0026ldquo;BIG RED BUTTON\u0026rdquo; (yeah, it\u0026rsquo;s not that big, but it has big consequences). Before the failover actually happens, you\u0026rsquo;ll be given a warning and you actually have to click a check box stating that you understand the consequences of performing this operation. After that you\u0026rsquo;ll be given the opportunity to do a Planned Migration which will try to replicate the most recent changes and will stop if an error is encountered, or a Disaster Recovery migration which will just failover anything it can and as fast as it can. Pick your recovery type and click Next. Review the process that is about to happen and click Finish.\nWhile the recovery is running, you\u0026rsquo;ll be able to monitor the process on the recovery steps screen. Notice that this is slightly different from a test recovery in a few places, such as not creating a writable snapshot but rather making the existing storage writable in the new datacenter. Hopefully everything is working well for you after the failover. Now it\u0026rsquo;s time to go back to our original datacenter. Click the \u0026ldquo;Re-Protect\u0026rdquo; button which looks like a shield with a lightning bolt on it. This Re-Protect will reverse the direction of the replication and setup a failover in the opposite direction. 
You can consider the DR site to be the protected site and the original production site to be the recovery site, until you fail back.\nWhen you run the Re-Protect, you\u0026rsquo;ll need to once again confirm that you understand the ramifications of this operation.\nNow that everything is reversed, you can run another failover, but this time a \u0026ldquo;Planned Migration\u0026rdquo; is probably more reasonable since you\u0026rsquo;re likely planning to do a failback and it\u0026rsquo;s not a second disaster, this time at your disaster recovery site. (That would be awful)\nReview the failover and click Finish. When the failover is done, be sure to Re-Protect it again to get your disaster recovery site back in working order! Summary Failovers can be stressful but thankfully we\u0026rsquo;ve tested all of our plans before, so that should take some of the pressure off.\n","permalink":"https://theithollow.com/2015/01/05/srm-5-8-failover/","summary":"\u003cp\u003eA terrible thing has happened and it\u0026rsquo;s time to failover your datacenter to your disaster recovery site.  Well, maybe you\u0026rsquo;re just migrating your datacenter to a new one, but this is always a bit of a tense situation.  Luckily we\u0026rsquo;ve had the opportunity to \u003ca href=\"http://wp.me/p32uaN-17H\"\u003etest the failovers\u003c/a\u003e many, many times before so we can be confident in our process.\u003c/p\u003e\n\u003cp\u003eGo to the Recovery Plan and click the monitor tab.  Click the \u0026ldquo;BIG RED BUTTON\u0026rdquo; (yeah, it\u0026rsquo;s not that big, but it has big consequences).\n\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/12/srm58-test0.png\"\u003e\u003cimg alt=\"srm58-test0\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/12/srm58-test0.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"SRM 5.8 Failover"},{"content":"Some companies have built out their disaster recovery site with a stretched layer 2 network or even a disjoint layer 2 network that shares the same IP addresses with their production sites. This is great because VMs don\u0026rsquo;t need to change IP Addresses if there is a failover event. This post goes over what options we have if you need to change IP Addresses during your failover.\nNetwork mappings SRM 5.8 has a wonderful new way to manage IP Addresses between datacenters. Prior to SRM 5.8 each VM needed to be manually updated with a new IP Address or done in bulk with a CSV template (show later in this post) if you had to re-IP your VMs. Now with SRM 5.8 we can do a network mapping to make our lives much easier. This is one of the best new features of SRM 5.8 in my opinion.\nGo to your sites in \u0026ldquo;Site Recovery\u0026rdquo; and click the Manage tab. Here, you\u0026rsquo;ll see our network mappings again. Click the networks that you\u0026rsquo;ve mapped previously and then you can click the \u0026ldquo;Add\u0026hellip;\u0026rdquo; button to create some IP Customization Rules.\nWhen the \u0026ldquo;Add IP Customization Rule\u0026rdquo; screen comes up, you can see that we can now map the networks to one another and the virtual machine will keep the host bits the same between networks. For example, if you have a VM on the 10.10.50.0/24 network with an IP Address of 10.10.50.100, and it needs to failover to the 10.10.70.0/24 network, it will keep it\u0026rsquo;s hosts bits the same, and just change the network, making it 10.10.70.100 at the DR site. 
SIMPLE!!!!\nObviously, there are a few other things that you\u0026rsquo;ll need to modify such as DNS Servers, suffixes and of course the default gateway.\nOnce you\u0026rsquo;ve created your IP Customization Rules, you can see them listed below the network mappings for your virtual machines.\nManual IP Customization If the subnet mapping spelled out above doesn\u0026rsquo;t work, you can manually customize an IP Address of each VM. Go into your recovery plans and find the virtual machine to customize. Right click and choose \u0026ldquo;Configure Recovery\u0026hellip;\u0026rdquo;\nClick the IP Customization Tab. Here you\u0026rsquo;ll see that you can add IP Addresses for both sites. Be sure to enter IP information in for both sites. If you failover to the recovery site and didn\u0026rsquo;t set the protected site IP Addresses, you\u0026rsquo;ll have some IP issues when you try to fail back.\nClick either the \u0026ldquo;Configure Protection\u0026hellip;\u0026rdquo; or \u0026ldquo;Configure Recovery\u0026hellip;\u0026rdquo; and then you can enter your IP information. Again, be sure to do both sites.\nBULK IP Customizer Many times it’s not practical to modify the IP addresses of every individual VM as they are configured. Luckily VMware has provided a way to bulk upload IP addresses.\nFrom an SRM server, open a command prompt and change the working directory to: c:Program FilesVMwareVMware vCenter Site Recovery Managerbin\nNOTE: Path may be different depending on your install location.\nGenerate a .CSV file to edit your IP Addresses by running dr-ip-customizer.exe with the –cfg, –cmd –vc -i –out switches.\n–cfg should be the location of the vmware-dr.xml file. \u0026ndash;cmd should be “Generate”, \u0026ndash;vc lists the vCenter server, and \u0026ndash;out lists the location to generate the .csv file.\nExample: dr-ip-customizer.exe \u0026ndash;cfg “C:Program filesVMwareVMware vCenter Site Recovery ManagerConfigvmware-dr.xml” \u0026ndash;cmd generate \u0026ndash;vc FQDNofvCenter -i \u0026ndash;out c:ipaddys.csv\nOpen the .csv file and fill out the information. Notice that there are two entries for the VM. This is because there are two vCenters and in order to do protection and fail back we need the IP Addresses for both sides.\nOnce the IP Address information is entered, run the customizer again with the –cmd “Apply” and –CSV file location.\nExample: dr-ip-customizer.exe \u0026ndash;cfg “C:Program filesVMwareVMware vCenter Site Recovery ManagerConfigvmware-dr.xml” \u0026ndash;cmd apply \u0026ndash;vc FQDNofvCenter -i \u0026ndash;csv c:ipaddys.csv\nSummary IP changes during a SRM failover are a necessity for many companies and SRM 5.8 has both made this process easier as well as giving plenty of options depending on your needs. We can now use network mapping, manual IP customization or bulk IP customization to accomplish our objectives.\n","permalink":"https://theithollow.com/2015/01/05/srm-5-8-ip-customization/","summary":"\u003cp\u003eSome companies have built out their disaster recovery site with a stretched layer 2 network or even a disjoint layer 2 network that shares the same IP addresses with their production sites.  This is great because VMs don\u0026rsquo;t need to change IP Addresses if there is a failover event.  This post goes over what options we have if you need to change IP Addresses during your failover.\u003c/p\u003e\n\u003ch1 id=\"network-mappings\"\u003eNetwork mappings\u003c/h1\u003e\n\u003cp\u003eSRM 5.8 has a wonderful new way to manage IP Addresses between datacenters. 
 Prior to SRM 5.8 each VM needed to be manually updated with a new IP Address or done in bulk with a CSV template (show later in this post) if you had to re-IP your VMs.  Now with SRM 5.8 we can do a network mapping to make our lives much easier.  This is one of the best new features of SRM 5.8 in my opinion.\u003c/p\u003e","title":"SRM 5.8 IP Customization"},{"content":"A recovery plan is the orchestration piece of Site Recovery Manager and likely the main reason for purchasing the product. All of the setup that\u0026rsquo;s been done prior to creating the recovery plans is necessary but the recovery plan is where magic happens.\nWhen we go to the Recovery Plans menu in Site Recovery, we\u0026rsquo;ll see the option to click the notepad with the \u0026ldquo;+\u0026rdquo; sign on it to create a new recovery plan.\nGive the recovery plan a descriptive name. Remember that you can create a recovery plan for individual protection groups, or multiple protection groups. This allows you the opportunity to create individual recovery plans for things like \u0026ldquo;Mail Services\u0026rdquo;, \u0026ldquo;Database Services\u0026rdquo;, \u0026ldquo;DMZ\u0026rdquo;, \u0026ldquo;File Servers\u0026rdquo; and then create a catch all named \u0026ldquo;Full Recovery\u0026rdquo; that includes all of the protection groups. This allows for flexibility with whatever outage you\u0026rsquo;re planning for.\nChoose which site is the recovery site and click Next.\nSelect the Protection Groups that are part of this recovery plan. In the example below, there is only one protection group, but you could select many if they are available. Click Next.\nSelect the test networks. We\u0026rsquo;ve already created mapping for networking that should show handle what happens when a virtual machine fails over to the recovery site, but we need to configure what happens when we run a \u0026ldquo;TEST\u0026rdquo; recovery. During a failover test, we may not want the VM to be on the same network as our production servers. Leaving the \u0026ldquo;Isolated network (auto created) as the test network, allows us to create a virtual switch with no uplinks in order to ensure that the virtual machines won\u0026rsquo;t be accessible via the network during a test.\nGive the recovery plan a description and click Next.\nReview the settings and Click Finish.\nOnce done, we can see that a Recovery Plan is available and we can run a test or a failover.\nSummary This is the basic layout of a recovery plan. Most disaster recovery plans require a lot more customization than just powering on a virtual machine at another location. In a follow-up post we\u0026rsquo;ll review many more options that are available when setting up a recovery plan such as IP customization, power-on priorities and scripting.\n","permalink":"https://theithollow.com/2015/01/05/srm-5-8-recovery-plan/","summary":"\u003cp\u003eA recovery plan is the orchestration piece of Site Recovery Manager and likely the main reason for purchasing the product.  All of the setup that\u0026rsquo;s been done prior to creating the recovery plans is necessary but the recovery plan is where magic happens.\u003c/p\u003e\n\u003cp\u003eWhen we go to the Recovery Plans menu in Site Recovery, we\u0026rsquo;ll see the option to click the notepad with the \u0026ldquo;+\u0026rdquo; sign on it to create a new recovery plan.\u003c/p\u003e","title":"SRM 5.8 Recovery Plan"},{"content":"SRM Sites and resource mappings are all done. 
It\u0026rsquo;s time to create some Protection Groups for our new VMware Site Recovery Manager deployment.\nA protection group is a collection of virtual machines that should be failed over together. For instance, you may want all of your Microsoft Exchange servers to fail over together, or you may want a Web, App, Database Tier to all failover at the same time. It is also possible that your main goal for SRM is to protect you in the event of a catastrophic loss of your datacenter and you\u0026rsquo;re concerned with every VM. It still a good idea to create multiple protection groups so that you can fail over certain apps in the event of an unforeseen issue. Think about it, if your mail servers crashed but the rest of your datacenter is fine, would it make sense to just fail over the mail servers, or the entire datacenter? Just failing over the mail servers would make sense if they are in their own protection group.\nIf we look at the protection groups menu of Site Recovery we\u0026rsquo;ll want to click the shield icon with the \u0026ldquo;+\u0026rdquo; sign on it. Give the new protection group a name. Of course give it a descriptive name. A name like \u0026ldquo;Protection Group 1\u0026rdquo; doesn\u0026rsquo;t work very well when you have lots of protection groups. Name it something easy to identify. Back to my examples, I\u0026rsquo;ve named my protection group, \u0026ldquo;Test-PG1\u0026rdquo;. Yep, I\u0026rsquo;m a hypocrite. Click Next.\nSelect the Protected Site and a replication strategy. In my lab, I\u0026rsquo;ve setup vSphere Replication so I\u0026rsquo;ve chosen that as my replication type. Click Next.\nNOTE If you are using Array Based Replication, make sure that you don\u0026rsquo;t have multiple protection groups on the same LUN or consistency group. The entire LUN would be taken offline during a failover of a protection group, so having some VMs that aren\u0026rsquo;t supposed to failover on the same LUN could cause you an issue.\nSelect the Virtual Machines to fail over. The populated list will only show virtual machines that are being replicated. As you can see from the screenshot below, the VM named \u0026ldquo;FailoverVM\u0026rdquo; is available for protection even though I have many VMs in my vCenter. \u0026ldquo;FailoverVM\u0026rdquo; is the only one that is being replicated. Click Next.\nNOTE: If you are using Array Based Replication, you will be selecting a datastore vs individual virtual machines. The same rule about replication holds true, however. Only replicated datastores should show up in this menu.\nGive the Protection group a good description. Click Next.\nReview the Protection Group settings and click Finish.\nSummary Protection groups are simple to setup in Site Recovery Manager, but could take a considerable amount of planning to make sure VMs are in the correct LUNs. The planning of your entire disaster recovery plan should be designed with this in mind.\n","permalink":"https://theithollow.com/2015/01/05/srm-5-8-protection-groups/","summary":"\u003cp\u003eSRM \u003ca href=\"http://wp.me/p32uaN-176\"\u003eSites and resource mappings\u003c/a\u003e are all done.  It\u0026rsquo;s time to create some Protection Groups for our new VMware Site Recovery Manager deployment.\u003c/p\u003e\n\u003cp\u003eA protection group is a collection of virtual machines that should be failed over together.  For instance, you may want all of your Microsoft Exchange servers to fail over together, or you may want a Web, App, Database Tier to all failover at the same time.  
It is also possible that your main goal for SRM is to protect you in the event of a catastrophic loss of your datacenter and you\u0026rsquo;re concerned with every VM.  It still a good idea to create multiple protection groups so that you can fail over certain apps in the event of an unforeseen issue.  Think about it, if your mail servers crashed but the rest of your datacenter is fine, would it make sense to just fail over the mail servers, or the entire datacenter?  Just failing over the mail servers would make sense if they are in their own protection group.\u003c/p\u003e","title":"SRM 5.8 Protection Groups"},{"content":"If you plan to use Array Based Replication for your SRM implementation, you\u0026rsquo;ll need to install and configure your Storage Replication Adapter on your SRM Servers. The SRA is used for SRM to communicate with the array to do things like snapshots, and mounting of datastores.\nPair the Arrays Once your SRAs have been installed in both your sites and you\u0026rsquo;ve gotten the arrays replicating, you\u0026rsquo;ll want to pair the arrays in SRM so that they can be used for protection Groups. Open the \u0026ldquo;Array Based Replication\u0026rdquo; tab in the \u0026ldquo;Site Recovery\u0026rdquo; menu of the web client. Click the Add button.\nHere, we\u0026rsquo;ll want to add a pair of array managers (one for each site).\nSelect the sites that you\u0026rsquo;ll be workign with.\nChoose the SRA type. If you only have one SRA installed, only one option should be available. In this case, we\u0026rsquo;re using EMC RecoverPoint.\nNow, we need to configure the manager with the IP Addresses, and names as well as an account that has enough privileges to create snapshots, as well as mount and unmount LUNs.\nNow we\u0026rsquo;ll configure the opposite site\u0026rsquo;s array manager as well. Same rules apply. Once we\u0026rsquo;ve configured the array managers we can enable them which will make them a pair that replicate to each other.\nFinish the wizard.\nProtection Groups for Arrays When you create a protection group for virtual machines being replicated by array based replication, you will give it a name as usual, and a site pair.\nChoose the Protected site and the that it\u0026rsquo;s an Array Based Pair. Select the pair.\nSelect the datastores that contain the virtual machines. All of the virtual machines on this array pair will be protected.\nSummary Array based replication does not take much additional effort for VMware Site Recovery Manager, but may take some additional planning to make sure your protection groups are in the right datastores. Remember that all VMs in a datastore will be failed over together.\n","permalink":"https://theithollow.com/2015/01/05/srm-5-8-array-based-replication/","summary":"\u003cp\u003eIf you plan to use Array Based Replication for your SRM implementation, you\u0026rsquo;ll need to install and configure your Storage Replication Adapter on your SRM Servers.  The SRA is used for SRM to communicate with the array to do things like snapshots, and mounting of datastores.\u003c/p\u003e\n\u003ch1 id=\"pair-the-arrays\"\u003ePair the Arrays\u003c/h1\u003e\n\u003cp\u003eOnce your SRAs have been installed in both your sites and you\u0026rsquo;ve gotten the arrays replicating, you\u0026rsquo;ll want to pair the arrays in SRM so that they can be used for protection Groups.  Open the \u0026ldquo;Array Based Replication\u0026rdquo; tab in the \u0026ldquo;Site Recovery\u0026rdquo; menu of the web client.  
Click the Add button.\u003c/p\u003e","title":"SRM 5.8 Array Based Replication"},{"content":"In the previous post we installed VMware Site Recovery Manger and now we need to do our Site Setup.\nIf you notice, now that SRM has been installed, the vSphere Web Client now has a Site Recovery menu in it. (If it doesn\u0026rsquo;t, log out and back in)\nFrom here, we can go into the new SRM menus.\nSite Pairing Once you\u0026rsquo;ve gotten to the SRM Menus, we\u0026rsquo;ll want to click on Sites to configure our Sites.\nNote: If you see the error below, this means that you\u0026rsquo;ve got an SSL Certificate mismatch between the SRM Server and the vCenter server. If you use custom SSL certificates for vCenter, you must use them on your SRM Installation as well.\nAssuming all your installations have gone well, you\u0026rsquo;ll see a screen like the one below. Click the \u0026ldquo;Pair Site\u0026rdquo; link to get started with the site configuration.\nEnter the vCenter information for the remote vCenter. This will pair your site with the opposite site and create a relationship between them.\nIf you are using the default VMware certificates, you\u0026rsquo;ll need some login information entered.\nIf you are using custom SSL certificates from a certificate Authority, login information is not needed.\nOnce Site Pairing is done, you\u0026rsquo;ll see two sites in the SRM Sites menu\nResource Mapping Now that the sites are paired, we can setup mappings for the relationships between the two sites. This includes Resource Pools, Folders, and Networks.\nOpen up one of your sites and you\u0026rsquo;ll see a helpful \u0026ldquo;Guide to Configuring SRM\u0026rdquo; menu. We\u0026rsquo;ll go right down the list by selecting the Create resource mappings.\nSelect a relationship between the protected network and the recovery network. Once you\u0026rsquo;ve created your relationship click the Add Mappings button to add it to your mapping list. When done, you can click the check box to create the same mapping in the reverse direction for fail back operations. You can select a many to one relationship here, but if you do, you won\u0026rsquo;t be able to select the Reverse Mapping option. Click OK.\nNow we can click the \u0026ldquo;Create folder mappings\u0026rdquo; link in the guide to create a relationship for the virtual machine folders. Repeat the process we did for resources, only this time for virtual machine folders. The same rules apply for many to one relationships. Click OK.\nThe next mapping we\u0026rsquo;ll need to do is for networking. Map a network in the protected site to a recovery network. Don\u0026rsquo;t worry about IP Addressing yet, we can customize this later, but you\u0026rsquo;ll need to know what network the virtual machines will map to during a failover.\nPlaceholder Datastores The next section of the \u0026ldquo;SRM Configuration Guide\u0026rdquo; is to create placeholder datastores. These datastores hold the configuration information for the virtual machines that are to be failed over. Think of this as a .vmx file that is registered with vCenter without disks. During a failover this virtual machine becomes active and the replicated virtual disks are attached to it. This datastore should not be a replicated datastore, and does not need to be very large to store these files.\nConfigure the placeholder datastore. Select one or more datastores to house the virtual machine files. Click OK.\nOnce done, you\u0026rsquo;ll want to go to the other site and configure a datastore for it as well. 
This is so the mappings are already done if you fail over and want to fail back.\nSummary We\u0026rsquo;ve now installed SRM and configured the sites. We can now start looking at setting up replication and protection groups in the next post.\n","permalink":"https://theithollow.com/2015/01/05/srm-5-8-site-setup/","summary":"\u003cp\u003eIn the \u003ca href=\"http://wp.me/p32uaN-16I\"\u003eprevious post we installed VMware Site Recovery Manger\u003c/a\u003e and now we need to do our Site Setup.\u003c/p\u003e\n\u003cp\u003eIf you notice, now that SRM has been installed, the vSphere Web Client now has a Site Recovery menu in it.  (If it doesn\u0026rsquo;t, log out and back in)\u003c/p\u003e\n\u003cp\u003eFrom here, we can go into the new SRM menus.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/12/SRM58SiteSetup1.png\"\u003e\u003cimg alt=\"SRM58SiteSetup1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/12/SRM58SiteSetup1.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"site-pairing\"\u003eSite Pairing\u003c/h1\u003e\n\u003cp\u003eOnce you\u0026rsquo;ve gotten to the SRM Menus, we\u0026rsquo;ll want to click on Sites to configure our Sites.\u003c/p\u003e","title":"SRM 5.8 Site Setup"},{"content":"SRM Installation Prerequisites Database Prerequisites Before you are able to install SRM, you\u0026rsquo;ll need a database to store configuration files. Create a database on your SQL Server to house the configuration information. Note: You\u0026rsquo;ll need a database server in both the protected site and recovery site; one for each SRM Server.\nPre-create the SQL Database and assign your SRM Service account AT LEAST the ADMINISTER BULK OPERATIONS, CONNECT, AND CREATE TABLE permissions. Ensure the SRM database schema has the same name as the database user account. The SRM database service account should be the database owner of the SRM database The SRM database schema should be the default schema of the SRM database user. On your SRM Servers, install the SQL Server native client for your version of SQL Server. Create an ODBC connection to the SRM database on your SRM Servers. Select the SQL Native Client appropriate for your database server.\nGive it a name, and point the ODBC connection to the server.\nEnter login information.\nEnter the SRM database\nInstaller Prerequisites VMware Site Recovery Manager installation is relatively simple. Grab the installer from http://vmware.com/downloads and run the installer on your SRM Server. There are a couple of notes to be aware of when installing though.\nRight click the installer and run as Administrator if you are leaving UAC on. This makes re-installation easier in the future. The installation should be done by a user who will be running the SRM Service. The logged in installer by default is the service account that is used. The logged in user should have administrative access to the server it\u0026rsquo;s being installed on. 
Consistent use of SSL Certificates need to be used If any vCenter is using custom SSL Certificates then the SRM Services must also use SSL certificates from a Certificate Authority If the protected site vCenter uses SSL Certificates then the recovery site vCenter Server should also use SSL certificates from a Certificate Authority If you need to use custom SSL certificates from a certificate authority instead of the default VMware certificates, the CN for both SRM servers should be identical I recommend reviewing this post from Sam McGeown if you need to use SSL Certificates Install Run the installer as an administrator. Click Next on the welcome screen.\nYou can view additional prerequisite tasks from the next screen. I\u0026rsquo;ve covered many of them already in this post.\nChoose the location for the install files.\nEnter the location of the vCenter server associated with the SRM Server, as well as some credentials to register the service with vCenter.\nGive the site a name and enter an email address and select the Host IP Address and ports to be used.\nIf you\u0026rsquo;re using the same SRM Server for multiple sites, select the Custom SRM Option and enter the information, otherwise just use the default if it\u0026rsquo;s a 1 to 1 relationship with another SRM Server.\nIf you\u0026rsquo;re using the default SSL Certificates then click the \u0026ldquo;Automatically generate a certificate and then enter some certificate information to generate a cert.\nIf you\u0026rsquo;re using custom SSL Certificates, then you\u0026rsquo;ll load your SSL Certificate during this phase.\nNext, select your Data Source that you created prior to the installation. (If you forgot to do this, there is a button to do it during setup)\nSelect the connection counts for the database.\nEnter the password for the service account for the Site Recovery Manager Server. The user will be the logged in user.\nOnce all the information has been entered, click the Install button.\nWatch as the installer progresses.\nSummary There are quite a few things that need to be done before the install process happens, but it will make your life simpler to have these done beforehand. Now that you\u0026rsquo;ve installed SRM on one of your sites, repeat the process for the second site.\n","permalink":"https://theithollow.com/2015/01/05/srm-5-8-installation/","summary":"\u003ch1 id=\"srm-installation-prerequisites\"\u003eSRM Installation Prerequisites\u003c/h1\u003e\n\u003ch2 id=\"database-prerequisites\"\u003eDatabase Prerequisites\u003c/h2\u003e\n\u003cp\u003eBefore you are able to install SRM, you\u0026rsquo;ll need a database to store configuration files.  Create a database on your SQL Server to house the configuration information.  
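If you would rather script these database prerequisites than click through Management Studio and the ODBC control panel, a rough PowerShell sketch is below. Treat it strictly as a starting point: the server sql01.lab.local, database SRM, and service account LAB\svc-srm are placeholders, the grants only cover the minimum permissions listed above (database ownership and the default schema still need to be set per those bullets), and Add-OdbcDsn requires Windows Server 2012 or later with a driver name that matches the SQL Native Client you actually installed.

# Create the SRM database and grant the minimum permissions called out above.
# ADMINISTER BULK OPERATIONS is a server-level grant; the others are database-level.
@"
CREATE LOGIN [LAB\svc-srm] FROM WINDOWS;
GRANT ADMINISTER BULK OPERATIONS TO [LAB\svc-srm];
GO
CREATE DATABASE SRM;
GO
USE SRM;
CREATE USER [LAB\svc-srm] FOR LOGIN [LAB\svc-srm];
GRANT CONNECT TO [LAB\svc-srm];
GRANT CREATE TABLE TO [LAB\svc-srm];
GO
"@ | Set-Content C:\Temp\srm-db.sql
sqlcmd -S sql01.lab.local -E -i C:\Temp\srm-db.sql

# 64-bit System DSN on each SRM server, pointed at the new database.
Add-OdbcDsn -Name "SRM_DB" -DriverName "SQL Server Native Client 11.0" -DsnType System -Platform 64-bit -SetPropertyValue @("Server=sql01.lab.local", "Trusted_Connection=Yes", "Database=SRM")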
Note: You\u0026rsquo;ll need a database server in both the protected site and recovery site; one for each SRM Server.\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003ePre-create the SQL Database and assign your SRM Service account AT LEAST the \u003cstrong\u003eADMINISTER BULK OPERATIONS, CONNECT, AND CREATE TABLE\u003c/strong\u003e permissions.\u003c/li\u003e\n\u003cli\u003eEnsure the SRM database schema has the same name as the database user account.\u003c/li\u003e\n\u003cli\u003eThe SRM database service account should be the database owner of the SRM database\u003c/li\u003e\n\u003cli\u003eThe SRM database schema should be the default schema of the SRM database user.\u003c/li\u003e\n\u003cli\u003eOn your SRM Servers, install the SQL Server native client for your version of SQL Server.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eCreate an ODBC connection to the SRM database on your SRM Servers.  Select the SQL Native Client appropriate for your database server.\u003c/p\u003e","title":"SRM 5.8 Installation"},{"content":"VMware Site Recovery Manager consists of several different pieces that all have to fit together, let alone the fact that you are working with two different physical locations.\nThe following components will all need to be configured for a successful SRM implementation:\n2 or more sites 2 or more Single Sign On Servers 2 or more vCenter Servers 5.5 2 or more SRM Servers Storage – Either storage arrays with replication, or 2 or more Virtual Replication Appliances Networks It’s worth noting that SSO, vCenter, and SRM could all be installed on the same machine, but you’ll need this many instances of these components.\nAs of VMware Site Recovery Manager 5.8 you can do a traditional Protected to Recovery Site implementation like the one shown below. This can be a unidirectional setup with a warm site ready for a failover to occur, or it can be bi-directional where both sites are in use and a failure at either site could be failed over to the opposite site.\nEach site will require their own vCenter Server and SRM Server, as well as a method of replication such as a storage array.\nAlong with a 1 to 1 setup, SRM 5.8 can manage a many to one failover scenario where multiple sites could fail over to a single site. This would require an SRM instance for each of the protected sites as seen in the diagram below.\nThe configuration that is not available at the moment is a single site to multiple failover sites. 
*as of SRM 5.8\n","permalink":"https://theithollow.com/2015/01/05/srm-5-8-architecture/","summary":"\u003cp\u003eVMware Site Recovery Manager consists of several different pieces that all have to fit together, let alone the fact that you are working with two different physical locations.\u003c/p\u003e\n\u003cp\u003eThe following components will all need to be configured for a successful SRM implementation:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e2 or more sites\u003c/li\u003e\n\u003cli\u003e2 or more Single Sign On Servers\u003c/li\u003e\n\u003cli\u003e2 or more vCenter Servers 5.5\u003c/li\u003e\n\u003cli\u003e2 or more SRM Servers\u003c/li\u003e\n\u003cli\u003eStorage – Either storage arrays with replication, or 2 or more Virtual Replication Appliances\u003c/li\u003e\n\u003cli\u003eNetworks\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eIt’s worth noting that SSO, vCenter, and SRM could all be installed on the same machine, but you’ll need this many instances of these components.\u003c/p\u003e","title":"SRM 5.8 Architecture"},{"content":" During the process of setting up a new vCenter Server in my lab, I ran into an issue adding SSL Certificates to my vCenter services. I followed my own blog posts about how to do this so that I wouldn\u0026rsquo;t miss anything, but nevertheless ran into an error that took me quite a while to get fixed.\nAfter creating all my certificate requests using the VMware SSL Automation Tool, I updated my SSO with my custom certificate without issue. The next step is to make sure the Inventory Service trusts the new SSO Certificate, which also went without a hitch.\nWhen I tried to update the Inventory Service SSL Certificate, I received this puzzling error message.\nWhen I looked at this a little bit further, I noticed that the certificates I downloaded from my certificate authority looked to be too long, which is when I found my issue. During the certificate request you need to download the certificate and not the certificate chain.\nAfter re-requesting all of my certificates and re-running the Automation tool again, I got a successfully completed message.\nSummary This took me a few hours to troubleshoot and finally figure out where I went wrong. Hopefully this post will save someone else from this trouble.\n","permalink":"https://theithollow.com/2014/12/29/vmware-ssl-automation-tool-error-generating-pfx/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/11/sslguide.png\"\u003e\u003cimg alt=\"sslguide\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/11/sslguide-300x300.png\"\u003e\u003c/a\u003e During the process of setting up a new vCenter Server in my lab, I ran into an issue adding SSL Certificates to my vCenter services.  I followed my own \u003ca href=\"/home-lab-ssl-certificates/\"\u003eblog posts\u003c/a\u003e about how to do this so that I wouldn\u0026rsquo;t miss anything, but nevertheless ran into an error that took me quite a while to get fixed.\u003c/p\u003e\n\u003cp\u003eAfter creating all my certificate requests using the VMware SSL Automation Tool, I updated my SSO with my custom certificate without issue.  The next step is to make sure the Inventory Service trusts the new SSO Certificate, which also went without a hitch.\u003c/p\u003e","title":"VMware SSL Automation Tool - Error Generating pfx"},{"content":"Upgrades for the vRealize Automation software (formerly vCloud Automation Center) seem to be coming quite often these days. 
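Circling back to the certificate-chain problem in the SSL Automation Tool post above, a quick sanity check before re-running the tool is to count how many certificates a downloaded file actually contains. This assumes the file is Base64/PEM encoded; the path and filename are just examples.

@(Select-String -Path C:\Certs\rui.crt -Pattern 'BEGIN CERTIFICATE').Count    # 1 means a single certificate; more than 1 means you downloaded the chain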
This post gives a quick overview on how to upgrade your current environment to the latest release. Of course for official documentation, please check out VMware\u0026rsquo;s documentation for details. vRealize Upgrade Instructions\nPre-Install Steps Obviously you should ensure that you\u0026rsquo;ve got backups in the event something catastrophic should occur. Be sure to grab a backup of the IaaS database, and snapshot your vRA appliances, as well as any of the servers running the IaaS components such as the Model Manager, DEM Workers, Orchestrators and Agents. For this guide, we have a vRA appliance, and a single IaaS Server running the rest of the components. A separate SQL Server is housing the database.\nThe following instructions are for upgrading from 6.1 to 6.2. Different upgrade paths may be similar but not necessarily the same.\nLogin to your appliance and stop the vco-server service. If you happen to have multiple appliances configured with HA, be sure to stop the services on each of the appliances. You can stop the service by connecting to your appliance via SSH and running service vco-server stop . Next, run chkconfig vco-server off. To check the server status, you can use service vco-server status and make sure the status shows up as \u0026ldquo;Not Running\u0026rdquo;.\nIf you have multiple vRealize Automation appliances you should also stop the vcas-server, apache2, and rabbitmq-server services on the appliances you\u0026rsquo;re not actively upgrading. In the case of this post, I only have one appliance so I left the services running.\nNext, stop the vCAC services on the IaaS Host(s). The recommend stop order is below:\nVmware vCloud Automation Center agents DEM Workers DEM Orchestrators vCAC Manager Services vRealize Automation Appliance Upgrades Now that the services are stopped, log into the portal management by going to the appliance URL port 5480. Go to the Update tab and click check updates. After a few seconds, the appliance should respond that an update is available. At that point, you can click the Install Updates button. NOTE: Be prepared to wait if you do this method, it took over 30 minutes to install the updates which would depend on customer Internet speed, demand for the update from VMware etc.\nIf you don\u0026rsquo;t want to automatically download the updates, you can download an ISO or zip file and mount a different repository to make the process quicker. Just change the settings on the appliance management portal under the update\u0026ndash;\u0026gt; settings tab.\nAfter the install has finished, it will notify you that a reboot is required. Reboot the appliance. When it comes back up, you can see the appliance version is updated under the \u0026ldquo;System\u0026rdquo; tab.\nIaaS Components Upgrade Now that the appliance(s) have been upgraded, we can move onto the IaaS server(s). The first step to upgrading the IaaS components, is to upgrade the IaaS Database which holds all of the request information. To run the upgrade, grab the database upgrade scripts from the appliance portal. http://vcacAppliance:5480/installer/ Save and unpack the database upgrade scripts on a server with access to the IaaS Database.\nNOTE: VMware specifically mentions only running the DBUpgrade Scripts once. I don\u0026rsquo;t know the problem if this is run multiple times, but be aware of this just in case.\nOpen a command prompt and change the directory to where you stored the unpacked database upgrade scripts. 
The following switches are available.\n-S \u0026ndash; The database server followed by [,port number] or [SQLinstance] -d \u0026ndash; The database name -E \u0026ndash; Windows Authentication is used to connect to the database. No details needed. -U \u0026ndash; Username if needed -l \u0026ndash; the path for the log files. By default, the logs are stored in the DBUpgrade script directory. I ran the following script with my SQL Server, database name and I am using Windows Authentication so I just specified the -E switch.\nRun DBUpgrade -S [SQLDatabase] -d [DatabaseName] -E Now that the database upgraded successfully, we can go back to the vCAC appliance page and download the IaaS Installer to our IaaS server(s). Run the installer from an elevated session on the IaaS Server(s).\nEnter the root username and password to connect to the appliance, and click the check box to accept the certificate. Click, next.\nChoose the Upgrade option which should be highlighted already. Click, next.\nSelect the components that are housed on the IaaS server that need to be upgrade. I chose all of them since all the services are running on a single IaaS server.\nIf you are in a distributed install, upgrade the services in the following order:\nWebsites Manager Services DEM orchestrator and workers Agents Enter the Database server and database name as well. Click, next.\nClick Upgrade.\nWait for the upgrade to complete. This could take some time so wait patiently.\nClick Next once the install is completed.\nRebranding Congratulations! Your vCAC instance has been upgraded to vRealize Automation 6.2. If you would like to change the branding to show \u0026ldquo;vRealize\u0026rdquo; instead of the \u0026ldquo;vCloud Automation Center\u0026rdquo; you can login to the vRealize Appliance Portal, and go to the vRA Settings tab and then click SSO. Re-enter the SSO Settings, and click the \u0026ldquo;Apply Branding\u0026rdquo; checkbox. Click Save Settings.\nThe next time you login to the vRealize Portal, you\u0026rsquo;ll see \u0026ldquo;VMware vRealize Automation\u0026rdquo; if that\u0026rsquo;s important to you.\nSummary As always, check the vSphere official documentation for upgrade steps, but this post should give you a good idea about the process of upgrading and some screenshots in case you get stuck.\n","permalink":"https://theithollow.com/2014/12/16/vrealize-automation-6-2-upgrade/","summary":"\u003cp\u003eUpgrades for the vRealize Automation software (formerly vCloud Automation Center) seem to be coming quite often these days.  This post gives a quick overview on how to upgrade your current environment to the latest release.  Of course for official documentation, please check out VMware\u0026rsquo;s documentation for details.  \u003ca href=\"http://pubs.vmware.com/vra-62/topic/com.vmware.ICbase/PDF/vrealize-automation-62-upgrading.pdf\"\u003evRealize Upgrade Instructions\u003c/a\u003e\u003c/p\u003e\n\u003ch2 id=\"pre-install-steps\"\u003e\u003cstrong\u003ePre-Install Steps\u003c/strong\u003e\u003c/h2\u003e\n\u003cp\u003eObviously you should ensure that you\u0026rsquo;ve got backups in the event something catastrophic should occur.  Be sure to grab a backup of the IaaS database, and snapshot your vRA appliances, as well as any of the servers running the IaaS components such as the Model Manager, DEM Workers, Orchestrators and Agents.  For this guide, we have a vRA appliance, and a single IaaS Server running the rest of the components.  
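Pulling the DBUpgrade switches above together, an example invocation might look like the following. The server, database, and log folder names are placeholders, and remember VMware's warning about running the upgrade script only once.

# run from the unpacked DBUpgrade directory in an elevated command prompt
.\DBUpgrade -S sql01.lab.local -d vCAC -E -l C:\Temp\DBUpgradeLogs
# for a named instance or a non-default port, -S takes the form sql01\INSTANCE or sql01,1433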
A separate SQL Server is housing the database.\u003c/p\u003e","title":"vRealize Automation 6.2 Upgrade"},{"content":"\nPowerShell is an amazing tool that has limitless potential for Administrators, Engineers and Architects to automate routine tasks or do reporting on things their system management applications aren\u0026rsquo;t built for. Whenever there is a task to be done on multiple systems and it might need to be done more than once, I find myself reaching for this valuable tool.\nThe problem with PowerShell, just like a programming language is that it can be intimidating to get started. This post is to give you a basic understanding of what you\u0026rsquo;ll be getting into before you start running PowerShell cmdlets.\nObjects Dictionary.com defines an object as: \u0026ldquo;athing,person,ormattertowhichthoughtoractionisdirected\u0026rdquo;.\nI spy a computer monitor. You\u0026rsquo;re computer monitor is an object right? Assuming we\u0026rsquo;re not talking about scripting for a second, no one would argue with me that your computer monitor is not an object. If you look around you, I\u0026rsquo;m sure you can find tons of objects just sitting around but I chose to use a computer monitor because there is a decent chance you\u0026rsquo;re looking at one right now and everyone can relate to this.\nLet\u0026rsquo;s look at an object in a PowerShell context. Some examples of an object for scripting could be:\nA file A directory a service a device If I open up a PowerShell window, I can take this object and put it into a variable for future use. Let\u0026rsquo;s try it with a file called \u0026ldquo;Hollow.bmp\u0026rdquo; which is just a bitmap picture of the theITHollow.com logo.\nI\u0026rsquo;ve created a variable called \u0026ldquo;variable\u0026rdquo; and put in it the object \u0026ldquo;Hollow.bmp\u0026rdquo;\nSimple so far right?\nProperties Now that we have an object defined in a variable, let\u0026rsquo;s talk about properties. A property is a distinctive attribute or quality about an object. Going back to the analogy of your computer monitor being an object, let\u0026rsquo;s think about what a property of your computer monitor would be. Just to name a few of them, think of Screen Resolution, Bezel Color, Refresh Rate and Size. Everyone\u0026rsquo;s computer monitor should have these types of properties associated with them.\nJust like our computer monitors, PowerShell Objects also have properties. Let\u0026rsquo;s look at our variable again where the bitmap file \u0026ldquo;Hollow.bmp\u0026rdquo; was loaded into it.\nHere, I\u0026rsquo;ve run two commands. The first one is just to recall the $variable. When I type in my object, a list of default properties will be returned. In this case you can see that Directory, Last Write Time and Name are a few attributes that can be used to describe the object in our variable.\nThe second command was to list all of the properties of the object in our variable. As you can see there are many more properties of the object that can be referenced besides the default properties of the object.\nMethods Methods are things that you can do with an object. This could be modifying a property or returning a value to you. Back to the Computer Monitor object example, what kind of methods could you imagine your monitor has? How about PowerOff, PowerOn or RotateScreen? 
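Since the screenshots from this post are not reproduced here, the commands it walks through can be recreated with a few lines. This assumes the variable was populated with Get-Item and that Hollow.bmp sits in the current directory; both details are my assumptions rather than something shown in the original screenshots.

$variable = Get-Item .\Hollow.bmp              # load the file object into a variable
$variable                                      # default view shows a handful of properties such as Directory, LastWriteTime and Name
$variable | Get-Member -MemberType Property    # every property on the System.IO.FileInfo object
$variable | Get-Member -MemberType Method      # the methods discussed next, such as Encrypt() and MoveTo(), are listed the same way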
These are all things that can be done to the object.\nIn our PowerShell script we can list the methods available on our Hollow.bmp file.\nBelow, I\u0026rsquo;ve listed the methods that are available on the Hollow.bmp file and you can see they are all things to do to the file. Also, you\u0026rsquo;ll notice that in the definition you\u0026rsquo;ll see how the method is typically called.\nFor instance, the Encrypt method has a definition that looks like \u0026ldquo;Encrypt()\u0026rdquo;. The parentheses are used to store additional variables. Since Encrypt doesn\u0026rsquo;t have anything inside the parentheses, there are no options that can be added to the method. See Below, where I\u0026rsquo;ve used both the Encrypt and MoveTo methods on the file. See that the encrypt method has no additional paramaters, but the MoveTo method needs a new file location.\nSummary This post isn\u0026rsquo;t going to make you a PowerShell expert, but understanding how Objects, Methods and Properties all work should give you a good jump start on your scripting. These are just some ideas that are helpful to know going into your scripting endeavors so you\u0026rsquo;re not just blindly copying scripts off the Internet to get your tasks done.\n","permalink":"https://theithollow.com/2014/12/08/before-you-start-powershell/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/12/poshscreen1.png\"\u003e\u003cimg alt=\"poshscreen1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/12/poshscreen1.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003ePowerShell is an amazing tool that has limitless potential for Administrators, Engineers and Architects to automate routine tasks or do reporting on things their system management applications aren\u0026rsquo;t built for.  Whenever there is a task to be done on multiple systems and it might need to be done more than once, I find myself reaching for this valuable tool.\u003c/p\u003e\n\u003cp\u003eThe problem with PowerShell, just like a programming language is that it can be intimidating to get started.   This post is to give you a basic understanding of what you\u0026rsquo;ll be getting into before you start running PowerShell cmdlets.\u003c/p\u003e","title":"Before You Start PowerShell"},{"content":"Pure Storage presented at Storage Field Day 6 and I had the opportunity to visit their Headquarters for a second time to discuss their technology. I\u0026rsquo;ve written about \u0026ldquo;Pure\u0026rdquo; before after they presented at Virtualizaton Field Day 3 back in February but it was based more around their \u0026quot; Forever Flash\u0026quot; services. This time I was more interested in their architecture and found that their company name \u0026ldquo;Pure Storage\u0026rdquo; may be a bit misleading. Everyone knows that they produce arrays that are all SSD and provide tons of IOPS and low latency, blah blah blah. These arrays are far from being just a device full of fast storage. There is a lot of know how based on SSD architecture that has been put into this array to get more out of it than just fast drives.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Storage Field Day 6. 
This was the only compensation given.\nLet\u0026rsquo;s take a look at their write IO flow to see some information about what makes \u0026ldquo;Pure Storage\u0026rdquo;, more than just pure storage.\nWhen blocks are sent by a host to be written to the Pure array, 8 bit granularity pattern removal is used to reduce the amount of data entering NVRAM. This pattern removal includes zeros. This pattern removal usually happens when a virtual disk is trying to be eager-zeroed. The data is broken into 32KB chunks which are checksummed and written to at least half of the NVRAM modules to obtain quorum. The write is acknowledged to the host and the data reduction process begins. Next, 512-byte granularity Inline Deduplication takes place. Dedupe uses a hash table to identify potential duplicate blocks but a binary comparison is done before updating metadata. The hash table is not used as a trusted source, but rather a good place to start looking for matches. On the off chance that the controllers are very heavily utilized, this process is skipped to ensure high performance. Inline Compression is done next to compress the deduped blocks in NVRAM before writing them to SSD. The Lempel–Ziv–Oberhumer (LZO) algorithm is used for compression. Not everything is compressed here though, only blocks that would have moderate or higher cost savings will be compressed. Parity is calculated for the RAID 3D Algorithm to protect the data while on SSD The write is flushed from NVRAM to the SSDs. From the write IO Flow you can see that a lot of effort was put into making sure that the number of writes to solid state disks is minimized to prolong the life of the drives, but not at the expense of performance. Notice that the process may skip over Dedupe if there is too much activity.\nPure has a pair of background processes that help take care of any additional blocks that need attention. A background deduplication process is used to catch any blocks that might have been skipped during the inline process. As well, a second compression algorithm is applied to further reduce the space used.\nAnother process that happens in the background is monitoring disk writes. Some solid state disks behave inconsistently when both reads and writes are taking place simultaneously. The performance of the read, the write or both can be affected in this situation. Pure watches for these types of scenarios and to avoid these possible performance bottlenecks, will rebuild the parity on blocks to move the data to different drives. Pretty Cool! RAID being used as a function of performance, instead of just be a penalty for availability.\nSummary Pure Storage is obviously fast, I mean it\u0026rsquo;s an all flash array, but this isn\u0026rsquo;t just a storage appliance with solid state disks in it. 
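One detail in the write IO flow above is worth dwelling on: the dedupe hash table is only a hint, and a binary comparison is what actually confirms a match before any metadata is updated. As a toy illustration of that idea (this is emphatically not Pure's code, just a sketch of the concept), the following only records a block as a duplicate after a byte-for-byte check succeeds.

$sha = [System.Security.Cryptography.SHA256]::Create()
$seen = @{}                                                        # hash string -> first block stored with that hash
$blocks = @([byte[]](1..16), [byte[]](1..16), [byte[]](17..32))    # tiny stand-ins for 512-byte blocks
foreach ($block in $blocks) {
    $key = [System.BitConverter]::ToString($sha.ComputeHash($block))
    if ($seen.ContainsKey($key) -and -not (Compare-Object $seen[$key] $block -SyncWindow 0)) {
        'duplicate confirmed by binary comparison'                 # safe to dedupe this block
    } else {
        $seen[$key] = $block                                       # new block (or a hash collision): keep the data
    }
}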
It really was built with SSDs in mind and is more than pure storage, it\u0026rsquo;s pure storage with intelligence.\nCheck out some other posts from Storage Field Day 6 about Pure Storage from the other delegates.\nStorage Field Day 6 Livin La Pure\u0026rsquo;a Vida Storage Field Day 6 Day 2 Pure Storage\n","permalink":"https://theithollow.com/2014/12/01/just-pure-storage/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/11/purestoragelogo.jpg\"\u003e\u003cimg alt=\"purestoragelogo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/11/purestoragelogo-300x118.jpg\"\u003e\u003c/a\u003e\u003ca href=\"http://purestorage.com\"\u003ePure Storage\u003c/a\u003e presented at \u003ca href=\"http://techfieldday.com/event/sfd6\"\u003eStorage Field Day 6\u003c/a\u003e and I had the opportunity to visit their Headquarters for a second time to discuss their technology.  I\u0026rsquo;ve written about \u0026ldquo;Pure\u0026rdquo; before after they presented at \u003ca href=\"http://techfieldday.com/event/vfd3\"\u003eVirtualizaton Field Day 3\u003c/a\u003e back in February but it was based more around their \u0026quot; \u003ca href=\"http://purestorage.com/forever\"\u003eForever Flash\u003c/a\u003e\u0026quot; services.  This time I was more interested in their architecture and found that their company name \u0026ldquo;Pure Storage\u0026rdquo; may be a bit misleading.  Everyone knows that they produce arrays that are all SSD and provide tons of IOPS and low latency, blah blah blah.  These arrays are far from being just a device full of fast storage.  There is a lot of know how based on SSD architecture that has been put into this array to get more out of it than just fast drives.\u003c/p\u003e","title":"It\u0026#039;s not just PURE Storage"},{"content":"Please find links and articles about home labs so that you can try out things on your own! Hybrid Cloud Lab EMC vVNX for Home Labs Windows Server 2012 Storage for Home Labs My Home Lab, the Baby Dragon II Modified! 
Poor Man\u0026rsquo;s SRM Lab in a Box HP Microserver fully loaded with SSDs Open VPN for Home Lab Access Remotely A list of Simulators to get started with more advanced stuff theITHollow Hybrid Cloud LabMany of my daily activities at work now revolve around the idea of a Hybrid Cloud so some of my home lab activities have also followed suit\n","permalink":"https://theithollow.com/home-lab/","summary":"\u003ch1 id=\"please-find-links-and-articles-about-home-labs-so-that-you-can-try-out-things-on-your-own\"\u003ePlease find links and articles about home labs so that you can try out things on your own!\u003c/h1\u003e\n\u003ch1 id=\"hybrid-cloud-lab\"\u003e\u003ca href=\"/2015/03/09/hollow-lab-2015-baby-dragon-hybrid-cloud/\"\u003eHybrid Cloud Lab\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"emc-vvnx-for-home-labs\"\u003e\u003ca href=\"/2015/05/12/emc-vvnx-for-your-home-lab/\"\u003eEMC vVNX for Home Labs\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"windows-server-2012-storage-for-home-labs\"\u003e\u003ca href=\"/2013/09/24/windows-server-2012-as-a-storage-device-for-vsphere-home-lab/\"\u003eWindows Server 2012 Storage for Home Labs\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"my-home-lab-the-baby-dragon-ii-modified\"\u003e\u003ca href=\"/2013/04/new-baby-dragon-home-lab/\"\u003eMy Home Lab, the Baby Dragon II Modified!\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"poor-man\"\u003e\u003ca href=\"/2012/05/poor-mans-srm-lab-whitebox/\"\u003ePoor Man\u0026rsquo;s SRM Lab in a Box\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"hp-microserver-fully-loaded-with-ssds\"\u003e\u003ca href=\"/2013/12/add-ssds-hp-microserver/\"\u003eHP Microserver fully loaded with SSDs\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"open-vpn-for-home-lab-access-remotely\"\u003e\u003ca href=\"/2014/02/open-vpn-home-labs/\"\u003eOpen VPN for Home Lab Access Remotely\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"a-list-of-simulators-to-get-started-with-more-advanced-stuff\"\u003e\u003ca href=\"/2013/02/storage-simulators\"\u003eA list of Simulators to get started with more advanced stuff\u003c/a\u003e\u003c/h1\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2015/03/HollowLab1-1024x658.jpg\"\n         alt=\"theITHollow Hybrid Cloud LabMany of my daily activities at work now revolve around the idea of a Hybrid Cloud so some of my home lab activities have also followed suit\" width=\"1024\"/\u003e \u003cfigcaption\u003e\n            \u003cp\u003etheITHollow Hybrid Cloud LabMany of my daily activities at work now revolve around the idea of a Hybrid Cloud so some of my home lab activities have also followed suit\u003c/p\u003e","title":"Home Labs"},{"content":"\nSetup Home Lab SSL Certificate Authority Setup Home Lab SSL Root Certificates Create VMware SSL Web Certificate Create VMware SSL Certificate Requests Replacing VMware vCenter SSL Certificates Add SSL Certificate to VMware vCOps ","permalink":"https://theithollow.com/home-lab-ssl-certificates/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/11/sslguide.png\"\u003e\u003cimg alt=\"sslguide\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/11/sslguide.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"setup-home-lab-ssl-certificate-authority\"\u003e\u003ca href=\"http://wp.me/p32uaN-XF\"\u003eSetup Home Lab SSL Certificate Authority\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"setup-home-lab-ssl-root-certificates\"\u003e\u003ca 
href=\"/2014/08/setup-home-lab-ssl-root-certificates/\"\u003eSetup Home Lab SSL Root Certificates\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"create-vmware-ssl-web-certificate\"\u003e\u003ca href=\"/2014/08/create-vmware-ssl-web-certificate/\"\u003eCreate VMware SSL Web Certificate\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"create-vmware-ssl-certificate-requests\"\u003e\u003ca href=\"/2014/08/create-vmware-ssl-certificate-requests/\"\u003eCreate VMware SSL Certificate Requests\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"replacing-vmware-vcenter-ssl-certificates\"\u003e\u003ca href=\"/2014/08/replacing-vmware-vcenter-ssl-certificates/\"\u003eReplacing VMware vCenter SSL Certificates\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"add-ssl-certificate-to-vmware-vcops\"\u003e\u003ca href=\"/2014/09/add-ssl-certificate-vmware-vcops/\"\u003eAdd SSL Certificate to VMware vCOps\u003c/a\u003e\u003c/h1\u003e","title":"Home Lab SSL Certificates"},{"content":"vRealize Automation 7 Guide\nvRealize Automation 7 Guide\nVMware Site Recovery Manager 5.5 Guide VMware Site Recovery Manager 5.8 Guide vRealize Automation 6 Guide (Formerly vClouad Automation Center) Home Lab SSL Certificates Microsoft Dynamic Access Control Microsoft Exchange 2013 Transition ","permalink":"https://theithollow.com/guides/","summary":"\u003cp\u003e\u003cstrong\u003e\u003ca href=\"/2016/01/11/vrealize-automation-7-guide/\"\u003evRealize Automation 7 Guide\u003c/a\u003e\u003c/strong\u003e\u003c/p\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2016/01/vRA7Guide1.png\"\n         alt=\"vRealize Automation 7 Guide\" width=\"649\"/\u003e \u003cfigcaption\u003e\n            \u003cp\u003evRealize Automation 7 Guide\u003c/p\u003e\n        \u003c/figcaption\u003e\n\u003c/figure\u003e\n\n\u003cp\u003e\u003ca href=\"/2016/07/18/guide-getting-started-azure/\"\u003e\u003cimg alt=\"Getting Started with Microsoft Azure Guide\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2016/07/Azure-Guide.png\"\u003e\u003c/a\u003e\u003ca href=\"/2013/11/vmware-site-recovery-manager-55-guide/\" title=\"VMware Site Recovery Manager 5.5 Guide\"\u003e\u003cstrong\u003eVMware Site Recovery Manager 5.5 Guide\u003c/strong\u003e\u003c/a\u003e \u003ca href=\"/2013/11/vmware-site-recovery-manager-55-guide/\" title=\"VMware Site Recovery Manager 5.5 Guide\"\u003e\u003cimg alt=\"5.5Guide\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/5.5Guide.png\"\u003e\u003c/a\u003e \u003cstrong\u003e\u003ca href=\"http://wp.me/p32uaN-189\"\u003eVMware Site Recovery Manager 5.8 Guide\u003c/a\u003e\u003c/strong\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/12/5.8Guide.png\"\u003e\u003cimg alt=\"5.8Guide\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/12/5.8Guide.png\"\u003e\u003c/a\u003e \u003cstrong\u003e\u003ca href=\"/vrealize-automation-6-guide-formerly-vcac/\" title=\"vRealize Automation 6 Guide (formerly vCAC)\"\u003evRealize Automation 6 Guide (Formerly vClouad Automation Center)\u003c/a\u003e\u003c/strong\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/09/vRealize6Guide.png\" title=\"vRealize Automation 6 Guide (formerly vCAC)\"\u003e\u003cimg alt=\"vRealize6Guide\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/09/vRealize6Guide.png\"\u003e\u003c/a\u003e \u003cstrong\u003e\u003ca href=\"/home-lab-ssl-certificates/\"\u003eHome Lab SSL 
Certificates\u003c/a\u003e\u003c/strong\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/11/sslguide.png\"\u003e\u003cimg alt=\"sslguide\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/11/sslguide.png\"\u003e\u003c/a\u003e \u003cstrong\u003e\u003ca href=\"/category/series/microsoft-dynamic-access-control/\"\u003eMicrosoft Dynamic Access Control\u003c/a\u003e\u003c/strong\u003e \u003cstrong\u003e\u003ca href=\"/category/series/microsoft-exchange-2013-transition/\"\u003eMicrosoft Exchange 2013 Transition\u003c/a\u003e\u003c/strong\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/04/R2D2Mailbox.jpg\"\u003e\u003cimg alt=\"R2D2Mailbox\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/04/R2D2Mailbox.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"Guides"},{"content":" When I saw the Virtual SAN solution from VMware my first thought was that a small office might really love this solution because it eliminated the requirement for a Storage Area Network (SAN) for small offices. Often I would see some small remote offices that have a requirement for a few servers and a bit of highly available storage but this was cost prohibitive for a variety of reasons. Remote offices would need power, cooling and some sort of staff (possibly remote staff) to manage all of these services as well as paying for a shared storage device. Remote office SAN\u0026rsquo;s are typically a waste since some of these offices really have a very minimal storage footprint, such as less than 5 TB of disk space. It would seem like a waste to by a highly available SAN to house 2TB of data, am I right?\nWhen I saw StorMagic at Storage Field Day 6, I realized that they saw the same thing that I did, except they saw an opportunity that Virtual SAN doesn\u0026rsquo;t have. Virtual SAN is too expensive for some of these companies because it requires three servers just for high availability and also requires a combination of SSD and HDDs. StorMagic\u0026rsquo;s strategy is all about targeting these \u0026ldquo;Edge\u0026rdquo; locations with bare-bones simple methodology to limit cost for these sites. They used the term \u0026ldquo;Edge Sites\u0026rdquo; over Remote Offices because \u0026ldquo;Remote Office\u0026rdquo; give you the illusion that there is an actual office involved with a certain number of employees. When they say \u0026ldquo;Edge\u0026rdquo; they could be talking about a phone closet in a remote location that is used for monitoring something such as a wind farm. StorMagic attempted to limit costs by only requiring two nodes. This could be a large cost savings if you are comparing VSAN and StorMagic because of the elimination of one-third of the necessary servers. Take that multiplied by a large number of sites, and the cost savings would definitely be noticeable.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Storage Field Day 6. This was the only compensation given.\nThe Technology StorMagic uses a pair of iSCSI virtual SAN appliances (VSA) that work with two nodes to provide the High Availability. The VSA runs on multiple hypervisors allowing for a lot of customer choice and isn\u0026rsquo;t concerned with the types of underlying hardware including the disk speeds. You could use two different vendors for server hardware and SSDs on one server and HDDs on another server. (I\u0026rsquo;m not saying you should do this, but rather pointing out that it\u0026rsquo;s possible.) 
Both of the VSAs are active, so clients can connect to either of the VSAs to retrieve data. In order to prevent a split-brain scenario like in the event of a host isolation event, a quorum service is used as sort of a heartbeat to determine who has the most up to data copy of the data.\nData is written directly to the VSA via the iSCSI connection and then synchronous Mirrored to the additional VSA before the transaction is committed. This is to ensure that the failure of a single host doesn\u0026rsquo;t stop all of the data from being lost. If this were to happen a third party High Availability state could be used to power on servers on the other host, i.e VMware\u0026rsquo;s High Availability will power VMs up on the available node and all the data will be available.\nThere is a plugin available for vCenter to manage all of the deployment of the StorMagic VSA, but also a standalone management interface to allow you to use the free ESXi version to run the VSAs. Another option to allow you to run your edge offices cheaply.\nPricing These aren\u0026rsquo;t exact numbers and your mileage may vary, but a general rule of thumb is that the product is $500 per TB up to 16TB and anything above 16TB is considered \u0026ldquo;unlimied\u0026rdquo; which is a flat rate of around $10,000. You have to admit, that for a small remote office (or Edge) this is priced very well for companies to deploy many offices for a pretty low price point.\nSummary This is a niche storage solution that could be very useful to companies looking for some High Availability at edge sites and aren\u0026rsquo;t concerned about fancy bells and whistles. This product has a limited function by design and if utilized properly, could save a lot of money in the right instances. Check them out if you\u0026rsquo;re looking for an edge solution for your virtual environment.\n","permalink":"https://theithollow.com/2014/11/24/highly-available-enterprise-edge-storage-stormagic/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/11/StorMagicLogo.png\"\u003e\u003cimg alt=\"StorMagicLogo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/11/StorMagicLogo-300x61.png\"\u003e\u003c/a\u003e When I saw the Virtual SAN solution from VMware my first thought was that a small office might really love this solution because it eliminated the requirement for a Storage Area Network (SAN) for small offices.  Often I would see some small remote offices that have a requirement for a few servers and a bit of highly available storage but this was cost prohibitive for a variety of reasons.  Remote offices would need power, cooling and some sort of staff (possibly remote staff) to manage all of these services as well as paying for a shared storage device.  Remote office SAN\u0026rsquo;s are typically a waste since some of these offices really have a very minimal storage footprint, such as less than 5 TB of disk space.  
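To put that Storage On-Demand rule of thumb into rough numbers: at roughly $500 per TB, 10 TB of used capacity works out to about $5,000 and 16 TB to about $8,000, while anything beyond that falls into the flat unlimited tier at around $10,000, so the flat tier starts to pay for itself somewhere around 20 TB of used space. These are ballpark figures from the presentation, not a quote.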
It would seem like a waste to by a highly available SAN to house 2TB of data, am I right?\u003c/p\u003e","title":"Highly Available Enterprise Edge Storage from StorMagic"},{"content":"CONTENTS\n","permalink":"https://theithollow.com/events/categories/","summary":"\u003cp\u003eCONTENTS\u003c/p\u003e","title":"Categories"},{"content":"CONTENTS\n","permalink":"https://theithollow.com/events/locations/","summary":"\u003cp\u003eCONTENTS\u003c/p\u003e","title":"Locations"},{"content":"CONTENTS\n","permalink":"https://theithollow.com/events/my-bookings/","summary":"\u003cp\u003eCONTENTS\u003c/p\u003e","title":"My Bookings"},{"content":"CONTENTS\n","permalink":"https://theithollow.com/events/tags/","summary":"\u003cp\u003eCONTENTS\u003c/p\u003e","title":"Tags"},{"content":" So you just bought VMware Virtual SAN and have stood up your site with some ESXi hosts. When all of a sudden you realize that Virtual SAN is block based and you really needed file based storage. OH NO! What could we do to resolve this situation?\nI appears as though Nexenta was looking out for this situation and developed a product called Nexenta Connect for VMware Virtual SAN and they presented it during Storage Field Day 6\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Storage Field Day 6. This was the only compensation given..\nI am personally of the belief that the use cases for this type of solution are minimal. Maybe you\u0026rsquo;ve got servers in a non-VSAN cluster that need shared storage, and you want to present some from the pool you already have. This would be a good reason to use Nexenta Connect. However, from a high level perspective, I would really try not to buy a product like VMware VSAN and then buy a second product to put on top of it to make it work the way I want. I would prefer to purchase an NFS product from the beginning if that was my end goal, or if I already had VSAN and needed some file services, I\u0026rsquo;d stand up a virtual machine as a file server. But there are benefits to using a product like Nexenta Connect.\nNexentaStor is a familar software product that can be used to take traditional x86 hardware and turn it into a fairly full featured SAN, with capabilities such as replication, compression, deduplication and snapshots. One of the issues is that particular product is meant for a scale up approach. So if I\u0026rsquo;m running out of capacity, I need to find a way to add more capacity to the server it\u0026rsquo;s running on, maybe by adding a local shelf or by replacing drives with bigger ones. What if I\u0026rsquo;m looking for a scale out solution? Well VMware VSAN pretty much takes care of that for me. So if I need more storage, I can add more hosts and add it to the global storage pool. Once that is done, if I\u0026rsquo;m running Nexenta Connect on top of it, my underlying storage looks like it just scaled up and we\u0026rsquo;re off to the races!\nNexenta Connect does have a list of features that are not available on VSAN as of yet, so there are some conceivable reasons why you might take this approach of using a product on a product.\nProvides NFS v3v4 and SMB Volume level snapshots Volume level replication Inline Data Dedupliation Inline Data Compression Summary I still believe that there is a very small use case for this type of solution, but if you\u0026rsquo;ve already purchased VMware VSAN and need some extra capabilities, Nexenta Connect would be a very simple way to add to your existing solution to keep your business moving forward. 
For more information please check out Nexenta\u0026rsquo;s site at http://www.nexenta.com/products/nexentaconnect or check out the following articles related to the SFD6 presentation from Nexenta.\nNexenta - Back in da house\u0026hellip; Storage Field Day 6 - Day 2 - Nexenta Sorry Nexenta, but I don\u0026rsquo;t get it\u0026hellip;and questions arise\n","permalink":"https://theithollow.com/2014/11/20/vsan-want-run-nfs-check-nexenta/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/11/nexentalogo1.png\"\u003e\u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/11/nexentalogo1-300x91.png\"\u003e\u003c/a\u003e So you just bought VMware Virtual SAN and have stood up your site with some ESXi hosts.  When all of a sudden you realize that Virtual SAN is block based and you really needed file based storage.  OH NO!  What could we do to resolve this situation?\u003c/p\u003e\n\u003cp\u003eI appears as though Nexenta was looking out for this situation and developed a product called Nexenta Connect for VMware Virtual SAN and they presented it during \u003ca href=\"http://techfieldday.com/event/sfd6/\"\u003eStorage Field Day 6\u003c/a\u003e\u003c/p\u003e","title":"Have VSAN?  Want to run NFS on it?  Check out Nexenta!"},{"content":"During Storage Field Day 6, I was fortunate enough to get presentation from Andrew Warfield from CohoData about a variety of things. I\u0026rsquo;ll say a variety mainly because my head is still swimming from all of the concepts that Andy was trying convey. If you don\u0026rsquo;t believe me, watch the videos and decide for yourself. WARNING!!! BE PREPARED TO PUT YOUR THINKING CAP ON! One of the concepts Andy was talking about was the idea that going forward, all arrays should be hybrid arrays. Immediately, my mind wondered what some of the \u0026ldquo;All Flash\u0026rdquo; array vendors would say about this, but he went on to explain this premise in more detail.\nConsider what would happen if you could see analytics about your workloads to see cache hit ratios. This is something that several vendors can show now including Coho and Cloud Physics. You can now decide to buy more flash based off of how much a workload will benefit from it. Buying additional flash that isn\u0026rsquo;t large enough for your working set won\u0026rsquo;t provide you much value. For an example, take a look at the crude diagram below. You can see that you\u0026rsquo;ll gain larger benefits from flash if you buy 50 GBs of flash vs only 45 GBs due to the working sets. Alternatively going from 20GBs to 40GBs gives you almost no performance benefit at all due to the size of this working set.\nNext consider what happens with multiple workloads on a storage array. We have our heavy readwrite apps like databases next to some file storage that is rarely accessed. If I\u0026rsquo;m going to put more money into my array to improve performance how should I do this?\nWhich strategy makes more sense?\n1. Continue to add more and more flash to the array up to the point where the entire array is a single tier?\n2. Look at the working set of your most important working sets and put even faster media in the array for even faster retrieval?\nSince we don\u0026rsquo;t really care about the data that is rarely accessed, should we really spend the money on making that data more quickly accessible by putting more and more SSDs into the array, or leave it alone and use the money to speed up the important stuff? 
If we decide to quit adding SSDs and instead add NVMe we can further accelerate the most demanding workloads. Andy\u0026rsquo;s argument is that hybrid array\u0026rsquo;s aren\u0026rsquo;t just going to be around for a while until flash is cheaper, but rather the hybrid model will change to a faster medium than SSDs and solid state.\nSummary Hybrid Arrays may very well change from SSDs and spinning disk to NVMe and SSDs or NVDIMM and SSD but they would still be considered hybrid. What do you think? Does this methodology make sense, or will arrays be more like Pure Storage and XtremIO where all the storage is Solid State, much like older arrays were all spinning disks? I\u0026rsquo;d love to hear thoughts about this idea in the comments.\n","permalink":"https://theithollow.com/2014/11/17/will-new-storage-arrays-hybrid/","summary":"\u003cp\u003eDuring \u003ca href=\"http://techfieldday.com/event/sfd6/\"\u003eStorage Field Day 6\u003c/a\u003e, I was fortunate enough to get presentation from \u003ca href=\"https://twitter.com/andywarfield\"\u003eAndrew Warfield\u003c/a\u003e from \u003ca href=\"http://cohodata.com\"\u003eCohoData\u003c/a\u003e about a variety of things.  I\u0026rsquo;ll say a variety mainly because my head is still swimming from all of the concepts that Andy was trying convey.  If you don\u0026rsquo;t believe me, watch the videos and decide for yourself.  WARNING!!! BE PREPARED TO PUT YOUR THINKING CAP ON!  One of the concepts Andy was talking about was the idea that going forward, all arrays should be hybrid arrays.  Immediately, my mind wondered what some of the \u0026ldquo;All Flash\u0026rdquo; array vendors would say about this, but he went on to explain this premise in more detail.\u003c/p\u003e","title":"Will All New Storage Arrays be Hybrid?"},{"content":" It\u0026rsquo;s hard to get too excited about a monitoring system, especially one that\u0026rsquo;s main focus is to notify a hardware vendor of a problem. However, Nimble Storage has an impressive phone home solution called InfoSight that they are leveraging for more than just fault notifications and opening tickets on failed hardware. This solution is being used for a variety of analytical purposes to both improve their product as well as improve the customer\u0026rsquo;s experience with their array purchase.\nPhone Home Nothing too special here right? It\u0026rsquo;s nothing new for a storage array to phone home to send faults, but Nimble is sending additional telemetry data about workloads as well. Nimble can take this data to help determine if there is enough solid state cache in the system for the array to function well. Maybe this is selfish of Nimble to get this data, after all, their solution to fixing this \u0026ldquo;not enough cache\u0026rdquo; problem would likely be to sell you more SSDs but as a customer, don\u0026rsquo;t you want this to be the case? You\u0026rsquo;ll be able to see this data in your InfoSight portal, and if you choose to give your preferred partner access to this data, they can help get you the hardware you need to fix it.\nAlright, if that doesn\u0026rsquo;t knock your socks off, then how about this. Nimble is using the data from all of their customers who choose to enable phone home (about 94% of their install base) to proactively alert other customers of upcoming issues. 
I\u0026rsquo;ve heard already of an issue that had happened with a particular workload on a certain firmware version that caused a problem with an array and after InfoSight collected it, Nimble was able to tell their customers that also had similar environments exactly when the array would start encountering problems. That is pretty cool!\nWhen it comes to firmware updates, Nimble Storage will use the telemetry data from your array and if any issues might affect your particular array, they will remove the version of firmware that could affect it from their support portal on a customer basis. For instance, if you are on 2.0 and a 2.1 bug is identified for something your array is doing, you might only see an upgrade path from 2.0 to 2.2. Also, on the topic of upgrades, Nimble mentioned to the delegates of Storage Field Day 6 that about 60% of their customers will run firmware upgrades to the array during the middle of the day. With shocked looks on our faces we asked them to repeat that statistic and they held fast. That’s pretty amazing confidence in the array if that is true. I\u0026rsquo;d love to see some hard statistics on this from their analytics engine.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Storage Field Day 6. This was the only compensation given.\nFuture Updates Customers might not see an enormous gains from Nimble getting tons of performance data on all the arrays, but current customers will see future benefits because Nimble will have tons of information to improve the product for later releases. Nimble Storage and their shareholders should be very happy about this data because they can improve their product to meet the demands their customers really need. There won\u0026rsquo;t need to be any guessing here. InfoSight collects telemetry data every 5 minutes on all of their arrays by default and that data is loaded into their HP Vertica Analytics Engine for analysis. (yes, they house all of this data on Nimble Storage Arrays and not some place in the cloud like Amazon. They are eating their own dog food here. In fact some of their arrays are named \u0026ldquo;Kibble\u0026rdquo; and \u0026ldquo;Bits\u0026rdquo;. So Clever).\nCloud Solutions OK, this is something brand new that Nimble Storage just announced and I love the idea, especially for small to medium businesses. Nimble is now offering a \u0026quot; Storage On-Demand\u0026quot; solution which results in Nimble (or a partner) to install an array in your data center, but you only pay for the amount of space you use on it, on a monthly basis. It\u0026rsquo;s like having cloud storage, only in your own server racks! This is only possible because of InfoSight reporting capabilities. At the end of each month, Nimble Storage can send out a bill based on the data InfoSight has collected. Nimble is really deploying an overprovisioned array to your site, but not making you pay for the wasted space that you didn\u0026rsquo;t use. There will be several tiers that you can get in such as Platinum, Gold and Silver, etc. Where this correlates to the size of the controllers you need as well as the amount of SSD Cache you want.\nI think this is an amazing opportunity for small to medium business because of how difficult a large capital expense is to small companies. In addition, these small shops without a dedicated SAN Administrator can really benefit from InfoSight telling them how their array is doing and when changes should be made to it. 
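As a rough idea of what that kind of guidance can look like, here is a hypothetical rule of my own (not Nimble's actual logic, and the numbers are invented): even something as simple as watching the cache hit ratio over a week can tell a small shop when it is time to think about more SSD.

# Hypothetical week of daily cache hit ratios pulled from array telemetry.
daily_hit_ratio = [0.91, 0.88, 0.84, 0.79, 0.76, 0.74, 0.71]

TARGET = 0.85  # made-up threshold for "the working set still fits in flash"

average = sum(daily_hit_ratio) / len(daily_hit_ratio)
trending_down = daily_hit_ratio[-1] < daily_hit_ratio[0]

if average < TARGET and trending_down:
    print(f"Average hit ratio {average:.0%} and falling; time to look at adding SSD cache.")
else:
    print(f"Average hit ratio {average:.0%}; the cache is keeping up with the working set.")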
This really could replace the need for an additional full time employee.\nNimble Storage should love this because it helps introduce the product to companies. If the array really does what Nimble says it will, the customer is going to continue to pay for this service, and if they decide that Nimble isn\u0026rsquo;t doing their job can drop the solution anytime after the first year. Nimble will be forced to continue to innovate to keep these customers.\nMinimum requirements to enter into the \u0026ldquo;Storage On-Demand\u0026rdquo; solution is a 40TB array and a 1 year contract. After one year it will be month to month. You should be lucky to find a cell phone contract with those terms.\nSummary Nimble Storage is obviously an array vendor and most of the time we only talk about how the array performs and why it\u0026rsquo;s better or worse than other platforms. But in this case Nimble has a very compelling value add to their solution that is worth a second look if you\u0026rsquo;re deciding between them and one of their competitors. To learn more about the solutions, check out the following articles from fellow Storage Field Day 6 Delegates:\nCrowd-sourced Metrics and Automated Support Should Be Key Features In Your Next Storage Array - It\u0026rsquo;s Good for the Business Storage Field Day 6 Day 3 - Nimble Storage Go Ahead - Update your Storage Operating System in the Middle of the Day\n","permalink":"https://theithollow.com/2014/11/10/nimble-storage-data-analytics-infosight/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/11/Nimble-Storage-Logo.png\"\u003e\u003cimg alt=\"Nimble-Storage-Logo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/11/Nimble-Storage-Logo.png\"\u003e\u003c/a\u003e It\u0026rsquo;s hard to get too excited about a monitoring system,  especially one that\u0026rsquo;s main focus is to notify a hardware vendor of a problem.  However, \u003ca href=\"http://NimbleStorage.com\"\u003eNimble Storage\u003c/a\u003e has an impressive phone home solution called InfoSight that they are leveraging for more than just fault notifications and opening tickets on failed hardware.  This solution is being used for a variety of analytical purposes to both improve their product as well as improve the customer\u0026rsquo;s experience with their array purchase.\u003c/p\u003e","title":"Nimble Storage Data Analytics - InfoSight"},{"content":" I\u0026rsquo;ve seen front-end storage systems before and never really been too impressed with them. My primary thoughts about a front-end storage system was this, \u0026ldquo;My storage array already has a front-end, why do I want to put another layer of abstraction in front of it.\u0026rdquo; Obviously, there is still a use case for having a single namespace to hide the underlying systems, which might be neat so that a company could use multiple arrays of different types or even vendors and have a single place to go to access that storage. For the most part, I still think that this is a luxury that many companies can\u0026rsquo;t justify since some extra time spent by the infrastructure team will do the job.\nAvereCloud2\nEnter Storage Field Day 6 and the presentation by Avere Systems. When Avere did their presentation I was forced to re-think my opinions of a third party front-end storage solution. 
There were a few specific use cases that I thought were compelling reasons for enterprises to consider a solution like Avere.\nThe coolest idea was to use their newer Virtual FXT product to provide access to your on-premises storage up to Amazon EC2. Amazon typically has very inconsistent performance when it comes to some of their cloud services, especially S3 storage but it\u0026rsquo;s very cost effective for a company to do some DevOps on Amazon\u0026rsquo;s cloud since you can spin down services when a project is done and you stop paying for them. It\u0026rsquo;s a cloud service, nuff said? Avere allows you to bypass Amazon\u0026rsquo;s storage system and use your own on-prem storage which is (hopefully) more consistent performance. It does this by utilizing their Virtual FXT appliance that can be spun up on an EC2 image and it will cache reads and writes for EC2 but utilizes your own hardware. I still don\u0026rsquo;t think this is a fit for everyone, but is an pretty interesting solution.\nThe second use case is to use these front-end NAS devices for Small Office Home Office (SOHO) to provide much better performance for NFS systems that might be in a branch office, and the primary storage is in the corporate office. This may be a cheaper solution for some organizations than doing some sort of cluster across datacenters or replicating data between filers. Caching hot data in your local site will probably get you \u0026ldquo;good enough\u0026rdquo; performance to run a small office environment without needing to buy a large NAS Solution for this small office. You could assume that the Virtual FXT appliance will be expanded to run on vSphere or Hyper-V as well as Amazon EC2. It\u0026rsquo;s a logical next step, especially since VMware and Microsoft also have their own clouds running on vSphere Hyper-V.\nLastly, we can also assume that we have an aging, legacy filer that is still useful, but not providing the performance that you really need. Throwing an appliance such as the Avere FXT in front of the filer, can handle the read-caching and write-back caching can both improve the readwrite latency with faster drives, hardware etc., but it will also take some stress off of the primary storage array.\nSummary Thanks to Avere Systems\u0026rsquo; demonstration about their product, I have a newly realized appreciation for this type of service and it really does have a useful story that can be told and it will be added to my utility belt for architectural obstacles in the future.\nCheck out more information about Avere from the following articles:\nAvere Systems, great technology but\u0026hellip; Storage Field Day 6 Day 1 - Avere Avere Announces Software Updates, Adds Virtual FXT Filer\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Storage Field Day 6. This was the only compensation given.\n","permalink":"https://theithollow.com/2014/11/05/local-premises-storage-ec2-provided-avere-systems/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/11/avere-logo.png\"\u003e\u003cimg alt=\"avere-logo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/11/avere-logo.png\"\u003e\u003c/a\u003e I\u0026rsquo;ve seen front-end storage systems before and never really been too impressed with them.  
My primary thoughts about a front-end storage system was this, \u0026ldquo;My storage array already has a front-end, why do I want to put another layer of abstraction in front of it.\u0026rdquo;  Obviously, there is still a use case for having a single namespace to hide the underlying systems, which might be neat so that a company could use multiple arrays of different types or even vendors and have a single place to go to access that storage.  For the most part, I still think that this is a luxury that many companies can\u0026rsquo;t justify since some extra time spent by the infrastructure team will do the job.\u003c/p\u003e","title":"Local Premises Storage for EC2 Provided by Avere Systems"},{"content":" This week there will be another great TechFieldDay.com event, November 5th - 7th in Silicon Valley. If you\u0026rsquo;re not familiar with the Tech Field Days (and shame on you if this is the case), this is an event that brings together IT product vendors and independent bloggers to share thoughts and ideas. This event specifically focuses on enterprise storage and data protection for both physical and virtual environments. You guessed it, this one is called a Storage Field Day and it\u0026rsquo;s the sixth SFD that the folks at GestaltIT have put together.\nI highly recommend tuning in to the live feed over at techfieldday.com and listen to some of the vendors that have graciously brought us independent bloggers into their own offices to give us a glimpse at what they\u0026rsquo;re working on. It should be a great time. I was lucky enough to attend the Virtualization Field Day 3 back in February and it was a great experience that I got to share with several delegates and companies that I\u0026rsquo;ve conversed with over social media in the past. I have very high hopes for this event.\nHere is a list of vendors that I hope will not let me down. I expect them to be great.\nIf you want to follow along on twitter, check out the hashtag #SFD6 or follow our delegates. Also, feel free to tweet your own questions to the #SFD6 group and someone will get your questions to the presenters. It like an entire online community will be in the room.\nEric Shanks (Boy, I really hope you\u0026rsquo;re already following this guy!) - @Eric_Shanks\nArjan Timmerman - @ArjanTim\nChin-Fah Heoh - @StorageGaga\nDan Frith - @PenguinPunk\nDennis Martin - @Demartek\nEnrico Signoretti - @ESignoretti\nJarett Kulm - @JK47TheWeapon\nJohn Obeto - @JohnObeto\nJon Klaus - @JonKlaus\nNigel Poulton - @NigelPoulton\nRay Lucchesi - @RayLucchesi\nScott Lowe - @OtherScottLowe (OtherScottLowe in your twitter feed, but Scott Lowe in your hearts!)\nDON\u0026rsquo;T Forget to add the master minds behind this event, Stephen Foskett, Tom Hollingsworth, and of course Claire Chaplais.\nYou can also watch the live stream right here, from theITHollow.com site, or the videos will all be available online after the event.\n","permalink":"https://theithollow.com/2014/11/03/storage-field-day-6/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/11/SFD-Logo2-150x150.png\"\u003e\u003cimg alt=\"SFD-Logo2-150x150\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/11/SFD-Logo2-150x150.png\"\u003e\u003c/a\u003e This week there will be another great \u003ca href=\"http://techfieldday.com\"\u003eTechFieldDay.com\u003c/a\u003e event, November 5th - 7th in Silicon Valley.  
If you\u0026rsquo;re not familiar with the Tech Field Days (and shame on you if this is the case), this is an event that brings together IT product vendors and independent bloggers to share thoughts and ideas.  This event specifically focuses on enterprise storage and data protection for both physical and virtual environments.  You guessed it, this one is called a Storage Field Day and it\u0026rsquo;s the sixth SFD that the folks at \u003ca href=\"http://gestaltit.com/\"\u003eGestaltIT\u003c/a\u003e have put together.\u003c/p\u003e","title":"Storage Field Day 6"},{"content":"During a recent install, I got stuck on an issue (or so I thought) assigning an SSL Certificate to some of the vRealize Automation Appliances. I went through all of the installation procedures and the appliance stated \u0026ldquo;SSL Certificate Installed Successfully\u0026rdquo;, but when I went to the appliance, the certificate still showed the default VMware certificate.\nYou can see when I go to the appliance, I was getting a warning on the SSL Certificate.\nIf I clicked on the details I found out that the certificate was actually the default VMware certificate. I KNOW I installed my own certificate.\nAfter troubleshooting this for a bit, thinking I was going crazy, I got some help from the community and found out that the console URL doesn\u0026rsquo;t use the certificate that you apply. Sure enough, if you check the web URL for the console, my certificate was installed as is expected.\nI know this is probably a small thing, but just in case someone else was trying to resolve this \u0026ldquo;issue\u0026rdquo;, it is by design.\n","permalink":"https://theithollow.com/2014/10/27/vmware-appliance-console-certificates/","summary":"\u003cp\u003eDuring a recent install, I got stuck on an issue (or so I thought) assigning an SSL Certificate to some of the vRealize Automation Appliances.  I went through all of the installation procedures and the appliance stated \u0026ldquo;SSL Certificate Installed Successfully\u0026rdquo;, but when I went to the appliance, the certificate still showed the default VMware certificate.\u003c/p\u003e\n\u003cp\u003eYou can see when I go to the appliance, I was getting a warning on the SSL Certificate.\u003c/p\u003e","title":"VMware Appliance Console Certificates"},{"content":"The home lab got a vCAC (now renamed vRealize Automation) refresh to version 6.1 recently and although I\u0026rsquo;d posted a guide to installing vCAC 6 earlier, I found myself having a few errors with my vCAC 6.1 deployment. The only difference in my environment was the version of Windows I used for the IaaS components. Instead of using server 2008R2 as I did with 6.0, I used Server 2012 R2 for vCAC 6.1 since it was now supported.\nI found that once I added my vCenter server as an endpoint to vCAC, I could see my clusters as I would have expected, but they didn\u0026rsquo;t show any resources. When I went to add a reservation, I couldn\u0026rsquo;t see how much RAM was available or any of the datastores that I could use.\nIf you look in the vCAC logs, you\u0026rsquo;d find errors every two minutes refering to \u0026ldquo;Error executing query usp_SelectAgent\u0026rdquo; like the one below.\nAfter a bit of playing around I found Event ID 4874 in the Windows event log on the IaaS Server.\nThis was the smoking gun. That error led me to a VMware KB Article that explained the issue. 
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=2038943\nWhat I found was that my SQL Server component registry security settings needed to be modified in order for IaaS to query the database correctly. After changing the below settings, everything worked again. I hope this helps someone else having similar issues.\n","permalink":"https://theithollow.com/2014/10/20/vcac-usp_selectagent-sql-errors/","summary":"\u003cp\u003eThe home lab got a vCAC (now renamed vRealize Automation) refresh to version 6.1 recently and although I\u0026rsquo;d posted a \u003ca href=\"/vrealize-automation-6-guide-formerly-vcac/\"\u003eguide\u003c/a\u003e to installing vCAC 6 earlier, I found myself having a few errors with my vCAC 6.1 deployment.  The only difference in my environment was the version of Windows I used for the IaaS components.  Instead of using server 2008R2 as I did with 6.0, I used Server 2012 R2 for vCAC 6.1 since it was now supported.\u003c/p\u003e","title":"vCAC usp_SelectAgent SQL Errors"},{"content":" I was asked by a coworker, why I blog. He asked, \u0026ldquo;Why would you spend the time writing, when people have official documentation to use?\u0026rdquo; His point was that it\u0026rsquo;s silly to write how-to articles about things that are already officially documented by a vendor. To further his curiosity, he wanted to know why I would post things that could possibly get me into trouble if I\u0026rsquo;m posting negative things or incorrect information.\nHe had a point, but my answer is this: I\u0026rsquo;ve been helped by so many people, reading their work, talking to them in person, or conversing via twitter, and I feel an obligation to help others do the same thing. Official documentation is sometimes confusing and often leaves out pieces of information that may make the solution seem poor. For instance, a bug in the software is rarely mentioned in the official documentation by a vendor. A software vendor wouldn\u0026rsquo;t want to make their product seem less impressive, after all it\u0026rsquo;s their job to sell software.\nThis week, I was particularly in the weeds with a project and had several challenges deploying a solution based on the official documentation alone. This was a complex solution that required many moving pieces to fit together correctly and I was stuck. On two occasions I ended up reaching out to bloggers and people I knew in the industry for a push in the right direction.\nBased on a blog post I\u0026rsquo;d read on virtualizationteam.com I was working through a problem and needed a hand. Luckily the author, Eiad Al-Aqqad, had contact information on his site and I sent him a quick email. Within an hour I had a response and got me moving again. It was a simple question, and a simple answer, but was a cricital piece of what I was working on and it helped immensely. I should also mention that I\u0026rsquo;ve never met Eiad Al-Aqqad before, so him taking a second to answer a question of a complete stranger is really something.\nLater in the week, I was stuck on a different issue on the same solution we were deploying. At the end of a hard day I decided to again ask for help and I knew someone who would know the answer. Via Twitter I sent up a call for help from Yves Sandfort.\nAs you can see, Yves was in Europe and at a tech conference, but took a second to respond to my tweet and later on, my more lengthy email. 
Sure enough, the following morning when I got up, he had responded to my email and I was again on my way after he confirmed my suspicions about what was happening.\nNeither of these guys that I asked for help from were co-workers. Neither of them got paid for the assistance they lent me. Neither of these guys expected to get anything from me in return, but I can only imagine that they did this due to their sense of community. I can only speak for myself when I say that it\u0026rsquo;s out of a sense of duty to help encourage others as I\u0026rsquo;ve been encouraged.\nI\u0026rsquo;ve never been part of such a large group of people with similar interests that are so willing to help each other regardless of whether or not they get compensated. I\u0026rsquo;d like to thank all of the resources I\u0026rsquo;ve used in the past and I hope someone has gotten something out of my posts as well.\n","permalink":"https://theithollow.com/2014/10/13/sense-community/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/10/community.jpg\"\u003e\u003cimg alt=\"community\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/10/community-300x150.jpg\"\u003e\u003c/a\u003e I was asked by a coworker, why I blog.  He asked, \u0026ldquo;Why would you spend the time writing, when people have official documentation to use?\u0026rdquo;  His point was that it\u0026rsquo;s silly to write how-to articles about things that are already officially documented by a vendor.  To further his curiosity, he wanted to know why I would post things that could possibly get me into trouble if I\u0026rsquo;m posting negative things or incorrect information.\u003c/p\u003e","title":"A Sense of Community"},{"content":"I recently had to rebuild part of my home lab due to a very poor decision to host all of my nested ESXi hosts on a single SSD. Kids, Do NOT do that at home! Obviously this is a lab and budget was a constraint, but it was a bummer when my SSD finally failed. It might be useful to review some steps used to build ESXi Servers inside VMware Workstation. Especially since Workstation 10 can clone ESXi which makes things much quicker.\nInstall your VMware Workstation on the physical host you want to house your ESXi hosts on. Create a new virtual machine and install ESXi like you would in any other environment. Nothing too special about this.\nOnce the ESXi hosts was created there are a few things that you will want to do, before you clone your ESXi hosts. This has all been written about in posts from other people, including William Lam, but I wanted to put them altogether in one post.\nNOTE: Steps one and two are completely ripped off from William\u0026rsquo;s post. If you want more information please see it directly here: http://www.virtuallyghetto.com/2013/12/how-to-properly-clone-nested-esxi-vm.html\n1. Open an SSH session to the ESXi host and run the following command to ensure that your MAC Addresses don\u0026rsquo;t get copied with the ESXi clone.\nesxcli system settings advanced set -o /Net/FollowHardwareMac -i 1\n2. Next, you\u0026rsquo;ll need to remove the system UUID entry from /etc/vmware/esx.conf . This is a unique ID for each ESXi host and trust me, this will cause you issues if you try to join multiple ESXi hosts to a vCenter. My preferred method to modify this file is to connect to the ESXi host with WinSCP so I can modify it with a GUI.\n3. Install the VMware Tools Version from VMware. 
This is currently a fling that you can find more information about here. https://labs.vmware.com/flings/vmware-tools-for-nested-esxi This is super simple as you can download it directly from the internet. Run the following command from your SSH session on the ESXi host:\nesxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxi_tools_for_guests/esx-tools-for-esxi-9.7.1-0.0.00000.i386.vib -f\n4. Clone Away!\n","permalink":"https://theithollow.com/2014/10/06/cloning-nested-esxi-workstation/","summary":"\u003cp\u003eI recently had to rebuild part of my home lab due to a very poor decision to host all of my nested ESXi hosts on a single SSD.  Kids, Do NOT do that at home!  Obviously this is a lab and budget was a constraint, but it was a bummer when my SSD finally failed.  It might be useful to review some steps used to build ESXi Servers inside VMware Workstation.  Especially since Workstation 10 can clone ESXi which makes things much quicker.\u003c/p\u003e","title":"Cloning Nested ESXi in Workstation"},{"content":"When I work with smaller sized customers, I often hear that they don\u0026rsquo;t have any networking monitoring software available. Usually there is some server monitoring there, and something that pings network devices, but nothing that can display how much bandwidth is being used, and when.\nIf you are in this situation, I implore you to check out PRTG monitor from Paessler. This is a great piece of software, that can do much more than monitor your Internet bandwidth, but that\u0026rsquo;s what I use it the most for. There is a full version, but the free version will allow you to monitor up to 10 ports which is plenty if you\u0026rsquo;re just monitoring your WAN, or a few ports like your ESXi hosts in your home lab!\nI\u0026rsquo;m monitoring my Cisco ASA with this software and can watch my Ethernet0/1 interface to determine how much ingressegress traffic is happening at any given time. You can look back over 365 days to see how traffic has changed as well if you\u0026rsquo;d like. Mostly, I use it to see live data though.\nThe setup is very easy, and can even scan your network to find your devices if you\u0026rsquo;d like. Enter your Windows credentials, SNMP Credentials, and VMware credentials and let the wizards do their magic. You can also have it monitor websites, cloud services such as Office365 or you name it.\nHere is some SNMP info it looks for. This is what\u0026rsquo;s monitoring my ASA bandwidth.\nSummary Just because you\u0026rsquo;ve got a limited budget doesn\u0026rsquo;t mean you can\u0026rsquo;t get some basic but highly valuable information from your network. Check out this tool that I\u0026rsquo;ve used for many years if you\u0026rsquo;re trying to get some insight on some of your interfaces.\n","permalink":"https://theithollow.com/2014/09/29/free-bandwidth-monitoring/","summary":"\u003cp\u003eWhen I work with smaller sized customers, I often hear that they don\u0026rsquo;t have any networking monitoring software available.  Usually there is some server monitoring there, and something that pings network devices, but nothing that can display how much bandwidth is being used, and when.\u003c/p\u003e\n\u003cp\u003eIf you are in this situation, I implore you to check out \u003ca href=\"http://www.paessler.com/prtg\"\u003ePRTG monitor\u003c/a\u003e from Paessler.   This is a great piece of software, that can do much more than monitor your Internet bandwidth, but that\u0026rsquo;s what I use it the most for.  
There is a full version, but the free version will allow you to monitor up to 10 ports which is plenty if you\u0026rsquo;re just monitoring your WAN, or a few ports like your ESXi hosts in your home lab!\u003c/p\u003e","title":"Free Bandwidth Monitoring"},{"content":" I never thought that I\u0026rsquo;d be writing this post, but the day has come where I decided to switch to an Apple laptop. If you\u0026rsquo;ve known me, you were probably aware of my disdain for Apple products. I was of the opinion that they are offering the same equipment with a higher price tag and people who purchased that stuff were suckers. So now, either I\u0026rsquo;ve been snookered into this mass hysteria of Mac Madness, or things aren\u0026rsquo;t really how I originally thought.\nMy reasoning for purchasing a Mac was somewhat just curiosity. I was not incredibly happy with my current laptop, mainly because of the limited resolution, so I started looking at new laptops on the Internet. I had a useable laptop already, just one that I wasn\u0026rsquo;t super happy with, so if I didn\u0026rsquo;t immediately take to Mac OS, I could always switch back to my Windows machine until I grew accustomed to my new Mac.\nSurprise Once I got my new Macbook Pro 15 I never felt the urge to switch back to the old laptop. Mac OS was certainly an adjustment, but so was the change from Windows 7 to Windows 8. The interface was totally different to me, but pretty simple to navigate and it was so basic, that the learning curve was fairly minimal.\nI found the retina display to be pretty amazing. It wasn\u0026rsquo;t just a buzzword to get people to buy something, it really is great to work on. But my favorite feature so far, it the \u0026ldquo;Spaces\u0026rdquo; feature. Mac allows you to have different desktops and put different apps on different screens. I can then rotate between them depending on what I\u0026rsquo;m working on. Maybe a Work space, a blogging space with my Wordpress page open and a goofing off space with Facebook or news.\nThe performance of the laptop is great. It\u0026rsquo;s 16GB of RAM a 256 GB SSD and Core i7 Processor. It\u0026rsquo;s been rock solid so far and I haven\u0026rsquo;t had any issues so far.\nThe power cable is also pretty nifty. There is a magnetic connector that connects to the laptop so if anyone happens to trip over the cable, it just disconnects from the laptop instead of breaking something. It\u0026rsquo;s the little things.\nChallenges The migration was not rainbows and funny papers though. By far the most difficult thing for me to get adjusted to was to lose my keyboard shortcut muscle memory. I had used \u0026ldquo;Windows Key + Any\u0026rdquo; for just about everything. Launching Remote Desktop, Locking my Computer, opening an Explorer window. Needless to say, there is no Windows Key on a Macintosh. Next was the Alt Key. Alt + Tab to change Windows, Alt + D to go directly to the Internet Explorer Address bar, etc.\nI\u0026rsquo;m also a big fan of Microsoft Office and some of the applications I use for work require the Windows Operating System. I installed Fusion and have done a P2V on that old laptop that I wasn\u0026rsquo;t super fond of to keep all my old settings. Throwing the Virtual Machine in a \u0026ldquo;spaces\u0026rdquo; is awesome to keep everything neatly separated.\nThe biggest challenge has been not being able to \u0026ldquo;right click\u0026rdquo; on anything. If you want to bring up a context menu, you need to press the control button on the keyboard and then click. 
I think you could use a two button mouse with laptop and it would work, but if you\u0026rsquo;re planning on jumping onboard, you might as well learn things in the new way.\nSummary I\u0026rsquo;m sure I\u0026rsquo;m going to get some pointed comments from friends who have known my dislike for Apple products for so long, but it\u0026rsquo;s time to eat crow. I\u0026rsquo;m happy with the new purchase and only time will tell if the hardware will hold up.\n","permalink":"https://theithollow.com/2014/09/23/microsoft-guy-converted-apple/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/08/HelloMAC.jpg\"\u003e\u003cimg alt=\"HelloMAC\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/08/HelloMAC-150x150.jpg\"\u003e\u003c/a\u003e I never thought that I\u0026rsquo;d be writing this post, but the day has come where I decided to switch to an Apple laptop.  If you\u0026rsquo;ve known me, you were probably aware of my disdain for Apple products.  I was of the opinion that they are offering the same equipment with a higher price tag and people who purchased that stuff were suckers.  So now, either I\u0026rsquo;ve been snookered into this mass hysteria of Mac Madness, or things aren\u0026rsquo;t really how I originally thought.\u003c/p\u003e","title":"A Microsoft Guy Converted to Apple"},{"content":" I just bought two new 27 inch (yeah, they\u0026rsquo;re large) monitors for my home office thanks to a suggestion from Satyam Vaghani over twitter of course. He pointed me towards the QNIX QX2710 monitor and I was first surprised with the price. At less than $350 I had to give it a shot. I mean really, who wouldn\u0026rsquo;t want two 27 inch monitors on their desk?\nThe Good The resolution was something I was really looking at. I selected the 2560 X 1440 resolution so that I didn\u0026rsquo;t feel like I needed to buy a new monitor in a year or two because something cooler came out. I\u0026rsquo;d say at this point in time, a 1920 X 1080 resolution is fairly standard. Also, with it being a much larger monitor than I\u0026rsquo;ve been accustomed to, the resolution needs to increase as well to provide a clear picture.\nI know it\u0026rsquo;s a small thing but the ability to tilt the monitor is nice. Changing the angle at which you view the screen can really reduce strain when you\u0026rsquo;re sitting at a desk for many hours at a time.\nAnother requirement I had was that it had to have a Vesa mount. As of right now these behemoth\u0026rsquo;s are just sitting on my office desk, but I hope to some day get a monitor stand and mount these. I read some reviews and saw that some people were not happy with the quality of these mounts with this monitor, but I haven\u0026rsquo;t tried this yet so can\u0026rsquo;t comment. Just be aware if this is a requirement for you.\nIt wasn\u0026rsquo;t one of my requirements, but if I found a monitor I liked, I preferred to have an LED monitor over an LCD monitor. Mainly for power consumption but to me, they have a better picture quality as well. I understand there is some debate over this so you make your own decisions here. The overall power consumption of the monitor is 43W at peak.\nThese monitors come with a refresh rate of 60 Hz which is a very standard refresh rate, but I did some looking and found that people were overclocking these monitors in upwards of 120 Hz with no problems. 
Feeling a bit adventurous, but still the conservative person that I am I used the following article and overclocked the monitor to 96 Hz without issue. (Yeah, I got skeered, but I didn\u0026rsquo;t want to ruin the new monitors that i just payed good money for). NOTE: Be aware that overclocking your monitor will almost certainly void your warranty, so do this at your own risk.\nQNIX also has a perfect pixel guarantee so you can be sure that if you have a burnt out pixel on delivery, they will fix or replace the monitor. I\u0026rsquo;ve heard that actually going through this process is pretty painful, but both of my monitors arrived with 0 pixels being busted so I can\u0026rsquo;t tell you for sure.\nThe Bad Connections could be a problem. The monitor came with a single DVI-D port on it so this limits your video card choices a bit. Not only that, but using a standard DVI-D cable won\u0026rsquo;t due, be sure to plug in the Dual DVI cables that come with the monitor. In my haste, I attempted to reuse my older DVI-D cables that were already lying on my desk and I was not happy with the results.\nDid I mention it takes up a lot of desk space? Yeah, the best part is that these monitors are 27 inches. The worst part is that, they are 27 inches. Eh, I want my cake and eat it too, but if physical space is a concern, be careful buying these monitors. One thing to consider is the depth of the desk you\u0026rsquo;ll be putting these on. You may need to scoot these things back just a bit or it will be like sitting in the front row of the movie theater. If you\u0026rsquo;ve got a deep desk, this will be a plus for a large monitor like this one.\nHomeMonitorQnix\nThe monitor is pretty light for being as big as it is but with this light feeling, it also \u0026ldquo;feels\u0026rdquo; like the product may be a bit flimsy. I want to be clear that I\u0026rsquo;ve had no problems with these things, and don\u0026rsquo;t plan to be moving them around a whole lot, but I do get the impression that they are a bit fragile.\nResponse time may be an issue for your if you\u0026rsquo;re a gamer. The 6 ms response time is not great and you\u0026rsquo;ll get some motion blur if you\u0026rsquo;re shooting bad guys in \u0026ldquo;Call of Duty\u0026rdquo;. I\u0026rsquo;m usually not sensitive to this sort of thing, but I threw on my favorite shoot \u0026rsquo;em up game just to see how it looked and I noticed the motion blur. This may not be a big deal to you if the plans are to use this on your work machine. I assure you that you won\u0026rsquo;t see the motion blur with your spreadsheets.\nHollow Points For the most part, I would say that this is a very good monitor for the price you can pay for it. If you\u0026rsquo;re looking for more real estate for work purposes, this monitor will be great for you. 
If you\u0026rsquo;re looking to do gaming, maybe keep looking for a faster response time to eliminate motion blur.\nI am very much enjoying my new monitors right now though and hope they prove to have some longevity.\n","permalink":"https://theithollow.com/2014/09/15/qnix-q2710-monitor-review/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/08/qnix1.png\"\u003e\u003cimg alt=\"qnix1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/08/qnix1-150x150.png\"\u003e\u003c/a\u003e I just bought two new 27 inch (yeah, they\u0026rsquo;re large) monitors for my home office thanks to a suggestion from \u003ca href=\"https://twitter.com/SatyamVaghani\"\u003eSatyam Vaghani\u003c/a\u003e over twitter of course.  He pointed me towards the \u003ca href=\"http://www.amazon.com/gp/product/B00CAKD6LI/ref=as_li_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B00CAKD6LI\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\u0026amp;linkId=DYHFALSGKXGDOWOJ\"\u003eQNIX QX2710\u003c/a\u003e monitor and I was first surprised with the price.  At less than $350 I had to give it a shot.  I mean really, who wouldn\u0026rsquo;t want two 27 inch monitors on their desk?\u003c/p\u003e\n\u003ch1 id=\"the-good\"\u003e\u003cstrong\u003eThe Good\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eThe resolution was something I was really looking at.  I selected the 2560 X 1440 resolution so that I didn\u0026rsquo;t feel like I needed to buy a new monitor in a year or two because something cooler came out.  I\u0026rsquo;d say at this point in time, a  1920 X 1080 resolution is fairly standard.  Also, with it being a much larger monitor than I\u0026rsquo;ve been accustomed to, the resolution needs to increase as well to provide a clear picture.\u003c/p\u003e","title":"QNIX Q2710 Monitor Review"},{"content":"\nvRealize Automation vCAC Appliance Deployment vRealize Automation IaaS Component Installation vRealize Automation Basic Setup and Configuration vRealize Automation Endpoint Setup vRealize Automation Policies and Reservations vRealize Automation Server Blueprints vRealize Automation Service Designer - vCO vRealize Automation Service Blueprint vRealize Automation Custom Resource Properties vRealize Automation Customizations vRealize Automation Approvals Additional Resources Yves Sandfort has a great video on vCAC 6.0 worth looking up if you\u0026rsquo;d rather watch a video over reading. vCAC 6.0 Video Nick Colyer is a master when it comes to vCO and vCAC workflows. Check out his blog for continual updates. vCloud Automation Center Official documentation from VMware. VMware Communities for vCAC and vRealize Automation. 
VMware Product Page ","permalink":"https://theithollow.com/vrealize-automation-6-guide-formerly-vcac/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/09/vRealize6Guide.png\"\u003e\u003cimg alt=\"vRealize6Guide\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/09/vRealize6Guide.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch2 id=\"vrealize-automation-vcac-appliance-deployment\"\u003e\u003ca href=\"/2014/07/trouble-configuring-vcac-appliance/\"\u003evRealize Automation vCAC Appliance Deployment\u003c/a\u003e\u003c/h2\u003e\n\u003ch2 id=\"vrealize-automation-iaas-component-installation\"\u003e\u003ca href=\"http://wp.me/p32uaN-ZQ\"\u003evRealize Automation IaaS Component Installation\u003c/a\u003e\u003c/h2\u003e\n\u003ch2 id=\"vrealize-automation-basic-setup-and-configuration\"\u003e\u003ca href=\"http://wp.me/p32uaN-10b\"\u003evRealize Automation Basic Setup and Configuration\u003c/a\u003e\u003c/h2\u003e\n\u003ch2 id=\"vrealize-automation-endpoint-setup\"\u003e\u003ca href=\"http://wp.me/p32uaN-10l\"\u003evRealize Automation Endpoint Setup\u003c/a\u003e\u003c/h2\u003e\n\u003ch2 id=\"vrealize-automation-policies-and-reservations\"\u003e\u003ca href=\"http://wp.me/p32uaN-10w\"\u003evRealize Automation Policies and Reservations\u003c/a\u003e\u003c/h2\u003e\n\u003ch2 id=\"vrealize-automation-server-blueprints\"\u003e\u003ca href=\"http://wp.me/p32uaN-10N\"\u003evRealize Automation Server Blueprints\u003c/a\u003e\u003c/h2\u003e\n\u003ch2 id=\"vrealize-automation-service-designer---vco\"\u003e\u003ca href=\"http://wp.me/p32uaN-115\"\u003evRealize Automation Service Designer - vCO\u003c/a\u003e\u003c/h2\u003e\n\u003ch2 id=\"vrealize-automation-service-blueprint\"\u003e\u003ca href=\"http://wp.me/p32uaN-11I\"\u003evRealize Automation Service Blueprint\u003c/a\u003e\u003c/h2\u003e\n\u003ch2 id=\"vrealize-automation-custom-resource-properties\"\u003e\u003ca href=\"http://wp.me/p32uaN-11Z\"\u003evRealize Automation Custom Resource Properties\u003c/a\u003e\u003c/h2\u003e\n\u003ch2 id=\"vrealize-automation-customizations\"\u003e\u003ca href=\"http://wp.me/p32uaN-12p\"\u003evRealize Automation Customizations\u003c/a\u003e\u003c/h2\u003e\n\u003ch2 id=\"vrealize-automation-approvals\"\u003e\u003ca href=\"http://wp.me/p32uaN-12y\"\u003evRealize Automation Approvals\u003c/a\u003e\u003c/h2\u003e\n\u003ch1 id=\"additional-resources\"\u003e\u003cstrong\u003eAdditional Resources\u003c/strong\u003e\u003c/h1\u003e\n\u003cul\u003e\n\u003cli\u003e\n\u003ch3 id=\"yves-sandfort-has-a-great-video-on-vcac-60-worth-looking-up-if-youd-rather-watch-a-video-over-reading-vcac-60-video\"\u003e\u003ca href=\"https://twitter.com/yvessandfort\"\u003eYves Sandfort\u003c/a\u003e has a great video on vCAC 6.0 worth looking up if you\u0026rsquo;d rather watch a video over reading.   \u003ca href=\"http://www.ntpro.nl/blog/archives/2569-VMware-vCAC-6.0-Basic-Hypervisor-Blueprint-by-Yves-Sandfort.html\"\u003evCAC 6.0 Video\u003c/a\u003e\u003c/h3\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003ch3 id=\"nick-colyer-is-a-master-when-it-comes-to-vco-and-vcac-workflows-check-out-his-blog-for-continual-updates\"\u003e\u003ca href=\"http://twitter.com/vnickc\"\u003eNick Colyer\u003c/a\u003e is a master when it comes to vCO and vCAC workflows.  
Check out his blog for continual updates.\u003c/h3\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003ch3 id=\"vcloud-automation-centerofficial-documentation-from-vmware\"\u003evCloud Automation Center \u003ca href=\"https://www.vmware.com/support/pubs/vcac-pubs.html\"\u003eOfficial documentation from VMware\u003c/a\u003e.\u003c/h3\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003ch3 id=\"vmware-communities-for-vcac-and-vrealize-automation\"\u003eVMware \u003ca href=\"https://communities.vmware.com/community/vmtn/vcloud-automation-center\"\u003eCommunities for vCAC\u003c/a\u003e and vRealize Automation.\u003c/h3\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003ch3 id=\"vmware-product-page\"\u003eVMware \u003ca href=\"http://www.vmware.com/products/vcloud-automation-center/\"\u003eProduct Page\u003c/a\u003e\u003c/h3\u003e\n\u003c/li\u003e\n\u003c/ul\u003e","title":"vRealize Automation 6 Guide (formerly vCAC)"},{"content":"It may seem like a trivial thing, but setting up some customizations for your vCAC (now renamed vRealize Automation) deployment can really make your IaaS solution stand out, and a good looking portal might help with buy-in from your users.\nBranding Setting up your portal with a logo and a color scheme that mimic\u0026rsquo;s your organization is a typical thing to do after getting a portal up and running.\nLogin to your vCAC instance with a Tenant Administrator login, go to the Administration Tab \u0026ndash;\u0026gt; Branding. Here, you can upload your logo, add a product name (or department name), background colors, text colors and whatever you\u0026rsquo;d like.\nGo to the footer tab to add links to copyright notices and privacy policies.\nEmail Setup To connect your vCAC to a mail server, go to Administration Tab \u0026ndash;\u0026gt; Notifications \u0026ndash;\u0026gt; Email Servers. Here you can click the gree \u0026ldquo;+\u0026rdquo; sign and add either an inbound or an outbound mail server. (You may want to add both)\nEnter a name for the mail server, the DNS Name or IP Address, port, and login information. Be sure to enter an email address that will be sending emails. You may want a generic email account to do this.\nNext, setup some notifications by clicking on the scenarios tab. This will allow you to pick which scenarios should trigger an email to go out. I will warn you that selecting all of the scenarios will fill up your mailbox pretty quickly and you don\u0026rsquo;t want this to be a nuisance. 
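One practical aside on the email setup itself: if the portal's test notification never shows up, it helps to rule out the mail server before blaming vCAC. A quick check like the one below (with placeholder host, port, and credentials that you would swap for your own) confirms the SMTP server is reachable and the account can authenticate.

import smtplib

# Placeholder values; swap in your own mail server, port, and service account.
MAIL_HOST = "mail.example.com"
MAIL_PORT = 587
MAIL_USER = "vcac-notifications@example.com"
MAIL_PASS = "changeme"

# If the connect, STARTTLS, or login step fails, the problem is the mail
# server or the credentials rather than the vCAC notification settings.
with smtplib.SMTP(MAIL_HOST, MAIL_PORT, timeout=10) as smtp:
    smtp.starttls()
    smtp.login(MAIL_USER, MAIL_PASS)
    print("SMTP server reachable and credentials accepted.")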
Don\u0026rsquo;t setup so many scenarios that you just ignore the emails once it gets rolled out to the organization.\nSummary Considering all of the things that you did to make sure this IaaS platform is well designed, the branding and notifications may seem like a trivial matter or a very small thing, but doing this little bit of customization can really affect the end user satisfaction, so be sure to do it and spend adequate time on it.\n","permalink":"https://theithollow.com/2014/09/08/vrealize-6-customizations/","summary":"\u003cp\u003eIt may seem like a trivial thing, but setting up some customizations for your vCAC (now renamed vRealize Automation) deployment can really make your IaaS solution stand out, and a good looking portal might help with buy-in from your users.\u003c/p\u003e\n\u003ch1 id=\"branding\"\u003e\u003cstrong\u003eBranding\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eSetting up your portal with a logo and a color scheme that mimic\u0026rsquo;s your organization is a typical thing to do after getting a portal up and running.\u003c/p\u003e\n\u003cp\u003eLogin to your vCAC instance with a Tenant Administrator login, go to the Administration Tab \u0026ndash;\u0026gt;  Branding.  Here, you can upload your logo, add a product name (or department name), background colors, text colors and whatever you\u0026rsquo;d like.\u003c/p\u003e","title":"vRealize Automation 6 Customizations"},{"content":"Your powerful new cloud automation software is up and running, but we need to have some sort of check and balance to be sure that people aren\u0026rsquo;t creating VMs on a whim because it\u0026rsquo;s so easy to do. For this, we can use an approval process. Maybe a supervisor, or even the CIO can approval the additional resources.\nApproval Policies To setup an approval policy, login as a Tenant Administrator and go to the Administration Tab \u0026ndash;\u0026gt; Approval Policies. Click the familiar green \u0026ldquo;+\u0026rdquo; icon to add a new policy.\nYou\u0026rsquo;ll need to select an approval policy type from the drop down list. There are quite a few that are available to pick from, but I used the Catalog Item Request which is pretty broad reaching.\nEnter a descriptive name for the approval policy and a description. Be sure to make the policy \u0026ldquo;Active\u0026rdquo;. When you\u0026rsquo;re done, you\u0026rsquo;ll need to add a pre-approval or post-approval level. For most cases, the Pre Approval would make the most sense. Click the green \u0026ldquo;+\u0026rdquo; icon to add a new level.\nEnter a level name. Keep in mind that a single request could go through multiple levels such as a supervisor level, then the IT level, then an executive level or whatever makes sense for the organization. We\u0026rsquo;ll keep it simple here and only create one level.\nYou can select \u0026ldquo;always required\u0026rdquo; or setup some conditions on when this type of approval is needed such as if the VM has more than 10 GB or RAM or whatever. Enter the approvers and determine if all of them need to approve this, or just one person can approve it.\nOnce done you\u0026rsquo;ll see an approval policy that look something like this. Click Add.\nNow we need to go to the Administration Tab \u0026ndash;\u0026gt; Catalog Management \u0026ndash;\u0026gt; Entitlements and modify the entitlements that require an approval. I\u0026rsquo;ve select the Windows OS Service and the windows servers.\nJust click the drop down next to the entitlement and change the policy. 
The drop down should show a list of policies that you can use. Be sure to use descriptive names so that if you have many approval policies this isn\u0026rsquo;t a nightmare to manage.\nIf we make a request now on one of the items we set an approval policy on, you\u0026rsquo;ll notice the status almost immediately goes to \u0026ldquo;Pending Approval\u0026rdquo;.\nWhomever the approver is, should see a message in their inbox stating that an approval is waiting. The drop down on the approval will allow actions to be taken.\nThe request could be approved or rejected with a comment that will be seen by the requester. Summary Approvals are going to be a necessary thing if your IaaS is going to be successful. Departments and groups will run amok if there aren\u0026rsquo;t proper controls put on all of the blueprints.\n","permalink":"https://theithollow.com/2014/09/08/vrealize_automation_approvals/","summary":"\u003cp\u003eYour powerful new cloud automation software is up and running, but we need to have some sort of check and balance to be sure that people aren\u0026rsquo;t creating VMs on a whim because it\u0026rsquo;s so easy to do.  For this, we can use an approval process.  Maybe a supervisor, or even the CIO can approval the additional resources.\u003c/p\u003e\n\u003ch1 id=\"approval-policies\"\u003e\u003cstrong\u003eApproval Policies\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eTo setup an approval policy, login as a Tenant Administrator and go to the Administration Tab \u0026ndash;\u0026gt; Approval Policies.  Click the familiar green \u0026ldquo;+\u0026rdquo; icon to add a new policy.\u003c/p\u003e","title":"vRealize Automation 6 Approvals"},{"content":"In the last post, we showed how to use vCAC to surface a vCO workflow. The problem presents itself when the vCO workflow is looking for something other than a string for a variable. What if you are looking for an object? For example there may be a user named \u0026ldquo;Clarice Starling\u0026rdquo; and that name could be a string. But the Active Directory object for user Clarice Starling has many attributes such as account, description, permissions etc and that is not a string. So if you want to perform an action on an object from vCAC, what do you do?\nNotice that if I open a different vCO workflow that adds an AD user to an AD Group, the input types are not strings, but their own type.\nCustom Resources The answer is a custom resource.\nIf we go to the Advanced Services tab \u0026ndash;\u0026gt; Custom Resources we can add a new \u0026ldquo;Custom Resource\u0026rdquo; that maps to an object. Click the green \u0026ldquo;+\u0026rdquo; sign to add a new one.\nIn the Resource Type tab, we select a new resource type. When I start typing AD\u0026hellip; into the box, the list of available resource properties starts to populate and then I can select the property that I want.\nName the property and give it a description.\nIn the details form, we\u0026rsquo;ll be able to customize how the requester will interact with the resource property. I\u0026rsquo;ve left the defaults.\nCreate a new Service Blueprint like we did in our last post.\nI grabbed the vCO workflow \u0026ldquo;Add a user to a user group\u0026rdquo;.\nYou can see we now have two service blueprints, but of course I need to publish this blueprint.\nAgain, customize the item to your liking and add it to a service.\nLog out of vCAC and log back in as the user that has access to request this service. 
We now have a new item to request.\nin the last post, we just entered all of our information, but now when you start typing, the catalog item is going to search through the associated resource properties. This allows you to select that object instead of entering a string.\nI did the same thing for the user and I\u0026rsquo;m now ready to submit the request.\nMy service blueprint completed successfully.\nSummary Resource properties can really be a life saver if you\u0026rsquo;re deploying a lot of vCO workflows. I should mention that you can get away without resource properties, but you\u0026rsquo;ll need to do a lot of work in vCO to take a string and convert it to an object. Resource Properties are much easier to manage.\n","permalink":"https://theithollow.com/2014/09/08/vrealize-automation-6-custom-resource-properties/","summary":"\u003cp\u003eIn the last post, we showed how to use vCAC to surface a vCO workflow.  The problem presents itself when the vCO workflow is looking for something other than a string for a variable.  What if you are looking for an object?  For example there may be a user named \u0026ldquo;Clarice Starling\u0026rdquo; and that name could be a string.  But the Active Directory object for user Clarice Starling has many attributes such as account, description, permissions etc and that is not a string.  So if you want to perform an action on an object from vCAC, what do you do?\u003c/p\u003e","title":"vRealize Automation 6 Custom Resource Properties"},{"content":"We\u0026rsquo;ve got the main section of vCAC (now renamed vRealize Automation) setup and running and have created some blueprints to create some servers, but that\u0026rsquo;s really just the tip of the iceberg. We can utilize vCAC to perform tasks as well and in my opinion this is where vCAC really makes a big difference.\nService Blueprints Instead of creating server blueprints, now we create service blueprints. They\u0026rsquo;ll be a similar setup to what you\u0026rsquo;ve seen in previous posts. Go to the Advanced Services Tab \u0026ndash;\u0026gt; Service Blueprints and click the green \u0026ldquo;+\u0026rdquo; sign to add a new blueprint. You\u0026rsquo;ll see the \u0026ldquo;Orchestrator\u0026rdquo; tree available since we added it as an endpoint in a previous post. Notice that we can pick from already created vCO workflows. This is great if you\u0026rsquo;re already utilizing vCO to perform some of your routine tasks. Not only that but there is a pretty big community where you can share your vCO workflows and import them to quickly \u0026ldquo;vRealize\u0026rdquo; the capabilities of vCAC. (yeah, sorry for the pun but you knew it was coming at some point right?)\nIn the example below, I\u0026rsquo;ve selected a custom workflow that I created that creates a user in a specific OU that I\u0026rsquo;ve pre-created.\nNext, I go over to the \u0026ldquo;Blueprint Form\u0026rdquo;. This is what the user will see when they request this service request. You can modify this if you\u0026rsquo;d like. In my example I\u0026rsquo;ve left the defaults.\nSidebar - If you look back to your vCO workflow, you\u0026rsquo;ll likely see a match between the inputs in vCO and your vCAC request form. If you don\u0026rsquo;t, there is either a problem, or you\u0026rsquo;ve hard-coded some information and removed it from the form to limit the choices the user will be making. Also, take note that the\nBelow is what the inputs from my vCO workflow looked like. 
Take note the parameter types for this vCO workflow are all of type \u0026ldquo;String\u0026rdquo;. This will be important in our next post. OK, back to the service blueprint and finish with the rest of the settings. I\u0026rsquo;ve left everything defaulted as this is a pretty simple workflow.\nMy service blueprint now exists, but just like a server blueprint, we have to publish it to our users. Click the dropdown and go to publish.\nNow we can go to the Administration Tab \u0026ndash;\u0026gt; Catalog Management \u0026ndash;\u0026gt; Catalog Items. Find the service blueprint you just created and click the configure link from the drop down.\nHere we\u0026rsquo;ll add some details about the blueprint and customize the way it looks. Most importantly, we need to make sure the status is active and that we\u0026rsquo;ve assigned it to a \u0026ldquo;service\u0026rdquo;. Remember that a service is actually a group of items that are similar. Sometimes this term is a little misleading. I\u0026rsquo;ve got a service called Active Directory where I\u0026rsquo;m storying anything related to Active Directory operations.\nNext, go to entitlements. Add a group of users who should be able to run this blueprint. When you\u0026rsquo;re done, log out and log back into vCAC with a user you\u0026rsquo;ve provisioned the service for.\nI\u0026rsquo;ve logged back into the vCAC portal as a user with permission to run the new Service Blueprint. Notice that I have a tab listed called \u0026ldquo;Active Directory\u0026rdquo;. This is the \u0026ldquo;service\u0026rdquo; that we talked about previously.\nFind the Service Blueprint, and click Request.\nEnter some information in about the request, such as a description and a reason. This is more important if you have an approval process.\nGo to the next tab where you need to fill out the form. This is the information that is used as variables in the vCO workflow. For instance If I create a new user, I need to know the name of the new user. When you\u0026rsquo;re ready, click submit.\nYou should see a successful request message.\nIf you go to the \u0026ldquo;Requests\u0026rdquo; tab, you\u0026rsquo;ll be able to monitor the status of the request.\nI\u0026rsquo;ve checked Active Directory just to be sure that the service blueprint worked as it says it did. You can see that a new user was created in my OU.\nSummary This was a basic example of how to use vCAC to do some work for you by accessing existing vCO workflows. You should be able to see how powerful of a tool this could be now that you can build machines from template and also perform actions. Think how useful this could be to a department such as Human Resources. For a new hire, they could run a single workflow that:\nCreated a new users Created an email address Updated their direct reports and contact information in AD Built them a virtual desktop Sent an email to the company welcoming them. And that list could much larger. How much time would this save if they ran a workflow instead of opening several service tickets with different departments?\nIn the next post we\u0026rsquo;ll take a closer look at some of the integration between vCAC and vCO.\n","permalink":"https://theithollow.com/2014/09/08/vrealize-6-service-blueprint/","summary":"\u003cp\u003eWe\u0026rsquo;ve got the main section of vCAC (now renamed vRealize Automation) setup and running and have created some blueprints to create some servers, but that\u0026rsquo;s really just the tip of the iceberg.  
We can utilize vCAC to perform tasks as well and in my opinion this is where vCAC really makes a big difference.\u003c/p\u003e\n\u003ch1 id=\"service-blueprints\"\u003e\u003cstrong\u003eService Blueprints\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eInstead of creating server blueprints, now we create service blueprints.  They\u0026rsquo;ll be a similar setup to what you\u0026rsquo;ve seen in previous posts.  Go to the Advanced Services Tab \u0026ndash;\u0026gt; Service Blueprints and click the green \u0026ldquo;+\u0026rdquo; sign to add a new blueprint.\n\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/08/ServiceBlueprint1.png\"\u003e\u003cimg alt=\"ServiceBlueprint1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/08/ServiceBlueprint1.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"vRealize Automation 6 Service Blueprint"},{"content":"vCAC 6 (now renamed vRealize Automation) allows us to provision more than just virtual machines. We can also publish vCenter Orchestrator packages. To do so, we need to configure the Service Designer.\nGo to the Administration Tab \u0026ndash;\u0026gt; Groups and create a group that will have access to the service designer. I just used the Domain Admins group, mainly because it\u0026rsquo;s my lab. Click the dropdown to edit the group properties. Under the roles, be sure to select Service Architect.\nAt this point, you should log out and log back in. The Advanced Services tab won\u0026rsquo;t show up until a user in that group logs in again. Now we can add a new endpoint such as vCO. Go to the Infrastructure Tab \u0026ndash;\u0026gt; Endpoints. This time, add a new endpoint for vCO.\nMy vCO settings are listed below. If your existing credentials aren\u0026rsquo;t adequate to connect to vCO, then you might need to create some new ones.\nSummary We\u0026rsquo;ve now added vCO as an endpoint so in the next post, we can publish one of our vCenter Orchestrator packages.\n","permalink":"https://theithollow.com/2014/09/08/vcac-6-service-designer-vco/","summary":"\u003cp\u003evCAC 6 (now renamed vRealize Automation) allows us to provision more than just virtual machines.  We can also publish vCenter Orchestrator packages.  To do so, we need to configure the Service Designer.\u003c/p\u003e\n\u003cp\u003eGo to the Administration Tab \u0026ndash;\u0026gt; Groups and create a group that will have access to the service designer.  I just used the Domain Admins group, mainly because it\u0026rsquo;s my lab.  Click the dropdown to edit the group properties. \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/08/Advanced-services-designer1.png\"\u003e\u003cimg alt=\"Advanced services designer1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/08/Advanced-services-designer1.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"vRealize Automation 6 Service Designer and vCO"},{"content":"We\u0026rsquo;re finally ready to start building some blueprints. Resources are available, reservations have been set, groups have been created and now we can build some blueprints.\nBlueprints Go to the Infrastructure Tab \u0026ndash;\u0026gt; Blueprints \u0026ndash;\u0026gt; Blueprints and then click \u0026ldquo;New Blueprint\u0026rdquo; \u0026ndash;\u0026gt; Virtual \u0026ndash;\u0026gt; vSphere (vCenter).\nGive the blueprint a name and a description. In my case, I\u0026rsquo;m creating a server 2008 R2 blueprint. Select a Reservation Policy and a machine prefix. 
Then enter a number of days for Archives. This is the number of days the virtual machines will be available after they expire. Think of the recycling bin in Windows. Also, if you\u0026rsquo;re so inclined, you can enter a dollar amount to assign to this template per day, so that later on each department can see how much money these VMs cost the company.\nClick the Build Information Tab. Select the Blueprint type of Server, the action Clone, and then the provisioning workflow \u0026ldquo;CloneWorkFlow\u0026rdquo;. These selections will clone a VM Template that already exists in the vSphere endpoint. Click the button next to \u0026ldquo;Clone from\u0026rdquo; and select a VM Template to use.\nNext, you can use a VMware customization file as well, but you must type it in, and the name must be identical.\nNext, we can set the machine resources that can be requested, such as the minimum and maximum CPUs, memory, and storage amounts.\nUnder the properties tab, we can add additional custom properties to this blueprint if desired.\nUnder actions, you will need to set what actions can be performed on this virtual machine, such as power state, reprovisioning, and snapshots.\nNow that the blueprint is created, you need to click the dropdown and select \u0026ldquo;Publish\u0026rdquo;.\nServices Now we should add some services. A service might seem confusing, but it\u0026rsquo;s a list of applications. For instance, you may have a Windows Service which includes your Windows 2008 and Windows 2012 Server blueprints, and another called Linux which includes CentOS and RHEL blueprints.\nGo to the Administration Tab \u0026ndash;\u0026gt; Catalog Management \u0026ndash;\u0026gt; Services and then click the \u0026ldquo;+\u0026rdquo; icon to create a new service.\nI\u0026rsquo;ve chosen to create a service called Windows OS. Be sure to set the status to \u0026ldquo;Active\u0026rdquo; if you plan to use it. Also feel free to add icons if you want to pretty up the install.\nYou\u0026rsquo;ll now see the service listed. Click the drop down on the right side and select \u0026ldquo;Manage Catalog Items\u0026rdquo;.\nSelect the blueprint that was created.\nEntitlements We\u0026rsquo;ve got a published blueprint, but now we want to give access to that blueprint or service.\nGo to the Administration Tab \u0026ndash;\u0026gt; Catalog Management \u0026ndash;\u0026gt; Entitlements. Enter a name for the entitlement, set it to active, and enter your list of users from the identity store.\nGo to the Items and Approvals tab. Select the services, Catalog Items and Actions that you\u0026rsquo;d like to assign.\nRequest a Catalog Item Well that was easy! (Yeah, I\u0026rsquo;m kidding; this has been a long process.) But if you log out of vCAC and log back in as the user that was entitled access to the blueprint, you should see an entry listed in the \u0026ldquo;Catalog\u0026rdquo; tab.\nClick Request.\nEnter some information about the Virtual Machine, such as the number of Machines to create, the lease duration, the CPUs, memory and storage sizes, and some reasons for the request.\nYou should be taken to a very satisfying screen with a green checkmark on it, letting you know that the request was made successfully.\nTake a peek over at your vSphere environment and you can see the VM being created.\nSummary We finally got to deploy something from vCloud Automation Center (vRealize Automation)! 
I would take a nice break and feel good about what you\u0026rsquo;ve built and when you\u0026rsquo;re ready come back and check out the next few posts to further customize and add features to our vCAC instance.\n","permalink":"https://theithollow.com/2014/09/08/vcac-6-blueprints-catalogs/","summary":"\u003cp\u003eWe\u0026rsquo;re finally ready to start building some blueprints.  \u003ca href=\"http://wp.me/p32uaN-10l\"\u003eResources are available\u003c/a\u003e, \u003ca href=\"http://wp.me/p32uaN-10w\"\u003ereservations have been set\u003c/a\u003e, \u003ca href=\"http://wp.me/p32uaN-10w\"\u003egroups have been created\u003c/a\u003e and now we can build some blueprints.\u003c/p\u003e\n\u003ch1 id=\"blueprints\"\u003e\u003cstrong\u003eBlueprints\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eGo to the Infrastructure Tab \u0026ndash;\u0026gt; Blueprints \u0026ndash;\u0026gt; Blueprints and then click \u0026ldquo;New Blueprint\u0026rdquo; \u0026ndash;\u0026gt; Virtual \u0026ndash;\u0026gt; vSphere (vCenter).\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/08/vcac-bprint1.png\"\u003e\u003cimg alt=\"vcac-bprint1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/08/vcac-bprint1.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eGive the blueprint a name and a description.  In my case, I\u0026rsquo;m creating a server 2008 R2 blueprint.   Select a Reservation Policy and a machine prefix.  Then enter a number of days for Archives.  This is the number of days the virtual machines will be available after they expire.  Think recycling bin in Windows.  Also, if you\u0026rsquo;re so inclined, you can enter a dollar amount to assign to this template per day, so that later on each department can see how much money these VMs cost the company.\u003c/p\u003e","title":"vRealize Automation 6 Blueprints and Catalogs"},{"content":"In this vCAC (now renamed vRealize Automation) series, we\u0026rsquo;ve got access to some of our resources now after connecting our vCenter Endpoint, so now we want to create some policies to control how our new VMs will be deployed.\nMachine Prefixes We\u0026rsquo;ll be creating a lot of new virtual machines so we\u0026rsquo;ll want to put a prefix on all these machines so we can identify them. You can have more than one prefix so that you can have different prefixes by department, company, user or so on.\nGo to the Infrastructure tab \u0026ndash;\u0026gt; Blueprints \u0026ndash;\u0026gt; Blueprints and you\u0026rsquo;ll notice that we\u0026rsquo;re getting some warnings that the prefixes and business groups are added yet. Let\u0026rsquo;s go do this quick and we\u0026rsquo;ll be ready to create a blueprint. Click the Manage Machine Prefixes link to go straight there. Add a new machine prefix by clicking the \u0026ldquo;New Machine Prefix\u0026rdquo;. Enter your prefix, the number of digits that will be appended at the end, and what number you should start counting from. For instance a VM created from the example below would look like \u0026ldquo;hollow-01\u0026rdquo; and the next \u0026ldquo;hollow-02\u0026rdquo; and so on.\nBusiness Groups We also saw that we need a business group. This would be a group of users who you can assign a resource to. Go to Infrastructure Tab \u0026ndash;\u0026gt; Groups \u0026ndash;\u0026gt; Business Groups and select the \u0026ldquo;New Business Group\u0026rdquo; link. 
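A quick aside before filling in the group details, since the business group will reference the machine prefix we just created: here is a minimal sketch (Python, purely for illustration) of how a prefix, a digit count, and a starting number combine into the VM names you\u0026rsquo;ll see. I\u0026rsquo;m assuming the prefix includes the trailing hyphen, which is what produces names like \u0026ldquo;hollow-01\u0026rdquo; in the example above:

# Illustration only: how a machine prefix plus a zero-padded counter
# turns into VM names like hollow-01, hollow-02, ...
prefix = "hollow-"   # assumed to include the separator
digits = 2           # number of digits appended to the prefix
start = 1            # the number to start counting from
for counter in range(start, start + 3):
    print(f"{prefix}{counter:0{digits}d}")   # hollow-01, hollow-02, hollow-03

With that in mind, back to the business group form.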
Enter a name for the group, I\u0026rsquo;ve used a group called the \u0026ldquo;NightWatch\u0026rdquo; which will be a subset of my organization \u0026ldquo;Neighborhood Watch\u0026rdquo;. Select the machine prefix that you created earlier. You must also enter a group manager role. This is someone who will be able to manage the machines and add blueprints to the group. Then you can setup a manager who will get email alerts, it might be good to put in a distribution list instead of a single email address for this one. Also, you can setup a support role who can be used as a helpdesk group for users who are having issues.\nYou\u0026rsquo;ll also notice a custom properties section that can be a bit confusing. These are attributes of the VMs that can be applied to all the blueprints. For instance I\u0026rsquo;ve used the VMware.VirtualCenter.Folder property to set which VMware folder the VMs will be put into.\nFor more information on properties please see the vCAC 6 documentation.\nReservation Policies Now we need to set some reservations. These policies are restrictions on how many resources can be utilized. Go to Infrastructure tab \u0026ndash;\u0026gt; Reservations \u0026ndash;\u0026gt; Reservation Policies. Click the \u0026ldquo;New Reservation Policy\u0026rdquo; link. Once you\u0026rsquo;re done, add a Storage Reservation Policy as well.\nNext, click on the Network Profiles and add a new one. Choose \u0026ldquo;External\u0026rdquo; to use the networks already included in your vSphere environment.\nEnter a name for the network profile and then enter the subnet, gateway and DNS information associated with this network. Also setup WINS (if you really must).\nClick over the the IP Ranges tab and enter a name and a list of IP Addresses that the machine blueprints will be using when you deploy them.\nWhen you\u0026rsquo;re all done, you\u0026rsquo;ll see the list of IPs and their allocation.\nNow, go to the Reservations Tab and click \u0026ldquo;New Reservation \u0026ndash;\u0026gt; Virtual \u0026ndash;\u0026gt; vSphere (vCenter)\u0026rdquo;.\nNow we\u0026rsquo;re going to add some compute resources. Select the cluster, give the reservation a name, and select which tenant. Select a business group that will be using this reservation, and select the policy to assign this to. Set a priority which will only matter if you\u0026rsquo;ve got more than one reservation.\nClick over to the Resources tab and you\u0026rsquo;ll need to select an amount of memory and one or more datastores that will hold the VMs.\nOn the Network tab, select the network profile that you created earlier.\nLastly, select the alerts tab and set the alert notification thresholds the way that you see fit.\nSummary We\u0026rsquo;ve gone through a lot to put up some boundries on how VMs will be provisioned. 
Let\u0026rsquo;s FINALLY start building some blueprints in our next post!\n","permalink":"https://theithollow.com/2014/09/08/vcac-6-policies-reservations/","summary":"\u003cp\u003eIn this vCAC (now renamed vRealize Automation) series, we\u0026rsquo;ve got \u003ca href=\"http://wp.me/p32uaN-10l\"\u003eaccess to some of our resources\u003c/a\u003e now after connecting our vCenter Endpoint, so now we want to create some policies to control how our new VMs will be deployed.\u003c/p\u003e\n\u003ch1 id=\"machine-prefixes\"\u003e\u003cstrong\u003eMachine Prefixes\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eWe\u0026rsquo;ll be creating a lot of new virtual machines so we\u0026rsquo;ll want to put a prefix on all these machines so we can identify them.  You can have more than one prefix so that you can have different prefixes by department, company, user or so on.\u003c/p\u003e","title":"vRealize Automation 6 Policies and Reservations"},{"content":"We\u0026rsquo;ve completed the vCAC (now rename to vRealize Automation) appliance deployment, installed IaaS components, setup tenants and identity stores. Now it\u0026rsquo;s time to get cracking on connecting to some resources that we can use for our applications. I would like to point out that for this section we\u0026rsquo;ll be logged in as a user that is both an infrastructure admin as well as a tenant admin. I\u0026rsquo;ve also chosen to complete this configuration under my newly created \u0026ldquo;Neighborhood Watch\u0026rdquo; tenant. When adding resources to your tenants, you can do this at the default tenant level and have the sub-tenants use them, or configure the resources at each tenant level. I would steer away from doing it in both places to make troubleshooting easier at a later date. I mean, what happens when you\u0026rsquo;re sharing the same vCenter at the default level as well as the sub-tenant level? That could get a bit tricky.\nAdd License Key Sorry folks, if you are at this stage and don\u0026rsquo;t have a license key, you\u0026rsquo;re probably in trouble. We need to add a valid license key by going to the infrastructure tab \u0026ndash;\u0026gt; Administration \u0026ndash;\u0026gt; Licensing. Enter your license key so you can continue with your endpoints. Credentials Now that we got those silly licenses taken care of, go to the Infrastructure Tab \u0026ndash;\u0026gt; Endpoints \u0026ndash;\u0026gt; Credentials. You\u0026rsquo;ll need to enter some login information so that you can connect to your endpoints. This is actually pretty neat because you\u0026rsquo;ll save these credentials for future use. Add a new set of credentials by clicking the \u0026ldquo;+\u0026rdquo; sign and fill out the information. A good description and a name are always recommended. Then enter your a vCenter username and password so you can add the vCenter resources in a future step. Note: when you\u0026rsquo;re done entering, be sure to click the green check mark on the left side to save them. Sometimes I miss this so don\u0026rsquo;t feel bad if you exited without saving. vCenter Endpoint Now we can click on the Endpoint menu and add our vCenter. Click the \u0026ldquo;New Endpoint\u0026rdquo; dropdown and go to Virtual \u0026ndash;\u0026gt; vSphere (vCenter) option.\nFill in the information for your vCenter.\nNOTE: the endpoin name, must match the agent we configured during the IaaS components install. If you kept the defaults, you must use the name \u0026ldquo;vCenter\u0026rdquo; and YES, the case matters. 
Capital \u0026ldquo;C\u0026rdquo;\nType in the address of your vCenter. This should be something like https://vcenter.name/sdk\nSelect your credentials. We just created these before, so you can select the button on the right and select the credentials you created. Click OK to finish the endpoint creation.\nAt this point I recommend restarting the vCloud Automation Center Agent service on the IaaS server. This isn\u0026rsquo;t necessary, but if you don\u0026rsquo;t do this and immediately go to the next steps, you\u0026rsquo;ll find out that things don\u0026rsquo;t look quite right.\nFabric Groups We\u0026rsquo;ve connected to our vCenter Endpoint, but now we need to setup a fabric group. The fabric group is a set of physical resources that we\u0026rsquo;re going to store our virtual machines. Go to Infrastructure tab \u0026ndash;\u0026gt; Groups \u0026ndash;\u0026gt; Fabric Groups. Click the \u0026ldquo;New Fabric Group\u0026rdquo; option.\nEnter a name for the fabric group, and a description. Then start typing in a name for the fabric administrator. When you start typing, the field should start populating a list of names that matches the string you\u0026rsquo;ve entered. These names are the names from your identity store that we created earlier. Select a name for the fabric administrator. Then in the last section select a cluster that you\u0026rsquo;ll be running your virtual machines on. If no resources show up, be sure that the credentials you created to connect to the endpoint have access to the resources, and if you didn\u0026rsquo;t restart the vCAC service this could also cause the resources to be missing.\nAfter you setup the fabric group, go to the Infrastructure Tab \u0026ndash;\u0026gt; Endpoints \u0026ndash;\u0026gt; Endpoint (yes, I know I said that twice) and look for your compute resources. Click the dropdown next to the cluster you just added and go to Data Collection.\nYou\u0026rsquo;ll notice that the frequency is set to daily. To speed up this process, we can click \u0026ldquo;request now\u0026rdquo; under Inventory, State, and Performance. Summary We should now have some resources that we can use for our blueprints but I\u0026rsquo;m sorry to say, we have to setup some additional policies in our next post before we can do that. Soon, though. Soon!\n","permalink":"https://theithollow.com/2014/09/08/vcac-6-vcenter-endpoint-setup/","summary":"\u003cp\u003eWe\u0026rsquo;ve completed the vCAC (now rename to vRealize Automation) \u003ca href=\"/2014/07/trouble-configuring-vcac-appliance/\"\u003eappliance deployment\u003c/a\u003e, installed \u003ca href=\"http://wp.me/p32uaN-ZQ\"\u003eIaaS components\u003c/a\u003e, \u003ca href=\"http://wp.me/p32uaN-10b\"\u003esetup tenants and identity stores\u003c/a\u003e.  Now it\u0026rsquo;s time to get cracking on connecting to some resources that we can use for our applications.  I would like to point out that for this section we\u0026rsquo;ll be logged in as a user that is both an infrastructure admin as well as a tenant admin.  I\u0026rsquo;ve also chosen to complete this configuration under my newly created \u0026ldquo;Neighborhood Watch\u0026rdquo; tenant.   When adding resources to your tenants, you can do this at the default tenant level and have the sub-tenants use them, or configure the resources at each tenant level.  I would steer away from doing it in both places to make troubleshooting easier at a later date.  
I mean, what happens when you\u0026rsquo;re sharing the same vCenter at the default level as well as the sub-tenant level?  That could get a bit tricky.\u003c/p\u003e","title":"vRealize Automation 6 vCenter Endpoint Setup"},{"content":"If you\u0026rsquo;ve followed the series this far, you\u0026rsquo;ve got your vCAC (now renamed vRealize Automation) appliance deployed and your IaaS components installed. The tricky parts are over with, and now the fun begins\u0026hellip; configurations! What are you waiting for? Go login at the http://vcacapplaincename/shell-ui-app/ url.\nAdd a Tenant Under Administration \u0026ndash;\u0026gt; Tenants, you will see the default tenant which is vsphere.local. This is the context where you can create additional tenants and should probably be considered to be a \u0026ldquo;Do Not Touch\u0026rdquo; tenant. Even if you\u0026rsquo;re only going to have a single tenant, it would be a good idea to create a new one just in case. It\u0026rsquo;s pretty easy to create more tenants if you make a mistake, but tough to recreate the default tenant. Click the \u0026ldquo;+\u0026rdquo; to create a new tenant.\nGive the new tenant a name, remember that a tenant could be a different company, a department, a customer or just about any group you can think of. In my example, I\u0026rsquo;ve created a Neighborhood Watch tenant.\nNext we setup an identity store to allow access to your vCAC instance. The identity store will be a list of users that you can assign permission to. For example below, I\u0026rsquo;ve configured the hollow.local Active Directory as the identity store. Fill out a user account that can read from the directory, a connection to the AD Server and a search DN.\nYou\u0026rsquo;ll see your identity story listed, but notice that you could add multiple identity stores to a tenant if you\u0026rsquo;d like to. If you\u0026rsquo;ve got multiple Domain\u0026rsquo;s you need to connect you can do that here.\nNext, you will want to setup your administrators. First, I\u0026rsquo;d like to mention that you can just start typing in the boxes and a list of users that matches the string you\u0026rsquo;ve typed in will appear. Depending on your resources and the connection to the identity server, this may take a second just be patient.\nThere are two sets of administrators that you\u0026rsquo;d need to configure.\nTenant Administrators: This is an admin with access to manage user groups, branding, notifications and approvals and entitlements.\nInfrastructure Administrators: The Infra Admin is responsible for setting up endpoints which basically means all of the connections to resources used by vCloud Automation Center. Typically the Infrastructure Admin is someone who has some control over the vCenter servers, vCenter Orchestrator, Cloud Services or things like this.\nIf this is a lab environment, it may be easiest to make the two administrators the same person so that you can easily perform actions for both roles.\nNow you\u0026rsquo;ll see that you\u0026rsquo;ve got two tenants in the appliance.\nDefault Tenant Identity store Just because we don\u0026rsquo;t want to do much with the default tenant, doesn\u0026rsquo;t mean we shouldn\u0026rsquo;t set up an identity store for it. This allows us to have more than one person login to make changes to the tenants. 
We probably don\u0026rsquo;t want a general login used for the default tenant so that we can properly audit, so an identity store would be a good idea.\nWe can select the drop down and choose edit on the vsphere.local tenant to add an identity store. And then run through the same steps as we did with the new tenant.\nSummary We\u0026rsquo;ve added a new tenant and setup some parameters about who can login to the clouds that we\u0026rsquo;re creating. Next up, we should connect vCAC to some resources that we can utilize to build our blueprints.\n","permalink":"https://theithollow.com/2014/09/08/vcac-6-basic-configurations/","summary":"\u003cp\u003eIf you\u0026rsquo;ve followed the series this far, you\u0026rsquo;ve got your vCAC (now renamed vRealize Automation) \u003ca href=\"/2014/07/trouble-configuring-vcac-appliance/\"\u003eappliance deployed\u003c/a\u003e and your \u003ca href=\"http://wp.me/p32uaN-ZQ\"\u003eIaaS components installed\u003c/a\u003e.  The tricky parts are over with, and now the fun begins\u0026hellip; configurations!  What are you waiting for?  Go login at the http://vcacapplaincename/shell-ui-app/ url.\u003c/p\u003e\n\u003ch1 id=\"add-a-tenant\"\u003e\u003cstrong\u003eAdd a Tenant\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eUnder Administration \u0026ndash;\u0026gt; Tenants, you will see the default tenant which is vsphere.local.  This is the context where you can create additional tenants and should probably be considered to be a \u0026ldquo;Do Not Touch\u0026rdquo; tenant.  Even if you\u0026rsquo;re only going to have a single tenant, it would be a good idea to create a new one just in case.  It\u0026rsquo;s pretty easy to create more tenants if you make a mistake, but tough to recreate the default tenant.  Click the \u0026ldquo;+\u0026rdquo; to create a new tenant.\u003c/p\u003e","title":"vRealize Automation 6 Basic Configurations"},{"content":"Deploying the vCAC (now renamed to vRealize Automation) appliance is only the first step towards getting your Infrastructure as a Service (IaaS) up and running. The next step is to get the IaaS components installed on a Windows machine. There are a number of prerequisites but luckily there is a powershell script that can take care of most of it for you. Find the script here. I must mention first that for vCAC 6 (at the time of this writing) .Net 4.5 is required. This does not mean that .Net 4.5 or higher needs to be installed. .Net 4.5 sp1 does not work with the IaaS components which also means that Server 2012 R2 is not a candidate to install the IaaS components on. Use a Server 2008R2 or Server 2012 with .Net 4.5 installed. (vRealize 6.1 fully supports .Net 4.5.1 according to the VMware rep I spoke with at VMworld)\nRun the script. This script will check for your prerequisites and if they are not met, will attempt to install them for you. Notice that it also tells you your .Net version as well.\nNow that the prerequisites are met, you can run the component installer. You can find this by going to the URL of the vRealize appliance you deployed previously. Click the \u0026ldquo;vCloud Automation Center IaaS installation page\u0026rsquo; hyperlink to take you to the installers.\nHere you can now download the setup file by running the setup.exe. NOTE: If you decide to download the installer, do not rename it. 
The name of the installer is important to the setup.\nRun through the installer.\nRead all of the EULA select the \u0026ldquo;I accept\u0026rdquo; checkbox and then click next.\nThe appliance host name should be provided for you assuming that you didn\u0026rsquo;t change the name of the setup.exe file as mentioned previously. You\u0026rsquo;ll then need to enter the username and password of the vCAC appliance. The username should be root.\nSince this is for a lab environment I\u0026rsquo;ve selected the complete install. If this is for an enterprise deployment, these pieces of the IaaS may be split up across multiple machines for high availability and performance reasons.\nCheck the prerequisites. My prerequisites list looked like the one below. I selected \u0026ldquo;bypass\u0026rdquo; and then can click next. If you have more concerns feel free to fix all of the PreReqs and continue. My SQL Server will be on a different machine and I decided to turn off the Windows Firewall on this machine.\nNow you must enter an account name that the services will run as. You should probably use a service account for this. The next box is a passphrase which is only used as an encryption key for data at rest in the database. Lastly, you need to setup your SQL database. Enter a server name or IP Address, the database name and a set of credentials with access to modify the database.\nTwo of the agents used to do the work of the IaaS components need to be entered here. I\u0026rsquo;ve chosen to leave the defaults of \u0026ldquo;DEM\u0026rdquo; and \u0026ldquo;DEO\u0026rdquo;. Also, you\u0026rsquo;ll configure a vSphere Agent as well. This is what will talk to your vCenter server. I would not change the name of the Endpoint here, but do take note of it, including the case. It will be used again later and must be spelled exactly as it is shown here. vCenter\nNow we need to setup the information for registration. Enter the name of the vCAC appliance that you deployed previously. Once this is done, you can click the \u0026ldquo;Load\u0026rdquo; button to automatically grab the correct SSO Default Tenant from the appliance. Then click \u0026ldquo;Download\u0026rdquo; in order to grab a certificate. Check the box to accept the certificate.\nNow we need to enter some credentials that can authenticate with the SSO server. By default \u0026ldquo;administrator@vsphere.local\u0026rdquo; is used. Enter the password and click \u0026ldquo;test\u0026rdquo;. If it works you\u0026rsquo;ll see a \u0026ldquo;passed\u0026rdquo; message. Enter the DNS name of your IaaS Server and click \u0026ldquo;test\u0026rdquo; again to be sure that DNS resolves properly.\nReview the install.\nWait for the installation routine to finish.\nIf everything is successful, click next.\nClick Finish.\nSummary This is a long process but the fun is just starting here. Now you should have the vCAC appliance deployed and your IaaS Server setup. We can now start some of the configuration, and yes this will take some time, but remember that a transition to IT as a Service is about taking your time to setup automation which will save time in the future. This is a change of mindset that may take a bit to get used to.\n","permalink":"https://theithollow.com/2014/09/08/vcac-6-iaas-installation/","summary":"\u003cp\u003eDeploying the vCAC (now renamed to vRealize Automation) appliance is only the first step towards getting your Infrastructure as a Service (IaaS) up and running.  The next step is to get the IaaS components installed on a Windows machine.  
There are a number of prerequisites but luckily there is a powershell script that can take care of most of it for you.  Find the script \u003ca href=\"http://blogs.vmware.com/vsphere/2013/12/vmware-vcloud-automation-center-6-pre-req-automation-script.html#Download\"\u003ehere\u003c/a\u003e.  I must mention first that for vCAC 6 (at the time of this writing) .Net 4.5 is required.  This does not mean that .Net 4.5 or higher needs to be installed.  .Net 4.5 sp1 does not work with the IaaS components which also means that Server 2012 R2 is not a candidate to install the IaaS components on.  Use a Server 2008R2 or Server 2012 with .Net 4.5 installed.  (vRealize 6.1 fully supports .Net 4.5.1 according to the VMware rep I spoke with at VMworld)\u003c/p\u003e","title":"vRealize Automation 6.0 IaaS Installation"},{"content":" In this day and age, almost all the programs we interact with are web pages. Many of the applications we deploy end up having a web front end and are configured with a default SSL Certificate. It\u0026rsquo;s much more secure to have your own trusted certificate and in previous posts I\u0026rsquo;ve gone over how to setup the Public Key Infrastructure (PKI) in a home lab, as well as deploying Web Certificate Templates for our applications.\nDeploying vCOps in your VMware infrastructure is a very common thing to be done for almost all deployments. Let\u0026rsquo;s be sure to install a trusted certificate when we do the deployment.\nCreate the Certificate Request In a previous post I went over creating certificate requests for other VMware services and this will be much like those services. We can run the SSL Automation tool from VMware again, and this time select the \u0026ldquo;Other service\u0026rdquo; category. Fill out the requested information.\nThis will create a certificate request and create a request file. If you want more information please see this post about creating the certificate requests.\nOnce you\u0026rsquo;ve sent the certificate request to the Certificate Authority, you\u0026rsquo;ll receive a certificate file. Again, if you want more information about this, please see my post on Creating and downloading Certificate Request files for VMware.\nBuilding the Certificate Now that you\u0026rsquo;ve gotten the certificates from the certificate authority, we can build the final file that needs to be uploaded to vCOps. In the past, we\u0026rsquo;ve had to add the Root64.crt file to the rui.crt file. Please go ahead and do this again, just as we did previously. Once you\u0026rsquo;re done with that, we need to also add the rui.key file to the end of the chain.pem file. vCOps will require you to have the private key in the SSL certificate.\nGo back to the vCOps URL with the /admin url. https://vCOps/Admin. 
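If you\u0026rsquo;d rather script the concatenation described above than paste the files together in a text editor, here is a minimal sketch. The file names (rui.crt, Root64.cer, rui.key, chain.pem) are the ones used earlier in this series; adjust the names and paths for your own environment, and note that the root CA file may be called Root64.crt depending on how you saved it:

# Rough sketch: build the single PEM file vCOps expects by concatenating
# the service certificate, the root CA certificate, and the private key.
# File names are assumptions carried over from earlier posts in this series.
parts = ["rui.crt", "Root64.cer", "rui.key"]
with open("chain.pem", "w") as out:
    for name in parts:
        with open(name) as f:
            out.write(f.read().strip() + "\n")

Either way, the end result is one chain.pem file containing the certificate, the CA chain, and the private key.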
Upload and install the new file that you\u0026rsquo;ve created.\nOnce done, you should be able to reload your browser session and you\u0026rsquo;ll see that the https:// session is now trusted.\nTroubleshooting If the web browser isn\u0026rsquo;t showing a trusted certificate, look for the following:\nIs the root CA trusted on your machine Reboot the vCOps appliance Clear your browser cache ","permalink":"https://theithollow.com/2014/09/02/add-ssl-certificate-vmware-vcops/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/07/piotr_halas_padlock1.png\"\u003e\u003cimg alt=\"piotr_halas_padlock\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/07/piotr_halas_padlock1-150x150.png\"\u003e\u003c/a\u003e In this day and age, almost all the programs we interact with are web pages.  Many of the applications we deploy end up having a web front end and are configured with a default SSL Certificate.  It\u0026rsquo;s much more secure to have your own trusted certificate and in previous posts I\u0026rsquo;ve gone over how to setup the Public Key Infrastructure (PKI) in a home lab, as well as deploying Web Certificate Templates for our applications.\u003c/p\u003e","title":"Add SSL Certificate to VMware vCOps"},{"content":"\nThe Array I know what you\u0026rsquo;re thinking, show me the product! What does it look like, how big is it.\n4U storage shelf 2U Storage Controller (dual controllers) CIFSNFSiSCSI 48TB or 96 TB with additional 2.4TB or 4.8 TB SSD Homemade RAID that allows for 2 disk failures on the same storage pool Recovery Something out of the norm with DataGravity\u0026rsquo;s array is that they use a completely separate set of disks for snapshots. Disks are automatically assigned to different pools are nothing is required from the Administrator to set this up. This diverges from what the rest of the industry is typically does. There are two schools of thought here though:\nIf you split your disks up, you can ensure that losing physical disks doesn\u0026rsquo;t mean you also lost your backups. A good rule of thumb is to not store your backups on the same physical disks that you have your data on because a failure could cause you to lose both. If you split up your disks, you are using less spindles to provide performance for the array. Generally more spindles means better performance. I don\u0026rsquo;t want to suggest that one method is better than the other, but this is something you\u0026rsquo;d want to consider when purchasing an array. This decision is something that could set DataGravity apart from their competitors.\nAnalytics The thing that really sets this array apart from other storage devices is it\u0026rsquo;s powerful analytics engine. The DataGravity array has two controllers but only one of the controllers actively manages storage IO traffic. The second controller is used for the analytics of the data that is on the array. The idea here is that the analytics is offloaded so as not to affect the storage IO. The second thing is that since DataGravity uses a different set of disks for data backups, the Analytics controller can run it\u0026rsquo;s processing against those disks which will also prevent the engine from impacting the disks that are being used for IO processes.\nThe analytics engine allows you to see file level information which is an amazing tool for storage admins. The file statistics can show who accessed the file, when, who wrote to it etc. 
This sort of analytics can even be done if the file is inside of a virtual machine\u0026rsquo;s vmdk file. From speaking with people at DataGravity, I found out that they are cracking open the virtual machine (on the backup disks, not production) and reading the info into the analytics engine.\nObviously, you can see how the auditing analytics could be great for a storage or security admin, but this analytics goes much deeper. It will allow you to see trends on types of files, or even cooler, the number of IOPS by file name! If you\u0026rsquo;re having a storage issue, wouldn\u0026rsquo;t it be awesome to find out which file is causing all of the IO, even if that file is inside of a virtual machine vmdk?\nSo, you might be asking how much the analytics engine costs. Well, the answer is it\u0026rsquo;s included with the array. The CEO, Paula Long, stated at Tech Field Day eXtra 2014, \u0026ldquo;You bought the array, you should know what\u0026rsquo;s on it.\u0026rdquo;\nSummary DataGravity seems to have a neat place in the storage industry. The belief that \u0026ldquo;You bought the storage array; you should be able to see what\u0026rsquo;s on it\u0026rdquo; is a core fundamental differentiator of this array from others. I can see amazing use cases for this type of array within corporations that need to be able to manage highly sensitive data. This could be an amazing tool for smaller organizations that don\u0026rsquo;t have large enough teams to do their own analytics or have their own storage teams. It\u0026rsquo;s easy to use and easy to manage with great insight into the underlying data.\n","permalink":"https://theithollow.com/2014/08/27/got-analytics-storage-array-datagravity/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/08/datagravity.png\"\u003e\u003cimg alt=\"datagravity\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/08/datagravity.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"the-array\"\u003e\u003cstrong\u003eThe Array\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eI know what you\u0026rsquo;re thinking, show me the product!  What does it look like, how big is it.\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e4U storage shelf\u003c/li\u003e\n\u003cli\u003e2U Storage Controller (dual controllers)\u003c/li\u003e\n\u003cli\u003eCIFSNFSiSCSI\u003c/li\u003e\n\u003cli\u003e48TB or 96 TB with additional 2.4TB or 4.8 TB SSD\u003c/li\u003e\n\u003cli\u003eHomemade RAID that allows for 2 disk failures on the same storage pool\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/08/product-shot-specifications_0.png\"\u003e\u003cimg alt=\"product-shot-specifications_0\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/08/product-shot-specifications_0.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"recovery\"\u003e\u003cstrong\u003eRecovery\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eSomething out of the norm with DataGravity\u0026rsquo;s array is that they use a completely separate set of disks for snapshots.  Disks are automatically assigned to different pools are nothing is required from the Administrator to set this up.  This diverges from what the rest of the industry is typically does.  
There are two schools of thought here though:\u003c/p\u003e","title":"You Got Your Analytics in my Storage Array - DataGravity"},{"content":" I got a chance to get a first-hand look at Asigra at the Tech Field Day Extra sessions on the Monday of VMworld 2014. I went into the sessions thinking that this was just another backup company, but found that they have a very robust suite of backups and they\u0026rsquo;ve been around for a very long time.\nAsigra handles Cloud Platform apps such as Office365, Salesforce, and Google, as well as storage array integration, vSphere snapshot integration, file level backups, and the list went on.\nArchitecture The architecture consists of three separate pieces.\nThe License Server is a secure offsite server, so it can manage multiple different types of clouds. The DS System houses the backups and is in the secure offsite location. The DS Client sits behind the firewall and communicates to the DS System. It is responsible for grabbing the data, dedupe, compression, and encryption, which it then sends to the DS System. One of the cool things I found about this product was that you could have a DS Client at your production site, and another at your DR Site; the DR Site client can grab the data from the DS System on an on-going basis, sort of like replication. Let me explain: with replication, you have Prod Site \u0026ndash;\u0026gt; DR Site. With Asigra, you\u0026rsquo;ll have Prod Site \u0026ndash;\u0026gt; DS System (in the cloud) \u0026ndash;\u0026gt; DR Site. This means that you could have your VMs backed up to the cloud, and then replicated, all with a single policy. This makes a backup administrator\u0026rsquo;s job much easier since they can manage replication and backups all in a single pane of glass. Couple this with the fact that Asigra can back up more than just virtual machines, and this becomes a pretty powerful tool.\nSecure File Level Recovery for VMs This is all secret sauce and I wasn\u0026rsquo;t able to get a great answer for this due to patents filed on the software, but Asigra claims they are capable of doing a file level restore from a virtual machine disk file without mounting the VM, decrypting the files, uncompressing the files and then sending the packets back to your recovery system. This makes recovery VERY fast in comparison and doesn\u0026rsquo;t require additional space to mount the image just long enough to grab the required files. I was told that this \u0026ldquo;Secret Sauce\u0026rdquo; is in the process of being filed for patent so no more information could be given, but they can just resend required packets to restore the file, without mounting the image first. VERY COOL (if backups were cool), but I would love to see more about how this actually works.\nLicensing Models You first will have to pay a price for the storage space per GB. This is not uncommon, and should almost be expected at this point, but the price per GB is pretty low. There is then an additional cost for recovery, and it\u0026rsquo;s based on a sliding scale. If you have to recover a lot of data, you\u0026rsquo;ll be charged a higher rate per GB, whereas if you\u0026rsquo;ve really gotten your processes in place to prevent accidental deletions, etc., then you\u0026rsquo;ll pay a smaller price per GB. This is kind of a reverse wholesale model, where the less you use it, the cheaper it is per GB. When I say \u0026ldquo;a lot\u0026rdquo; of data, this is based on a percentage of the data that you\u0026rsquo;ve backed up. 
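To make the shape of that sliding scale concrete, here is a tiny sketch. The tiers and rates below are completely made up for illustration and are not Asigra\u0026rsquo;s actual pricing; the only point is that the per-GB rate climbs as the percentage of backed-up data you restore climbs:

# Entirely hypothetical recovery pricing tiers -- numbers are invented.
# The more of your backed-up data you restore, the higher the per-GB rate.
def recovery_rate_per_gb(percent_recovered):
    if percent_recovered <= 5:
        return 0.10
    elif percent_recovered <= 25:
        return 0.25
    return 0.50

print(recovery_rate_per_gb(5))    # a small restore gets the cheapest rate
print(recovery_rate_per_gb(50))   # a large restore pays the highest rate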
Example, recovering 50% of your data is going to cost you much more per GB than recovering 5% of your data.\nAsigra also allows you to to schedule your tests ahead of time without incurring a charge for the recovery in that price per GB that we mentioned.\nIf you\u0026rsquo;d like to read more about Asigra licensing, please check out. http://www.asigra.com/solutions/recovery-license-model\nSummary I know that backups aren\u0026rsquo;t something COOL that everyone wants to talk about but Asigra seemed to have a pretty cool solution despite this fact. If you want to learn more, I invite you to check out their site and if you\u0026rsquo;re looking for a new solution and Asigra looks like the answer, check for one of the partners.\n","permalink":"https://theithollow.com/2014/08/25/asigra/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/08/asigra-logo.png\"\u003e\u003cimg alt=\"asigra-logo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/08/asigra-logo.png\"\u003e\u003c/a\u003e  I got a chance to get a first hand look at \u003ca href=\"http://www.asigra.com\"\u003eAsigra\u003c/a\u003e at the \u003ca href=\"http://techfieldday.com\"\u003eTech Field Day\u003c/a\u003e Extra sessions on the Monday of VMworld 2014.  I went into the sessions thinking that his was just another backup company, but found that they have a very robust suite of backups and they\u0026rsquo;ve been around for a very long time.\u003c/p\u003e\n\u003cp\u003eAsigra handles, Cloud Platform apps such as Office365, Salesforce, Google, storage array integration, vSphere snapshot integration, file level backups and the list went on.\u003c/p\u003e","title":"Asigra"},{"content":"\nHey! You got your VMworld in my Tech Field Day!\nThe makers of Tech Field Day are having an \u0026ldquo;Extra\u0026rdquo; set of sessions at VMworld 2014 this year in San Francisco.\nAs you may already know, the Tech Field Day group gets together a set of delegates to engage with some vendors about a variety of solutions. These discussions are all streamed live, as well as posted for later viewing. The discussions are to be technical in nature and can be directed in a much different path than a normal \u0026ldquo;set\u0026rdquo; presentation.\nAs I mentioned, it\u0026rsquo;s streamed live and the delegates will all be very active on Twitter, I assure you. If you have questions for any of the presenters, please feel free to tweet your questions and I\u0026rsquo;m sure someone will pass along the query.\nSince this is done during VMworld the setting is slightly different. In my case, I\u0026rsquo;m only a delegate for one day - Monday and other groups will be around for Tuesday and Wednesday. Don\u0026rsquo;t let the fact that I\u0026rsquo;m not a delegate on either of those days discourage you from participating. :) I can promise you that all of the delegates are very sharp and will be great to watch.\nMonday\u0026rsquo;s list of presenters is below. 
Uh \u0026hellip;oh, there is a secret company that you\u0026rsquo;ll need to tune in to find out who they are!\n","permalink":"https://theithollow.com/2014/08/20/tech-field-day-extra/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/08/TFD-Extra-Logo-150.png\"\u003e\u003cimg alt=\"TFD-Extra-Logo-150\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/08/TFD-Extra-Logo-150.png\"\u003e\u003c/a\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/08/VMworld2014-tfd.jpg\"\u003e\u003cimg alt=\"VMworld2014-tfd\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/08/VMworld2014-tfd.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eHey! You got your VMworld in my Tech Field Day!\u003c/p\u003e\n\u003cp\u003eThe makers of \u003ca href=\"http://techfieldday.com/event/vmwus14/\"\u003eTech Field Day\u003c/a\u003e are having an \u0026ldquo;Extra\u0026rdquo; set of sessions at VMworld 2014 this year in San Francisco.\u003c/p\u003e\n\u003cp\u003eAs you may already know, the Tech Field Day group gets together a set of delegates to engage with some vendors about a variety of solutions.  These discussions are all streamed live, as well as posted for later viewing.  The discussions are to be technical in nature and can be directed in a much different path than a normal \u0026ldquo;set\u0026rdquo; presentation.\u003c/p\u003e","title":"Tech Field Day Extra"},{"content":" Congratulations, if you\u0026rsquo;ve made it this far, you\u0026rsquo;re almost done with the replacing of your VMware SSL Certificates! If you\u0026rsquo;ve followed the previous posts, you\u0026rsquo;ll know that this has been a long path to completing your goal. This post finishes installing those certificates on your vCenter server. If you missed a part please check out the previous posts to get caught up.\nCreate a Home Lab Certificate Authority Deploy Root Certificates via Autoenrollment Create VMware-SSL Web Certificate Template Create VMware Services Certificate Requests\nInstall SSL Certificates Open up the VMware SSL Automation Tool and now we can go about deploying those SSL Certificates. We\u0026rsquo;ve already completed 1 and 2, so now we need to refer to the planning steps from part 1. If you can\u0026rsquo;t remember what they are, you can re-run option 1, but be sure to copy it to notepad or something so you can keep track of where you are at.\nFollow the instructions for your planning steps. This should guide you through each of the phases.\nNOTE: Many of the operations you\u0026rsquo;ll perform here will stop and start VMware services. You should be prepared for this in case the server is currently being access. This will not affect any of your virtual machines, but may stop you from accessing vCenter, VUM, vCO etc.\nTroubleshooting If you are having trouble with the update proces, be sure to check to see if you are updating according to the plan. The plan may have you update a service, then update another service and then go back to the first service to register it with some other service. Follow the instructions.\nSecondly, be sure that your cachain.pem files are located in the same folder as your rui.csr files etc. This is the directory that the tool is looking to find the certificates.\nThirdly, be sure that when you downloaded the certificates from your CA, you grabbed only the certificate and not the chain. 
See this post for additional information.\nLastly, be sure that you copied the Root64.cer file to the end of your rui.crt file, renamed it to cachain.pem and that there is no space between the two certificates.\nSummary This has been a long process, but hopefully valuable. SSL Certificates are something that shouldn\u0026rsquo;t be overlooked for an design since an untrusted certificate could mean that your environment has been compromised. It\u0026rsquo;s not a chance many corporations would want to take and hopefully the steps in this series have given you a good idea about how to replace SSL certificates for your VMware environment.\n","permalink":"https://theithollow.com/2014/08/18/replacing-vmware-vcenter-ssl-certificates/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/07/piotr_halas_padlock1.png\"\u003e\u003cimg alt=\"piotr_halas_padlock\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/07/piotr_halas_padlock1-150x150.png\"\u003e\u003c/a\u003e  Congratulations, if you\u0026rsquo;ve made it this far, you\u0026rsquo;re almost done with the replacing of your VMware SSL Certificates!  If you\u0026rsquo;ve followed the previous posts, you\u0026rsquo;ll know that this has been a long path to completing your goal.  This post finishes installing those certificates on your vCenter server.  If you missed a part please check out the previous posts to get caught up.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"/2014/08/setup-home-lab-ssl-certificates-authority/\"\u003eCreate a Home Lab Certificate Authority\u003c/a\u003e \u003ca href=\"/2014/08/setup-home-lab-ssl-root-certificates/\"\u003eDeploy Root Certificates via Autoenrollment\u003c/a\u003e \u003ca href=\"/2014/08/create-vmware-ssl-web-certificate/\"\u003eCreate VMware-SSL Web Certificate Template\u003c/a\u003e \u003ca href=\"/2014/08/create-vmware-ssl-certificate-requests/\"\u003eCreate VMware Services Certificate Requests\u003c/a\u003e\u003c/p\u003e","title":"Replacing VMware vCenter SSL Certificates"},{"content":" I\u0026rsquo;ve seen quite a few VMware environments where when you login, you get that silly error message about a certificate not being trusted. This is something we can fix and more importantly be sure that the connections are trusted and encrypted.\nPrerequisites Trusted root certificates deployed to workstations - Instructions for Lab Environment Web-Certificate Template Deployed - Instructions for Lab Environment Certificate Authority Web Enrollment server - (If you followed the Lab Environment setup this should be on your CA already) Download OpenSSL and install it. I used 1.01h as the version for my lab which worked fine on a Server 2012 R2 Server which is also my vCenter Server. Download and install the vCenter Certificate Automation Tool from VMware. This is also found in the vCenter install media for vSphere 5.5. I prefer to create my certificate requests right from the VMware vCenter Server, so I install both the SSL Automation Tool and OpenSSL directly on the vCenter Server. If you\u0026rsquo;re using the VMware vCenter Server Appliance you\u0026rsquo;ll need to do this someplace else and there are some additional steps not listed in this post. Please see this KB article for more info: vCSA SSL Certs\nRunning the Certificate Automation Tool You can run the SSL-Updater tool by double clicking the batch file in the directory where you unzipped the tools. Look for ssl-updater.bat. 
Note: it might be useful to run this as Administrator if UAC is on.\nWhen you run the SSL Tool, you\u0026rsquo;ll get a menu with options. To begin with you should select option 1. This option will explain the steps that need to be done and the order in which to do them.\nWhen you select option 1, you\u0026rsquo;ll be presented with a new menu. This menu asks what you\u0026rsquo;re going to update. If you are going to do all of the services listed, look for option 8. You can see from the screenshot that the steps will be listed. You should copy that list to a text file or something to recall it later.\nAssuming you didn\u0026rsquo;t copy the list, and that the tool isn\u0026rsquo;t modified too much, you can use the list below.\n1. Go to the machine with Single Sign-On installed and - Update the Single Sign-On SSL certificate.\n2. Go to the machine with Inventory Service installed and - Update Inventory Service trust to Single Sign-On.\n3. Go to the machine with Inventory Service installed and - Update the Inventory Service SSL certificate.\n4. Go to the machine with vCenter Server installed and - Update vCenter Server trust to Single Sign-On.\n5. Go to the machine with vCenter Server installed and - Update the vCenter Server SSL certificate.\n6. Go to the machine with vCenter Server installed and - Update vCenter Server trust to Inventory Service.\n7. Go to the machine with Inventory Service installed and - Update the Inventory Service trust to vCenter Server.\n8. Go to the machine with vCenter Orchestrator installed and - Update vCenter Orchestrator trust to Single Sign-On.\n9. Go to the machine with vCenter Orchestrator installed and - Update vCenter Orchestrator trust to vCenter Server.\n10. Go to the machine with vCenter Orchestrator installed and - Update the vCenter Orchestrator SSL certificate.\n11. Go to the machine with vSphere Web Client installed and - Update vSphere Web Client trust to Single Sign-On.\n12. Go to the machine with vSphere Web Client installed and - Update vSphere Web Client trust to Inventory Service.\n13. Go to the machine with vSphere Web Client installed and - Update vSphere Web Client trust to vCenter Server.\n14. Go to the machine with vSphere Web Client installed and - Update the vSphere Web Client SSL certificate.\n15. Go to the machine with Log Browser installed and - Update the Log Browser trust to Single Sign-On.\n16. Go to the machine with Log Browser installed and - Update the Log Browser SSL certificate.\n17. Go to the machine with vSphere Update Manager installed and - Update the vSphere Update Manager SSL certificate.\n18. Go to the machine with vSphere Update Manager installed and - Update vSphere Update Manager trust to vCenter Server.\nCreate the Requests From the Automation Tool, we can now select option 2 which is the generate certificate signing requests. From here, we\u0026rsquo;ll need to select the service that we are creating a request for. No, you can\u0026rsquo; t do them all at once\nSelect the service, and answer the questions. You\u0026rsquo;ll need to know things like IP Addresses, DNS Names, Locations and a file location to export the requests and private keys.\nWhen the process is done, you\u0026rsquo;ll see three files in the file location you specified. Next, repeat this process for the rest of the services that you want to sign.\nOnce these files have been created, you can take the certificate signing requests and upload them to the Certificate Authority to obtain the certificate. 
You should be able to do this by going to https://NAMEOFCA/certsrv/default.asp assuming you followed the prior posts about setting up a Certificate Authority for your home lab.\nOnce here, choose \u0026ldquo;Request a Certificate\u0026rdquo;.\nChoose \u0026ldquo;Advanced Certificate Request\u0026rdquo;.\nChoose the base-64-encoded option.\nNow you need to take the rui.csr file and copy the entire contents into the web page request box. Choose the VMware-SSL certificate template (or any other Web Template you have created).\nChoose the Base64 encoded option and then click the \u0026ldquo;Download Certificate\u0026rdquo;.\nSave the file as rui.crt in the same directory as where the request came from. This needs to be the same one that the Automation Tool created them in for the later steps to work correctly. Once this is done, repeat the process for each of the services you are going to request SSL certificates for.\nOnce you\u0026rsquo;ve requested all of the certificates, go back to the default CA page and click the \u0026ldquo;Download a CA certificate, certificate chain, or CRL\u0026rdquo; link.\nHere we will download the RootCA. Choose Base64 and select the appropriate CA Certificate from the list. Then click \u0026ldquo;Download CA certificate chain\u0026rdquo; link.\nNow, save this file as \u0026ldquo;cachain.p7b\u0026rdquo; and I usually do this in the parent directory of the services I\u0026rsquo;m requesting. I don\u0026rsquo;t think this one matters too much.\nOnce exported, you need to open the cachain.p7b file, and export it.\nExport the file.\nWhen prompted, select the Base-64 encoded X.509 (.CER) option.\nSave the file as Root64.cer\nNow we need to open the rui.crt files for each of the services that we now have certificates for and paste the contents of the Root64.cer certificate to the end of the file. From the screenshot below, you can see my SSO Service rui.crt file has the Root64.cer file appended to the end.\nSave the file as chain.pem in the service folder. Don\u0026rsquo;t forget to do this same thing for each of the services you\u0026rsquo;ve requested.\nSummary Whew! I know there are quite a few steps here, but I assure you that the hard parts are over. In the next post, we\u0026rsquo;ll show you how to replace the default certificates in vCenter with the new certificates that you\u0026rsquo;ve created. We\u0026rsquo;re almost there.\n","permalink":"https://theithollow.com/2014/08/14/create-vmware-ssl-certificate-requests/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/07/piotr_halas_padlock.png\"\u003e\u003cimg alt=\"piotr_halas_padlock\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/07/piotr_halas_padlock-150x150.png\"\u003e\u003c/a\u003e I\u0026rsquo;ve seen quite a few VMware environments where when you login, you get that silly error message about a certificate not being trusted.  
This is something we can fix and more importantly be sure that the connections are trusted and encrypted.\u003c/p\u003e\n\u003ch1 id=\"sslerror\"\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/07/SSLerror.png\"\u003e\u003cimg alt=\"SSLerror\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/07/SSLerror.png\"\u003e\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"prerequisites\"\u003e\u003cstrong\u003ePrerequisites\u003c/strong\u003e\u003c/h1\u003e\n\u003cul\u003e\n\u003cli\u003eTrusted root certificates deployed to workstations - \u003ca href=\"/2014/08/setup-home-lab-ssl-root-certificates/\" title=\"Setup Home Lab SSL Root Certificates\"\u003eInstructions for Lab Environment\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003eWeb-Certificate Template Deployed - \u003ca href=\"/2014/08/create-vmware-ssl-web-certificate/\" title=\"Create VMware SSL Web Certificate\"\u003eInstructions for Lab Environment\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003eCertificate Authority Web Enrollment server  -  (If you followed the \u003ca href=\"/2014/08/setup-home-lab-ssl-certificates-authority/\" title=\"Setup Home Lab SSL Certificate Authority\"\u003eLab Environment setup\u003c/a\u003e this should be on your CA already)\u003c/li\u003e\n\u003cli\u003eDownload \u003ca href=\"http://slproweb.com/products/Win32OpenSSL.html\"\u003eOpenSSL\u003c/a\u003e and install it.  I used 1.01h as the version for my lab which worked fine on a Server 2012 R2 Server which is also my vCenter Server.\u003c/li\u003e\n\u003cli\u003eDownload and install the \u003ca href=\"https://my.vmware.com/group/vmware/details?downloadGroup=SSL-TOOL-101\u0026amp;productId=285\"\u003evCenter Certificate Automation Tool\u003c/a\u003e from VMware.  This is also found in the vCenter install media for vSphere 5.5.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eI prefer to create my certificate requests right from the VMware vCenter Server, so I install both the SSL Automation Tool and OpenSSL directly on the vCenter Server.  If you\u0026rsquo;re using the VMware vCenter Server Appliance you\u0026rsquo;ll need to do this someplace else and there are some additional steps not listed in this post.  Please see this KB article for more info:  \u003ca href=\"http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC\u0026amp;docType=kc\u0026amp;docTypeID=DT_KB_1_1\u0026amp;externalId=2057223\"\u003evCSA SSL Certs\u003c/a\u003e\u003c/p\u003e","title":"Create VMware SSL Certificate Requests"},{"content":" In order to replace our VMware SSL Certifactes, we need to create a web certificate template that we can then reuse to deploy all of the individual service certificates like vCenter, SSO, Update Manager, vCenter Orchestrator, etc. This certificate will be issued on the vCenter Server and requested in a later process.\nIn part one of this series, we installed a certificate authority.\nIn part two of this series, we deployed client authentication certificates to all our workstations and servers.\nCreate VMware SSL Certificate To start, we need to go back to our Certificate Authority server, open the Certificate Authority MMC and right click the Certificate Templates folder. From here we can click Manage and we\u0026rsquo;ll be presented with our list of Certificate Templates.\nFind the Web Server Template. Right click it and choose Duplicate Template. 
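If you like to double check things from PowerShell, the ADCSAdministration module that comes with the certificate services role can show which templates the CA is currently publishing before you start making changes. This is just a quick sketch run on the CA itself and isn't required for the process.
[code language=powershell]
# Run this on the Certificate Authority server
Import-Module ADCSAdministration
# List the certificate templates the CA is currently set to issue
Get-CATemplate
[/code]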
(It is possible to modify the Web Server Certificate Template itself, but I find that it is a better practice to make a duplicate of it, and then modify the copy)\nOpen up the newly created copy of the Web Server Certificate Template. Give it a descriptive name like \u0026ldquo;VMware-SSL\u0026rdquo; as that\u0026rsquo;s what we\u0026rsquo;re going to use it for.\nGo to the \u0026ldquo;Extensions\u0026rdquo; tab and edit the \u0026ldquo;Key Usage\u0026rdquo; extension. Click the \u0026ldquo;Signature is proof of origin (nonrepudiation) check box as well as the \u0026ldquo;allow encryption of user data\u0026rdquo; box.\nNow edit the \u0026ldquo;Application Policies\u0026rdquo; extension and add \u0026ldquo;Client Authentication\u0026rdquo; to the list of policies.\nClick ok.\nNow we can deploy the certificate template we just created. Right Click \u0026ldquo;Certificate Templates\u0026rdquo; in the MMC and this time select New\u0026ndash;\u0026gt; Certificate Template to Issue. Select the SSL Certificate you just created. (VMware-SSL in our case)\nSummary We should now have our Certificate Authority, Root Certificates, and Web Certificate Templates all ready to go. Our next step is to start requesting certificates from the Authority to be deployed to our web services which I\u0026rsquo;ve outlined in the following post.\nIf you would like to know more, please check out the VMware KB article about setting up these certificates for use with VMware services.\n","permalink":"https://theithollow.com/2014/08/11/create-vmware-ssl-web-certificate/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/07/piotr_halas_padlock1.png\"\u003e\u003cimg alt=\"piotr_halas_padlock\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/07/piotr_halas_padlock1-150x150.png\"\u003e\u003c/a\u003e  In order to replace our VMware SSL Certifactes, we need to create a web certificate template that we can then reuse to deploy all of the individual service certificates like vCenter, SSO, Update Manager, vCenter Orchestrator, etc.  This certificate will be issued on the vCenter Server and requested in a later process.\u003c/p\u003e\n\u003cp\u003eIn \u003ca href=\"/2014/08/setup-home-lab-ssl-certificates-authority\"\u003epart one of this series\u003c/a\u003e, we installed a certificate authority.\u003c/p\u003e\n\u003cp\u003eIn \u003ca href=\"/2014/08/setup-home-lab-ssl-root-certificates\"\u003epart two of this series\u003c/a\u003e, we deployed client authentication certificates to all our workstations and servers.\u003c/p\u003e","title":"Create VMware SSL Web Certificate"},{"content":" Home Lab SSL Certificates aren\u0026rsquo;t exactly a high priority for most people, but they are something you might want to play with before you get into a production environment. In part one of this series, I went over installing an Enterprise Root CA just to get us up and running. 
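Before moving on, it's worth a quick sanity check that the new CA is online and answering requests. Something like this from a domain-joined machine should do it; the server name is just a placeholder for your own CA.
[code language=powershell]
# Placeholder name; substitute your CA's hostname
Get-Service -ComputerName 'HOLLOWCA' -Name CertSvc
# certutil ships with Windows; -ping checks that the CA interface is answering
certutil -ping
[/code]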
Again, be aware that for a production environment you should use an Offline Root CA and a Subordinate CA, but we\u0026rsquo;re in a lab and don\u0026rsquo;t need the additional layer of security.\nConfigure Auto-Enrollment Open up the Certificate Authority MMC window from either Administrative Tools or via Server Manager \u0026ndash;\u0026gt; Tools.\nFrom here, we can right click on the \u0026ldquo;Certificate Templates\u0026rdquo; folder and choose \u0026ldquo;Manage\u0026rdquo;\nFrom here, we\u0026rsquo;ll look for the \u0026ldquo;Workstation Authentication\u0026rdquo; and choose \u0026ldquo;Duplicate Template\u0026rdquo;. (It\u0026rsquo;s not mandatory to do this, but I recommend doing it in case you need to go back to the original Workstation Authentication certificate template later on.) Give the new certificate template a name.\nNext, we want to change some of the properties of this certificate. I\u0026rsquo;ve changed my Validity period to 10 years (again this is a lab and not production), and I\u0026rsquo;ve selected to publish the certificate in Active Directory.\nIn the Security tab, I changed the \u0026ldquo;Domain Computers\u0026rdquo; permissions to read and autoenroll the certificate.\nFor the Extensions Tab i changed the Application Policies to include both Client and Server Authentication.\nAnd in the Subject Name tab, added the UPN checkbox.\nSave your changes.\nLastly, once you\u0026rsquo;re done here, you can go back to the Certificate Authority MMC. Right click on the Certificate Template Folder and choose New\u0026ndash;\u0026gt; Certificate Template to Issue. Then select the certificate template you just created.\nGroup Policy Now we need to have a way to tell the computers to automatically grab those certificates and install them as trusted certificates. Remember where we selected \u0026ldquo;Autoenroll\u0026rdquo; earlier on the certificate template? That doesn\u0026rsquo;t do anything until we configure a GPO to tell the computers to look for these certs.\nCreate a GPO (or use an existing one, it is a lab I suppose) and link it to an OU that should get these certificates automatically installed. Since this is a machine certificate, be sure to have it affect an OU with your computers in it if not the entire domain.\nResults It might not take affect right away, but if you look in your Certificate Authority MMC under issued certificates, you will start to see some of them show up. If you want to test this quickly, you can run a gpupdate /force on a workstation to see if it shows up right afterwards.\nSummary We now have the Root Certificate Authority Installed and certificates are automatically being enrolled with all of our domain computers. This is an important first step to any Public Key Infrastructure. Stay tuned to future posts on how we can now leverage more certificates to protect our web servers.\n","permalink":"https://theithollow.com/2014/08/07/setup-home-lab-ssl-root-certificates/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/07/piotr_halas_padlock1.png\"\u003e\u003cimg alt=\"piotr_halas_padlock\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/07/piotr_halas_padlock1-150x150.png\"\u003e\u003c/a\u003e Home Lab SSL Certificates aren\u0026rsquo;t exactly a high priority for most people, but they are something you might want to play with before you get into a production environment.  
In \u003ca href=\"/2014/08/setup-home-lab-ssl-certificates-authority\"\u003epart one of this series\u003c/a\u003e, I went over installing an Enterprise Root CA just to get us up and running.  Again, be aware that for a production environment you should use an Offline Root CA and a Subordinate CA, but we\u0026rsquo;re in a lab and don\u0026rsquo;t need the additional layer of security.\u003c/p\u003e","title":"Setup Home Lab SSL Root Certificates"},{"content":" If you would like to setup SSL certificates for your home lab, this guide should get you to a minimal installation. The goal of this post is to show you a basic way to setup certificates and should not be followed verbatim if you are planning a production deployment. For one thing, this post uses an Enterprise Root Certificate Authority and in a production environment you really should have an offline Root CA and an online Subordinate CA for security purposes.\nWith all that being understood, lets begin.\nPrerequisites Active Directory Domain already setup and configured Install Active Directory Certificate Services This post uses Server 2012 R2 for the certificate server, but similar steps could be used with other Operating Systems.\nWe use Server Manager to install the Active Directory Certificate Services and their associated features. Some screenshots below show exactly what we\u0026rsquo;re selecting. Any other screens during the install should use the defaults.\nConfigure the Certificate Authority Once the Roles and Services have been installed, the Server Manager should show a warning that configurations are now required.\nWhen you click on the hyperlink, the configuration wizard will start. I\u0026rsquo;ve included screenshots again with the tabs that need to be configured. All other screens can use the defaults.\nI\u0026rsquo;ve selected an Enterprise CA and a Root CA type. Again, for a production environment, this is probably not the same configuration that you should use. For more information about setting up a full blown CA please check out Derek Seamen\u0026rsquo;s blog, derekseaman.com (clever blog name). He has tons of articles about SSL can be very useful when setting up this stuff.\nSummary This should take care of the initial install and configuration of the SSL Services. Look for part two where we configure the Root Certificates and set them up for auto enrollment.\n","permalink":"https://theithollow.com/2014/08/04/setup-home-lab-ssl-certificates-authority/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/07/piotr_halas_padlock.png\"\u003e\u003cimg alt=\"piotr_halas_padlock\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/07/piotr_halas_padlock-150x150.png\"\u003e\u003c/a\u003e If you would like to setup SSL certificates for your home lab, this guide should get you to a minimal installation.  The goal of this post is to show you a basic way to setup certificates and should not be followed verbatim if you are planning a production deployment.  For one thing, this post uses an Enterprise Root Certificate Authority and in a production environment you really should have an offline Root CA and an online Subordinate CA for security purposes.\u003c/p\u003e","title":"Setup Home Lab SSL Certificate Authority"},{"content":"I thought it was necessary to get this post out. 
I\u0026rsquo;ve heard numerous people say that they\u0026rsquo;ve tried to install VMware\u0026rsquo;s vCloud Automation Center (vCAC) but for one reason or another it just didn\u0026rsquo;t seem to work. I myself recently installed this and had issues, but somehow got it to install correctly on the third try. If you\u0026rsquo;ve had trouble configuring the vCAC appliance then look for the tip below.\nWhat I found out recently, was that during the install process if you see the little circle spinning in the corner. DO NOT DO ANYTHING!!!! In the screenshot below, you\u0026rsquo;ll see the spinning circle with the \u0026ldquo;requesting information\u0026rdquo; label next to it. If you see this thing spinning, don\u0026rsquo;t touch anything in the window. Don\u0026rsquo;t click any tabs! Don\u0026rsquo;t click any buttons! Don\u0026rsquo;t click in any empty or populated text boxes! Don\u0026rsquo;t click on anything. If you do, you\u0026rsquo;ll run the risk of invalidating the configuration. If that happens, don\u0026rsquo;t try to fix it, just deploy a new OVF and start over. It will be faster.\nJust to walk through the rest of the process, we\u0026rsquo;ll see the additional screens and setup info.\nConfiguration Go to the Host Settings. Click on the Resolve Host Name Button.\nWarning!!! When you load this page, the spinning wheel might appear while trying to load the page information. Do NOT click on any of the buttons until it goes away. Once you click the Resolve host name, you may see the spinning wheel of doom again. Wait until it resolves your host name before continuing.\nNext click on the SSL Tab. You may see the death wheel again so be patient. Choose your certificate action, enter your certificate information and then click the replace certificate.\nWARNING!!! Don\u0026rsquo;t move on to the next tab until this has completed and you can see that the \u0026ldquo;SSL Certificate is replaced Successfully\u0026rdquo; message. Click the SSO Tab. Enter your SSO settings here and click save settings.\nWARNING!!!! Yeah, you get it, make sure before you click into anything or switch between tabs that the wheel of misfortune is not still spinning. The SSO Page should look similar to this after setting up SSO. I would like to mention that there is an Identity appliance that can be downloaded with vCAC, but you can use your own vSphere SSO appliance if you\u0026rsquo;d like.\nClick the Licensing tab.\nWARNING!!!! This is the tab that I messed up on. When you first open this tab, the \u0026ldquo;broken wheel\u0026rdquo; starts spinning. I assume it looks for a license when you open the tab. Wait until this completes until you enter your license key. If all goes well, then you should be able to login to your vCAC appliance after the setup.\nSummary I don\u0026rsquo;t remember if I said it previously or not, but please don\u0026rsquo;t click on ANYTHING until the spinning wheel stops turning. I mean it. Hopefully this is fixed in a future version of vCAC, but until then I hope this post saves people a lot of trouble. I know that I was kicking myself trying to figure out why I couldn\u0026rsquo;t get things working and now I know.\n","permalink":"https://theithollow.com/2014/07/28/trouble-configuring-vcac-appliance/","summary":"\u003cp\u003eI thought it was necessary to get this post out.  I\u0026rsquo;ve heard numerous people say that they\u0026rsquo;ve tried to install VMware\u0026rsquo;s vCloud Automation Center (vCAC) but for one reason or another it just didn\u0026rsquo;t seem to work.  
I myself recently installed this and had issues, but somehow got it to install correctly on the third try.  If you\u0026rsquo;ve had trouble configuring the vCAC appliance then look for the tip below.\u003c/p\u003e","title":"Trouble Configuring the vCAC appliance"},{"content":" There are a few Linux commands that vSphere Administrators should know for basic troubleshooting purposes and I wanted to take a second to review them in case you\u0026rsquo;ve typically been a Windows Administrator (like me).\nFirst, traversing the Linux file system is pretty similar to going through Windows directories from the command line.\nchange directories\nWindows : CD C:dirname\nLinux : cd /dirname\nShow files and folders\nWindows : dir\nLinux : ls\nSince this post is for VMware Administrators I wanted to focus on the Head and Tail commands. If you\u0026rsquo;ve gotten into the ESXi host operating system, chances are you\u0026rsquo;re troubleshooting a problem and need to look for some logs.\nThere are a few commands that will help with this:\nHEAD Head is a way to read the top 10 lines of a file.\nfor instance\nhead /var/log/vmkernel.log would read the top 10 lines of the vmkernel log file. (maybe not incredibly useful)\nTAIL tail is , you guessed it, a way to read the bottom 10 lines of a file.\nfor instance,\ntail /var/log/vmkernel.log will show the last 10 lines of the vmkernel log file. (possibly more useful)\nIf you use the tail command with a -f (f for follow) it will continually update the screen as new entries are added.\nAs an example, tail -f /var/log/vmkernel.log will initially show the last 10 lines of the vmkernel log, but as more entries are written to it, they will show up like a running log file on your screen. Press Ctrl + C to cancel.\nThis can be an incredibly useful tool when troubleshooting events.\nSummary I know this is just the tip of the iceberg if you really want to learn anything about Linux, but this is pretty useful information for someone who spends their days in a Windows GUI and all of a sudden needs to troubleshoot ESXi hosts.\nAlso, if you\u0026rsquo;re looking for a neat way to turn on SSH to your hosts so you can run these commands, check out this post by a good friend of mine James Green : http://www.virtadmin.com/virtadmin-ssh-tool-vsphere-clusters/\n","permalink":"https://theithollow.com/2014/07/21/know-heads-tails-linux/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/07/1981-d-washington-quarter.png\"\u003e\u003cimg alt=\"1981-d-washington-quarter\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/07/1981-d-washington-quarter.png\"\u003e\u003c/a\u003e There are a few Linux commands that vSphere Administrators should know for basic troubleshooting purposes and I wanted to take a second to review them in case you\u0026rsquo;ve typically been a Windows Administrator (like me).\u003c/p\u003e\n\u003cp\u003eFirst, traversing the Linux file system is pretty similar to going through Windows directories from the command line.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003echange directories\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eWindows :  CD C:dirname\u003c/p\u003e\n\u003cp\u003eLinux : cd /dirname\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eShow files and folders\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eWindows :  dir\u003c/p\u003e","title":"Know Heads from Tails about Linux"},{"content":" VMtools is one of those nagging little pieces of software that always seems to be a pain to 
update. Back in my System Administration days, I commonly needed to report on which VMs had different versions of VMtools, and I have to admit, this was a more difficult property to find from my PowerCLI toolkit.\nTake a look at the old way of finding my VMtools versions through PowerCLI.\n[code language=powershell] get-view -viewtype virtualmachine | select Name, @{ Name=ToolsVersion\u0026quot;; Expression={$_.config.tools.toolsversion}} | FT -autosize [/code]\nWhew! That\u0026rsquo;s a pretty intricate command just to find the VMtools version, if I do say so.\nSo, when I saw that PowerCLI 5.5 R2 was released and had a new vmguest property for VMtools I wanted to give a big shout out to Alan Renouf and the PowerCLI team over at VMware. This was a long time coming.\nCheck out the new property that is available in vmguest.\nNow, we can run the command below to find our VMtools version, and this one is MUCH easier to remember.\n[code language=powershell] get-vm | get-vmguest | select VMName, ToolsVersion | FT -autosize [/code]\nIf you haven\u0026rsquo;t upgraded to PowerCLI 5.5 R2 you can grab the download here, and be sure to get Alan Renouf a shout for his team\u0026rsquo;s work on the new code.\n","permalink":"https://theithollow.com/2014/07/14/get-vmtools-powercli-5-5-r2/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/07/PowerCLI.jpg\"\u003e\u003cimg alt=\"PowerCLI\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/07/PowerCLI.jpg\"\u003e\u003c/a\u003e VMtools is one of those nagging little pieces of software that always seems to be a pain to update.  Back in my System Administration days, I commonly needed to report on which VMs had different versions of VMtools, and I have to admit, this was a more difficult property to find from my PowerCLI toolkit.\u003c/p\u003e\n\u003cp\u003eTake a look at the old way of finding my VMtools versions through PowerCLI.\u003c/p\u003e","title":"Get VMtools with PowerCLI 5.5 R2"},{"content":" There are a ton of features now that VMware has that may require either an SSD or a Non-SSD to be available in your ESXi host. Host Caching requires an SSD and Partner products like PernixData also require an SSD to be available on the host. VMware\u0026rsquo;s Virtual SAN (VSAN) currently require both an SSD and a Non-SSD to be available.\nI\u0026rsquo;ve seen that many people want to try out these products in a lab environment, but don\u0026rsquo;t want to go out and buy another disk just to familiarize themselves with the product. In these cases, you can fool ESXi into thinking there is a device of the type you want. This can be done by using the esxcli commands on the host as documented here on VMware\u0026rsquo;s site.\nI found these commands to be a bit tedious since you need to start the ssh service, grab your favorite ssh client, and then run a series of commands that I could never remember off the top of my head. 
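For reference, the manual route from that KB article boils down to something like esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.xxxxxxxx --option=enable_ssd followed by esxcli storage core claiming reclaim -d naa.xxxxxxxx, run over SSH on the host. The naa identifier is whatever disk you want to change and the SATP name may be different in your environment, so treat those as placeholders.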
So I chose to create a powerCLI script to do this for me, as well as do it from a GUI.\nThe script found below does the following things.\nConnects to your vCenter Server Displays your ESXi hosts and asks which one we\u0026rsquo;re going to modify a disk on Displays a list of disks you can modify Asks whether you want to convert the selected disk to an SSD or a Non-SSD Checks to see if your host is in maintenance mode, and if not puts it into Maintenance Mode Changes the drive type Reclaims the disk As you can see, I\u0026rsquo;ve run the script on one of my hosts and it modified the drive type for me.\nYou can see my disks before I run the script. vmhba1:C0:T1:L0 is a Non-SSD.\nRun the script and it will ask you for which host we\u0026rsquo;re looking for a disk on.\nSelect the disk we\u0026rsquo;re going to convert.\nChoose the type of disk to convert it to. Disk will show up in ESXi as the selected drive type.\n[code language=powershell] ####################################################################################### ## Author: Eric Shanks ## Site: theITHollow.com ## Use Cases: Flip the Device type of a disk between SSD and Non-SSD ## Disclaimer: This Script is free for use but should be reviewed andor modified ## before use. Not responsible for problems encountered while using ## this script. Good Luck ## Current Version 1.0 ## Change Log: ## Version 1.0 - Initial Release #######################################################################################\n##connect to the vCenter Add-PSSnapin vmware.vimautomation.core\n##Ask for vCenter Address and connect $vcenter = Read-Host -Prompt \u0026ldquo;Please Enter the FQDN or IP Address of your vCenter Server\u0026rdquo; Connect-VIServer $vcenter\n#get the host and disk that you want to convert $disk = Get-VMHost | Sort-Object | Out-GridView -PassThru | get-scsilun | Sort-object | Out-GridView -PassThru $Selectedhost = $disk.VMHost.Name $state = (Get-VMHost $Selectedhost).connectionstate\n##Determine if the host is in maintenance mode and if not allow user to put it in Maintenance If ($state -eq \u0026ldquo;Connected\u0026rdquo;) { $caption1 = \u0026ldquo;Choose Action\u0026rdquo;; $message = \u0026ldquo;It is highly recommended to put the host into Maintenance Mode first. Would you like to do that now?\u0026rdquo;; $MaintYes = new-Object System.Management.Automation.Host.ChoiceDescription \u0026ldquo;\u0026amp;Yes\u0026rdquo;,\u0026ldquo;Yes\u0026rdquo;; $MaintNo = new-Object System.Management.Automation.Host.ChoiceDescription \u0026ldquo;\u0026amp;No\u0026rdquo;,\u0026ldquo;No\u0026rdquo;; $Maintchoices = [System.Management.Automation.Host.ChoiceDescription[]]($MaintYes,$MaintNo); $Maintanswer = $host.ui.PromptForChoice($caption1,$message,$Maintchoices,0)\nswitch ($Maintanswer){ 0 {\u0026ldquo;You entered Yes\u0026rdquo;; break} 1 {\u0026ldquo;You entered NO\u0026rdquo;; break} } } ##put Host into maintenance mode If ($Maintanswer -eq 0) { Set-VMHost $Selectedhost -State Maintenance }\n##Find the Canonical Name of the disk to be converted. 
$canonical = $Disk.CanonicalName\n##Get the ESXCLI commands from the selectedhost $esxcli = Get-EsxCli -VMHost $selectedHost ##Get the Storage Array Type Plugin of the disk to be converted $SATP = ($esxcli.storage.nmp.device.list() | where {$_.Device -eq $canonical }).StorageArrayType\n##Find out if we\u0026rsquo;re converting from SSD to Spinning disk, or vice-versa $caption = \u0026ldquo;Choose Action\u0026rdquo;; $message = \u0026ldquo;Make the disk look like an SSD, or a Non-SSD?\u0026rdquo;; $SSD = new-Object System.Management.Automation.Host.ChoiceDescription \u0026ldquo;\u0026amp;SSD\u0026rdquo;,\u0026ldquo;SSD\u0026rdquo;; $NonSSD = new-Object System.Management.Automation.Host.ChoiceDescription \u0026ldquo;\u0026amp;NonSSD\u0026rdquo;,\u0026ldquo;NonSSD\u0026rdquo;; $choices = [System.Management.Automation.Host.ChoiceDescription[]]($SSD,$NonSSD); $answer = $host.ui.PromptForChoice($caption,$message,$choices,0)\nswitch ($answer){ 0 {\u0026ldquo;You entered SSD\u0026rdquo;; break} 1 {\u0026ldquo;You entered NONSSD\u0026rdquo;; break} }\n##Convert the disk If ($answer -eq 0){ $esxcli.storage.nmp.satp.rule.add($null,$null,$null,$disk.CanonicalName,$null,$null,$null,\u0026ldquo;enable_ssd\u0026rdquo;,$null,$null,$satp,$null,$null,$null) } Else{ $esxcli.storage.nmp.satp.rule.remove($null,$null,$null,$disk.CanonicalName,$null,$null,\u0026ldquo;enable_ssd\u0026rdquo;,$null,$null,$satp,$null,$null,$null) } ##Reclaim the disk $esxcli.storage.core.claiming.reclaim($Disk.CanonicalName) [/code]\nSummary I hope that you find this tool useful. It has saved me a lot of time in my home lab. As always, this should be tested thoroughly in a lab environment before using in a production site. I am not a powershell guru and make no guarantees that this script will work for all environments. I would love to hear your feedback in the comments. If you have any suggestions, please add them to the comments as I would like to build on this script in the future.\n","permalink":"https://theithollow.com/2014/07/07/vmware-drive-type-changer/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/07/nicubunu_Tools.png\"\u003e\u003cimg alt=\"nicubunu_Tools\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/07/nicubunu_Tools-150x150.png\"\u003e\u003c/a\u003e There are a ton of features now that VMware has that may require either an SSD or a Non-SSD to be available in your ESXi host. Host Caching requires an SSD and Partner products like PernixData also require an SSD to be available on the host. VMware\u0026rsquo;s Virtual SAN (VSAN) currently require both an SSD and a Non-SSD to be available.\u003c/p\u003e\n\u003cp\u003eI\u0026rsquo;ve seen that many people want to try out these products in a lab environment, but don\u0026rsquo;t want to go out and buy another disk just to familiarize themselves with the product. In these cases, you can fool ESXi into thinking there is a device of the type you want. 
This can be done by using the esxcli commands on the host as \u003ca href=\"http://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=2013188\"\u003edocumented here\u003c/a\u003e on VMware\u0026rsquo;s site.\u003c/p\u003e","title":"VMware Drive Type Changer"},{"content":"After a long day of working with Microsoft\u0026rsquo;s IPAM feature, I found that it might be possible to deploy my virtual servers with a static IP Address without going to look up an IP from an Excel spreadsheet or some other log.\nOK, let\u0026rsquo;s address the elephant in the room first. I know that there is this thing called DHCP and that I can already automatically assign an IP addresss, but with that solution, my IP Address could change from time to time. Typically, I create a DHCP Scope for servers that I\u0026rsquo;m just testing out, or need some dummy VMs with IP Addresses. This way I don\u0026rsquo;t have to worry about looking up stuff before deploying a VM that I\u0026rsquo;m going to destroy again shortly afterwards. I also use DHCP for PC\u0026rsquo;s, where I almost never care about the IP Address.\nMicrosoft\u0026rsquo;s IPAM feature, along with some handy powershell will keep my IP Addresses neatly organized, and I don\u0026rsquo;t have to hunt for new IP\u0026rsquo;s before deploying VMs.\nIPAM Setup First, you should deploy Microsoft IPAM on a Server 2012 R2 Server. (YES, this must be Server 2012 R2, or higher. Server 2012 is missing the required powershell commands, it\u0026rsquo;s not my fault). If you\u0026rsquo;re looking for setup instructions for IPAM check out this previous post.\nOnce your IPAM server is setup, make sure that you create an IP Address Range, like the one below.\nYou can see that I have one range that is 10.10.50.10-10.10.50.69 which is what I\u0026rsquo;ll be using for my static addresses. I also have a DHCP Scope of 10.10.50.70- 10.10.50.100.\nDeploying a New Server The new server has a few requirements as well. First, in order to connect to the IPAM server and get an IP Address, it\u0026rsquo;s going to have to have network connectivity. I got around this \u0026ldquo;chicken or the egg\u0026rdquo; scenario by having my server initially get a DHCP address, and then submit a request for a static IP. (Maybe this is hokey, but it worked).\nSecondly, and possibly most disappointing is that the new servers need to have the IPAM tools installed on them. I found that without these tools installed, the powershell scripts won\u0026rsquo;t work because they\u0026rsquo;re missing the IPAM cmdlets. You can either install this in your virtual machine template, or install the features as part of the script we\u0026rsquo;ll use later.\nEither use the powershell script seen below, or install it from the GUI. [code language=\u0026ldquo;powershell\u0026rdquo;] Install-WindowsFeature IPAM -IncludeManagementTools [/code] Now we need the script to call. The script gets the next available free IP address, assigns it to the server, updates the IPAM server to record the entry, and then update DNS. My exact script is locted below. 
You will want to change the Server names, IP Addresses\n[code language=\u0026ldquo;powershell\u0026rdquo;]\n##Check the following items: IPAM Server Name, Start and End IP Addresses of IPAM Range, ##Name of network adapter, Subnet Prefix Length, Default Gateway, DNS Server Addresses ## ##Requires Powershell 4.0 or higher ##Create a Common Information Model connection to IPAM Server $cim = new-cimsession -ComputerName IPAM ##Find a free IP Address from the IPAM Server. Be sure to use the addresses in your range from the IPAM Server $FreeIP = Get-IpamRange -StartIPAddress 10.10.50.10 -EndIPAddress 10.10.50.69 -CimSession $cim | Find-IpamFreeAddress | select-object -expandproperty IPAddress ##add the IP Address to the IPAM Server $servername = hostname Add-IpamAddress -CimSession $cim -IpAddress $FreeIP -devicename $servername ##Remove the CIM Session (logout) Remove-CimSession -CimSession $cim ##Get the Network Adapter named \u0026#34;Ethernet\u0026#34; $adapter = Get-NetAdapter -name Ethernet ##Disable DHCP $adapter | Set-NetIPinterface -dhcp disabled ##Set New IP AddressString $adapter | New-NetIPAddress -addressfamily IPv4 -IPAddress $FreeIP -PrefixLength 24 -type Unicast -DefaultGateway 10.10.50.254 ##Set DNS Server set-dnsclientserveraddress -InterfaceAlias ethernet -ServerAddresses 10.10.50.12, 10.10.50.9 ##Register DNS Ipconfig /registerdns [/code] Create a guest customization that will call the script we created at first run. I\u0026#39;ve chosen to save the script on a file share so that if I decide to change it later, I only have to update one file. My command is to call: [code language=\u0026#34;powershell\u0026#34;] start-process powershell -ver runas \\Servernamesharenamescriptname.ps1 [/code] Results When you deploy new VMs, IPAM will automatically get new records added so that you can keep track of them all in a single location. No need to update spreadsheets. No need to worry about IP Addresses changing due to a DHCP lease expiring. I think there are plenty of other ways to do this, including automatically creating a DHCP Reservation, but it is a solution that might be used in part for other designs. Pieces of the script may be useful to cannibalize for Orchestrator or vCAC deployments to manage IP Addresses. Maybe you can use some or all of this yourself. ","permalink":"https://theithollow.com/2014/06/30/dynamically-assigned-static-ip-addresses-huh/","summary":"\u003cp\u003eAfter a long day of working with Microsoft\u0026rsquo;s IPAM feature, I found that it might be possible to deploy my virtual servers with a static IP Address without going to look up an IP from an Excel spreadsheet or some other log.\u003c/p\u003e\n\u003cp\u003eOK, let\u0026rsquo;s address the elephant in the room first.  I know that there is this thing called DHCP and that I can already automatically assign an IP addresss, but with that solution, my IP Address could change from time to time.  Typically, I create a DHCP Scope for servers that I\u0026rsquo;m just testing out, or need some dummy VMs with IP Addresses.  This way I don\u0026rsquo;t have to worry about looking up stuff before deploying a VM that I\u0026rsquo;m going to destroy again shortly afterwards.  I also use DHCP for PC\u0026rsquo;s, where I almost never care about the IP Address.\u003c/p\u003e","title":"Dynamically Assigned Static IP Addresses...Huh?"},{"content":"If you\u0026rsquo;ve been in a situation where you need to test connectivity, you\u0026rsquo;ve probably used the ping command. 
But what do you do when you\u0026rsquo;re trying to test connectivity from an ESXi host? Luckily there is a command called vmkping that will allow you to test from the host.\nThe first thing that you need to do is to SSH into your ESXi host. Turn the SSH Service on from the Configuration \u0026ndash;\u0026gt; Security Profile Tab. Then you can use your favorite ssh client and remote into your host.\nOnce you\u0026rsquo;ve logged in, you can then issue a vmkping command with an associated IP Address.\nThe vmkping will attempt to use any path it can to reach the destination. Consider the following scenario. You have two VLANs that are both routed and you want to make sure that you have connectivity between two devices that are on the same VLAN. To illustrate, look at the example below which consists of an ESXi host with two interfaces, and a storage device that is presenting NFS Shares. You run vmkping to make sure that you have connectivity between the host and the storage device on the NFS Network. The example below shows you what you THINK you are testing, and is your desired result.\nBut, what if in this example we have a failure on the NFS network somewhere and we just run the vmkping command again?\nBelow, we see that the ESXi host has a network adapter link that is down. The VMkernal Port that is normally used for NFS Traffic is unable to communicate with the storage array. But, since the networks are routed, if you issued a vmkping command on this host, it might still show that there is connectivity between the two hosts. You can see that the traffic could be sent from the vmk0 port, out onto the management network, up to the router and over to the storage array on the NFS VLAN. This is probably not what you intended, but if you are just looking for a ping reply, you might not have noticed the issue on the NFS network.\nLuckily, starting with vSphere 5.1, the vmkping command can also specify a VMkernal port to be used for testing with. Now, we can run the same command with an additional switch to specify how the pings should be sent out on the wire.\nvmkping -I vmk1 [VIP1_IP_ADDRESS_HERE] would force the pings over the vmk1 port and would fail to reach the destination, allowing you to troubleshoot the issue correctly.\nSummary Ping is a great command and is by many as the first troubleshooting step in diagnosing a network problem. VMware ESXi has a similar command to test connections from an ESXi host as well. Be sure to use it correctly and to know your environment so that you don\u0026rsquo;t falsely read the data.\n","permalink":"https://theithollow.com/2014/06/23/test-connections-esxi-vmkping/","summary":"\u003cp\u003eIf you\u0026rsquo;ve been in a situation where you need to test connectivity, you\u0026rsquo;ve probably used the ping command.  But what do you do when you\u0026rsquo;re trying to test connectivity from an ESXi host?  Luckily there is a command called vmkping that will allow you to test from the host.\u003c/p\u003e\n\u003cp\u003eThe first thing that you need to do is to SSH into your ESXi host.  Turn the SSH Service on from the Configuration \u0026ndash;\u0026gt; Security Profile Tab.  Then you can use your favorite ssh client and remote into your host.\u003c/p\u003e","title":"Test Connections from an ESXi Host Using vmkping"},{"content":"VMware just announced their 2.0 version of Log Insight last week and for a logging product, it\u0026rsquo;s pretty cool. 
Let\u0026rsquo;s face it, most of us don\u0026rsquo;t get up every morning and rush to our computer to check out the newest logging software on the market, but VMware Log Insight is still neat.\nInstallation The VMware Log Insight 2.0 product was shockingly easy to install for log management system. In my experience, logging software makes you jump through so many hoops that you need to be a Parkour Ninja to do successfully, and some of which I would consider to be a \u0026quot; Cold Butter IT Solution\u0026quot;, but not in this case. The install comes in the form of an OVA and I won\u0026rsquo;t go through that process, but it\u0026rsquo;s very simple.\nOnce, the Appliance is installed you\u0026rsquo;ll need to go to the http://DNSNAME:5480 to start the configuration. You\u0026rsquo;ll be reminded of this if you go to the console of the appliance as shown below.\nI\u0026rsquo;m not going to go through all of the deployment screens, since they are very straight forward but the steps are outlined below. Don\u0026rsquo;t worry, when you log into the appliance for the first time, there is a step by step wizard asking you to fill in this information, so you won\u0026rsquo;t go hunting around for all the areas you need to configure.\nPick a deployment type. If you\u0026rsquo;ve already deployed one manager, you can scale out your deployment right from the deployment page. In my case we\u0026rsquo;re starting a new deployment. Enter Administrator Credentials and and email address for reporting alerting. Add a License Key Decide whether or not to send anonymous usage information to VMware. Specify a Time Server Configure your SMTP Server to relay emails sent from the appliance Configuration Once you\u0026rsquo;ve finished deploying the appliance and answering some questions, you\u0026rsquo;ll need to start listing the items you\u0026rsquo;ll be collecting logs from. The first and most obvious would be vCenter. When you configure vCenter, it will also try to add any hosts that vCenter is managing and all of the log forwarding will be configured for you. Pretty slick right?\nNext, you can add vCOps if you\u0026rsquo;re using it.\nNext, if you wish to use Log Insight to monitor your Windows Servers, you can download an Agent which comes in the form of an MSI. Neatly packaged for deployment!\nIf there are other things that you\u0026rsquo;d like to monitor with Log Insight, I suggest checking out the Log Insight Management Packs from the Solution Exchange.\nLog Insight Content Packs\nLog Management Now that we\u0026rsquo;ve configured everything, just take a look at the dashboards you can create and see how it can take a variety of logs and put them into a structured table, even if some of this log data is not a structured log. Build your own dashboards, so that you can see the information most important to you. Maybe you\u0026rsquo;re having a performance problem with one of your machines and want to closely monitor certain events. You can create a dashboard with just a filtered set of events, or across many hosts.\nPricing You can purchase Log Insight 2.0 in two flavors:\nPer Operating System - $242.00 Per Physical CPU - $$1,815 I have a hunch that if you go through sales, you might be able to get discounted pricing, but this is the list price and can be bought online without going through a sales person if you wish.\nSummary Log Insight is an amazingly easy product to install and get configured. 
Not only that, it allows for quick customization of reporting to allow you to gather the metrics you want and do reporting on them, even if it\u0026rsquo;s not in a structure log format. If you want more information, please check out the Log Insight Document Center.\n","permalink":"https://theithollow.com/2014/06/16/vmware-log-insight-2-0/","summary":"\u003cp\u003eVMware just announced their 2.0 version of Log Insight last week and for a logging product, it\u0026rsquo;s pretty cool.  Let\u0026rsquo;s face it, most of us don\u0026rsquo;t get up every morning and rush to our computer to check out the newest logging software on the market, but VMware Log Insight is still neat.\u003c/p\u003e\n\u003ch1 id=\"installation\"\u003e\u003cstrong\u003eInstallation\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eThe VMware Log Insight 2.0 product was shockingly easy to install for log management system.  In my experience, logging software makes you jump through so many hoops that you need to be a Parkour Ninja to do successfully, and some of which I would consider to be a \u0026quot; \u003ca href=\"/2014/06/cold-butter-solutions/\"\u003eCold Butter IT Solution\u003c/a\u003e\u0026quot;, but not in this case.  The install comes in the form of an OVA and I won\u0026rsquo;t go through that process, but it\u0026rsquo;s very simple.\u003c/p\u003e","title":"VMware Log Insight 2.0"},{"content":" The other day I was making a grilled cheese sandwich for my son and it got me thinking. The process of making one of these delicious morsels was similar to some of the IT solutions that I had worked with in the past. Let me explain.\nFirst and foremost, lets define the objective. The goal was to have a tasty grilled cheese sandwich for lunch. This is not a difficult process and the steps are also fairly straight forward.\nButter Bread Put Cheese between bread Grill It Ok, I\u0026rsquo;ve got my task list ready to go, lets move forward with the service delivery of this cheesy bread goody. I grab the butter out of the fridge and a couple of slices of fresh bread from the pantry. I go to spread the butter on the bread and find that my relatively straight forward task list just got a bit more challenging. You see the butter is very cold so it doesn\u0026rsquo;t spread easily, and since the bread is fresh, it tears easily.\nNeedless to say the first attempt at deploying this toasted cheese was a failure and I needed to go back to the drawing board. Lets face it, if my son has asked me to provide him with Grilled Cheese as a Service (GCaaS) he doesn\u0026rsquo;t want a poorly constructed product. So, this time I take my time and take smaller chucks of butter and able to spread the butter more evenly across the bread without tearing it, but will admit that I had to take extra time and be very careful with it as I\u0026rsquo;m going through the deployment process.\nI finally finished the sandwich and my son seems satisfied with the end result so everyone is happy, but perhaps the next time I build a grilled cheese sandwich I\u0026rsquo;ll wait for the butter to warm up a little bit before I go through with the service delivery.\nCan you see how this relates to IT solutions? How many times have you tried to deploy a solution that may work, and the customer may be happy with it in the end, but you had to massage the product into production? 
It\u0026rsquo;s my belief that if there are many complicated steps that must be done in a specific order and have many dependencies, and the installers aren\u0026rsquo;t managing this process, then the product might not be fully ready to be deployed.\nA good solution should be easily deployed without a lot of manual effort from the person installing it. If this isn\u0026rsquo;t the case then the results of that product may be inconsistent and ultimately hurt the reputation of the product itself.\n","permalink":"https://theithollow.com/2014/06/09/cold-butter-solutions/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/06/20140529_172343.jpg\"\u003e\u003cimg alt=\"20140529_172343\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/06/20140529_172343-168x300.jpg\"\u003e\u003c/a\u003e The other day I was making a grilled cheese sandwich for my son and it got me thinking.  The process of making one of these delicious morsels was similar to some of the IT solutions that I had worked with in the past.  Let me explain.\u003c/p\u003e\n\u003cp\u003eFirst and foremost, lets define the objective.  The goal was to have a tasty grilled cheese sandwich for lunch.  This is not a difficult process and the steps are also fairly straight forward.\u003c/p\u003e","title":"Cold Butter IT Solutions"},{"content":" When my Cisco 3750 finally died, I was bummed out but looking for a replacement. No sense in crying about my loss, or trying to decide IF I\u0026rsquo;m going to replace my switch since, my whole lab would be kind of useless without it.\nMy requirements for a new switch were pretty simple.\nLayer 3 Routing Capabilities 24 Gigabit Ports or better Cheap The HP v1910-24G (JE006A) seemed to meet my requirements so I ordered it from Amazon when I saw that it was under $300. I needed to get it in my lab fast, so I quickly made the purchase but I\u0026rsquo;ll admit I was skeptical. Timothy Carr eased my mind a bit when he tweeted me.\nTim was right. It not only met my expectations but exceeded them.\nThe Good OK, the best thing about this switch was that it was cheap inexpensive. At less than $300 this switch was really a steal for a home lab. Try to find yourself another Layer 3 with the capabilities of this one for a better price. Obviously, I\u0026rsquo;m going to put my other two requirements as Pros for this switch as well since that\u0026rsquo;s what I felt was important.\nThis switch had several capabilities that are pretty cool for an inexpensive home lab switch such as support for Link Level Discover Protocol (LLDP), Link Aggregation Control Protocol (LACP), Link Aggregation, Quality of Service (QOS), stacking and IPv6 support. Pretty neat that I can now play with all this stuff for my vSphere environment.\nOn top of some of the cool networking you can do with this switch, some additional items are available for security. The switch supports 802.1X (Port based Network Access Control) allowing me to set rules for what kinds of machines can connect to the network. In addition we can use Authentication Authorization and Accounting (AAA) and RADIUS. Yes, the switch does have a local user database as you would expect.\nThe Bad The biggest gripe I have about this switch is the lack of command line access. This switch was meant to be configured via the built in web interface, so the default command line has a bit left to be desired. I found an unsupported solution to this that I wrote in a previous article. 
Even after the CLI modification, you\u0026rsquo;re still running a version of Comware which is sturdy, but I imagine most people are used to the Cisco or Provision syntax and not Comware OS.\nAlso, this is not a true Layer 3 switch. It supports up to 32 static routes which is great for a lab,but for a production environment you may want more than 32 or possibly some routing protocols that this switch lacks.\nHollow Points I am over the moon, happy with this switch. While this might not be your core switch at your production datacenter, it is an amazing little switch for smaller workloads, especially a lab environment.\n","permalink":"https://theithollow.com/2014/06/03/hp-v1910-24g-switch-review/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/05/HP-v1910Review-2.png\"\u003e\u003cimg alt=\"HP-v1910Review-2\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/05/HP-v1910Review-2.png\"\u003e\u003c/a\u003e When my Cisco 3750 finally died, I was bummed out but looking for a replacement.  No sense in crying about my loss, or trying to decide IF I\u0026rsquo;m going to replace my switch since, my whole lab would be kind of useless without it.\u003c/p\u003e\n\u003cp\u003eMy requirements for a new switch were pretty simple.\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eLayer 3 Routing Capabilities\u003c/li\u003e\n\u003cli\u003e24 Gigabit Ports or better\u003c/li\u003e\n\u003cli\u003eCheap\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eThe \u003ca href=\"http://www.amazon.com/gp/product/B003UL531W/ref=as_li_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B003UL531W\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\u0026amp;linkId=TX4SOQDKX64GXAQT\"\u003eHP v1910-24G (JE006A)\u003c/a\u003e seemed to meet my requirements so I ordered it from Amazon when I saw that it was under $300.  I needed to get it in my lab fast, so I quickly made the purchase but I\u0026rsquo;ll admit I was skeptical.  \u003ca href=\"http://twitter.com/timmycarr\"\u003eTimothy Carr\u003c/a\u003e eased my mind a bit when he tweeted me.\u003c/p\u003e","title":"HP v1910-24G Switch Review"},{"content":"theITHollow.com lab suffered an outage to the core switch a few weeks ago (an aging Cisco 3750) and I was looking for a replacement that wouldn\u0026rsquo;t break the bank. Luckily I found the HP v1910-24G (JE006A) to be more than adequate. One of my main gripes with this switch was that the Command Line Interface was very limited. See for yourself. While the cli out of the box is nice, and I would say necessary, there isn\u0026rsquo;t a lot that can be done with it. For basic configuration tasks, you\u0026rsquo;ll be stuck with the Web GUI. But after digging through some HP discussion boards I found out that you can enable the Comware operating system commands.\nAdd Comware CLI First things first, be sure to enable SSH or Telnet so that you can access the switch via a terminal emulator such as Putty. To do this, I used the web client. Next, use Putty to connect to the switch over SSH and login to the switch. Once you have a command line interface, run the following command:\n_cmdline-mode on You will then be asked if you want to allow all commands to be executed. Of course you do! Then you\u0026rsquo;ll be asked for a password which I found on this HP Discussion Board. 
The Password is 512900 Now that you\u0026rsquo;ve turned on the command line, you can use the normal Comware OS commands that you may be accustomed to running in order to configure things like VLANS and things.\nDISCLAIMER: Obviously you saw the warning in the previous screenshot that this should only be done with an Engineer\u0026rsquo;s guidance so I\u0026rsquo;m sure this is unsupported by HP, don\u0026rsquo;t use this in a production environment. I\u0026rsquo;m guessing this switch is used in a lot of labs which is why I\u0026rsquo;m posting the info. If you are a Cisco or Procurve Admin and aren\u0026rsquo;t familiar with the syntax, I recommend looking at this guide. I know that I did. One more thing I wanted to mention was that in Putty at least, the command line can be a bit frustrating because the backspace button on your keyboard won\u0026rsquo;t work. I don\u0026rsquo;t know about you but I mistype things all the time so this is a problem. To fix this, you can go to the Terminal \u0026ndash;\u0026gt; Keyboard section of Putty and change the Backspace Key from \u0026ldquo;Control-?(127)\u0026rdquo; to \u0026ldquo;Control-H\u0026rdquo; which should resolve that issue for you. Now you can use the HP v1910-24G CLI to play around and configure your switch. Have Fun!\n","permalink":"https://theithollow.com/2014/05/27/hp-v1910-24g-cli-goody/","summary":"\u003cp\u003etheITHollow.com lab suffered an outage to the core switch a few weeks ago (an aging Cisco 3750) and I was looking for a replacement that wouldn\u0026rsquo;t break the bank.  Luckily I found the \u003ca href=\"http://www.amazon.com/gp/product/B003UL531W/ref=as_li_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B003UL531W\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\u0026amp;linkId=6ZSLLGZA3FFQONMB\"\u003eHP v1910-24G (JE006A)\u003c/a\u003e to be more than adequate. One of my main gripes with this switch was that the Command Line Interface was very limited.  See for yourself. \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/05/PUTTY-HPv1910-0.png\"\u003e\u003cimg alt=\"PUTTY-HPv1910-0\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/05/PUTTY-HPv1910-0.png\"\u003e\u003c/a\u003e While the cli out of the box is nice, and I would say necessary, there isn\u0026rsquo;t a lot that can be done with it.  For basic configuration tasks, you\u0026rsquo;ll be stuck with the Web GUI. But after digging through some HP discussion boards I found out that you can enable the Comware operating system commands.\u003c/p\u003e","title":"HP v1910-24G CLI Goody"},{"content":"For the past year I\u0026rsquo;ve been working as a Vice President for a startup consulting company that deals with distressed assets. Think debt collections type stuff. My role involved managing projects, providing technical consultations about things like PCI-DSS, HIPAA and infrastructure design. While this position was certainly challenging in its own ways, it was time for me to make a change. I\u0026rsquo;m very excited to be joining the team at Ahead. Ahead is a consulting company in downtown Chicago that offers a variety of services for IT delivery.\nAHEAD helps clients move to an optimized IT service delivery model by accelerating their transformation from a builder of components to a broker of services. With our deep expertise in today’s compelling technologies, we ensure clients can keep pace with business demands. 
-From the ThinkAheadIT.com website.\nIf Ahead sounds familiar to you, it might be because of some of the other employees you know there. Chris Wahl, Tim Curless and Brian Suhr are all VMware Certified Design Experts (VCDX) and Steve Pantol and Chris Wahl have recently written the Networking for VMware Administrators Book.\nI am very excited to join this team as a Sr. Datacenter Engineer next week and getting back to engineering technical solutions. I can\u0026rsquo;t wait to start.\n","permalink":"https://theithollow.com/2014/05/22/time-start-thinking-ahead/","summary":"\u003cp\u003eFor the past year I\u0026rsquo;ve been working as a Vice President for a startup consulting company that deals with distressed assets.  Think debt collections type stuff.  My role involved managing projects, providing technical consultations about things like PCI-DSS, HIPAA and infrastructure design. While this position was certainly challenging in its own ways, it was time for me to make a change.  I\u0026rsquo;m very excited to be joining the team at \u003ca href=\"http://thinkaheadit.com\"\u003eAhead\u003c/a\u003e. \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/05/ahd-logo-hq.png\"\u003e\u003cimg alt=\"ahd-logo-hq\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/05/ahd-logo-hq.png\"\u003e\u003c/a\u003e Ahead is a consulting company in downtown Chicago that offers a variety of services for IT delivery.\u003c/p\u003e","title":"It\u0026#039;s time to start thinking Ahead!"},{"content":"I was recently approached to present the Keynote session for a few VMUG conferences and wanted to provide a perspective about the experience.\nPublic Speaking is clearly not one of my top 10 things I\u0026rsquo;d like to spend my time doing. Being a Systems Engineer, I don\u0026rsquo;t mind explaining things to a small group of people, but for the most part am a pretty quiet and reserved person who would prefer to stay in the shadows. Don\u0026rsquo;t get me wrong, if asked to weigh in, I have had no problem providing feedback or participate in a conversation, especially if it\u0026rsquo;s something I\u0026rsquo;m knowledgeable about, but for the most part, I\u0026rsquo;m pretty shy.\nSo when David Davis and Scott Lowe (no, not that one, the other Scott Lowe) asked me to do some of these presentations, I was excited but pretty apprehensive. Being a Chicago VMUG leader, I\u0026rsquo;ve spoken at local VMUG events before, but usually around a topic or design that I\u0026rsquo;ve been implementing in the field, not a bigger picture type discussion about the future direction of things such as this keynote.\nTo be honest, I think I would have preferred to stay in the shadows and not put myself out there in front of 300 or 600 people but thought back to a post I read from Duncan Epping about \u0026quot; Confessions of a VMUG speaker\u0026quot; and how you need to force yourself into uncomfortable situations in order to grow. I think that remembering this post is what really made me accept the challenge, even though it\u0026rsquo;s a scary thing to be a public speaker. Amy Lewis seems to share this sentiment in a recent Geek Whisperers podcast as well, since she mentions being more of an extrovert than she would prefer to be, because if you\u0026rsquo;re not a little uncomfortable, \u0026ldquo;You\u0026rsquo;re doing it wrong\u0026rdquo;.\nWhat I found out from this experience was almost the same as I found out about starting a blog, or being a VMUG leader. 
Giving a presentation about something, forces you to learn it better. You have to be prepared for questions, you don\u0026rsquo;t want to give people wrong information and as a side bonus, I get to practice some presenting skills. Every time you do a presentation, it gets a little easier to get back up there and present again. You\u0026rsquo;re uncomfortable with something until you do it enough to be comfortable with it.\nHaving the opportunity to do these presentations was a really great thing for myself, professionally. I met some great people that I otherwise wouldn\u0026rsquo;t have, and got to develop some skills that I don\u0026rsquo;t use as much. On top of that, I researched a topic that I knew about, but now I know it much better than I did. Every time I do one of these presentations I agonize over it and stress out for a few days while I\u0026rsquo;m preparing, but when I\u0026rsquo;m actually giving the speech, I feel right at home.\nI hope that by sharing these experiences with my readers, that I inspire at least one person to go and do something similar. It really is a rewarding experience both personally and professionally.\nAlso, thanks to the VMUG crews for putting on such great conferences.\n","permalink":"https://theithollow.com/2014/05/19/comfort-zone/","summary":"\u003cp\u003eI was recently approached to present the Keynote session for a few VMUG conferences and wanted to provide a perspective about the experience.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/05/ConnecticutVMUG-KeynoteMay2014-4.jpg\"\u003e\u003cimg alt=\"ConnecticutVMUG-KeynoteMay2014-4\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/05/ConnecticutVMUG-KeynoteMay2014-4-1024x317.jpg\"\u003e\u003c/a\u003e Public Speaking is clearly not one of my top 10 things I\u0026rsquo;d like to spend my time doing.  Being a Systems Engineer, I don\u0026rsquo;t mind explaining things to a small group of people, but for the most part am a pretty quiet and reserved person who would prefer to stay in the shadows.  Don\u0026rsquo;t get me wrong, if asked to weigh in, I have had no problem providing feedback or participate in a conversation, especially if it\u0026rsquo;s something I\u0026rsquo;m knowledgeable about, but for the most part, I\u0026rsquo;m pretty shy.\u003c/p\u003e","title":"Out of your Comfort Zone"},{"content":"A secured, remote connection to your data is a requirement for almost all network designs these days. Mobility, telecommuting and late night help desk calls have created an environment that needs to have access to the local network in a secure fashion. vCNS Edge can provide these services to your virtual infrastructure.\nIn previous posts, I’ve walked through installing vCNS Manager and installing vCNS Edge appliances. These are prerequisites to setting up SSL VPN on the VMware vCloud Network and Security appliance..\nvCNS Edge Logical Setup For the lab setup, there is a remote user machine, the vShield Edge Appliance, and the internal network that we\u0026rsquo;ll VPN into. We also create a pool of IP Address to be used by the VPN devices.\nvCNS Edge SSL VPN Configuration\nGo to your vCNS Edge appliance and navigate to the VPN Tab. From there click on the \u0026ldquo;SSL VPN-Plus\u0026rdquo; link.\nUnder the Configure menu, go to Server Settings. Click the Change Button to modify the settings. Select the external IP address of the appliance that you wish to use as the VPN endpoint. 
This doesn\u0026rsquo;t necessarily mean that this is the default external IP address of the gateway. You are allowed to change the port number if you wish, but the default is 443. You are also allowed to change the cipher list to something more secure. Since this is a lab, I\u0026rsquo;ve used RC4 and MD5. For production uses, it may be advisable to use AES and SHA. Lastly, you can use the default certificate or use a Trusted Certificate Authority. The latter would be recommended for a production environment.\nNext go to the IP Pools. Click the \u0026ldquo;+\u0026rdquo; sign to add a new Pool.\nAdd a new IP Pool. Enter a range of IP addresses that will be handed out to your VPN Clients. Enter a subnet mask and a default gateway address as well. The IP Range used should not be a range that is already on your vCNS appliance. Also, be sure to enter a subnet mask and a gateway. The gateway IP Address will be assigned as a virtual IP Address on the vCNS Edge appliance.\nLastly, it would be advisable to add a DNS Server and DNS Suffix here as well.\nNext, you will go to the \u0026ldquo;Private Networks\u0026rdquo; section to configure what networks will be allowed to connect to through the SSL VPN. As expected, click the \u0026ldquo;+\u0026rdquo; button to add a private network.\nHere, we add the internal network and the subnet mask. You\u0026rsquo;ll have the option to have the traffic encrypted or not by select the \u0026ldquo;Over Tunnel\u0026rdquo; option. The \u0026ldquo;Enable TCP Optimization\u0026rdquo; checkbox will enhance the performance of your TCP Packets. You can restrict the network access to specific port numbers by entering them into the \u0026ldquo;Ports\u0026rdquo; checkbox. If you don\u0026rsquo;t enter any, all ports are assumed.\nNext, we turn to authentication. Add an authentication method to your list of configurable items.\nYou can setup Active Directory, LDAP, local, Radius, etc. Again, since this is a lab, I\u0026rsquo;ve used a local authentication mechanism. If this is a production environment, AD or RADIUS might be more appropriate.\nNow we need to add an installation package.\nEnter a name and a description for the package. The more descriptive the better! The gateway IP Address should be the address of the outside IP address of the vCNS Edge appliance that you configured earlier. It should be noted that the IP address could be a public IP or public DNS address that has been NAT\u0026rsquo;d through another firewall on the edge of your physical network.\nThis package can be enabled for Windows, Linux, and Mac depending on your needs.\nYou will find additional parameters depending on what your organization is looking for. For instance, you want the VPN to fire up each time the machine starts up.\nMy last step is to add a local VPN user. If you\u0026rsquo;ve used AD or RADIUS, this may not be a necessary step.\nFirewall Nothing needs to be configured here, but I wanted to show that a firewall rule is automatically added to the vCNS Edge appliance.\nInstall the VPN Client Open a web browser and go to the outside IP address of the vCNS edge appliance that you configured. You should be presented with a logon screen.\nOnce you login, you should see the ability to download the client. The client will install with the configured connection information. 
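If the logon screen never comes up, it can be worth confirming that the Edge is actually listening on the port you picked back in Server Settings before digging through firewall rules. From a newer Windows client (8.1 / Server 2012 R2 and up) something like the following works; this is just a sketch, and 10.10.10.1 is a made-up external address you would swap for your own:
Test-NetConnection -ComputerName 10.10.10.1 -Port 443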
You may also see more than one client available if you\u0026rsquo;ve created multiple connections.\nThe installation process will start.\nConnect to the VPN Launch the VPN Client, or if it was just installed, it should open immediately after install. Select the profile and Login.\nOnce you enter a valid username and password, you should see the following message box and you\u0026rsquo;ll know you\u0026rsquo;ve been connected.\nI ran an ipconfig command and it shows that I\u0026rsquo;ve got a 10.10.55.1 address as I should from the VPN Pool, and I can connect to the private network.\nSummary VPN is so important these days that any sort of edge appliance must have this capability. VMware has provided it in their virtual networking suite as well.\n","permalink":"https://theithollow.com/2014/05/13/vcns-edge-ssl-vpn/","summary":"\u003cp\u003eA secured, remote connection to your data is a requirement for almost all network designs these days.  Mobility, telecommuting and late night help desk calls have created an environment that needs to have access to the local network in a secure fashion.  vCNS Edge can provide these services to your virtual infrastructure.\u003c/p\u003e\n\u003cp\u003eIn previous posts, I’ve walked through \u003ca href=\"/2014/03/getting-started-vcns/\"\u003einstalling vCNS Manager\u003c/a\u003e and \u003ca href=\"http://wp.me/p32uaN-Tb\"\u003einstalling vCNS Edge appliances\u003c/a\u003e.  These are prerequisites to setting up SSL VPN on the VMware vCloud Network and Security appliance..\u003c/p\u003e","title":"vCNS Edge SSL VPN"},{"content":"I recently purchased the HP 9470m EliteBook and wanted to give it a quick review.\nThe Good The laptop has a pretty slim design as you would expect from an EliteBook. Be aware however that this is not as slim as a Mac b ook Air, or the Samsung Series 9 laptops. The good news though is that you don\u0026rsquo;t need to use a dongle just to plug in an Ethernet cable. The same goes for having a VGA output which I often use for presentations. It can be a pain to hunt down a dongle to connect to a wired network, or a projector so I give this Elitebook points for that.\nI also find it useful that even though this is a thin laptop, it can be used in conjunction with a docking station. What a pain to plug cables in every time you get to your office! What are we, Animals?\nThe version I purchased also has a 256 GB LiteOn SSD Drive. This drive coupled with 16GBs of memory and a Core i7 processor makes this laptop very capable, performance wise. The performance factors were the main reason I purchased this laptop.\nThere is a fingerprint reader as well as an HD Camera (720p) built into the EliteBook which are a couple of nice features, as well as having three USB 3 ports and built-in Bluetooth. It\u0026rsquo;s nice to be able to take a pair of Bluetooth headphones or a mouse and connect them right to the laptop without having to have a dongle around.\nOne of the things I liked best was the lid and bottom of the laptop are covered in soft-touch paint. This makes the laptop easy to grip, and reminds me of the silicon cases used for cell phones. It will make the laptop easier to hold than brushed aluminum which is the face of the keyboard.\nThe Bad Unfortunately, this EliteBook does have quite a few drawbacks.\nThe first and most troubling is the screen resolution. The laptop I purchased only has a 1366 X 768 maximum resolution which I didn\u0026rsquo;t think would be a big deal, but I was wrong. 
There isn\u0026rsquo;t much real estate available with this screen resolution. You can purchase a version of this EliteBook with 1600 X 900 but good luck getting one that is 1080p.\nThe laptop isn\u0026rsquo;t exactly loud, but it certainly isn\u0026rsquo;t quiet either. I guess with some horse power, also comes some fan requirements. You may notice the fan running quite a bit\nThere are no HDMI outputs on the device. I\u0026rsquo;m not sure why this is, due to how common these ports are now, and the fact that it is a pretty small connection. If there was room for a VGA output, one would think there would be room for an HDMI output.\nLastly, this laptop just doesn\u0026rsquo;t look as sexy as a Samsung Series 9 or a MacBook Pro. This laptop doesn\u0026rsquo;t look bad or anything, but it\u0026rsquo;s just not going to wow anyone from the appearances alone. It seems very much business like which is fairly typical for an HP laptop.\nOther I won\u0026rsquo;t say this is good or bad, but this model doesn\u0026rsquo;t come with a touchscreen. In my opinion a touchscreen is a waste of time for a laptop anyway. I\u0026rsquo;d rather use a keyboard and mouse with my laptop and will leave the touchscreen for my tablet. This is my own opinion, others may disagree with me but I\u0026rsquo;ll put it out there.\nEven though this laptop is thicker than a Macbook Pro, there is still no optical drive. Again, this may not be a big deal in this day and age. Try to remember the last time you used your DVD-Rom on your laptop and make the decision for yourself.\nHollowPoints All in all, this is a pretty good laptop if you can get past the resolution handcuffs and don\u0026rsquo;t need a totally silent machine. I think that the screen resolution is going to be a deal breaker for most people.\nOverall I\u0026rsquo;d give this laptop 3.5 stars out of 5.\n","permalink":"https://theithollow.com/2014/05/05/hp-9470m-laptop-review/","summary":"\u003cp\u003eI recently purchased the \u003ca href=\"http://www.amazon.com/gp/product/B00BNRKWMU/ref=as_li_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B00BNRKWMU\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\u0026amp;linkId=RBROHIQHKMJ2FELT%22%3E%3Cimg%20border=%220%22\"\u003eHP 9470m EliteBook\u003c/a\u003e and wanted to give it a quick review.\u003c/p\u003e\n\u003ch1 id=\"the-good\"\u003e\u003cstrong\u003eThe Good\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eThe laptop has a pretty slim design as you would expect from an EliteBook.  Be aware however that this is not as slim as a Mac \u003cstrong\u003eb\u003c/strong\u003e ook Air, or the \u003ca href=\"http://www.amazon.com/gp/product/B0098O6JSQ/ref=as_li_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B0098O6JSQ\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\u0026amp;linkId=PWLZWNK7OT5CJF77\"\u003eSamsung Series 9\u003c/a\u003e laptops.  The good news though is that you don\u0026rsquo;t need to use a dongle just to plug in an Ethernet cable.  The same goes for having a VGA output which I often use for presentations.  It can be a pain to hunt down a dongle to connect to a wired network, or a projector so I give this Elitebook points for that.\u003c/p\u003e","title":"HP 9470m Laptop Review"},{"content":"\nMicrosoft Dynamic Access Control is a new way to deploy access rules to your file shares. 
For many moons now, System Administrators have had a tedious task of managing tens, hundreds, or thousands of security groups to control how files are accessed.\nGroups of users have always needed to maintain different sets of security rules to prevent people from accessing confidential files. Human Resources obviously doesn\u0026rsquo;t want people outside their department to have access to personnel files, separate office locations may not want to share data with other offices in the same domain, and countries or cities might have different restrictions about sharing files with each other.\nTypically administrators would add users to many security groups to manage the permissions but was a tedious task prone to errors and even worse, what would happen if someone moved or changed roles. Usually this required removing and then adding different roles, often leading to mistakes. Dynamic Access Control is a method that uses properties to determine access instead of just security groups.\nDynamic Access Control (DAC) Prerequisites Before you get too far into configuring DAC, you should make sure the prerequisites are available:\nA Windows Server 2012 File Server At least one Windows Server 2012 domain controller accessible by the Windows client in the user\u0026rsquo;s domain A Windows 8 client (only needed if using device claims) As well as the software requirements that are needed, there are also a few policies that should created before the guts of the configuration is done.\nKerberos Authentication needs to be enabled on the domain controllers. Yes, I would think this was enabled by default, but it isn\u0026rsquo;t, so turn it on!\nCreate the GPO, attach it to the Domain Controllers OU and in the settings, look for \u0026ldquo;Computer ConfigurationPoliciesAdministrative TemplatesSystemKDC\u0026rdquo;\nThe setting should be \u0026ldquo;KDC support for claims, compound authentication and Kerberos armoring\u0026rdquo;; change this setting to enabled and supported.\nNext, On the Windows Server 2012 File Server, be sure to install the File Server Resource Management (FSRM) feature either by going through the Server Manager and adding the role, or using powershell by running the following command:\nInstall-WindowsFeature –Name FS-Resource-Manager –IncludeManagementTools Once the FSRM feature is installed, you should be ready to start claims which are discussed in part 2.\nDynamic Access Control Series Initial Configuration Steps for Microsoft Dynamic Access Control- Part 1 Claims – Part 2 Resource Properties – Part 3 Access Rules and Policies Part 4\n","permalink":"https://theithollow.com/2014/04/28/microsoft-dynamic-access-control-part-1/","summary":"\u003cp\u003e\u003cimg alt=\"Locked\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/01/Locked-300x242.png\"\u003e\u003c/p\u003e\n\u003cp\u003eMicrosoft Dynamic Access Control is a new way to deploy access rules to your file shares.  For many moons now, System Administrators have had a tedious task of managing tens, hundreds, or thousands of security groups to control how files are accessed.\u003c/p\u003e\n\u003cp\u003eGroups of users have always needed to maintain different sets of security rules to prevent people from accessing confidential files.  
Human Resources obviously doesn\u0026rsquo;t want people outside their department to have access to personnel files, separate office locations may not want to share data with other offices in the same domain, and countries or cities might have different restrictions about sharing files with each other.\u003c/p\u003e","title":"Microsoft Dynamic Access Control (Part 1)"},{"content":"In part 1 of the series we covered some generalities about Microsoft Dynamic Access Control and a few steps needed to prepare the domain and file servers. Now let\u0026rsquo;s look at creating claims.\nA claim is a user, device or resource property. A user in Active Directory will have properties such as Location, Department, manager, etc. Each of these properties is a claim but for any actions to be utilized by Direct Access, they have to be defined.\nThe example that I use below is creating a claim and then using that claim to assign permissions to a folder.\nCreate a User Claim Setting up claims is done from the Active Directory Administrative Center (ADAC). Click the Dynamic Access Control (DAC) tile and you\u0026rsquo;ll notice a series of steps. Create a claim type is the first step. Click the hyperlink and you\u0026rsquo;ll be taken to the correct window, or use the Dynamic Access Control link on the menu and choose claim type.\nThis example uses the Active Directory property \u0026ldquo;department\u0026rdquo;. In the list of attributes typing in department will help to filter the available properties. Give the claim type a name, such as \u0026ldquo;department\u0026rdquo;, on the right side, and select what kind of claim it will be. In our case this is a \u0026ldquo;User\u0026rdquo; claim.\nNOTE: Device claims require Windows 8 devices (or higher) to work properly.\nAt the bottom of this window, add some suggested values. For Example in my lab, there are a few departments including IT, Communications, Human Resources, etc.\nAdd a claim\u0026rsquo;s suggested Values\nWhen the claim has been added, the ADAC will show a display name, ID, Source Type, etc as a claim. Multiple claims can be created at any time, or modified.\nSecuring a folder by Using a Claim Now that we have a claim created, lets modify the security of a folder to allow access to anyone in the IT Department.\nMy folder security on \u0026ldquo;ClassifiedFiles\u0026rdquo; looks like the below screenshot. Notice that \u0026ldquo;Authenticated Users\u0026rdquo; does not have read permissions on the folder.\nTo allow access to anyone in the IT department we\u0026rsquo;ll select the Advanced button.\nHere in the Advanced Security Settings, we can add a new permission entry.\nClick the link to add a principal. In our case we added \u0026ldquo;Authenticated Users\u0026rdquo;. This will mean that any users that are authenticated to the domain and are in the IT Department will have access.\nChange the type to \u0026ldquo;Allow\u0026rdquo; and set the permission. I\u0026rsquo;ve added Read and Execute permissions.\nThe part of this screen that deals with the claim we setup earlier is listed at the bottom. Click the \u0026ldquo;Add a condition\u0026rdquo; link to add our claim. Here we\u0026rsquo;ve added the information for the User Claim, department, and which value we want to grant access. Here I\u0026rsquo;ve added IT.\nNOTE: It is possible to add multiple conditions to limit user access. If you have two claims and want both of them to be true, you can add a second condition with an \u0026lsquo;AND\u0026rsquo; operator. 
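In the permission entry dialog, a two-condition entry ends up reading roughly like the sketch below. Note that "Office" here is a hypothetical second user claim type that you would have to define the same way we defined "department":
User | department | Equals | Value | IT
And
User | Office | Equals | Value | Chicago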
An example might be only allowing the IT department in the Chicago Office to access these file, but limit access to the IT department in San Francisco.\nWhen you\u0026rsquo;re done an additional permission entry will be added to the advanced security settings for the folder. Shown below.\nTesting Access If it hasn\u0026rsquo;t been tested, it doesn\u0026rsquo;t work. So here is my test. I used two employees, one in the IT department and the other not in the IT department and tried to access my classified folder.\nWhen I log as Mr. Gretzky, he does not have access to the folder and receives an access denied message. We can also verify this user\u0026rsquo;s claims by utilizing the whoami /claims command form a powershell window. I\u0026rsquo;ve included this in the screenshot to prove out our example.\nI then login as Mr. Kane (Go Hawks!!!) and have access to the folder as expected.\nClaims are an important thing to know for securing files going forward because they can significantly reduce the number of groups needed to be created.\nIn future posts, we\u0026rsquo;ll see how Resource Properties can also be added to further reduce manual tasks.\nDynamic Access Control Series Initial Configuration Steps for Microsoft Dynamic Access Control- Part 1 Claims – Part 2 Resource Properties – Part 3 Access Rules and Policies Part 4 File Server Resource Manager Auto Classification - Part 5\n","permalink":"https://theithollow.com/2014/04/28/microsoft-dynamic-access-control-part-2-claims/","summary":"\u003cp\u003eIn \u003ca href=\"http://wp.me/p32uaN-NX\"\u003epart 1 of the series\u003c/a\u003e we covered some generalities about Microsoft Dynamic Access Control and a few steps needed to prepare the domain and file servers.  Now let\u0026rsquo;s look at creating claims.\u003c/p\u003e\n\u003cp\u003eA claim is a user, device or resource property.  A user in Active Directory will have properties such as Location, Department, manager, etc.  Each of these properties is a claim but for any actions to be utilized by Direct Access, they have to be defined.\u003c/p\u003e","title":"Microsoft Dynamic Access Control (Part 2 - Claims)"},{"content":"So far we\u0026rsquo;ve covered:\nInitial Setup of Dynamic Access Control Claims In this post we\u0026rsquo;ll look at Resource Properties.\nResource Properties A resource property is a claim that describes the characteristics of an object in the file system. A claim is a descriptor of a user or a device whereas a resource property is a characteristic of a file or folder.\nAs an example, we have a folder with HIPPA related information in it. A description can be added to this folder to indicate that it has Protected Health Information (PHI) contained in that folder. This PHI description is a resource property.\nEnable and Create Resource Properties There are lots of properties that Windows has available to use right out of the box on a File Server with FSRM installed, but they aren\u0026rsquo;t available to be used immediately. They have to be enabled first. To do so, we go back into the Active Directory Administration Center (ADAC) and go to the resource properties settings for dynamic access control.\nHere you\u0026rsquo;ll see that there are plenty of properties already available. Select the properties you\u0026rsquo;d like to use and click \u0026ldquo;Enable\u0026rdquo;. 
In the example below, I\u0026rsquo;ve chosen to use \u0026ldquo;Compliancy\u0026rdquo; for no specific reason.\nYou may also find that your organization has different requirements for resources and wants to add some custom properties. No problem there, just click the New link on the right side and add your own properties.\nHere I\u0026rsquo;ve created a rule called \u0026ldquo;Hollow-Classified\u0026rdquo; and made it a single-valued Choice. I added two suggested values of NotSecret and UberSecret.\nAdd Resource Properties to FoldersFiles Now we need to go to our folder that we\u0026rsquo;re going to add a property on. Open the properties of the select folder and look for the Classification tab. Here you will see the resource properties that are enabled, and you can assign them a value based on the characteristics of the folder.\nIn the below example, I\u0026rsquo;ve left the \u0026ldquo;Compliancy\u0026rdquo; property blank but changed the \u0026ldquo;Hollow-Classified\u0026rdquo; property to \u0026ldquo;UberSecret\u0026rdquo;. You may set multiple properties for the same folder. NOTE: If you open the folder properties and the characteristic tab is empty, either wait for the classifications to update, or run the powershell code below to force this update to happen.\nUpdate-FSRMClassificationPropertyDefinition Assign Permissions using Resource Properties Now that a folder has a property on it, we can use those properties to assign permissions.\nSimilarly to how we added claims permissions to a folder, we open the Advanced Properties of the security tab of the folder.\nMy test user is named \u0026ldquo;Patrick Sharp\u0026rdquo; so I\u0026rsquo;ve selected him as the principal, granted him the Read and Execute permissions on the folder and then added a condition to the bottom of the entry. I chose Resource, selected the \u0026ldquo;Hollow-Classified\u0026rdquo; property where the value was \u0026ldquo;UberSecret\u0026rdquo;.\nThis means that anytime Patrick Sharp tries to access a file in this folder that has the UberSecret Classification he will be allowed access (assuming there isn\u0026rsquo;t another Deny permission added on the filefolder negating this configuration).\nOnce the setting is added, the permissions will reflect the new access.\nLastly, I log in as Patrick Sharp and test the permissions. I find that I do have access to this folder as long as it\u0026rsquo;s classified as \u0026ldquo;UberSecret\u0026rdquo;.\nSummary Resource Properties are much like user or device claims, except they are used on a file or folder. 
These might not seem incredibly useful to you yet, but we\u0026rsquo;ll see in future posts that claims and resource properties can be combined to be very powerful in managing document security.\nDynamic Access Control Series Initial Configuration Steps for Microsoft Dynamic Access Control- Part 1 Claims – Part 2 Resource Properties – Part 3 Access Rules and Policies Part 4 File Server Resource Manager Auto Classification - Part 5\n","permalink":"https://theithollow.com/2014/04/28/microsoft-dynamic-access-control-part-3-resource-properties/","summary":"\u003cp\u003eSo far we\u0026rsquo;ve covered:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003ca href=\"http://wp.me/p32uaN-NX\"\u003eInitial Setup of Dynamic Access Control\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003ca href=\"http://wp.me/p32uaN-O2\"\u003eClaims\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eIn this post we\u0026rsquo;ll look at Resource Properties.\u003c/p\u003e\n\u003ch1 id=\"resource-properties\"\u003eResource Properties\u003c/h1\u003e\n\u003cp\u003eA resource property is a claim that describes the characteristics of an object in the file system.  A claim is a descriptor of a user or a device whereas a resource property is a characteristic of a file or folder.\u003c/p\u003e\n\u003cp\u003eAs an example, we have a folder with HIPPA related information in it.  A description can be added to this folder to indicate that it has Protected Health Information (PHI) contained in that folder.  This PHI description is a resource property.\u003c/p\u003e","title":"Microsoft Dynamic Access Control (Part 3 – Resource Properties)"},{"content":"We\u0026rsquo;ve discussed Initial configuration steps, Claims, and Resource Properties and we\u0026rsquo;re starting to see the power of Microsoft\u0026rsquo;s Dynamic Access Control, but we need a better way to manage these and that\u0026rsquo;s why we\u0026rsquo;ve come to \u0026ldquo;Rules and Policies\u0026rdquo;.\nA Central Access Rule can be used to take claims such as users in a department and match them up with permissions on a filefolder with specific resource properties. This is where the real power comes into play because now we don\u0026rsquo;t have to go through and map these for each individual file. We\u0026rsquo;re setting a general policy for the entire organization all at once.\nCreate a Central Access Rule We again go into the Active Directory Administration Center and this time add a new Central Access Rule.\nI\u0026rsquo;ve given my rule a very descriptive name like \u0026ldquo;CentralAccessRule01\u0026rdquo;.\nWe map target resources. Here we can use the resource property that we created in part 3 of this series where we label some resources as \u0026ldquo;UberSecret\u0026rdquo;.\nThis screen shows how we added the target resource.\nLastly, we configure the permissions so that anyone in the \u0026ldquo;Goalies\u0026rdquo; security group has read and execute permissions as long as their department also equals IT.\nCreate Access Policy The Central Access Rule has been created but it isn\u0026rsquo;t available to be deployed anywhere yet. To get the rule ready to be deployed we need a Central Access Policy.\nAgain, from the ADAC we now create a new Central Access Policy and give it a name.\nGive the Policy a name and then add a central access rule that you\u0026rsquo;ve already created to this new Central Access Policy. 
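If you prefer scripting to clicking, the ActiveDirectory PowerShell module has matching cmdlets for all of this. Here is a rough, untested sketch using my lab's rule name (the policy name and the resource condition string are approximations; the easiest way to get the exact condition syntax is to build the rule once in ADAC and copy what it generates):
# Create a rule that targets resources classified as UberSecret (condition syntax approximate)
New-ADCentralAccessRule -Name "CentralAccessRule01" -ResourceCondition '(@RESOURCE.Hollow_Classified == "UberSecret")'
# Create a policy and add the rule to it
New-ADCentralAccessPolicy -Name "CentralAccessPolicy01"
Add-ADCentralAccessPolicyMember -Identity "CentralAccessPolicy01" -Members "CentralAccessRule01"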
Notice that a Central Access Policy may contain one to many different central access rules.\nChoose the central access rule created earlier and move it to the right side using the double arrows.\nRules have been created, and added to a policy. Now that we\u0026rsquo;re on the subject of policies, we can now add this Central Access Policy through Group Policy. (I know, a lot of policies right?)\nIn your Group Policy Management Editor, create a new GPO or modify an existing GPO. This Group Policy should be placed on an Organizational Unit that houses your File Servers.\nNavigate to: Computer ConfigurationPoliciesWindows SettingsSecurity SettingsFile SystemCentral Access Policy.\nHere you will right click and choose new.\nAny Central Access Policies that you\u0026rsquo;ve created will now be available for you to add to the GPO.\nPolicies are in effect on the domain. One more step.\nAssign Policy to Folder\nCentral access policies will be available for use on any folders where the server is bound by the GPO created earlier. If you look at the security properties on one of your file server you should now see a new tab for Central Policy. (If you don\u0026rsquo;t see this tab, try running a GPUpdate /force from a command line and try again.)\nFrom this screen, you only need to select the policy that matches your goals and all of the configuration is now done. Any resources in that folder that have resource properties should have the proper permissions set.\nNOTE: Be sure that the File System Permissions still allow access to the users. File System permissions are checked first, and then Central Access Policies are checked second. If a user is missing file system permissions, they don\u0026rsquo;t have access.\nResults I ran a quick test with a new user named Corey Crawford. He is a member of the \u0026ldquo;Goalies\u0026rdquo; security group, and he is also in the IT Department. (What like you can\u0026rsquo;t be a goalie and an IT guy at the same time)\nAs you can see from his whoami info and bginfo, he does have access to my folder.\nWe can also look at the permissions from the folder itself. Look in the Effective Access Tab and we can see the permissions Mr. Crawford is granted.\nAlternatively, we see that Mr. Kane is not a member of the Goalies Security Group so therefore he does not have access.\nDynamic Access Control Series Initial Configuration Steps for Microsoft Dynamic Access Control- Part 1 Claims – Part 2 Resource Properties – Part 3 Access Rules and Policies Part 4 File Server Resource Manager Auto Classification - Part 5\n","permalink":"https://theithollow.com/2014/04/28/microsoft-dynamic-access-control-part-4-rules-policies/","summary":"\u003cp\u003eWe\u0026rsquo;ve discussed \u003ca href=\"http://wp.me/p32uaN-NX\"\u003eInitial configuration steps\u003c/a\u003e, \u003ca href=\"http://wp.me/p32uaN-O2\"\u003eClaims\u003c/a\u003e, and \u003ca href=\"http://wp.me/p32uaN-Oi\"\u003eResource Properties\u003c/a\u003e and we\u0026rsquo;re starting to see the power of Microsoft\u0026rsquo;s Dynamic Access Control, but we need a better way to manage these and that\u0026rsquo;s why we\u0026rsquo;ve come to \u0026ldquo;Rules and Policies\u0026rdquo;.\u003c/p\u003e\n\u003cp\u003eA Central Access Rule can be used to take claims such as users in a department and match them up with permissions on a filefolder with specific resource properties.  This is where the real power comes into play because now we don\u0026rsquo;t have to go through and map these for each individual file.  
We\u0026rsquo;re setting a general policy for the entire organization all at once.\u003c/p\u003e","title":"Microsoft Dynamic Access Control (Part 4 – Rules and Policies)"},{"content":"In the first four parts of the Dynamic Access Control Series we covered Initial Configurations, Claims, Resource Properties and Rules Policies. These are working great in our environment but we still have to go through and manage the classification tags. Wouldn\u0026rsquo;t it be easier to have some files automatically tagged with a certain resource classification?\nEnter File Server Resource Manager to the rescue!\nClassification Rules From within File Server Resource Manager (FSRM) go to Classification Rules and choose to \u0026ldquo;Create Classification Rule\u0026hellip;\u0026rdquo;\nAs usual, give the rule a name and a description.\nSelect what kind of files or folders the rule will be run on. In my example we\u0026rsquo;re only looking at User files. I\u0026rsquo;ve also limited the classification rule to run on the \u0026ldquo;ClassifiedFiles\u0026rdquo; folder, but you could select entire drives if you\u0026rsquo;d prefer.\nChoose a classification method. In my example I\u0026rsquo;ve used a content classifier, which looks at the actual data inside of a file, but you could also use a powershell script or folder classifier.\nIn the properties, I\u0026rsquo;ve selected the Hollow-Classified resource property that we created in part 3 of this series.\nThen we configure the paramaters. This is the logic behind the classification. In my example, I\u0026rsquo;m looking for any files that have the string \u0026ldquo;Private\u0026rdquo; in them two times. In a corporate file store this might not work, but a suitable expression could be found to fit for almost any situation.\nThe last step of the configuration is to set an evaluation type. This is a way to handle any files who already have a classification. What should happen to those files? Should you overwrite their classification, add to their classifications or do nothing?\nOnce the classification rule is configured, you can either setup a schedule, or run the classification process any time from the FSRM console.\nRun the Classification Rules Here I\u0026rsquo;ve created a file with the word \u0026ldquo;Private\u0026rdquo; in it three times. This file should get reclassified as UberSecret.\nClassification process runs and spits out a report. It looks like one file was affected. I look at the test .txt file that we used and it has a classification listed now. Summary Microsoft Dynamic Access Control has many moving parts that can all be used in concert to ease the burden of managing files and folders. It is well worth the initial setup time to eliminate constant updates to file permissions that come with day to day IT routines. 
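As a final aside, the classification rule above can also be created and kicked off from PowerShell on Server 2012 R2 and later, which is handy if you are standing up a lot of file servers. This is a rough sketch using my lab's property name and a made-up folder path; I built mine in the GUI, so treat the parameter values as a starting point rather than gospel:
# Content classifier: tag files containing the string "Private" with the UberSecret value
New-FsrmClassificationRule -Name "Hollow-Private" -Property "Hollow-Classified" -PropertyValue "UberSecret" -Namespace @("D:\ClassifiedFiles") -ClassificationMechanism "Content Classifier" -ContentString @("Private")
# Run classification now instead of waiting for the schedule
Start-FsrmClassification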
Plan it out, and use the automation and this could be a wonderful set of tools.\nMicrosoft Dynamic Access Control Series Initial Configuration Steps for Microsoft Dynamic Access Control- Part 1 Claims – Part 2 Resource Properties – Part 3 Access Rules and Policies Part 4 File Server Resource Manager Auto Classification - Part 5\n","permalink":"https://theithollow.com/2014/04/28/microsoft-dynamic-access-control-part-5-auto-classification/","summary":"\u003cp\u003eIn the first four parts of the Dynamic Access Control Series we covered \u003ca href=\"http://wp.me/p32uaN-NX\"\u003eInitial Configurations\u003c/a\u003e, \u003ca href=\"http://wp.me/p32uaN-O2\"\u003eClaims\u003c/a\u003e, \u003ca href=\"http://wp.me/p32uaN-Oi\"\u003eResource Properties\u003c/a\u003e and \u003ca href=\"http://wp.me/p32uaN-Ox\"\u003eRules Policies\u003c/a\u003e.  These are working great in our environment but we still have to go through and manage the classification tags.  Wouldn\u0026rsquo;t it be easier to have some files automatically tagged with a certain resource classification?\u003c/p\u003e\n\u003cp\u003eEnter File Server Resource Manager to the rescue!\u003c/p\u003e\n\u003ch1 id=\"classification-rules\"\u003eClassification Rules\u003c/h1\u003e\n\u003cp\u003eFrom within File Server Resource Manager (FSRM) go to Classification Rules and choose to \u0026ldquo;Create Classification Rule\u0026hellip;\u0026rdquo;\u003c/p\u003e","title":"Microsoft Dynamic Access Control (Part 5 - Auto Classification)"},{"content":" There are two terms used in IT that are often used in conjunction when learning about how technologies are built. These two terms are \u0026ldquo;Control Plane\u0026rdquo; and \u0026ldquo;Data Plane\u0026rdquo;. A quick and dirty definition of these two terms would be:\nControl Plane - The decision making part of any system. Usually considered the brains of the system. Data Plane - The part of a system that carries out an operation. This would be the routine tasks needed to make the system work. Just as a sidebar, if you are looking for me to site my source on those definitions you\u0026rsquo;re out of luck. These are my basic definitions that I\u0026rsquo;ve made up for purposes of this post. If for some reason these definitions become common place, then I want some royalties. :)\nLet\u0026rsquo;s look at a few examples:\nVMware Distributed Switch Control Plane - Stored on the vCenter and is used to create port groups, assign uplinks, configure VLANS, set Ingress or Egress policies, etc. Data Plane - Stored on the individual ESXi hosts and is responsible for forwarding frames based on how the control plane told it to do so. VMware Virtual SAN Control Plane - Comes from the Virtual SAN Software that is installed and is responsible for configuring which host is responsible for data in the scale out storage design and how the data will be protected in the case of a host failure. Data Plane - The Solid State Disks and Hard disks utilized to store data on the drives and later recall it when the OS asks for it. A Bus Route I threw this one in, just to show that a \u0026ldquo;Control Plane\u0026rdquo; and a \u0026ldquo;Data Plane\u0026rdquo; don\u0026rsquo;t have to be just for the IT world.\nControl Plane - A set plan that shows the route that buses will travel throughout a city. Data Plane - The buses that stop at the routes defined in the control plane and move passengers between these stops. No More Data Plane Administrators On to the thesis of my post. 
\u0026ldquo;With all of the new Software Designed Everythings that are out in the Data Center space, Systems Administrators need to rethink the way they approach their jobs.\u0026rdquo; In the past, it may have been common place for Admins to monitor the performance of their servers and modify settings to accommodate changes in workloads to keep their users happy. This may have included doing things like migrating a VM to a new host, or provisioning a new server. Similarly, a network administrator might spend his days provisioning new access switches with VLANs Spanning-Tree etc for handling new workloads.\nI would consider the types of administration I just described as \u0026ldquo;Data Plane Administration\u0026rdquo;. These Admins are doing the heavy lifting necessary for the infrastructure to hum right along and keep some happy users. The Software Defined Data Center allows us to replace the Data Plane Administrators with software.\nSummary SDDC is changing the way we should look at this administration. Products like VMware vCloud Hybrid Service (vCHS), vCenter Operations Manager (vC OPS), VMware NSX for virtual networking, and the vCloud Suite can be used to move the Systems and Network Admins from the Data Plane up to the Control Plane. These tools allow for the automation of tasks such as performance monitoring, and building new virtual machines. Systems Admins now can spend their time making decisions about how the infrastructure should handle different situations and the software will be the data plane that does the heavy lifting.\n","permalink":"https://theithollow.com/2014/04/22/data-plane-administrators/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/04/download.jpg\"\u003e\u003cimg alt=\"download\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/04/download.jpg\"\u003e\u003c/a\u003e There are two terms used in IT that are often used in conjunction when learning about how technologies are built.  These two terms are \u0026ldquo;Control Plane\u0026rdquo; and \u0026ldquo;Data Plane\u0026rdquo;.  A quick and dirty definition of these two terms would be:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eControl Plane -\u003c/strong\u003e The decision making part of any system.  Usually considered the brains of the system.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eData Plane -\u003c/strong\u003e The part of a system that carries out an operation.  This would be the routine tasks needed to make the system work.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cblockquote\u003e\n\u003cp\u003eJust as a sidebar, if you are looking for me to site my source on those definitions you\u0026rsquo;re out of luck.  These are my basic definitions that I\u0026rsquo;ve made up for purposes of this post.  If for some reason these definitions become common place, then I want some royalties.  :)\u003c/p\u003e","title":"No More Data Plane Administrators"},{"content":"VMware vCloud Networking and Security (vCNS) can provide Network Address Translation (NAT) services from the vCNS Edge appliance.\nThere are two types of NAT that the edge appliance can provide.\nDestination NAT (DNAT) is used to provide access to a private IP Address from a (usually) public IP Address for incoming traffic.\nSource NAT (SNAT) is used to translate a private IP Address into a (usually) public IP Address for outgoing traffic. This type of NAT can also be called \u0026ldquo;masquerading\u0026rdquo;. 
(It\u0026rsquo;s a subtle difference that we won\u0026rsquo;t go into in this post.)\nPlease take note that when you see SNAT or DNAT this is not referring to Static NAT vs. Dynamic NAT or PAT. This caught me a bit off guard the first time I saw it but once you realize the distinction this process is very easy to understand.\nDestination NAT Just to give a quick overview of DNAT let\u0026rsquo;s look at an example. In the below diagram we have an machine (10.10.10.2) that is attempting to communicate with a web server behind the vCNS Edge appliance. The request from 10.10.10.2 is sent to the outside IP Address of the vCNS Edge appliance. When the request reaches the \u0026ldquo;Edge\u0026rdquo;, vCNS translates that request and forwards it to the internal server running the web server. It does this by looking at the DNAT rules which are listed to the left of the appliance. We see that requests for 10.10.10.1 (uplink on edge appliance) on port 443 (https) are to be sent to the 10.10.110.10 server.\nFrom this example we can see that the server that originally made the request doesn\u0026rsquo;t know that there are multiple servers behind the NAT Device. If you notice there is a second NAT entry for port 25 on the same outside interface. That NAT goes to a completely different server on the inside of the Edge appliance. Sometimes this is called \u0026ldquo;Port Forwarding.\u0026rdquo;\nSource NAT As the name implies, this type of translation is initiated from the vCNS protected network. Here we want to have multiple servers to have access to the outside network. This is usually done to limit the number of Public IP Addresses that are in use by an organization, but it doesn\u0026rsquo;t have to be. The example diagram below illustrates that all three servers on the inside of the Edge appliance can make requests to their gateway (vCNS Edge) and the Edge appliance will translate that IP into a public address that is routable on the new network.\nYou may be asking \u0026ldquo;What happens when this traffic is returned? How does the appliance know where to send that return traffic to?\u0026rdquo;\nThe Edge appliance keeps track of all of the requests made and adds the source, destination and port information to a table. When return traffic comes back, it can match the response to the table and know how to send the traffic back. See the following post for more information: NAT vs. PAT vCNS Edge Network Address Translation Configurations Before you can configure NAT on the vCNS Edge appliance, you must first deploy vCNS Manager as well as deploy vCNS Edge. I\u0026rsquo;ve written about this process before and you can review the posts if you haven\u0026rsquo;t done this already.\nLog into your vShield Manager and click on the Datacenter. Click the \u0026ldquo;Network Virtualization\u0026rdquo; Tab where you\u0026rsquo;ll find the Edge appliance you\u0026rsquo;ve already deployed. Go to Actions and click \u0026ldquo;Manage\u0026rdquo;. From here, we can get tot he NAT tab where the magic happens.\nClick the \u0026ldquo;+\u0026rdquo; sign to add a new NAT Rule.\nSetup DNAT Select the uplink interface for the rule to be applied on. Enter the uplink IP Address that will be used. This is the IP address that other machines will use to connect to the NAT\u0026rsquo;d (hidden) device. Also, you\u0026rsquo;ll need to enter the Protocol and port information. In my example, I\u0026rsquo;m testing with a web server on Port 80. 
Lastly, enter the IP Address of the machine that will be presented to the other machines. E.g. the web server IP Address.\nSetup SNAT Before I configure SNAT my virtual machine had no access to the network on the uplink side of the Edge device. You can see this since I cannot ping the 10.10.10.4 IP address which is on my uplink side of my edge device.\nSNAT is a requires a bit less information. Remember that we\u0026rsquo;re just going to show the network that needs access to the outside world and the IP Address on the uplink that is used to do this.\nOnce we\u0026rsquo;ve set the SNAT Configuration (and published the changes) we can see that I can now ping the device on the uplink side of my Edge Device. Voila!\nWhen you\u0026rsquo;re all done it should look similar to this in your NAT tab.\nSummary VMware vCloud Networking and Security can provide Network Address Translation to your virtualized infrastructure. This is just another example of the edge services that VMware is now able to handle, and it wouldn\u0026rsquo;t be an edge if it didn\u0026rsquo;t provide NAT\u0026hellip; well, until maybe IPv6 is the norm.\n","permalink":"https://theithollow.com/2014/04/15/vcns-edge-network-address-translation/","summary":"\u003cp\u003eVMware vCloud Networking and Security (vCNS) can provide Network Address Translation (NAT) services from the vCNS Edge appliance.\u003c/p\u003e\n\u003cp\u003eThere are two types of NAT that the edge appliance can provide.\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\n\u003cp\u003eDestination NAT (DNAT) is used to provide access to a private IP Address from a (usually) public IP Address for incoming traffic.\u003c/p\u003e\n\u003c/li\u003e\n\u003cli\u003e\n\u003cp\u003eSource NAT (SNAT) is used to translate a private IP Address into a (usually) public IP Address for outgoing traffic. This type of NAT can also be called \u0026ldquo;masquerading\u0026rdquo;.  (It\u0026rsquo;s a subtle difference that we won\u0026rsquo;t go into in this post.)\u003c/p\u003e","title":"vCNS Edge Network Address Translation"},{"content":"\nIf you have an MCITP or similar certification from Microsoft on Server 2008 and want to keep your certifications up to date, chances are you will need to take the 70-417 exam. I recently sat this test and wanted to share some of my experiences with you.\nMy certification background in Information Systems started with my journey to become an MCSE 2003 so Microsoft is kind of my first love when it comes to certs. I deal a little bit less with the day to day configuration and maintenance of Windows, but Windows Server will always have a certain place in my heart and I try to keep up to date with my credentials.\nExpectations My initial expectations with the exam were that I would read up a bit on the new features of Server 2012 (and 2012 R2) and rely on my prior knowledge of things like DNS, DHCP and Active Directory to carry myself through the exam. Uh\u0026hellip;.well\u0026hellip;.. let\u0026rsquo;s just say that this isn\u0026rsquo;t a good strategy.\nOn a non-related note, you might check out the Second Shot opportunity offered by Prometric and Microsoft. NOTE: You must purchase this second shot by May 31st and take the exams before the end of 2014.\nYes, as you might have guessed, I failed my first attempt at 70-417 and had to rely on my second shot. The second chance at the exam I nailed after actually doing some studying.\nI found out that there are a lot of new features of Server 2012 that I really wasn\u0026rsquo;t fluent with. 
During my studying I wrote about many of them after building them in my lab, such as Direct Access, Offline Domain Join, IP Address Management, and Central Access Policies to name a few.\nExam Format The exam consisted of three sections. Each of these sections was timed separately and you were not allowed to go back and review previous sections once you completed it.\nThe three sections really covered the non upgrade exams:\n70-410 - Installing and Configuring Windows Server 2012 70-411 - Administering Windows Server 2012 70-412 - Configuring Advanced Window Server 2012 Services Each of the three sections was scored separately and the lowest score was used as your exam score. So if you got a 680 on any of the three sections and aced the other two, you still fail the exam.\nSummary If you haven\u0026rsquo;t been staying up to date with the newer Windows Server services, don\u0026rsquo;t plan to rely on your previous technical skills to coast you through this exam. Since this was an upgrade exam, I thought that I would breeze through it, but it turned out to be a pretty challenging cert. Use the second shot opportunity and study for it.\n","permalink":"https://theithollow.com/2014/04/12/mcsa-2012-upgrade-exam-70-417/","summary":"\u003cp\u003e\u003cimg alt=\"SolAssoc_WinServ2012_Blk\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/04/SolAssoc_WinServ2012_Blk.png\"\u003e\u003c/p\u003e\n\u003cp\u003eIf you have an MCITP or similar certification from Microsoft on Server 2008 and want to keep your certifications up to date, chances are you will need to take the 70-417 exam.  I recently sat this test and wanted to share some of my experiences with you.\u003c/p\u003e\n\u003cp\u003eMy certification background in Information Systems started with my journey to become an MCSE 2003 so Microsoft is kind of my first love when it comes to certs.  I deal a little bit less with the day to day configuration and maintenance of Windows, but Windows Server will always have a certain place in my heart and I try to keep up to date with my credentials.\u003c/p\u003e","title":"MCSA 2012 Upgrade Exam 70-417"},{"content":"One of the most basic tasks that happens on a network is assigning IP Addresses. Once a VMware vCNS Edge appliance has been deployed, you can now hand out IP address through Dynamic Host Control Protocol (DHCP).\nIn previous posts, I\u0026rsquo;ve walked through installing vCNS Manager and installing vCNS Edge appliances. These are prerequisites to setting up DHCP on the VMware vCloud Network and Security appliance.\nvCNS Edge DHCP Setup Log into your vShield Manager and click on the Datacenter. Click the \u0026ldquo;Network Virtualization\u0026rdquo; Tab where you\u0026rsquo;ll find the Edge appliance you\u0026rsquo;ve already deployed. Go to Actions and click \u0026ldquo;Manage\u0026rdquo;.\nGo to the DHCP Tab of the edge appliance. Click the \u0026ldquo;+\u0026rdquo; sign under DHCP Pools.\nConfigure your DHCP Pool here. This is synonymous with a DHCP Scope in Windows. Enter the IP Address to start handing out addresses at, and the ending IP. If you\u0026rsquo;re going to use static IP\u0026rsquo;s in this range, be sure to leave yourself some unassigned IP addresses.\nAlso, enter the domain name, wins servers and a default gateway if you need to have a next hop router.\nWhen you\u0026rsquo;re done adding your DHCP Pools, you will want to enable the DHCP Service. 
Lastly, be sure to click the \u0026ldquo;Publish Changes\u0026rdquo; so that these configuration options are activated.\nWhen you\u0026rsquo;re done, the DHCP Screen should look similar to the screenshot below. Keep in mind that you can have more than one DHCP Pool, especially if you have multiple networks attached to the Edge device.\nWhen I log into my test VM, and run IPConfig, I can see that I\u0026rsquo;ve received the first IP address in the pool - 10.10.111.10 as we expect.\nSummary Assigning IP Addresses is a first step in utilizing your shiny new vCNS Edge appliance, but totally necessary if you want to have an easy to use network. In future posts, we\u0026rsquo;ll look at other things you can do with your vCNS Edge deployment.\n","permalink":"https://theithollow.com/2014/04/10/vcns-edge-dhcp/","summary":"\u003cp\u003eOne of the most basic tasks that happens on a network is assigning IP Addresses.  Once a VMware vCNS Edge appliance has been deployed, you can now hand out IP address through Dynamic Host Control Protocol (DHCP).\u003c/p\u003e\n\u003cp\u003eIn previous posts, I\u0026rsquo;ve walked through \u003ca href=\"/2014/03/getting-started-vcns/\"\u003einstalling vCNS Manager\u003c/a\u003e and \u003ca href=\"http://wp.me/p32uaN-Tb\"\u003einstalling vCNS Edge appliances\u003c/a\u003e.  These are prerequisites to setting up DHCP on the VMware vCloud Network and Security appliance.\u003c/p\u003e\n\u003ch1 id=\"vcns-edge-dhcp-setup\"\u003e\u003cstrong\u003evCNS Edge DHCP Setup\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eLog into your vShield Manager and click on the Datacenter.  Click the \u0026ldquo;Network Virtualization\u0026rdquo; Tab where you\u0026rsquo;ll find the Edge appliance you\u0026rsquo;ve already deployed.  Go to Actions and click \u0026ldquo;Manage\u0026rdquo;.\u003c/p\u003e","title":"vCNS Edge DHCP"},{"content":"vCloud Networking and Security has the capabilities to provide edge services inside of your virtual environment. Edge firewalls, network address translation, DHCP, routing are all things that vCNS Edge can do for you. This post goes into the steps necessary to deploy vCNS Edge.\nI should mention that vCNS and the previous name vShield may be used interchangeably in this article.\nLogical Diagram The picture below is a diagram of what our environment will look like when we\u0026rsquo;re done. We have production VMs as you might expect, and our new vCNS Edge VM. We\u0026rsquo;ve also got our new Edge network and a Shielded VM which will not be connected to the production vSwitch directly.\nDeploy vCNS Edge The first step is to have your vCNS Manager deployed. I\u0026rsquo;ve done this in a previous post, so if you haven\u0026rsquo;t done this yet, you\u0026rsquo;ll need to do this first. Remember that Edge is a component of vCNS so vCNS Manager needs to be installed first.\nNow that the vCNS services are ready to go, we can deploy the edge services. Go to your vCNS portal, select your datacenter, and then go to the \u0026ldquo;Network Virtualization\u0026rdquo; tab.\nClick the \u0026ldquo;+\u0026rdquo; sign to add a new edge device.\nGive your new Edge Appliance a name, and description. You can also enable HA but in my environment I\u0026rsquo;m only using a single host anyway for this deployment so I\u0026rsquo;ve left it off. Set a password and a username for cli logins and check whether or not you want the communication to be SSH or not.\nSelect the size of the appliance.\nThe sizes of the appliance makes a difference on the capabilities. 
I urge you to take a look at the following VMware Blog post discussing the differences.\nAlso, if you\u0026rsquo;re doing this in the lab, I highly recommend the \u0026ldquo;Enable auto rule generation\u0026rdquo; to help you set your rules.\nNext you\u0026rsquo;ll need to enter information about the edge appliance virtual machine. Much like and OVA deployment, you\u0026rsquo;ll need a cluster, datastore, host etc.\nWe can add up to 10 interfaces. These interfaces typically come in two forms.\n1. An uplink\n2. An Internal Link\nClick Add\nMy first Nic I create my uplink. This uplink is like the default gateway and connects to my production network. Mileage may vary but this is how I\u0026rsquo;ve configured my lab. Give it a name, and select the port group this nic will be connected to.\nI\u0026rsquo;ve chosen the Prod port group to connect my uplink to.\nNext, you\u0026rsquo;ll need to give it an IP Address. I\u0026rsquo;ve given it an IP Address and entered a subnet mask as well.\nNext, I add a second nic and this time it\u0026rsquo;s an internal network. You\u0026rsquo;ll go through the same process of adding an IP Address here as well. I recommend a standardization such as x.x.x.254 for all the internal IP Addresses so that you\u0026rsquo;ll have a x.x.x.254 as the default gateway on each of the internal networks.\nI\u0026rsquo;ve only added 1 uplink and 1 internal nic, so when I\u0026rsquo;m finished it looked something like this.\nNext we configure a default gateway. I\u0026rsquo;ve selected my uplink and the IP Address of the gateway. If you\u0026rsquo;re using jumbo frames, you can change the MTU size here. Otherwise leave this alone.\nThis is a lab and I\u0026rsquo;m keeping things simple. Therefore, I\u0026rsquo;m allowing all traffic through the appliance at this point. If this is a production environment, you\u0026rsquo;ll likely change this to deny and manually enter each rule for the traffic you\u0026rsquo;re going to allow.\nReview the summary and click finish.\nOnce you click finish you\u0026rsquo;ll see the edge device being deployed.\nAfter some time, you\u0026rsquo;ll see that it has been deployed.\nSummary Deploying the vShield Edge appliance is just the first step. Now that the edge device is available, a wealth of new opportunities are available to you which I\u0026rsquo;ll discuss further in new posts.\n","permalink":"https://theithollow.com/2014/04/07/deploy-vcns-edge/","summary":"\u003cp\u003evCloud Networking and Security has the capabilities to provide edge services inside of your virtual environment.  Edge firewalls, network address translation, DHCP, routing are all things that vCNS Edge can do for you.  This post goes into the steps necessary to deploy vCNS Edge.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eI should mention that vCNS and the previous name vShield may be used interchangeably in this article.\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003ch1 id=\"logical-diagram\"\u003eLogical Diagram\u003c/h1\u003e\n\u003cp\u003eThe picture below is a diagram of what our environment will look like when we\u0026rsquo;re done.  We have production VMs as you might expect, and our new vCNS Edge VM.  We\u0026rsquo;ve also got our new Edge network and a Shielded VM which will not be connected to the production vSwitch directly.\u003c/p\u003e","title":"Deploy vCNS Edge"},{"content":"We\u0026rsquo;ll be giving away a pair of the Sony MDR-X10 headphones courtesy of Veeam. 
If you\u0026rsquo;re in the market for a stylish set of shiny new headphones and don\u0026rsquo;t want to shell out hard earned cash for them, this is your lucky day. During the vsphere-land.com top virtualization blog contest I was fortunate enough to win a pair of these to give away on my site. Here is how you can win a pair of these headphones for yourself.\nTO ENTER:\nTweet a link to your favorite theITHollow.com blog post with the hashtag #HollowGives OR\nPost a picture of yourself on Instagram with a piece of theITHollow.com swag (T-shirt, stickers, buttons) and the hashtag #HollowGives. Be sure to copy @eshanks16 so that I see it. OR\nTweet a picture of yourself with a piece of theITHollow.com swag on twitter with the hashtag #HollowGives A random winner will be chosen on April 11th and notified via Twitter or Instagram. Good Luck!\n","permalink":"https://theithollow.com/2014/03/31/sony-mds-x10-giveaway-courtesy-veeam/","summary":"\u003cp\u003eWe\u0026rsquo;ll be giving away a pair of the Sony MDR-X10 headphones courtesy of \u003ca href=\"http://veeam.com\"\u003eVeeam\u003c/a\u003e.  If you\u0026rsquo;re in the market for a stylish set of shiny new headphones and don\u0026rsquo;t want to shell out hard earned cash for them, this is your lucky day. \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/03/SONYMDR.png\"\u003e\u003cimg alt=\"SONYMDR\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/03/SONYMDR-260x300.png\"\u003e\u003c/a\u003e\u003cimg alt=\"veeam\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/03/veeam-300x141.png\"\u003e\u003c/p\u003e\n\u003cp\u003eDuring the vsphere-land.com top virtualization blog contest I was fortunate enough to win a pair of these to give away on my site.  Here is how you can win a pair of these headphones for yourself.\u003c/p\u003e","title":"Sony MDS-X10 Giveaway courtesy of Veeam"},{"content":"Each year Eric Siebert at vsphere-land.com has a voting process where you can cast your ballot for your favorite virtualization blogs. He lists those blogs on his vlaunchpad site if you\u0026rsquo;re curious to see who made the list.\nLuckily again this year my friends over at whateverinspires.com were kind enough to provide a logo for any bloggers who made this prestigious list. This year there are Gold, Silver, and Bronze badges depending on your status. If you made the list, please feel free to download the image and use it on your site.\nCongratulations to all of the bloggers!\n","permalink":"https://theithollow.com/2014/03/27/website-badges-top-50-vsphere-land-bloggers-2014/","summary":"\u003cp\u003eEach year \u003ca href=\"https://twitter.com/ericsiebert\"\u003eEric Siebert\u003c/a\u003e at \u003ca href=\"http://vsphere-land.com\"\u003evsphere-land.com\u003c/a\u003e has a voting process where you can cast your ballot for your favorite virtualization blogs.  He lists those blogs on his \u003ca href=\"http://vlp.vsphere-land.com/\"\u003evlaunchpad\u003c/a\u003e site if you\u0026rsquo;re curious to see who made the list.\u003c/p\u003e\n\u003cp\u003eLuckily again this year my friends over at \u003ca href=\"http://whateverinspires.com\"\u003ewhateverinspires.com\u003c/a\u003e were kind enough to provide a logo for any bloggers who made this prestigious list.  This year there are Gold, Silver, and Bronze badges depending on your status.  
If you made the list, please feel free to download the image and use it on your site.\u003c/p\u003e","title":"Website Badges for Top 50 vsphere-land Bloggers -2014"},{"content":"If you\u0026rsquo;re a vSphere Administrator and have compliance regulations to deal with, vShield Endpoint might save you a lot of hassle. From my own experience with PCI-DSS, it was important to limit the cardholder data environment scope. The fewer devices that touch credit card data, the fewer items that had to be protected. In the same breath, it was important to have Anti-Virus, malware protection, firewall rules and file integrity monitoring. vShield Endpoint allows for all of these things to be handled in a single package. This post looks specifically at Trend Micro\u0026rsquo;s Deep Security Product.\nPreparing for vShield Endpoint To start we need to understand that we need to have the vshield environment ready to go first. I\u0026rsquo;ve written a getting started post that might help you get vShield App and the management appliance installed and working, so I won\u0026rsquo;t go into detail on that part again.\nOnce vShield Manager and App are all set, we need to deploy the vShield Endpoint Driver to the hosts that we\u0026rsquo;ll be protecting. Again, I\u0026rsquo;ve mentioned it a few times in my previous posts, \u0026ldquo;Avoid installing vShield on the hosts that vCenter and the vShield Manager are installed on.\u0026rdquo; I\u0026rsquo;ve found that a fairly common design is to have a Management Cluster for your vCenter and vShield Manager, and have them manage a separate cluster. This makes it expensive for a home lab, so consider nesting VMs if you want to try this out for yourself.\nInstalling the endpoint host driver is fairly simple, just open the vShield Console, go to the host that you want to deploy endpoint and click the check box.\nYour next step should be to build a Windows VM that will run your Trend Micro Management Console. Again, this is a good VM to have on your management cluster.\nInstall the Trend Micro Deep Security Manager Trend Micro was gracious enough to allow me to register and download a trial version of their software. You can do likewise if you\u0026rsquo;d like to poke around on your own. If you\u0026rsquo;re not in the mood to go the whole nine yards, hopefully this post has enough screenshots to give you a good feeling of the experience.\nFirst we install the Security Manager on the new Windows VM we just built. The installer is a straight forward wizard.\nWe have the option of using SQL, Oracle or an Embedded database to house our configuration data. I\u0026rsquo;m a SQL guy. ;)\nIf you\u0026rsquo;ve registered with Trend Micro for the downloads, you should receive a license in your email for this step.\nThe installer would like to know the DNS name or IP address of the host you\u0026rsquo;re installing on as well as the ports. I\u0026rsquo;ve left all ports as defaults and entered the name of my Windows VM [Endpoint.hollow.local]\nIt would be pretty difficult to sell a security product that didn\u0026rsquo;t require some sort of authentication. Here we enter a new password for the MasterAdmin account. Make sure you have a special character!\nLike many (I have to assume ALL) Anti-Virus and Malware solutions, you have the ability to update over the Internet for new virus definitions. No difference here.\nNow we have the Trend Micro Deep Security Manager deployed to our environment. 
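Before we log in, a quick reachability check can save some head-scratching if a firewall sits between you and the manager. The snippet below is just a generic TCP test in Python; Endpoint.hollow.local is the manager name from this walkthrough, and 4119 is my assumption for the default console port, so substitute whatever you accepted in the installer.

```python
import socket

# Quick, generic check that the Deep Security Manager console answers.
# Host name comes from this lab; the port is an assumed installer default.
MANAGER_HOST = "endpoint.hollow.local"
MANAGER_PORT = 4119

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

state = "reachable" if port_open(MANAGER_HOST, MANAGER_PORT) else "NOT reachable"
print(f"{MANAGER_HOST}:{MANAGER_PORT} is {state}")
```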
The next post will explain what happens when we login.\nvShield Endpoint Part 2 vShield Endpoint Part 3\n","permalink":"https://theithollow.com/2014/03/24/vshield-endpoint-trend-micro-deep-security/","summary":"\u003cp\u003eIf you\u0026rsquo;re a vSphere Administrator and have compliance regulations to deal with, vShield Endpoint might save you a lot of hassle.  From my own experience with PCI-DSS, it was important to limit the cardholder data environment scope.  The fewer devices that touch credit card data, the fewer items that had to be protected.  In the same breath, it was important to have Anti-Virus, malware protection, firewall rules and file integrity monitoring.  vShield Endpoint allows for all of these things to be handled in a single package.  This post looks specifically at Trend Micro\u0026rsquo;s Deep Security Product.\u003c/p\u003e","title":"vShield Endpoint - Trend Micro Deep Security (Part 1)"},{"content":"In the first post in this series, we deployed the vShield Endpoint host driver and installed the Trend Micro Deep Security Manager on a Windows VM.\nTrend Micro Deep Security Appliance Deployment First, we need to login to the Deep Security Manager which is conveniently accessed as a web page. Go the the DNS name of the Manager that you entered during the setup wizard in part 1 of this series. Log in with the username and password that you specified.\nGo to the Computers tab. You\u0026rsquo;ll notice that there aren\u0026rsquo;t a computers listed yet. We\u0026rsquo;ll need to add them by adding our vCenter. Choose New \u0026ndash;\u0026gt; New Computer and then select add VMware vCenter\u0026hellip;\nFill out the required information for your Deep Security Manager to connect to your vCenter Server.\nThe next step is to put in the vShield Manager login information. This is so that the Deep Security software can leverage the vShield Endpoint APIs.\nOnce done, you\u0026rsquo;ll see the datacenters, hosts, and virtual machines that were imported.\nNow, if everything worked out, we\u0026rsquo;ll see computers listed in the console. From here, we want to choose the ESXi hosts we\u0026rsquo;re going to manage (not the management cluster) and Prepare ESX\u0026hellip;\nThis operation installs the Trend Micro Filter Driver into the hypervisor. This will require putting the host into Maintenance mode so if DRS isn\u0026rsquo;t setup, you may need to manually enter maintenance mode first, otherwise this will be done for you once your VMs are moved off of the host that is being prepared.\nPreparing the ESXi host requires you to install some software onto the ESXi host. This may require you to download the driver from the Trend Micro Site and import it into the Management console first. If you don\u0026rsquo;t have the driver available when you attempt to deploy the software, a warning will pop up and allow you to import the software.\nBelow I show how you simply select the .zip file you downloaded, and import it into the manager.\nVerify the fingerprint.\nOnce the ESXi server is \u0026ldquo;prepared\u0026rdquo; the next step is to \u0026ldquo;Deploy Appliance\u0026hellip;\u0026rdquo; which you can do by again right clicking the Host and navigating to Actions just as you did to Prepare the host.\nThis wizard deploys a virtual appliance to the selected host which will be responsible for the actual firewalling, and scanning operations that have to happen. 
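Because the appliance is deployed per protected host, it can be handy to see at a glance which hosts already have one. The rough pyVmomi sketch below lists each host and looks for a VM whose name starts with an assumed appliance prefix; the vCenter address, credentials, and naming convention are all placeholders for illustration, not anything Trend Micro or VMware mandates.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical inventory check: which ESXi hosts already have a security
# appliance VM? Address, credentials, and the name prefix are placeholders.
APPLIANCE_PREFIX = "Trend Micro Deep Security"

ctx = ssl._create_unverified_context()       # lab only; validate certs in production
si = SmartConnect(host="vcenter.hollow.local", user="administrator",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        present = any(vm.name.startswith(APPLIANCE_PREFIX) for vm in host.vm)
        print(f"{host.name}: {'appliance present' if present else 'no appliance yet'}")
    view.Destroy()
finally:
    Disconnect(si)
```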
It\u0026rsquo;s kind of a pain to deploy a VM to each host you\u0026rsquo;re protecting, but if you think about what it would normally take to install an agent on each host, this is much more efficient than running multiple agents on the same host that might all scan at the same time, or update at the same time.\nSimilar settings to deploying an OVA file from vCenter will need to be set. Name, Datastore, Folder and Network will still need to be setup.\nNext enter a DNS name and the IP Settings to fit your needs. I\u0026rsquo;ve chosen a static IP Address.\nThin provisioned disks for me for sure!\nWhen you\u0026rsquo;re done with the deployment of the appliance, the wizard will ask you if you would like to activate the appliance. If you decide not to, for whatever reason, you can always do this from the Management console at a later time.\nHere, you can select a policy to publish to the virtual machines you\u0026rsquo;ll be protecting. Policies can be edited from the Management Console and we\u0026rsquo;ll look at them more in depth in a future post.\nSelect any of the virtual machines that you would like to activate as well. Again, you can do this from the management console if you don\u0026rsquo;t want to do this right now.\nThe preparation of the environment is now over. In the next post we\u0026rsquo;ll get more into how Trend Micro Deep Security Manager can help you manage your compliance.\nvShield Endpoint Part 1 vShield Endpoint Part 3\n","permalink":"https://theithollow.com/2014/03/24/vshield-endpoint-trend-micro-deep-security-part-2/","summary":"\u003cp\u003eIn the \u003ca href=\"http://wp.me/p32uaN-QT\"\u003efirst post\u003c/a\u003e in this series, we deployed the vShield Endpoint host driver and installed the Trend Micro Deep Security Manager on a Windows VM.\u003c/p\u003e\n\u003ch1 id=\"trend-micro-deep-security-appliance-deployment\"\u003eTrend Micro Deep Security Appliance Deployment\u003c/h1\u003e\n\u003cp\u003eFirst, we need to login to the Deep Security Manager which is conveniently accessed as a web page.  Go the the DNS name of the Manager that you entered during the setup wizard in \u003ca href=\"http://wp.me/p32uaN-QT\"\u003epart 1\u003c/a\u003e of this series.  Log in with the username and password that you specified.\u003c/p\u003e","title":"vShield Endpoint - Trend Micro Deep Security (Part 2)"},{"content":"The first parts of this series focused mainly on how to install the Trend Micro Deep Security product and how to prepare your environment. This post shows you a bit more of what can be accomplished with the product.\nvShield Endpoint Part 1 vSheidl Endpoint Part 2\nPolicies This is the guts of the product. All the configurations you\u0026rsquo;ve done up to this point have been leading up to a solution that can help secure your environment and possibly make it comply with a regulatory body.\nIf we look at the Policies tab, we\u0026rsquo;ll see a number of Out Of The Box policies that are ready to go for a variety of Operating Systems. These base policies will give you a great start on securing your environment, but can always be tweaked to a configuration of your liking.\nI opened the \u0026ldquo;Windows Server 2012\u0026rdquo; policy to get started. You\u0026rsquo;ll notice that you can change Firewall, IPS, Malware and log inspection policies to meet your organization\u0026rsquo;s security policies. I dove into the Integrity Monitoring section as an example.\nI found the \u0026ldquo;Network Configuration files modified\u0026rdquo; rule as my test. 
The rule can be found below, but basically it states that if I change my hosts file, I should be alerted. As a Systems Administrator this is a very valuable thing to know about because changes to host files are good sign of an attack.\nOnce I\u0026rsquo;ve reviewed all the policies and modified what I needed to, the next step is to send the policy to your protected computers. Here, I\u0026rsquo;ve built a virtual machine called \u0026ldquo;ShieldedVM\u0026rdquo; for testing purposes. I have then pushed down the Server 2012 Policy to the VM.\nTo test things, out, I make a modification to the hosts file on the \u0026ldquo;SheildedVM\u0026rdquo; and within 5 minutes I have an alert in the \u0026ldquo;Events \u0026amp; Reports\u0026rdquo; tab. Pretty slick stuff here!\nThere are additional policies that can be added depending on your security requirements. The Intrusion Prevention policies allow you to use heuristics to determine traffic flows in your environment. The example below looks for dropbox traffic. NO SHADOW IT HERE!\nSUMMARY I\u0026rsquo;ve used the Trend Micro Deep Security product as an example in this series, but endpoint is available for a variety of different vendors now. As companies continue to virtualize more and more workloads, including desktops, products will continue to find new ways to protect them. Endpoint is a great way to secure traffic before it ever reaches an end user operating system.\n","permalink":"https://theithollow.com/2014/03/24/vshield-endpoint-trend-micro-deep-security-part-3/","summary":"\u003cp\u003eThe first parts of this series focused mainly on how to install the Trend Micro Deep Security product and how to prepare your environment.  This post shows you a bit more of what can be accomplished with the product.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://wp.me/p32uaN-QT\"\u003evShield Endpoint Part 1\u003c/a\u003e \u003ca href=\"http://wp.me/p32uaN-RD\"\u003evSheidl Endpoint Part 2\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"policies\"\u003ePolicies\u003c/h1\u003e\n\u003cp\u003eThis is the guts of the product.  All the configurations you\u0026rsquo;ve done up to this point have been leading up to a solution that can help secure your environment and possibly make it comply with a regulatory body.\u003c/p\u003e","title":"vShield Endpoint - Trend Micro Deep Security (Part 3)"},{"content":" CloudPhysics continually surprises me with their innovation when providing cards for simulation purposes. I\u0026rsquo;ve posted a couple of times already about how I really like their AWS pricing calculator (they also have vCHS as well). Having a good idea about how much your existing environment will cost if you make modifications is a pretty big win for a CIO.\nWhile I was at GestaltIT\u0026rsquo;s Virtualization Field Day 3, two weeks ago, Irfan Ahmed showed us a new card that would simulate how much SSD Cache you should buy based on your current workloads.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtual Field Day 3. This was the only compensation given.\nIf you\u0026rsquo;ve been living under a rock for the last 18 months, solid state drives are transforming how we look at storage. An entire industry has been spawned with the sole goal of speeding up traditional spinning disks by putting some form of SSD in front of them in the IO path. PernixData has a product called FVP and VMware has released vFlash just to name a couple of products that use host based caching with SSDs. 
Tintri, Nimble Storage, and Coho Data are among some of the newer storage arrays that were built with SSD Caching in mind right from the start. The big storage arrays like Netapp and EMC have also added SSD caching to their arrays as well.\nClearly writing data to the SSD is faster than waiting for a mechanical disk, but the question then becomes, \u0026ldquo;How much cache do we need?\u0026rdquo; The idea of cache is to use small amounts of SSD for performance but to keep the spinning disks in your environment for large bulk storage.\nIrfan explained that CloudPhysics uses your own workloads to determine where that sweet spot is. He used a graph similar to the crudely built graph below that I mocked up for an example.\nIf we look at IOPS on the left and GB\u0026rsquo;s of cache along the bottom, we\u0026rsquo;ll notice that in most cases just adding more cache doesn\u0026rsquo;t help. This was a wake up moment for me. Cache is good, but more cache is better right? Well, not necessarily.\nThis crudely drawn graph is just an example and in no way should reflect upon CloudPhysics. :) You can see in the graph that as the size of the cache increases, the IOPS increase, but there are jumps where the amount of cache really helps. For instance the difference between having 30 GB of cache vs 40 GB of cache only improves the IOPS slightly. However, going from 45 GB to 50GB dramatically increases performance. This is due to the fact that the working set may be thrashing and evicting data from your SSD. If you can fit the whole working set into the SSD performance should dramatically increase.\nSummary This seems like a pretty simple concept, but until Irfan mentioned it, I\u0026rsquo;d never thought about it before. If you\u0026rsquo;re considering buying cache for your environment, CloudPhysics might have a tool that can help you determine how much cache to get without paying for useless space.\n","permalink":"https://theithollow.com/2014/03/19/use-cloudphysics-determine-much-ssd-cache-buy/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/03/cloudphysicslogo.png\"\u003e\u003cimg alt=\"cloudphysicslogo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/03/cloudphysicslogo.png\"\u003e\u003c/a\u003e CloudPhysics continually surprises me with their innovation when providing cards for simulation purposes. I\u0026rsquo;ve posted a couple of times already about how I really like their AWS pricing calculator (they also have vCHS as well).  Having a good idea about how much your existing environment will cost if you make modifications is a pretty big win for a CIO.\u003c/p\u003e\n\u003cp\u003eWhile I was at GestaltIT\u0026rsquo;s \u003ca href=\"http://techfieldday.com/event/vfd3/\"\u003eVirtualization Field Day 3\u003c/a\u003e, two weeks ago, \u003ca href=\"http://twitter.com/virtualirfan\"\u003eIrfan Ahmed\u003c/a\u003e showed us a new card that would simulate how much SSD Cache you should buy based on your current workloads.\u003c/p\u003e","title":"Use CloudPhysics to Determine How Much SSD Cache to Buy"},{"content":"VMware has a very nice solution for managing network access between virtual machines. In a physical environment, blocking access between servers would require routing network traffic through a firewall. This might mean several vlans, subnets and routes. Luckily now that many infrastructures are virtual we have an alternative. 
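A quick aside on the CloudPhysics cache-sizing discussion above: the working-set effect is easy to see even with a toy model. The sketch below replays a made-up block trace through a simple LRU cache; the hit rate improves as the cache grows toward the size of the hot working set and then flattens, which is exactly why buying more cache past the sweet spot buys you very little. This is purely an illustration of the concept, not how CloudPhysics models your workloads.

```python
import random
from collections import OrderedDict

def lru_hit_rate(accesses, cache_size):
    """Replay a block-access trace through a simple LRU cache; return the hit rate."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)      # evict the least recently used block
    return hits / len(accesses)

# Made-up workload: 90% of I/O hits a 50-block working set, the rest is scattered.
random.seed(1)
trace = [random.randrange(50) if random.random() < 0.9 else random.randrange(50, 5000)
         for _ in range(20000)]

for size in (10, 25, 50, 75, 100, 150):
    print(f"cache {size:3d} blocks -> hit rate {lru_hit_rate(trace, size):.2f}")
```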
vCloud Networking and Security (vCNS) is a solution that can be used to block traffic between virtual machines.\nvCNS can be a bit intimidating so this is a quick, getting started, guide on how you can test it out in your environment.\nBrian Suhr has some VirtualizeTips on his site that I recommend taking a peek at. I highly advise paying attention to this one specifically, not because it totally blew up my lab or anything.\nDo not deploy vShield manager appliance to a cluster that it will be protecting, can cause connection to itself and vCenter to be lost.\nDeploy vCNS The initial installation is a typical OVA file that can just be deployed to vSphere. LOVE OVAs!\nConfigure vCNS General Settings You can access the vshield manager by going to the web address of the IP you configured in your OVA file deployment. Enter the information to register your vShield Manager with the vCenter. Also, it\u0026rsquo;s a good idea to set your NTP settings.\nOnce the OVA has been deployed, you must deploy the vShield App to the hosts.\nAGAIN\u0026hellip;.WARNING\u0026hellip;.. DO NOT DEPLOY VSHIELD APP TO A CLUSTER THAT IT WILL BE PROTECTING.\nGo to the ESXi host in the vCNS app and click the \u0026ldquo;Install\u0026rdquo; link for vShield App. This will add a new VMware standard switch on the host as well as deploy a virtual machine as a service VM, to name a few things.\nYou\u0026rsquo;ll be asked to enter some added information for the service VM to be deployed. Oh, and if you didn\u0026rsquo;t notice before, there is a warning on this page about deploying vShield App on a cluster with virtual center.\nOnce it\u0026rsquo;s deployed, I would recommend adding your vCenter to the list of excluded VMs, just in case you didn\u0026rsquo;t pay attention to the warnings\u0026rsquo;s I mentioned above about deploying vShield to the cluster you\u0026rsquo;re protecting.\nAdd a Firewall Rule Now that the vShield app is deployed, go to your VMware DataCenter and go to the App Firewall tab. Add a standard rule such as blocking HTTP. Be sure to Publish your changes when you\u0026rsquo;re done.\nProve IT! Just to prove that it works. We can see the web server works fine before the rule is added.\nAfter the rule is added.\nSummary vCNS is an important tool for vSphere administrators. It is a necessary component for vCAC and vCloud Director to separate traffic. This is also very important for environments that are concerned about their PCI-DSS in-scope networks. Virtual firewalls don\u0026rsquo;t have to be complicated to be effective!\n","permalink":"https://theithollow.com/2014/03/17/getting-started-vcns/","summary":"\u003cp\u003eVMware has a very nice solution for managing network access between virtual machines.  In a physical environment, blocking access between servers would require routing network traffic through a firewall.  This might mean several vlans, subnets and routes.  Luckily now that many infrastructures are virtual we have an alternative.  vCloud Networking and Security (vCNS) is a solution that can be used to block traffic between virtual machines.\u003c/p\u003e\n\u003cp\u003evCNS can be a bit intimidating so this is a quick, getting started, guide on how you can test it out in your environment.\u003c/p\u003e","title":"Getting started with vCNS"},{"content":"\nLast week I attended the Virtualization Field Day 3 put on by the amazing staff at GestaltIT. 
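One postscript to the vCNS getting-started walkthrough above: the "prove it" step doesn't have to be screenshots. A trivial probe like the one below, run once before and once after publishing the block-HTTP rule, shows the rule taking effect; the web server address is a placeholder and nothing here is vCNS-specific.

```python
import urllib.error
import urllib.request

URL = "http://10.10.10.50/"   # placeholder web server behind the vShield App firewall

def check(url: str, timeout: float = 5.0) -> str:
    """Return a short status: reachable, an HTTP error, or blocked/unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"reachable (HTTP {resp.status})"
    except urllib.error.HTTPError as exc:
        return f"server answered with HTTP {exc.code}"
    except (urllib.error.URLError, OSError) as exc:
        return f"blocked or unreachable ({exc})"

# Run before publishing the block-HTTP rule (expect reachable) and again
# after publishing it (expect blocked or unreachable).
print(check(URL))
```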
One of the sessions was hosted by the folks at Atlantis Computing and they were giving us an overview of their Atlantis USX product.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtual Field Day 3. This was the only compensation given.\nAtlantis USX Overview Before we get to the crux of the issue, a quick refresher on what Atlantis USX does. The USX product will utilize existing SAN, NAS and DAS and combine it with a server ram to do caching. The value proposition is that Atlantis USX can carve out RAM to be used as either a whole datastore (SUPER FAST) or combine it with existing storage and have it act as a cache. Keeping the cache so close to the processor without having to go across a bus or HBA, which can add additional latency, can be an important addition to a performance strapped storage solution.\nAtlantis USX also offers some additional feature such as In-Line Deduplication, Real-Time Compression, Replication (you better be using this if your whole datastore is in RAM), Thin Provisioning, and fast cloning.\nThe USX product really is interesting, I think possibly more useful with VDI where the persistent disk is less important. Atlantis actually has a similar product for VDI called Diskless VDI. With your disks all in RAM, you can get great performance without worrying about the persistant storage. Also, it was mentioned that you can decrease the amount of provisioned RAM on the virtual machine, because if it needs to page, it still pages to memory which would be roughly the same thing.\nVMware VSAN With all of the buzz on VMware VSAN in the past few months, I\u0026rsquo;m not sure I need to review it, but essentially VSAN takes a local SSD and local spinning disks and creates shared storage out of them. You need to have a minimum of three ESXi hosts running VSAN for it to work, but this is likely a real help to small businesses can want shared storage but don\u0026rsquo;t want to spend money on a monolithic storage array like a Netapp. The SSD is used as a cache to accelerate the storage performance while still using cheaper, but slower, spinning disks. I won\u0026rsquo;t go into more detail on this, but if you\u0026rsquo;d like to learn more please check out Duncan Eppings site where he has TONS of info on VSAN.\nCombine the two???? One of the questions at VFD3 was how Atlantis USX was positioned to compete with VSAN. The answer we received from the Atlantis Computing crew was that they don\u0026rsquo;t compete with VSAN, they work with it. [Crickets Chirping, Engineers scratching heads.]\nIt\u0026rsquo;s not too difficult to understand how Atlantis USX COULD work with VMware VSAN, the real question is WHY would you use them together? Both products take DAS and accelerate it. You could use Atlantis to cache IO in memory, and then cache it again in VSAN before finally writing to spinning disk, but would anyone really PAY for both products?\nNot only did I hear this at VFD3, but it\u0026rsquo;s also in a techtarget article.\nThis is a legit question that I don\u0026rsquo;t know the answer to. If anyone knows why this should be done, or has a use case on why it would be necessary to run both, I\u0026rsquo;d like to hear it. 
The mention of running them together is also in a techtarget.com article suggesting that these two products are complementary.\nPlease feel free to post comments below, I\u0026rsquo;d love to hear some other opinions about running these together.\n","permalink":"https://theithollow.com/2014/03/13/atlantis-usx-vmware-vsan/","summary":"\u003cp\u003e\u003cimg alt=\"atlantis_logo2012\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/03/atlantis_logo2012.png\"\u003e\u003c/p\u003e\n\u003cp\u003eLast week I attended the \u003ca href=\"http://techfieldday.com/event/vfd3/\"\u003eVirtualization Field Day 3\u003c/a\u003e put on by the amazing staff at \u003ca href=\"http://gestaltit.com/\"\u003eGestaltIT\u003c/a\u003e.  One of the sessions was hosted by the folks at \u003ca href=\"http://www.atlantiscomputing.com/\"\u003eAtlantis Computing\u003c/a\u003e and they were giving us an overview of their \u003ca href=\"http://www.atlantiscomputing.com/products/usx\"\u003eAtlantis USX\u003c/a\u003e product.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtual Field Day 3. This was the only compensation given.\u003c/p\u003e\n\u003c/blockquote\u003e\n\u003ch1 id=\"atlantis-usx-overview\"\u003eAtlantis USX Overview\u003c/h1\u003e\n\u003cp\u003eBefore we get to the crux of the issue, a quick refresher on what Atlantis USX does.  The USX product will utilize existing SAN, NAS and DAS and combine it with a server ram to do caching.  The value proposition is that Atlantis USX can carve out RAM to be used as either a whole datastore (SUPER FAST) or combine it with existing storage and have it act as a cache.  Keeping the cache so close to the processor without having to go across a bus or HBA, which can add additional latency, can be an important addition to a performance strapped storage solution.\u003c/p\u003e","title":"Atlantis USX with VMware VSAN?"},{"content":" One of the companies I was most interested in seeing at the GestaltIT Virtualization Field Day 3, was CloudPhysics. I was already a little familiar with the company because I\u0026rsquo;d written a post on my experience in the lab. While my original post was obviously good, you can\u0026rsquo;t really get a more passionate and knowledgeable explanation of the solution than from the Co-founder and CTO Irfan Ahmad. The presentations can be found online.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtual Field Day 3. This was the only compensation given.\nThe Big Data Solution From my experience in the financial industry, I can tell you that financial decisions are not usually made by a guess. A lot of time and money goes into building solutions that will allow analysts to make very informed predictions about what will happen over time. Every minute data point is tracked, put into data cubes, dissected and forecast so the best resolution can be made.\nThink about how we make IT design decisions now. Sure, we\u0026rsquo;ll do some analysis and load testing to see how to size an environment, but is that really the same thing? Wouldn\u0026rsquo;t it be great to have a way to tell us what would happen if we made certain decisions about how we designed our infrastructure? Is it cheaper to own our own hardware, or migrate it to Amazon Web Services? Should we buy some flash for caching and if so, how much? 
How should I configure my High Availability Cluster?\nCloudPhysics is empowering administrators to use large data sets to determine a course of action.\nData Security In order to make these kinds of decisions, CloudPhysics is using metrics from all of their customers. The larger the data set, the more accurate the forecast will be.\nThis is a great idea, but some organizations may be concerned about giving up their configuration data for fear that they\u0026rsquo;ll lose confidential or intellectual property. Rest assured that CloudPhysics isn\u0026rsquo;t stealing your confidential information.\nCloudPhysics is collecting configuration and performance information for:\nVMs Hosts Clusters Resource Pools Datastores In addition this information is transferred via SSL encrypted sessions and under no circumstances are passwords, IP Addresses or personal data transferred.\nObviously there is an ID that has to be transferred so that you can login to the portal and view your infrastructure, but even this data is housed separately to further ensure privacy. To learn more please check out their website.\nSummary I think you\u0026rsquo;ll be seeing much more from this company down the road, but their vision to treat IT design just like any other analytical process is refreshing.\n","permalink":"https://theithollow.com/2014/03/11/vsphere-design-based-big-data-analytics/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/03/SAM_0366.jpg\"\u003e\u003cimg alt=\"SAMSUNG CSC\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/03/SAM_0366-300x200.jpg\"\u003e\u003c/a\u003e One of the companies I was most interested in seeing at the GestaltIT \u003ca href=\"http://techfieldday.com/event/vfd3/\"\u003eVirtualization Field Day 3\u003c/a\u003e, was \u003ca href=\"http://Cloudphysics.com\"\u003eCloudPhysics\u003c/a\u003e.  I was already a little familiar with the company because I\u0026rsquo;d written a \u003ca href=\"/2013/12/cloud-physics/\"\u003epost\u003c/a\u003e on my experience in the lab.  While my original post was obviously good, you can\u0026rsquo;t really get a more passionate and knowledgeable explanation of the solution than from the Co-founder and CTO \u003ca href=\"https://twitter.com/virtualirfan\"\u003eIrfan Ahmad\u003c/a\u003e.  The presentations can be found \u003ca href=\"http://techfieldday.com/appearance/cloudphysics-presents-at-virtualization-field-day-3/\"\u003eonline\u003c/a\u003e.\u003c/p\u003e","title":"CloudPhysics vSphere Design Based on Big Data Analytics"},{"content":"When there is a big event that I\u0026rsquo;m affiliated with, I like to do some quick analysis on the twitter statistics, just to put things into some perspective. For this query, we\u0026rsquo;ve looked at all tweets with the hashtag #VFD3 from March 5th - 7th specifically. This should take into account most of the Virtualization Field Day 3 Twitter Statistics.\nEnjoy. Oh and some of you tweet WAAAAAY to much. :)\n","permalink":"https://theithollow.com/2014/03/10/virtualization-field-day-3-twitter-statistics/","summary":"\u003cp\u003eWhen there is a big event that I\u0026rsquo;m affiliated with, I like to do some quick analysis on the twitter statistics, just to put things into some perspective.  For this query, we\u0026rsquo;ve looked at all tweets with the hashtag #VFD3 from March 5th - 7th specifically.  This should take into account most of the Virtualization Field Day 3 Twitter Statistics.\u003c/p\u003e\n\u003cp\u003eEnjoy.  
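If you'd like to run this kind of quick count yourself, a few lines of Python over an exported list of tweets is enough. The CSV name and column layout below are assumptions about whatever export you have on hand; the #VFD3 hashtag and the March 5th through 7th window come from this post.

```python
import csv
from collections import Counter
from datetime import date

# Assumed export format: columns username, created_at (YYYY-MM-DD), text.
EXPORT_FILE = "vfd3_tweets.csv"
WINDOW = (date(2014, 3, 5), date(2014, 3, 7))

per_user = Counter()
with open(EXPORT_FILE, newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        when = date.fromisoformat(row["created_at"])
        if WINDOW[0] <= when <= WINDOW[1] and "#vfd3" in row["text"].lower():
            per_user[row["username"]] += 1

print("Top #VFD3 tweeters:")
for user, count in per_user.most_common(10):
    print(f"  @{user}: {count}")
```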
Oh and some of you tweet WAAAAAY to much. :)\u003c/p\u003e","title":"Virtualization Field Day 3 Twitter Statistics"},{"content":"\nVMTurbo was kind enough to come to the GestaltIT Virtualization Field Day 3, and present to a group of technical bloggers about their product \u0026ldquo;Operations Manager\u0026rdquo;. I was familiar (or thought that I was) with this product so I expected to see a presentation about some software that would give you alarms when virtual workloads started to misbehave. I found out that my perception about this product was misguided so I wanted to clear it up for anyone else who was under the same impression as I was.\nIf you would like to check out the recordings for VMTurbo and others at VFD3 you can find them online.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtual Field Day 3. This was the only compensation given.\nVMTurbo Goals VMTurbo President Shmuel Kliger kicked off his presentation by explaining that VMTurbo\u0026rsquo;s Operations Manager\u0026rsquo;s main goal is to keep your virtual infrastructure environment in the \u0026ldquo;Goldielocks\u0026rdquo; zone by ensuring that you\u0026rsquo;ve provisioned enough resources to keep the applications working well, but no over provisioned because that would be wasting money.\nShmuel goes on to explain the law of diminishing returns when it comes to resources. When a resource gets constrained, performance suffers, A small amount of additional strain for resources makes the performance behave much worse. So it\u0026rsquo;s not hard to explain to Systems Administrators not to get to the point where resources are on the verge of being constrained. But at the same time, CFO\u0026rsquo;s don\u0026rsquo;t want over provisioned resources either.\nVMTurbo Methodology This was the super interesting thing that I didn\u0026rsquo;t understand fully. VMTurbo Operations Manager looks at all of the resources in your clusters, including compute, storage, network, memory, etc and makes recommendations to you similar to how the stock market works. Let me explain using an example.\nTwo virtual machines are both fighting for storage resources. As Systems Administrators we know that resource contention is a problem. VMTurbo looks at this not necessarily as a problem, but as a high price transaction. Just like in a Market Economy, two people fighting over the same thing will drive up the price of the item. VMTurbo Operations Manager would make a recommendation to move one of the workloads to a different datastore in order to lower the cost for both virtual machines. Conversely, if a high priced resource like solid state disks aren\u0026rsquo;t being utilized, then this resource might be considered a bargain and Operations Manager would recommend moving a virtual machine to utilize the SSD in order to take advantage of the bargain. Make sense?\nResources If you\u0026rsquo;re interested in taking a peek at this for yourself, there is a 30 day eval version to download. 
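To make the analogy a bit more concrete, here is a toy version of the idea in Python: price each datastore by how contended it is and place the next workload wherever capacity is cheapest. This is only a sketch of the market metaphor as I understood it from the presentation; the utilization numbers are made up and the pricing function is not VMTurbo's actual model.

```python
# Toy illustration of the market analogy: contended resources get expensive,
# so the next workload goes wherever capacity is cheapest. Numbers are made up.
datastores = {
    "gold-ssd":    {"used_pct": 0.88},   # heavily contended -> high price
    "silver-sas":  {"used_pct": 0.55},
    "bronze-sata": {"used_pct": 0.20},   # mostly idle -> a bargain
}

def price(used_pct: float) -> float:
    """Price rises sharply as utilization approaches 100% (diminishing returns)."""
    headroom = max(1e-6, 1.0 - used_pct)
    return 1.0 / headroom

quotes = {name: price(ds["used_pct"]) for name, ds in datastores.items()}
best = min(quotes, key=quotes.get)

for name, cost in sorted(quotes.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} price {cost:6.2f}")
print(f"Recommendation: place the new VM on '{best}'")
```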
VMTurbo -Operations Manager Eric Wright - Turbo Charging your IT Operations Aaron Delp - Goldilocks \u0026amp; Supply Chains with VMTurbo\n","permalink":"https://theithollow.com/2014/03/08/vmturbo-market-economy/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/03/VMTurboLogo.jpg\"\u003e\u003cimg alt=\"VMTurboLogo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/03/VMTurboLogo.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eVMTurbo was kind enough to come to the GestaltIT \u003ca href=\"http://techfieldday.com/event/vfd3/\"\u003eVirtualization Field Day 3\u003c/a\u003e, and present to a group of technical bloggers about their product \u0026ldquo;Operations Manager\u0026rdquo;.  I was familiar (or thought that I was) with this product so I expected to see a presentation about some software that would give you alarms when virtual workloads started to misbehave.  I found out that my perception about this product was misguided so I wanted to clear it up for anyone else who was under the same impression as I was.\u003c/p\u003e","title":"VMTurbo as a Market Economy"},{"content":"\nI was fortunate enough to have spent some time at the Coho Data headquarters this week for the announcement that their new product, DataStream 1000, is now generally available.\nThe announcement was made at the GestaltIT Virtualization Field Day 3, which was streamed live and the recordings can be found online.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtual Field Day 3. This was the only compensation given.\nIf you\u0026rsquo;re not familiar with them already, Coho Data has developed a very flexible scale-out architecture for a storage platform. If you\u0026rsquo;re running out of storage or performance, you add a new node and move on. Each one of these Datastream chassis has two controllers called \u0026ldquo;microarrays\u0026rdquo; that manage the data.\nEach microarray runs a hypervisor that virtualizes access to:\ntwo Intel 910 Series 800GB PCIe Flash Cards six 3TB SATA disks two 10Gb NICs two Intel Xeon E5-2620 Processors As with any new storage platform the performance is always something people want to hear about. I\u0026rsquo;ll show you the advertised performance metrics with the caveat that I\u0026rsquo;ve not performed any tests on this myself. What I do like about this graphic is that it shows what the scale out looks like.\nMy focus is to look at how the NFS microarrays scale out with a single IP Address. The thing that REALLY caught my attention during Andrew Warfield\u0026rsquo;s presentation was that each of the 10Gb Nics have the same IP Address. What!???\nIf we had two datastream 1000\u0026rsquo;s there would be 8 10Gb NICS that all have the same IP address, as shown below in my crude diagram.\nIt\u0026rsquo; took a bit for me to understand exactly how this solution was made to work, but the great thing about Tech Field Day is that you are around a variety of great minds in different disciplines and Tom Hollingsworth was able to fill in some blanks for me. Coho is using an Arista 7050 switch which is an OpenFlow switch. The magic here is that this switch is a Software Defined Switch in the sense that the data plane is taking instructions from a control plane that can manage the layer 2 traffic.\nLet\u0026rsquo;s walk through a quick example.\nThe host in the diagram below will submit an NFS request over the network to the storage array at the 1.1.1.1 IP address. 
The switch knows that one of the ports with a 1.1.1.1 IP address needs to get the traffic, but NOT all of them. The Control Plane of the switch can decide which port is least heavily utilized and push an entry into the TCAM of the switch where then the data plane will forward the layer 2 frame out the desired port. This is how Coho can have a single IP Address across all of their devices so adding a new chassis to the configuration is no big deal.\nCoho Data has a really unique way of handling storage and seems as though they can do it very quickly by utilizing PCIe Flash as well as low cost disks. It\u0026rsquo;s also very helpful to be able to right size your environment and be able to scale to the size of your requirements.\nPlease tune in and keep an eye on this company. It\u0026rsquo;s a really interesting design and a bit different solution than you\u0026rsquo;ve seen from other vendors.\nIf you want to read more, please check out the following related posts, or contact Coho Data. I\u0026rsquo;m sure they\u0026rsquo;d love to talk to you.\nTech Field Day - Coho Data Presentation at Virtualization Field Day 3\nChris Wahl - Coho Data Unveils Hybrid Flash Storage Combined With Software-Defined Networking\nEric Wright - Tech Field Day VFD3 – Coho Data – Coho-ly Moly this is a cool product!\nJeff Wilson - Kicking it with Coho\n","permalink":"https://theithollow.com/2014/03/07/initial-musing-coho-data-scale-networking/","summary":"\u003cp\u003e\u003cimg alt=\"COHOLogo2\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/03/COHOLogo2.png\"\u003e\u003c/p\u003e\n\u003cp\u003eI was fortunate enough to have spent some time at the Coho Data headquarters this week for the announcement that their new product, DataStream 1000, is now generally available.\u003c/p\u003e\n\u003cp\u003eThe announcement was made at the GestaltIT \u003ca href=\"http://techfieldday.com/event/vfd3/\"\u003eVirtualization Field Day 3\u003c/a\u003e, which was streamed live and the recordings can be found \u003ca href=\"http://www.youtube.com/playlist?list=PLinuRwpnsHaeHlBfPhawM3jl9oZH3R2sq\u0026amp;feature=view_all\"\u003eonline\u003c/a\u003e.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtual Field Day 3. This was the only compensation given.\u003c/p\u003e","title":"Initial Musing about Coho Data Scale Out Networking"},{"content":"\nIf you are coming up on a storage refresh cycle soon, Pure Storage is worth taking a look at as your new storage array. I was fortunate enough to see them present their solution at Virtualization Field Day 3 this year and got a good look at their storage.\nAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtual Field Day 3. This was the only compensation given.\nBefore seeing \u0026ldquo;Pure\u0026rdquo; at VFD3, I was under the assumption that it was a really fast array, that was probably out of the price range of most customers in the SMB space that I\u0026rsquo;ve dealt with. After seeing them present at VFD3, I realized that my assumptions were really off.\nI won\u0026rsquo;t get into the speed and performance of the array because I haven\u0026rsquo;t been able to test it. Brian Suhr makes a good point on his blog about not regurgitating marketing material on things unless you\u0026rsquo;ve been able to test them in the lab yourself and I\u0026rsquo;d like to try to adhere to this logic as well. 
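A side note on the Coho flow-steering walkthrough above before moving on: the controller's decision boils down to "pin each new flow to the least-loaded port that owns 1.1.1.1." The toy sketch below mimics that decision in plain Python; the port names and load figures are invented, and in reality the rule push happens on the OpenFlow switch, not in a script.

```python
# Toy model of the flow-steering idea described above: several switch ports all
# front the same 1.1.1.1 NFS target, and each new flow is pinned to the least
# utilized port. Port names and load numbers are invented for illustration.
ports = {"eth1": 0.72, "eth2": 0.31, "eth3": 0.55, "eth4": 0.18}   # utilization
flow_table = {}   # (client_ip, client_port) -> switch port, i.e. the "TCAM entry"

def steer(client_ip: str, client_port: int) -> str:
    """Pin a new NFS flow to the least-loaded port owning the shared IP."""
    key = (client_ip, client_port)
    if key not in flow_table:                     # existing flows keep their port
        flow_table[key] = min(ports, key=ports.get)
        ports[flow_table[key]] += 0.05            # crude: a new flow adds some load
    return flow_table[key]

for host, port in [("10.0.0.11", 871), ("10.0.0.12", 902), ("10.0.0.11", 871)]:
    print(f"{host}:{port} -> 1.1.1.1 via {steer(host, port)}")
```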
But knowing that Pure is an all flash array that was built from the ground up with flash in mind, it has to be pretty fast right?\nThe things that Pure Storage is doing that really impressed me were soft metrics that don\u0026rsquo;t show up on a datasheet.\nForever Flash \u0026ldquo;Pure\u0026rdquo; has an alliterative program called \u0026quot; Forever Flash\u0026quot; that helps to alleviate maintenance costs of your array. Traditionally, an array has a life span around three years, after which you need to renew your maintenance contract (usually at an increased rate to pay for aging technology support) or you replace the array with another expensive piece of equipment.\nPure Storage will actually allow you to either:\nRenew your first year contract pricing on your existing array as long as its growing (you\u0026rsquo;re adding disksshelves) Get a free upgrade to your controllers after three years if your array isn\u0026rsquo;t growing This is a pretty big deal to a smaller sized company where the maintenance contracts can typically be harder to swallow than the price of the actual array.\nMLC Drives Pure uses the lower cost MLC Solid State Drives in their arrays in an effort to try to keep their price point low. Generally using MLC drives vs the SLC drives will cause you to have a higher Mean-Time to Failure where the SSD\u0026rsquo;s will just not last as long. \u0026ldquo;Pure\u0026rdquo; however, claims that they\u0026rsquo;ve only lost 5 drives in the time they\u0026rsquo;ve been operating, across all of their customers. THIS IS AN AMAZING FEET. I\u0026rsquo;ll take them at their word that this is true, but is pretty impressive if accurate.\nPure Storage would attribute this ability to how their controllers write data to the disks after doing inline deduplication and compression before bothering to write to disk.\nSimplicity We were able to see a demo of the array software and it\u0026rsquo;s very simple. Most of the settings that you might have with a traditional storage array are missing because the mindset is, \u0026ldquo;It\u0026rsquo;s Fast, and you don\u0026rsquo;t need to tweak stuff\u0026rdquo;. Another example of this is the block size is only 512 bytes. A byte size of this size removes the problems of having misaligned blocks. If it\u0026rsquo;s a 4K block (pretty standard size) you can have misalignment which will then also hurt deduplication if this is a capability.\nNo silos for your applications. Put all of your workloads on this array. This is an all flash array so their isn\u0026rsquo;t a need to consider where to put your workloads. No tiering of your workloads should be necessary.\nProfessional Services are not required to do the initial setup which will also help lower the cost of ownership. It\u0026rsquo;s not really needed anyway because there are only a few things to configure to setup the array anyway.\nAlso, a very nice \u0026ldquo;feature\u0026rdquo; was that all of the capabilities of the array are available to you without going through an A La Carte licensing scenario. There isn\u0026rsquo;t any \u0026ldquo;oh the array is $ but if you want the software capabilities we\u0026rsquo;ll bill you extra for the ones that you want. If you buy a Pure Array, you\u0026rsquo;ve got the capabilities including encrypting all the data that is on the drives.\nConclusion I was pleasantly surprised at some of the things that Pure Storage was doing to give customers a better experience with their array and it\u0026rsquo;s not just a Flash Array. 
As always, don\u0026rsquo;t take my word for it, do your research.\nScott Lowe wrote a nice article about Pure Storage as well that I invite you to check out, as well as the Virtualization Field Day 3 videos.\nLastly, there is a guarantee! ","permalink":"https://theithollow.com/2014/03/06/consider-pure-storage-next-array/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/03/PURE.png\"\u003e\u003cimg alt=\"PURE\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/03/PURE.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eIf you are coming up on a storage refresh cycle soon, \u003ca href=\"http://purestorage.com\"\u003ePure Storage\u003c/a\u003e is worth taking a look at as your new storage array.  I was fortunate enough to see them present their solution at \u003ca href=\"http://techfieldday.com/event/vfd3/\"\u003eVirtualization Field Day 3\u003c/a\u003e this year and got a good look at their storage.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eAll travel expenses and incidentals were paid for by Gestalt IT to attend Virtual Field Day 3. This was the only compensation given.\u003c/p\u003e","title":"Should You Consider Pure Storage as your Next Array?"},{"content":"\nWatch the Virtualization FIeld Day 3 Sessions live!\nPlease feel free to check out the live stream and live tweets from the event.\nwww.techfieldday.com Tweets about \u0026ldquo;#VFD3\u0026rdquo;\n","permalink":"https://theithollow.com/2014/03/05/virtualization-field-day-3-live-stream/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/01/VFD-Logo-400x398.png\"\u003e\u003cimg alt=\"VFD-Logo-400x398\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/01/VFD-Logo-400x398.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eWatch the Virtualization FIeld Day 3 Sessions live!\u003c/p\u003e\n\u003cp\u003ePlease feel free to check out the live stream and live tweets from the event.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://techfieldday.com/event/vfd3/\"\u003ewww.techfieldday.com\u003c/a\u003e \u003ca href=\"https://twitter.com/search?q=%23VFD3\"\u003eTweets about \u0026ldquo;#VFD3\u0026rdquo;\u003c/a\u003e\u003c/p\u003e","title":"Virtualization Field Day 3 Live Stream"},{"content":"High Availability is a great reason to virtualize your servers. It can help reduce downtime by automatically rebooting virtual machines in the case of a host failure. But, a relatively minor host issue should not cause the reboot of all of your virtual machines. This is where vCenter HA datastore heartbeats are useful. Let\u0026rsquo;s first look at a basic example of HA. Below is our normal environment with no failures. We have a few VMs on each host and the hosts are connected to a pair of datastores and a network switch. Now assume we have a host failure, we now need to have HA kick in and reboot the virtual machines on the failed host, over on the still working hosts. HA is working great and is a great feature, but lets take a look at what happens if the Management network were to fail. Without datastore heartbeats involved, the two hosts wouldn\u0026rsquo;t be able to communicate with each other over the network so the two of them would assume that the other was failed. But by looking at the example below we can see that even though the Management network is down, the virtual machines and their network is working just fine. 
This means that no outages are being noticed by end users so we DON\u0026rsquo;T want HA to kick in because the virtual machines will restart. Enter Datastore Heartbeats In the event that the management network is not available and the hosts are isolated from one another, the hosts can still look to the shared datastores. Since the storage is available they can use the VMFS Locking capability to see if the other host is still actively using storage. If you look at your datastores, you may see files named \u0026ldquo;host-XX-hb\u0026rdquo;. These are heartbeat files and one per host should be visible. If the ESXi hosts are isolated, they can see if a lock is still placed on these files to determine if HA needs to kick in. Configure Datastore Heartbeats If you would like more insight on what is happening with your datastore heartbeats, look at the HA Settings of your cluster. By default, vCenter will automatically select two datastores to use, but you can select them yourself if you\u0026rsquo;re so inclined to do so. ","permalink":"https://theithollow.com/2014/03/03/vcenter-ha-datastore-heartbeats/","summary":"\u003cp\u003eHigh Availability is a great reason to virtualize your servers.  It can help reduce downtime by automatically rebooting virtual machines in the case of a host failure.  But, a relatively minor host issue should not cause the reboot of all of your virtual machines.  This is where vCenter HA datastore heartbeats are useful. Let\u0026rsquo;s first look at a basic example of HA.  Below is our normal environment with no failures.  We have a few VMs on each host and the hosts are connected to a pair of datastores and a network switch. \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/02/Heartbeats1.png\"\u003e\u003cimg alt=\"Heartbeats1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/02/Heartbeats1.png\"\u003e\u003c/a\u003e     Now assume we have a host failure, we now need to have HA kick in and reboot the virtual machines on the failed host, over on the still working hosts. \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/02/Heartbeats2.png\"\u003e\u003cimg alt=\"Heartbeats2\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/02/Heartbeats2.png\"\u003e\u003c/a\u003e     HA is working great and is a great feature, but lets take a look at what happens if the Management network were to fail.  Without datastore heartbeats involved, the two hosts wouldn\u0026rsquo;t be able to communicate with each other over the network so the two of them would assume that the other was failed.  But by looking at the example below we can see that even though the Management network is down, the virtual machines and their network is working just fine.  This means that no outages are being noticed by end users so we DON\u0026rsquo;T want HA to kick in because the virtual machines will restart. \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/02/Heartbeats3.png\"\u003e\u003cimg alt=\"Heartbeats3\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/02/Heartbeats3.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"vCenter HA Datastore Heartbeats"},{"content":" Every year Eric Seibert puts together a list of the top virtualization blogs on his site vsphere-land.com. This year there are about 300 different sites to vote for including theITHollow.com! Last year, this site was reaching a year old and was voted number 49 on the list of top 50 blogs. 
This was a great feeling, knowing the amazing content that is out on the web, and I appreciate everyone who voted.\nIf theITHollow.com has helped you in any way this year, I urge you to login and vote it as one of your favorites. There are very few rewards for the hard work, time and money it takes to keep a blog running with a consistent flow of new material. Recognition from the readers and feedback about how the content is helpful are the small rewards that make the process worthwhile. If you this site isn\u0026rsquo;t worthwhile, you don\u0026rsquo;t care for the writing style or just don\u0026rsquo;t think it compares to the rest of the field, then no hard feelings. Hopefully you\u0026rsquo;ll show your support for some of the other great bloggers that have devoted their time and energy to help out the community.\nVeeam has sponsored the voting, so some random voters will actually receive some gifts as a small token of their appreciation.\nWhat are you waiting for!?\nhttp://www.surveygizmo.com/s3/1553027/Top-VMware-virtualization-blogs-2014 ","permalink":"https://theithollow.com/2014/02/27/2014-virtualization-blog-voting-now-open/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/01/vote.png\"\u003e\u003cimg alt=\"vote\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/01/vote.png\"\u003e\u003c/a\u003e Every year Eric Seibert puts together a list of the top virtualization blogs on his site \u003ca href=\"http://vsphere-land.com\"\u003evsphere-land.com\u003c/a\u003e.  This year there are about 300 different sites to vote for including theITHollow.com!   Last year, this site was reaching a year old and was voted number 49 on the list of top 50 blogs.  This was a great feeling, knowing the amazing content that is out on the web, and I appreciate everyone who voted.\u003c/p\u003e","title":"2014 Virtualization Blog Voting Now Open"},{"content":"CONTENTS\n","permalink":"https://theithollow.com/locations/","summary":"\u003cp\u003eCONTENTS\u003c/p\u003e","title":"Locations"},{"content":"CONTENTS\n","permalink":"https://theithollow.com/my-bookings/","summary":"\u003cp\u003eCONTENTS\u003c/p\u003e","title":"My Bookings"},{"content":"CONTENTS\n","permalink":"https://theithollow.com/tags/","summary":"\u003cp\u003eCONTENTS\u003c/p\u003e","title":"Tags"},{"content":"\nHP Recently announced OneView which looks to be poised to manage their converged infrastructure, and datacenter products. As the name might suggest it can be used to manage all your HP products from one console. One of the things that grabbed me was that it is deployed to a vSphere environment with an OVA file which makes it super easy to deploy. In the past some of the HP Management tools like Insight Control required a whole slew of prerequisites before the product could actually be installed. Once that was installed, there was a tedious process of configuring it with all of your network devices and if you didn\u0026rsquo;t configure them in the right order, they wouldn\u0026rsquo;t relate to each other correctly.\nIf you\u0026rsquo;d like to read more about this please check out Luigi Tiano\u0026rsquo;s post as well as Chris Wahl\u0026rsquo;s post about it.\nI installed HP Oneview in a semi-lab environment to test it out. It\u0026rsquo;s true, my home lab does not include a C7000 blade chassis (well, not yet anyway). The install was a breeze and the interface seemed very responsive and nicely laid out.\nUnfortunately this is about where my tests ended. 
My first configuration item was going to be my HP Onboard Administrator on the C7000 Blade Chassis. In the past with ICE, the Onboard Administrator (OA) needed to be discovered first and then the blades and interconnects afterwards. But when I discovered the Onboard Administrator, I got a troubling message.\nIn the screenshot below, you\u0026rsquo;ll notice that is says the Enclosure is already claimed by another management system. The IP Address it mentions is the HP Virtual Connect Interconnect Modules! Ok, that sort of makes sense, the HP Virtual Connect modules manage the networking of the blade chassis. It gives me the option to force the install, so I did.\nI click the nicely placed \u0026ldquo;Learn more\u0026hellip;\u0026rdquo; link which takes you to the help files.\nOn the next screen, my concerns are realized. HP OneView wants to be the management point for my blade enclosure and Virtual Connect Networking. This is where I canceled out of the configuration. As you can see, you are asked to configure uplinks to manage the Virtual Connect domain and Enclosure.\nIf you\u0026rsquo;ve decided to migrate your own Virtual Connect Manager over to OneView the following document might help.\nhttp://www8.hp.com/h20195/v2/GetPDF.aspx%2F4AA5-0351ENW.pdf\nFinal Thoughts So from what I can tell, HP OneView could be a really great tool to manage a the datacenter, but it concerned me a bit that management is being taken away from my Virtual Connect Manager. In my situation (and many of my customers) they would have the OneView Appliance deployed in their vSphere environment that is run on the HP Blades that it is managing. If there is a serious issue that brings down the enclosure, you might lose access to the appliance that would allow you to troubleshoot the enclosure. If there is a machine outside of the blade enclosure that is running OneView, this changes things significantly and I\u0026rsquo;d be on board with deploying OneView. This product looks promising and I think will have a place in the datacenter but for right now I\u0026rsquo;m going to stick with my Virtual Connect Manager and Onboard Administrator to configure my blade chassis since it\u0026rsquo;s the only server infrastructure in my environment at the moment.\nREQUESTS FOR COMMENT\nObviously this was a truncated test in my semi-test lab and would love to get some feedback from anyone with OneView experience or an HP Engineer to discuss this further.\n","permalink":"https://theithollow.com/2014/02/24/hp-oneview-initial-thoughts/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/02/ONEVIEW.jpg\"\u003e\u003cimg alt=\"ONEVIEW\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/02/ONEVIEW-300x204.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eHP Recently announced OneView which looks to be poised to manage their converged infrastructure, and datacenter products.  As the name might suggest it can be used to manage all your HP products from one console.  One of the things that grabbed me was that it is deployed to a vSphere environment with an OVA file which makes it super easy to deploy.  In the past some of the HP Management tools like Insight Control required a whole slew of prerequisites before the product could actually be installed.  
Once that was installed, there was a tedious process of configuring it with all of your network devices and if you didn\u0026rsquo;t configure them in the right order, they wouldn\u0026rsquo;t relate to each other correctly.\u003c/p\u003e","title":"HP OneView Initial Thoughts"},{"content":" I\u0026rsquo;m am very excited and honored to be voted in as a delegate for the Virtualization Field Day 3 event in Silicon Valley on March 5th through the 7th.\nThis is an event that gets a group of independent delegates together and reviews, provides feedback and comments on different types of technology that are entering or shaping the virtualization segment of the Information Technology industry. Great companies with new products can come and give demonstrations on their solutions. If they have merit these delegates will likely tout how impressive they are through their social media channels, but if they have deficiencies are likely to point them out.\nFrom the GestaltIT Site:\nVirtualization Field Day (VFD) focuses on server and desktop virtualization and management technologies. VFD brings together the best independent thought leaders in virtualization to discuss pressing issues and technology advancements with key companies in the space.\nThis year the presenter include Atlantis Computing, Cloud Physics, COHO Data, Spirent and VMTurbo. Additional Companies may be added before March 5th.\nThe presentations are streamed live from the TechFieldDay Site. If you can I highly recommend tuning in and learning something new. Please try not to snicker when I ask a dumb question in a room full of very bright people. :)\nIn addition to watching the live stream you can follow any of the delegates including myself on twitter. And be sure to follow GestaltIT\u0026rsquo;s mastermind @sfoskett for more great sessions in addition to virtualization.\nDelegates: Alastair Cooke - BLOG http://www.demitasse.co.nz/ TWITTER: @DmitasseNZ Andrew Mauro - BLOG vinfrastructure.it/en/ TWITTER: @Andrew_Mauro David Davis - BLOG vmwarevideos.com TWITTER: @DavidMDavis Eric Wright - BLOG discoposse.com TWITTER: @DiscoPosse James Green - BLOG virtadmin.com TWITTER: @JDGreen Marko Broeken - BLOG vclouds.nl TWITTER: @MBroeken Paul Meehan - BLOG paulpmeehan.com TWITTER: @PaulPMeehan Rick Schlander - BLOG vmbulletin.com TWITTER: @ VMRick Scott Lowe - BLOG virtualizationadmin.com TWITTER: @otherscottlowe\n","permalink":"https://theithollow.com/2014/02/17/virtualization-field-day-3/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/01/VFD-Logo-400x398.png\"\u003e\u003cimg alt=\"VFD-Logo-400x398\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/01/VFD-Logo-400x398.png\"\u003e\u003c/a\u003e  I\u0026rsquo;m am very excited and honored to be voted in as a delegate for the \u003ca href=\"http://techfieldday.com/event/vfd3/\"\u003eVirtualization Field Day 3\u003c/a\u003e event in Silicon Valley on March 5th through the 7th.\u003c/p\u003e\n\u003cp\u003eThis is an event that gets a group of independent delegates together and reviews, provides feedback and comments on different types of technology that are entering or shaping the virtualization segment of the Information Technology industry.  Great companies with new products can come and give demonstrations on their solutions.  
If they have merit these delegates will likely tout how impressive they are through their social media channels, but if they have deficiencies are likely to point them out.\u003c/p\u003e","title":"Virtualization Field Day 3"},{"content":"If you\u0026rsquo;ve got a home lab to play around in, it\u0026rsquo;s great to have remote access so that you can try things out from the road. This might mean purchasing an expensive firewall or VPN appliance but openvpn has a nice 2 user appliance that can be downloaded as an OVF file, right into your vSphere environment.\nInstallation I mentioned that this is an OVF file, so you know the installation is going to be a snap. Download the bits from OpenVPN.net and deploy into your vSphere cluster. I\u0026rsquo;m not going to go through the entire OVF deployment, I think you\u0026rsquo;ll find it very simple even if you haven\u0026rsquo;t done it before.\nInitial Setup Once the OVF has been deployed, some initial setup tasks need to be hammered out before the fun stuff happens. Open your vSphere console so you can see what\u0026rsquo;s going on with the appliance.\nYou\u0026rsquo;ll be asked to login to the appliance with the default credentials:\nUser: root Password: openvpnas\nAccept the License Agreement\nThe rest of the information comes at you in a wizard. If you need to redo this wizard later, you can run ovpn-init.\nThis is likely the only Access Server node if you\u0026rsquo;re reading this post. :) Leave this setting the default. Pick the interface that you plan to use for the VPN. You probably want to change this to 1 - All Interfaces to keep the installation simple.\nA few more questions relating to port numbers and how you want traffic to flow through your appliance when it\u0026rsquo;s all setup. I\u0026rsquo;ve left the defaults for the ports and the routing for now. The appliance should pick up the local subnet. Do you want to allow access to this subnet by default? I\u0026rsquo;ve chosen the defaults.\nAlso, you can also make changes to the default user login, but I\u0026rsquo;ve left this as the default as well.\nSince we\u0026rsquo;re using a version of OpenVPN that only allows two users, we can leave the license blank.\nOnce you\u0026rsquo;ve answered all the questions, you\u0026rsquo;ll see some information that might be useful to log into the Web GUI to finish your configurations.\nBefore I log out, I change the default password for the default login. (It seems like a good step to take since this appliance is allowing access to your lab from outside your LAN.)\npasswd openvpn is the command to allow you to set your own password.\nVPN Configurations You probably saw the URL that can be used to administer your VPN appliance in the summary screen above. Go to https://[ipaddress]:943 to access the administration console.\nLogin as openvpn and the password you changed earlier.\nNow that you\u0026rsquo;ve logged into the admin console, you can begin making modifications to the setup, much more easily than using the UNIX prompts from the command line.\nI don\u0026rsquo;t want to go over all the settings, but will point out two changes that I made.\nThe first change is to modify the IP Address or DNS name of the appliance. The appliance wants the public facing name or IP address that clients will be using to connect to it on.\nNOTE: after making changes, be sure to click save changes and then also Update running configurations. 
This appliance can save the configurations but not apply them until you\u0026rsquo;re ready.\nThe second change I made was to use LDAP Authentication. This is so I can tie my logins directly to my Active Directory. This is completely optional and there is a way to also just use local users, but I find this way to be the easiest.\nBefore you can connect remotely, be sure to setup your port forwarding, access lists andor network address translations to point to the new Open VPN appliance.\nThe only ports necessary to connect remotely are:\nTCP 443 UDP 1194\nAlso, if you haven\u0026rsquo;t done it already be sure to remember to point your public DNS name at the public IP address of the openVPN appliance.\nConnecting Remotely\nOpen a web browser and go to the public name or IP of your Open VPN appliance so that you can connect to your home lab.\nYou\u0026rsquo;ll need to login with a username and a password. My appliance is setup with LDAP so my Active Directory Credentials will work. Make sure that the \u0026ldquo;CONNECT\u0026rdquo; drop down is selected and click Go.\nYou\u0026rsquo;ll be prompted about an untrusted SSL Certificate unless you\u0026rsquo;ve added one (not shown in this post)\nA webpage will show you that you are now connected. Once this happens, you should be able to start connecting to your services that are hosted on the private network.\nFrom the initial connections screen, you can change the \u0026ldquo;CONNECT\u0026rdquo; dropdown box to instead say \u0026ldquo;LOGIN\u0026rdquo;. If you do this, you\u0026rsquo;ll have the option to download a VPN Client so that you don\u0026rsquo;t have to open the broweser and login each time you want to VPN into your Home Lab.\nThis is what the VPN client looks like.\nSummary This might not be the most robust solution ever designed, but it is a very inexpensive way to connect to your home equipment and do some work while you\u0026rsquo;re on the road. Check it out if you are in the need for a remote access solution.\n","permalink":"https://theithollow.com/2014/02/10/open-vpn-home-labs/","summary":"\u003cp\u003eIf you\u0026rsquo;ve got a home lab to play around in, it\u0026rsquo;s great to have remote access so that you can try things out from the road.  This might mean purchasing an expensive firewall or VPN appliance but \u003ca href=\"http://openvpn.net/\"\u003eopenvpn\u003c/a\u003e has a nice 2 user appliance that can be downloaded as an OVF file, right into your vSphere environment.\u003c/p\u003e\n\u003ch1 id=\"installation\"\u003eInstallation\u003c/h1\u003e\n\u003cp\u003eI mentioned that this is an OVF file, so you know the installation is going to be a snap.  Download the \u003ca href=\"http://swupdate.openvpn.org/as/OpenVPN-AS-Appliance-2.0.1.ova\"\u003ebits from OpenVPN.net\u003c/a\u003e and deploy into your vSphere cluster.  I\u0026rsquo;m not going to go through the entire OVF deployment, I think you\u0026rsquo;ll find it very simple even if you haven\u0026rsquo;t done it before.\u003c/p\u003e","title":"OPEN VPN for Home Labs"},{"content":"Microsoft IPAM (IP Address Management) is a feature that was released in Windows Server 2012 to help administrators manage decentralized DHCP and DNS Servers. 
Previously administrators may have needed to use spreadsheets to keep track of DHCP Scopes, IP Addresses DNS Names etc but with IPAM installed, a single server can refresh all of this data and put it in a single, always up to date place.\nDeployment Guidelines\nThere are a few things you should know before installing IPAM.\nDO: Install on a Server that is joined to the domain.\nDO: Install on a Server that has network connectivity to your DNS, DHCP and Domain Controllers. The IPAM Server needs to be able to directly communicate with the services that they provide.\nDON\u0026rsquo;T: Install on a Domain Controller. This is not supported.\nDON\u0026rsquo;T: Install on a DHCP Server. This will prevent IPAM from discovering other DHCP Servers and is not supported.\nInstall IPAM Role The IPAM Server role is added like all the server roles in Server 2012. From the Server Manager go through the Add Roles and Features wizard. Make sure to select the IPAM Server under features.\nSetup IPAM Once IPAM has been installed, use Server manager and go through the steps which are neatly ordered 1-6. Connect Server Manager to the IPAM server you just installed.\nStep 2 is to provision IPAM. A wizard will pop up and give you some instructions. On the second page of the wizard, you need to make a decision about whether you will manually configure all of your security groups, firewall rules, etc. on each of your DHCP Servers, DNS Servers and Domain Controllers. I chose to forgo this method and choose the default option of using Group Policy. Notice that you\u0026rsquo;ll be required to put in a GPO Prefix.\nReview the Summary and take notice to the fact that three new GPOs will be configured, each starting with your GPO Prefix (in my case hollow_)\nNow we move on to Step 3. Which is doing the server discovery. What Servers do you plan on managing with this IPAM Server? I\u0026rsquo;ve chosen all of the server types.\nStep 4 will attempt to discover the server types that you\u0026rsquo;ve selected. In the Server Inventory will show your servers listed, but will have an alarm about the server manageability status. Before you can set the manageability status the GPOs have to be deployed. The GPOs that you created in the wizard earlier haven\u0026rsquo;t been deployed yet and need to be invoked from PowerShell.\nNote: I\u0026rsquo;m not sure exactly why this is a separate step, and furthermore not sure why this couldn\u0026rsquo;t have been done from the same Server Manager window you\u0026rsquo;ve been running through all along. RANT OVER.\nIn order to deploy the GPOs, the \u0026ldquo;Invoke-IpamGpoProvisioning\u0026rdquo; cmdlet needs to be run from PowerShell.\nRun from a PowerShell prompt.\nInvoke-IpamGpoProvisioning -Domain DOMAINNAME -GpoPrefixName GPOPREFIX -IpamServerFQDN IPAMSERVERNAME.DOMAINNAME When finished you should see your GPOs listed in Group Policy Management.\nGo back to Server Manager and look at your inventory again. Click Edit Server.\nChose the server types you plan to manage and choose \u0026ldquo;Managed\u0026rdquo; as the manageability status.\nOnce this is complete you may see a Red X indicating an error. This is likely due to the GPO not being applied yet. 
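A quick way to confirm whether the managed server has actually picked up the IPAM GPOs is to list the policies applied to the computer and look for your GPO prefix (hollow_ in my case). From a command prompt on the managed server:\ngpresult /r /scope computer | findstr /i hollow_\nIf nothing comes back, the GPOs have not been applied yet.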
If this happens you can login to the server in question and run the \u0026ldquo;GPupdate /Force\u0026rdquo; command to get the server to re-read the GPOs assigned to it.\nWhen finished your Server inventory should look something like this.\nIPAM USAGE When you\u0026rsquo;ve finished your setup, you can use IPAM to do things like manage your IP Addresses, manage DNS Zones and review auditing and logs.\n","permalink":"https://theithollow.com/2014/02/04/microsoft-ipam-ip-address-management/","summary":"\u003cp\u003eMicrosoft IPAM (IP Address Management) is a feature that was released in Windows Server 2012 to help administrators manage decentralized DHCP and DNS Servers.  Previously administrators may have needed to use spreadsheets to keep track of DHCP Scopes, IP Addresses DNS Names etc but with IPAM installed, a single server can refresh all of this data and put it in a single, always up to date place.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eDeployment Guidelines\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eThere are a few things you should know before installing IPAM.\u003c/p\u003e","title":"Microsoft IPAM (IP Address Management)"},{"content":" Good news for all of you eagerly awaiting the next iteration of the PernixData FVP software. Version 1.5 is now in Beta and you can request the download for your own testing from the following link http://info.pernixdata.com/Betaprogram.\nDisclosure: At the time of this writing I am a PernixPro which entitles me to early access to software, licenses or other merchandise. The thoughts expressed in this post are my own and have not been vetted by PernixData.\nReview Just in case you aren\u0026rsquo;t familiar with PernixData FVP we\u0026rsquo;ll provide a quick review. PernixData is a software based solution used in a VMware vSphere environment that can reduce latency and increase throughput by leveraging host based SSDs. The software utilizes local Solid State disks to lower latencies to traditional storage arrays. Think about it, the closer the disks are to the processor, the lower the latency should be, or so the theory goes.\nIf you\u0026rsquo;d like to understand this better, please check out my first post on PernixData.\n","permalink":"https://theithollow.com/2014/01/28/pernixdata-fvp-1-5-beta/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2014/01/pernixdata.jpg\"\u003e\u003cimg alt=\"pernixdata\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2014/01/pernixdata-300x166.jpg\"\u003e\u003c/a\u003e Good news for all of you eagerly awaiting the next iteration of the PernixData FVP software.  Version 1.5 is now in Beta and you can request the download for your own testing from the following link  \u003ca href=\"http://info.pernixdata.com/Betaprogram\"\u003ehttp://info.pernixdata.com/Betaprogram\u003c/a\u003e.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eDisclosure:  At the time of this writing I am a PernixPro which entitles me to early access to software, licenses or other merchandise.  The thoughts expressed in this post are my own and have not been vetted by PernixData.\u003c/p\u003e","title":"PernixData FVP 1.5 Beta"},{"content":"General Disclosure This blog is a personal blog written and edited by me. This blog accepts forms of cash advertising, sponsorship, paid insertions or other forms of compensation.\nThe compensation received will never influence the content, topics or posts made in this blog. 
All advertising is in the form of advertisements generated by a third party ad network. Those advertisements will be identified as paid advertisements.\nThe owner(s) of this blog is not compensated to provide opinion on products, services, websites and various other topics. The views and opinions expressed on this blog are purely the blog owners. If we claim or appear to be experts on a certain topic or product or service area, we will only endorse products or services that we believe, based on our expertise, are worthy of such endorsement. Any product claim, statistic, quote or other representation about a product or service should be verified with the manufacturer or provider.\nThis blog does contain content which might present a conflict of interest. This content will always be identified.\nDisclaimer The content of this blog has been developed based on personal experiences and any technical information from this blog should be tested in a lab environment prior to being deployed in production. theITHollow is not responsible for any damages, outages or misconfigurations of any environments and the information here should be considered a helpful guide only. Information on this blog is not to be considered official reference material for any vendor including anyone who employee me. My employer is not responsible for any of the content of this blog and has no input on what is published herein.\nTravel and Expenses From time to time sponsors or other affiliates will pay for travel an expenses including airfare, lodgings, meals, registrations and entertainment during the course of blogging related events.\nAny posts created as a direct result of these events will be noted in the posts created.\nAmazon Services LLC Associates Program theITHollow.com is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon.com\nASmallOrange theITHollow.com participates in the ASmallOrange.com affiliates program. New registrants signed up from my site will generate revenue for theITHollow.com http://asmallorange.com/affiliate/legal/\n","permalink":"https://theithollow.com/about/disclosures/","summary":"\u003ch2 id=\"general-disclosure\"\u003eGeneral Disclosure\u003c/h2\u003e\n\u003cp\u003eThis blog is a personal blog written and edited by me. This blog accepts forms of cash advertising, sponsorship, paid insertions or other forms of compensation.\u003c/p\u003e\n\u003cp\u003eThe compensation received will never influence the content, topics or posts made in this blog. All advertising is in the form of advertisements generated by a third party ad network. Those advertisements will be identified as paid advertisements.\u003c/p\u003e\n\u003cp\u003eThe owner(s) of this blog is not compensated to provide opinion on products, services, websites and various other topics. The views and opinions expressed on this blog are purely the blog owners. If we claim or appear to be experts on a certain topic or product or service area, we will only endorse products or services that we believe, based on our expertise, are worthy of such endorsement. Any product claim, statistic, quote or other representation about a product or service should be verified with the manufacturer or provider.\u003c/p\u003e","title":"Disclosures"},{"content":"Mobility is no longer a challenge to traditional IT environments, it\u0026rsquo;s the standard. 
Users work from home to save office space, need to be connected during sales trips and are consistently not in the corporate office connected to the local area network (LAN). Combine this demand for a mobile workforce with the ever increasing security requirements put forth such as HIPPA and PCI-DSS etc make this a significant hurdle for IT departments. Microsoft Direct Access may be a solution that eases this hardship.\nIn the past, if you wanted to work remotely a Virtual Private Network (VPN) was probably used to connect to the office. The VPN would create an encrypted tunnel to secure and allow access to machines outside the network. This worked well but required the software to be installed, and the user needed to initiate a connection. Microsoft Direct Access allows users to access a corporate network through an encrypted connection and it\u0026rsquo;s always on!\nDirect Access Requirements DIrect Access was released with Server 2008 R2 so you\u0026rsquo;ll need a server running this OS version or later. My examples below run Server 2012.\nYou\u0026rsquo;ll also need to be running one of the following flavors of Windows 7 or later:\nWindows 7 Enterprise Windows 7 Ultimate Windows 8 Enterprise All Clients MUST be members of the domain that they are connecting to in order to work successfully. This can be a pain for administrators who may be setting up remote access for a user that will never come into the office, but it is possible to use the Offline Domain Join along with Direct Access. THIS IS AWESOME!\nAdditional Remote Access Requirements are listed on Technet.\nInstalling Microsoft Direct Access The install of Direct Access was surprisingly simple. Install the Remote Access Module from Server Manager Configure and deploy the Client and Server Group Policies from a wizard Allow port 443 to your Direct Access Server Install and Configure Walkthrough Open the Add Roles and Features through Server Manager. Choose the Remote Access Role and accept the additional features that will be installed.\nThis can also be done via an elevated powershell command:\nInstall-WindowsFeatures RemoteAccess -IncludeManagementTools When the installer is finished, Server Manager will notify you that additional Post-deployment configurations are available. This configuration wizard can be started right from Server manager, or you could open the Remote Access Utility which will be installed on the server.\nChoose your deployment option. Note that the VPN is available so that older clients can still connect to the network. For instance if you have Windows XP machines that need remote access this same server can be utilized for both purposes. In my case I\u0026rsquo;ve only chosen to configure the second option \u0026ldquo;Deploy DirectAccess only.\u0026rdquo;\nThe best part about this for me is that you can install with a single network adapter behind a Network Address Translation (NAT). In the past you might have needed to have two network adapters (outside and inside) to turn the server into a router. In my lab I used a single adapter behind my firewall. Be sure that your routerfirewall is also allowing port 443 (HTTPS) traffic as well. This is the only required port that needs to be open on the incoming direction.\nOne important configuration option is to enter the public DNS name of the Direct Access server. This DNS entry needs to be available on the Internet so be sure to add your \u0026lsquo;A records\u0026rsquo; to the publicly accessible DNS Server. 
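It is worth sanity checking that record before going any further. A minimal check from a PowerShell prompt, using a hypothetical public name of da.example.com (substitute your own) and querying a public resolver so internal DNS does not fool you:\nResolve-DnsName -Name da.example.com -Type A -Server 8.8.8.8\nIf the name does not resolve to your firewall\u0026rsquo;s public IP, remote clients will never find the Direct Access server.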
The setup wizard will also create a self-signed certificate used to encrypt the connections. This can be modified to use a public certificate if you wish. I recommend this if using for anything other than a lab.\nYou are then given the option to modify the settings before they are applied. For this example, click finish.\nYou\u0026rsquo;ll notice some settings being configured and many of these will be configured as Group Policy Objects in Active Directory.\nWhen the configuration is finished, you can view the settings from the GPO Management tool in Active Directory. Two GPOs will be created \u0026ldquo;DirectAccess Client Settings\u0026rdquo; and DirectAccess Server Settings\u0026quot;.\nNOTE: During the configuration only mobile computers will be enabled. You can change this setting in Remote Access or modify the WMI Filter on the GPO itself.\nOpen the Remote Access Management Utility and the operational status should look like the one below. If it doesn\u0026rsquo;t, don\u0026rsquo;t panic. Remember that these settings were applied via Group Policy which may take up to 15 minutes in some cases. You can speed this up by running GPupdate /force from a command line on the server and refreshing this page. Below is an example of the gpupate /force command. This might also be useful on the clients that you\u0026rsquo;re testing with. They will also be configured via Group Policy.\nVerify Functionality Once the GPOs have been successfully deployed, the next step is to verify that the machines can use Direct Access correctly. There are two parts to the connectivity that should be checked. The first is what happens when the client is connected to the local LAN and the other is when the client is outside the network.\nRunning the powershell command:\nget-daconnectionstatus should show either \u0026ldquo;ConnectedLocally\u0026rdquo; or \u0026ldquo;ConnectedRemotely\u0026rdquo; depending on the location of the client.\nYou will also be able to see the Direct Access Connection in your network Connections (seen below).\nWhen clients are connected remotely, you can via them from the Direct Access Server as well.\nTroubleshooting If you are looking for some troubleshooting information, a great command to run on clients is the\nnetsh interface httpstunnel show interface\nThis will give additional information on any errors that might be encountered when connecting to the Direct Access Server.\nAn additional Thank you to Richard Hicks for some very useful assistance in writing this post. He has some great information on Direct Access on his website that is worth taking a look at.\n","permalink":"https://theithollow.com/2014/01/22/microsoft-direct-access-vpn/","summary":"\u003cp\u003eMobility is no longer a challenge to traditional IT environments, it\u0026rsquo;s the standard.  Users work from home to save office space, need to be connected during sales trips and are consistently not in the corporate office connected to the local area network (LAN).  Combine this demand for a mobile workforce with the ever increasing security requirements put forth such as HIPPA and PCI-DSS etc make this a significant hurdle for IT departments.  Microsoft Direct Access may be a solution that eases this hardship.\u003c/p\u003e","title":"Is Microsoft Direct Access the new VPN?"},{"content":"These days, companies are dealing more with mobility, than ever before. IT infrastructure is now spread out in the cloud, and users may be working from the road, remote offices or from home. 
This is making it more difficult to manage a secure IT Infrastructure.\nMicrosoft is taking steps to allow IT Administrators to start controlling machines even when they aren\u0026rsquo;t connected to a corporate infrastructure. Microsoft Offline Domain Join was released as a new feature with Windows Server 2008 R2. This feature allows a machine that is not directly connected to a network with Active Directory, to be joined anyway.\nPerforming an Offline Domain Join Prerequisites Client Operating System Required: Windows 7 or later\nServer Operating System Required: Windows Server 2008 R2 or later\nCreate an offline domain account on the Domain Controller The first step in an offline domain join is to provision the machine account in Active Directory. This cannot be done the old fashioned way of using Active Directory Users and Computers. This can be done via a powershell window by utilizing the Djoin command. Be sure that you are running this command on a domain joined server and that it is Server 2008 R2 or higher.\ndjoin /domain DOMAINNAME /machine MACHINENAME /rootcacerts /savefile FILEPATH /REUSE You may notice that I\u0026rsquo;ve added the /policyname switch to add my Direct Access GPO during my offline join. After the command completes successfully, you should be able to see a new machine object created in Active Directory. (I added the description myself, after the provisioning was done)\nAlso, notice that we created a text file named \u0026ldquo;Provision.txt\u0026rdquo;. This text file needs to be transferred to the offline machine to be joined to the domain.\nJoin the Offline Machine to the Domain Transfer the provisioning file that was created on the domain server, to the offline client. (Be sure that this client is running Windows 7 or later.) Open a powershell console again and use the djoin command yet again to add the machine to the domain.\ndjoin /requestodj /loadfile FILEPATH /windowspath c:windows /LocalOS ","permalink":"https://theithollow.com/2014/01/20/microsoft-offline-domain-join/","summary":"\u003cp\u003eThese days, companies are dealing more with mobility, than ever before.  IT infrastructure is now spread out in the cloud, and users may be working from the road, remote offices or from home.  This is making it more difficult to manage a secure IT Infrastructure.\u003c/p\u003e\n\u003cp\u003eMicrosoft is taking steps to allow IT Administrators to start controlling machines even when they aren\u0026rsquo;t connected to a corporate infrastructure.  Microsoft \u003ca href=\"http://technet.microsoft.com/en-us/library/ff793312.aspx\"\u003eOffline Domain Join\u003c/a\u003e was released as a new feature with Windows Server 2008 R2.  This feature allows a machine that is not directly connected to a network with Active Directory, to be joined anyway.\u003c/p\u003e","title":"Microsoft Offline Domain Join"},{"content":" Microsoft has a new file system designed to increase data integrity, scalability and availability called the Resilient File System (ReFS). This file system has leveraged many of the NTFS file system goodies and expanded them to make it more scalable and prevent corruptions. ReFS was released with Server 2012 and at the moment is designed for use with file shares. It cannot be used as a boot volume at the present time, but this file system seems poised to replace NTFS down the road.\nTorn Writes One of the new things that ReFS helps with is a \u0026ldquo;torn write\u0026rdquo;. 
In NTFS the file system overwrites metadata and data structures during a modify operation. During this modify operation data is read off the disk and then new data is written on top of the old data, becoming the new data. Torn writes occur when this process can\u0026rsquo;t complete fully. Think about what could happen if a power failure occurred during the write operation. Maybe some of the new data is there, and some of the old data is available, but neither are consistent and there is no way to roll this back. With ReFS new data will be written to a different location and when the write is complete, the file system then references the new location as the up to date data. This process is known as \u0026ldquo;Copy on Write\u0026rdquo;. Now during a power failure, if the write doesn\u0026rsquo;t complete fully the original data is still consistent.\nBit Rot Storage devices fail on a pretty regular basis. Solid State drives slowly leak electrical charges due to insulation issues, spinning disks lose their magnetic fields over time, etc. This expected decay of storage devices is known as \u0026ldquo;bit rot\u0026rdquo; and ReFS isn\u0026rsquo;t going to get rid of it, but now requires a 64-bit checksum on metadata optionally uses a checksum on user data (known as an Integrity Stream) to determine if the data still has integrity. This checksum will allow the file system to know if something has changed that wasn\u0026rsquo;t supposed to, eg from a physical component change.\nDetecting Bit Rot with a checksum is only a first step. Obviously the data is not valid if the checksums don\u0026rsquo;t match, but detecting an error is an important first step. What ReFS does is use a new feature in Windows called \u0026ldquo;Storage Spaces\u0026rdquo;. I\u0026rsquo;ve written a previous article about storage spaces here, but to generalize, Storage Spaces are similar to RAID where multiple disks are used in concert to provide fault tolerance or additional disk capacity.\nAssuming an ReFS volume is on a \u0026ldquo;Mirrored\u0026rdquo; storage space (mirrored meaning copies on two or more disks), the checksum can then be compared across the disks. If the checksums match the data there is no reason to do anything. If they checksums don\u0026rsquo;t match, the disk with the non-matching checksum can have the data replaced by the disk with the matching checksum. This process will help to fix the \u0026ldquo;Bit Rot\u0026rdquo; problem.\nSalvage There are still times even when using RAID or Storage Spaces that data corruption can exist on the disk. In an NTFS file system this required a CHKDSK to try to fix any logical corruptions on the disk. If you\u0026rsquo;ve ever run a CHKDSK on a system, you know that the volume must be offline to run and if it\u0026rsquo;s a large volume, this may take quite some time. ReFS uses B+ Trees for managing data and can now isolate the corrupted part of the file system as opposed to the entire volume.\nBelow we see a typical B+ Tree with no corruption.\nThe example below shows corruption. Notice that any data below the corruption in the B+ Tree is then considered corrupted as well. While ReFS is attempting to fix this corruption, the rest of the file system is still running and available. 
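If you want to try the Integrity Streams mentioned above in a lab, formatting a data volume as ReFS is a one-liner in PowerShell. A rough sketch, assuming an empty volume already sits at drive E: (the Get-FileIntegrity and Set-FileIntegrity cmdlets for per-file control show up in Server 2012 R2):\nFormat-Volume -DriveLetter E -FileSystem ReFS -SetIntegrityStreams $true\nKeep in mind the automatic repair described above still depends on the volume living on a mirrored Storage Space, since a second good copy is needed to repair from.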
Even though ReFS is supposed to be \u0026ldquo;Always On\u0026rdquo; it is possible that the root B+ Tree could get corrupted and at that point the entire volume would then be offline.\nSummary It might not be ready for primetime yet, but ReFS looks poised to be Microsoft\u0026rsquo;s flagship file system very soon. Many of these features are a great improvement to NTFS, but probably won\u0026rsquo;t be found widespread until boot partitions are supported. Check it out for yourself.\nAdditional Reading If you would like to learn more about ReFS, take a look at the following articles which may help to understand this new file system better.\nhttp://blogs.msdn.com/b/b8/archive/2012/01/16/building-the-next-generation-file-system-for-windows-refs.aspx http://blogs.technet.com/b/askpfeplat/archive/2013/01/02/windows-server-2012-does-refs-replace-ntfs-when-should-i-use-it.aspx\n","permalink":"https://theithollow.com/2014/01/13/microsofts-resilient-file-system-refs/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/12/2551.jpg\"\u003e\u003cimg alt=\"BankerBox\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/12/2551.jpg\"\u003e\u003c/a\u003e Microsoft has a new file system designed to increase data integrity, scalability and availability called the Resilient File System (ReFS).  This file system has leveraged many of the NTFS file system goodies and expanded them to make it more scalable and prevent corruptions.  ReFS was released with Server 2012 and at the moment is designed for use with file shares.  It cannot be used as a boot volume at the present time, but this file system seems poised to replace NTFS down the road.\u003c/p\u003e","title":"Microsoft\u0026#039;s Resilient File System (ReFS)"},{"content":"Microsoft Storage Spaces feature used to handle data redundancy, scalability and performance. Storage Spaces takes a set of Just a Bunch of Disks (JBOD) and pairs them together to allow for; either failures of a disk, gaining the performance of multiple spindles, or gaining the space of multiple disks. Traditionally this has all been handled by creating a Redundant Array of Independent Disks (RAID) group. Some examples of RAID would be:\nStriping (RAID 0) Mirroring (RAID 1) Parity (RAID 5 or 6) Storage Spaces create a similar type of RAID Group but then throw a virtual disk on top of them so that multiple types of stripes can be used on the same disks. For example, three physical disks can be put into a storage space. From there, three separate types of VDISKs can be created, Mirrored, Spanned and Parity can then be placed on the same set of disks with no issue. The diagram below shows an example.\nSetting Up Storage Spaces\nSetting up Storage Spaces in Server 2012 is fairly simple. Here, we have a system drive already in use, but there are also three additional unused drives attached to the server.\nMake sure the File Server Role is installed on the server. Then use the Server Manager to create a \u0026ldquo;Storage Pool\u0026rdquo; from the tasks drop down menu.\nThe Storage Pool Wizard will begin. Give the pool a name. Select the physical disks that will be used for the pool. Notice that you can use them as hot spares if needed. A hot spare would be useful if a failure occurred because it will automatically rebuild the data from the failed disk.\nConfirm your selections and click Finish.\nThe build process may take a few minutes depending on the size of the disks. 
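If you prefer PowerShell to Server Manager, the pool can be built with the Storage module cmdlets as well. A rough sketch, with Pool1 as a placeholder name:\n# grab every local disk that is eligible for pooling\n$disks = Get-PhysicalDisk -CanPool $true\nNew-StoragePool -FriendlyName Pool1 -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks\nThe end result is the same either way, so use whichever you are comfortable with.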
Notice what happened to our physical disks in the Disk Management Console. THEY\u0026rsquo;RE GONE!!!! (don\u0026rsquo;t worry, we\u0026rsquo;ll see them again. Create a VDISK From the Server Manager, go to the task section of the VDISKs and choose new VDISK. From there the VDISK wizard will open. Choose which storage pool this VDISK should be built on.\nGive the VDISK a Name.\nPick the storage layout. Here is where we can choose Mirrored, Simple or Parity. A description of the layout types are listed next to the type if you\u0026rsquo;re unsure.\nPick a provisioning type. Thin only allocates space as needed allowing you to over provision your storage. Fixed is synonymous with Thick provisioning.\nWhat is the size of the VDISK? The storage pool will only have a certain amount of space available to be used. You can create a subset of this total space.\nWait for the VDISK to be created.\nOnce the VDISKS have been created, they still need to be formatted as either NTFS or ReFS, given a drive letter etc, just like a normal volume would be created. You\u0026rsquo;re allowed to start the volume setup wizard automatically when you\u0026rsquo;re done creating the VDISK.\nOnce you\u0026rsquo;re done, check Disk Management again. You\u0026rsquo;ll now see the VDISKs listed as though they are physical disks. I\u0026rsquo;ve added a screenshot of my server that demonstrates Spanned, Mirrored, and Parity Disks are all available on the Storage Spaces and are using different file systems.\nSummary I\u0026rsquo;m not saying that getting rid of your RAID Controllers is the thing to do, but Microsoft has added a pretty nice new feature into their OS to make your data more resilient. Check them out if you have time.\n","permalink":"https://theithollow.com/2014/01/06/microsoft-storage-spaces/","summary":"\u003cp\u003eMicrosoft \u003ca href=\"https://social.technet.microsoft.com/wiki/contents/articles/15198.storage-spaces-overview.aspx\"\u003eStorage Spaces\u003c/a\u003e feature used to handle data redundancy, scalability and performance.  Storage Spaces takes a set of \u003ca href=\"http://en.wikipedia.org/wiki/Non-RAID_drive_architectures\"\u003eJust a Bunch of Disks (JBOD)\u003c/a\u003e and pairs them together to allow for; either failures of a disk, gaining the performance of multiple spindles, or gaining the space of multiple disks.  Traditionally this has all been handled by creating a \u003ca href=\"http://en.wikipedia.org/wiki/RAID\"\u003eRedundant Array of Independent Disks\u003c/a\u003e (RAID) group.  Some examples of RAID would be:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eStriping (RAID 0)\u003c/li\u003e\n\u003cli\u003eMirroring (RAID 1)\u003c/li\u003e\n\u003cli\u003eParity (RAID 5 or 6)\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eStorage Spaces create a similar type of RAID Group but then throw a virtual disk on top of them so that multiple types of stripes can be used on the same disks.  For example, three physical disks can be put into a storage space.  From there, three separate types of VDISKs can be created, Mirrored, Spanned and Parity can then be placed on the same set of disks with no issue.  The diagram below shows an example.\u003c/p\u003e","title":"Microsoft Storage Spaces"},{"content":"It has been an exciting year and I wanted to take a second to thank the sponsors of theITHollow.com. Maintaining a blog and putting out new content on a regular basis is a time-consuming activity and also costs money. 
Luckily, I\u0026rsquo;ve got some great sponsors and I look forward to working with them again next year.\nThank you to: Also a BIG THANK YOU to Erik and Carolyn Schonsett for the awesome graphics on my site. If you\u0026rsquo;d like to see more of their work, or need graphics of your own, check out:\nerikschonsett.com and whateverinspires.com\nTop Blog posts of 2013: VMware Site Recovery Manager 5.5 Guide Understanding Raid Penalty VMware Ballooning Explained Baby Dragon Home Lab Network Load Balancing with vSphere\n","permalink":"https://theithollow.com/2013/12/30/2013-thank/","summary":"\u003cp\u003eIt has been an exciting year and I wanted to take a second to thank the sponsors of theITHollow.com.  Maintaining a blog and putting out new content on a regular basis is a time-consuming activity and also costs money.  Luckily, I\u0026rsquo;ve got some great sponsors and I look forward to working with them again next year.\u003c/p\u003e\n\u003ch1 id=\"thank-you-to\"\u003e\u003cstrong\u003eThank you to:\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003e\u003ca href=\"http://www.infinio.com/\"\u003e\u003cimg alt=\"infinio\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/12/infinio.jpg\"\u003e\u003c/a\u003e\u003ca href=\"http://www.veeam.com\"\u003e\u003cimg alt=\"veeam\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/12/veeam.jpg\"\u003e\u003c/a\u003e\u003ca href=\"http://www.zerto.com/\"\u003e\u003cimg alt=\"zerto\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/12/zerto.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eAlso a BIG THANK YOU to  Erik and Carolyn Schonsett for the awesome graphics on my site.  If you\u0026rsquo;d like to see more of their work, or need graphics of your own, check out:\u003c/p\u003e","title":"2013 Thank You"},{"content":"Active Directory (AD) is the base of most enterprise level infrastructures and has been for some time. We have become accustomed to seeing this structure and depending on it. But AD has been a thorn in our side since virtualization has become popular due to the inability to take snapshots. This is no longer the case if your shop is running Windows Server 2012 with Active Directory.\nWith the release of Active Directory 2012, Microsoft has added a new object called the VM GenerationID that allows us to snapshot AD Servers.\nWhy was Restoring from Snapshots of AD Servers bad? Active Directory keeps track of what data has been replicated to fellow Domain Controllers by tagging changes with a Update Sequence Numner (USN). When a restore from snapshot occurs, this USN gets reset.\nLook at what happens when an AD Server is restored from snapshot prior to Server 2012?\nIn the above example the server on the left is replicating changes to the server on the right as part of normal AD replication operations. During step 1. updates are replicated to the second DC and the USN is updated incrementally. Step 2, the same thing happens. Now between step 2 and 3 the server is restored from snapshot, meaning that we\u0026rsquo;ve rolled back the USN to 100 again. The problem is that the DC on the right is still only waiting for USNs that are greater than 200 meaning it will ignore any changes from the DC on the left leaving us with a whole mess of problems.\nHow did Server 2012 Fix these USN Rollbacks? In Server 2012 Microsoft added a new identifier called the VM-Generation ID. This is an identifier used specifically to stop the USN Rollback from occurring. 
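If you want to check whether your own domain controller and hypervisor combination supports this, one quick test (also mentioned a little further down) is to look for event 2168 in the Directory Service log on the DC. A sketch using PowerShell:\nGet-WinEvent -FilterHashtable @{ LogName = 'Directory Service'; Id = 2168 } -MaxEvents 5\nIf the event shows up, the DC is seeing a VM-Generation ID from the hypervisor.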
The VM-Generation ID is exposed to the virtual machine for the hypervisor to read. When creating a new snapshot, the hypervisor increments this VM-Generation ID, and that is where the magic happens.\nLets look at the new example. Everything is the same in the first two steps except there is a VMGenerationID. When the Active Directory snapshot restore happens, the VMGenerationID is compared to the old VMGenerationID and if they don\u0026rsquo;t match, the virtual machine basically performs a Non-Authoritative Restore. During this type of restore, the Active Directory Partition only receives copies of AD changes until it\u0026rsquo;s back \u0026ldquo;in sync\u0026rdquo; with the other DCs. Once that\u0026rsquo;s finished it can participate is sending updates again.\nObviously, my explanation of this process s very rudimentary, so if you\u0026rsquo;d like a more detailed description please check out this technet article for a more detailed explanation.\nThere are only certain Hypervisors that will work with this new VM-GenerationID, Hyper-V 3.0 and vSphere 5.0 U2 or higher should be fine. If you want to see for sure, check the event log on your Domain Controller to see if you see EventID 2168 listed.\nJust to prove that this works, I ran it in my home lab. I wanted to see how easy this was to do and if I noticed any hiccups with this process. After all, Active Directory is a pretty important thing to keep from corrupting.\nI built an AD Server and made sure it was replicated correctly. Then I created a snapshot of the server.\nOnce the snapshot was done, I created a new user in Active Directory on this server. I waited a bit and then performed a restore on the server. I noticed several events in the event viewer after the restore.\nWe see a warning message about the GenerationID change being detected. That\u0026rsquo;s a good thing in our case!\nAD has realized that a non-authoritative restore must occur. Also good news for us. This means it\u0026rsquo;s fixing Active Directory replication for us.\nThe last message that I noticed was that AD has been restored and the USNs have been adjusted. I\u0026rsquo;ve seen the horror stories about USN Rollback on Active Directory after snapshotting and restoring an Active Directory Server, and would hesitate to do this in a live environment. Even though it seems like a scary thing to do, it does work and can be trusted in your environment. Just make sure it\u0026rsquo;s supported on your servers first!\n","permalink":"https://theithollow.com/2013/12/16/active-directory-snapshot/","summary":"\u003cp\u003eActive Directory (AD) is the base of most enterprise level infrastructures and has been for some time.  We have become accustomed to seeing this structure and depending on it.  But AD has been a thorn in our side since virtualization has become popular due to the inability to take snapshots.  This is no longer the case if your shop is running Windows Server 2012 with Active Directory.\u003c/p\u003e\n\u003cp\u003eWith the release of Active Directory 2012, Microsoft has added a new object called the VM GenerationID that allows us to snapshot AD Servers.\u003c/p\u003e","title":"Active Directory Snapshot"},{"content":"My brand new shiny HP Microserver arrived in the mail and I was excited to try it out. I had four 480GB OCZ SSDs to add to this baby server and wanted to get it up and running. Unfortunately, the HP Microserver is built for 3.5 inch drives. 
Luckily I found great solution.\nNewer Technology AdaptaDrive 2.5\u0026quot; to 3.5\u0026quot; Drive Converter Bracket. Attach your 2.5 inch SSD to this bracket, then attach the bracket to the HP MicroServer Drive Trays and you\u0026rsquo;re good to go.\nI was worried that with the additional bracket, that the drives wouldn\u0026rsquo;t easily fit into the server but the solution works great and the drives slide smoothly into place.\n","permalink":"https://theithollow.com/2013/12/11/add-ssds-hp-microserver/","summary":"\u003cp\u003eMy brand new shiny \u003ca href=\"http://www.amazon.com/gp/product/B00DDXS936/ref=as_li_tf_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=9325\u0026amp;creativeASIN=B00DDXS936\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eHP Microserver\u003c/a\u003e arrived in the mail and I was excited to try it out.  I had four \u003ca href=\"http://www.amazon.com/gp/product/B00566FEUO/ref=as_li_tf_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=9325\u0026amp;creativeASIN=B00566FEUO\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003e480GB OCZ SSDs\u003c/a\u003e to add to this baby server and wanted to get it up and running.  Unfortunately, the HP Microserver is built for 3.5 inch drives.  Luckily I found great solution.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://www.amazon.com/gp/product/B005PZDVF6/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B005PZDVF6\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eNewer Technology AdaptaDrive 2.5\u0026quot; to 3.5\u0026quot; Drive Converter Bracket.\u003c/a\u003e\u003cimg loading=\"lazy\" src=\"http://ir-na.amazon-adsystem.com/e/ir?t=theithollowco-20\u0026l=as2\u0026o=1\u0026a=B005PZDVF6\"\u003e \u003ca href=\"http://www.amazon.com/gp/product/B005PZDVF6/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B005PZDVF6\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003e\u003cimg alt=\"adapterBay\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/12/adapterBay.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eAttach your 2.5 inch SSD to this bracket, then attach the bracket to the HP MicroServer Drive Trays and you\u0026rsquo;re good to go.\u003c/p\u003e","title":"Add SSDs to HP Microserver"},{"content":"\nCloud Physics generated a lot of buzz during the 2012 VMworld in San Francisco. I remember them sharing a booth with Fusion-IO and having a large crowd most of the time. They had a little different idea about how to get viability into vSphere environments and it was through the concept of cards.\nThe Installation of Cloud Physics may be one of the simplest ever done.\n1. Go to the CloudPhysics site and sign up for their service. (There is a 30 day free trial available as well).\n2. Download the Observer as either an OVA or OVF and deploy it in your vSphere environment.\n3. Login to your Cloud Physics account on their site and review your configs.\nOnce your install is done you\u0026rsquo;ll notice some very cool information being available right away in your deck. Specifically I\u0026rsquo;m look to the HA Cluster Health Card, Datastore Utilization, and Host NTP Settings (yeah, I\u0026rsquo;ve got an issue already). Clicking on these cards will give you more than just an overview though.\nWhen i click on my Snapshots Gone WIld card I can see all my snapshots, how much space they\u0026rsquo;re chewing up, and how old they are. 
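If you want to cross-check what that card is telling you by hand, a rough PowerCLI equivalent is easy to throw together. This is just a sketch and assumes you are already connected to vCenter with Connect-VIServer:
[code language=powershell]
# Rough equivalent of the "Snapshots Gone Wild" card: every snapshot,
# how much space it is chewing up, and how old it is.
Get-VM | Get-Snapshot |
    Select-Object VM, Name,
        @{ Name = 'SizeGB';  Expression = { [math]::Round($_.SizeGB, 2) } },
        @{ Name = 'AgeDays'; Expression = { ((Get-Date) - $_.Created).Days } } |
    Sort-Object SizeGB -Descending
[/code]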
This makes it pretty easy to go fix whatever errors I have. Other cards may show different information, such as datastore usage, CPU over-commitment, etc.\nOne of the cards I immediately noticed was a cost calculator for Amazon Web Services (AWS) as well as vCloud Hybrid Service (vCHS). The AWS card is shown below and can take your current environment and give you a breakdown of costs for deploying the same thing on AWS. Very Cool! What if the information I\u0026rsquo;m looking for doesn\u0026rsquo;t exist in my deck? Then check out the card store. The store has a variety of different cards with additional metrics to use, but even better than that, there is a way to access the entire community of cards. I downloaded a card built by Josh Coen that shows the number of VM Nics vs VM Nics that are connected. A useful card that can show me if I\u0026rsquo;ve got virtual machines without network connectivity.\nIf we dive into Josh\u0026rsquo;s card we see that in my environment this occurs in a couple of places.\nIf you\u0026rsquo;ve got some time, check them out for a 30 day trial. Especially if you\u0026rsquo;re considering moving your environment to AWS or vCHS!\n","permalink":"https://theithollow.com/2013/12/09/cloud-physics/","summary":"\u003cp\u003e\u003ca href=\"cloudphysics.com\"\u003e\u003cimg alt=\"CloudPhysicsBooth\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/11/CloudPhysicsBooth.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eCloud Physics generated a lot of buzz during the 2012 VMworld in San Francisco.  I remember them sharing a booth with Fusion-IO and having a large crowd most of the time.  They had a little different idea about how to get viability into vSphere environments and it was through the concept of cards.\u003c/p\u003e\n\u003cp\u003eThe Installation of Cloud Physics may be one of the simplest ever done.\u003c/p\u003e\n\u003cp\u003e1.  Go to the \u003ca href=\"http://cloudphysics.com\"\u003eCloudPhysics site\u003c/a\u003e and sign up for their service.  (There is a 30 day free trial available as well).\u003c/p\u003e","title":"Cloud Physics has cards forfrom everyone"},{"content":"\nAs many of you know I\u0026rsquo;ve been a fan of the VMware Product called Site Recovery Manager (SRM) for a disaster recovery solution, I even wrote an SRM 5.5 Guide for using it.\nBut many people I talked to told me to check out Zerto as a DR solution because of how simple it was to use and setup. I figured that I owed it to them to at least try them out, and so they are now in my lab.\nDisclaimer- As of the time of this writing Zerto is a sponsor of theITHollow.com but did not pay for any reviews or posts based on their software.\nMy first thoughts about the Zerto solution really was how easy it was to get up and running. It required the Zerto Virtual Manager to be installed on a windows instance for each of my sites. The managers can then deploy a Zerto Replication Appliance (Z-VRA) for each ESXi host in my site. The Z-VRA does the all of the heavy replication workload so it\u0026rsquo;s a good way to scale out your DR solution if the protection site needs to grow. Needing to add more hosts and more VMs, just add more Z-VRAs.\nDeploying the Z-VRAs is a matter of clicking Install New VRA and giving them an IP Address.\nBelow is a graphic that explains the traditional Site to Site DR scenario. You can see a ZVM and the Z-VRAs and how a single ZVM manages multiple replication appliances. 
The important thing to notice is that the type of storage does not matter. In my lab I\u0026rsquo;ve used a Synology NAS on the production side and local storage on the DR Side.\nOnce your Servers are up and running, the next step is to pair the sites. This involves adding the name or IP address of the opposite site, and when done the pairing is completed. No need to redo this operation on the adjacent side.\nFrom here, you need to create a VPG. This Protection Group maps the Replicated VMs to the appropriate datastores and networks on the replication site, as well as the targeted Recovery Point Objective (RPO) Also, it\u0026rsquo;s important to notice that both a Failover network and a \u0026ldquo;Test\u0026rdquo; failover network can be specified so that test can be done without affecting production. Lastly, configuring the Virtual Machines that should be replicated, is a matter of adding the VM to a VPG.\nThe next part is difficult\u0026hellip;Waiting. Now the VM will start the process of replicating their data to the secondary site. Let this process finish and the Zerto Manager will give a really simple to view graphical representation of your DR Solution. I really love the simple interface. Just from looking at the GUI it\u0026rsquo;s very simple to see the direction of the replication as well as the performance of the replication. And a great big button ready for failovers.\nZerto allows you to test the failovers, run real failovers and create offsite clones which is a nice feature. Also during failover operations multiple point in time selections can be selected so the \u0026ldquo;Last Replication\u0026rdquo; doesn\u0026rsquo;t have to be used if not wanted.\nWhen running a \u0026ldquo;Read\u0026rdquo; failover, click the giant failover button and follow a couple of quick prompts. First, select the VPG you are looking to failover.\nNext, select the checkpoint (multiple point in time replication) and what to do with the Production VM, that is if it still exists after the disaster. Also, notice there is a commit policy which is a neat addition. Here I\u0026rsquo;ve set it to automatically commit the failover after 10 minutes.\nLastly, a big red button warning will show up to make sure you REALLY REALLY REALLY want to do this. \u0026ldquo;Seriously, you want to declare a disaster? Really????\nNotice that once the failover is done, a countdown timer showed up for me counting down from 10 minutes. This is from the commit option I selected earlier. Here I just pressed the \u0026ldquo;commit\u0026rdquo; button to go ahead and commit my changes.\nA cool thing happened as soon as I committed my failover. An option to reprotect the VM was prompted. How cool is that? If the original Production site is available the Replication will reverse directions to get you ready to fail back.\nZerto has a variety of other features that I may cover in another post, such as the ability to protect a site with the peer site being a public cloud network. 
http://www.zerto.com/bcdr-for-cloud-providers/dr-to-the-cloud/\nIf you\u0026rsquo;d like to know more about this solution, please check them out for yourself and grab a free trial while you\u0026rsquo;re at it.\n","permalink":"https://theithollow.com/2013/12/02/zerto-disaster-recovery/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/11/ZertoVMworld.jpg\"\u003e\u003cimg alt=\"ZertoVMworld\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/11/ZertoVMworld.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eAs many of you know I\u0026rsquo;ve been a fan of the VMware Product called Site Recovery Manager (SRM) for a disaster recovery solution, I even wrote an \u003ca href=\"/2013/11/2671/\" title=\"VMware Site Recovery Manager 5.5 Guide\"\u003eSRM 5.5 Guide\u003c/a\u003e for using it.\u003c/p\u003e\n\u003cp\u003eBut many people I talked to told me to check out Zerto as a DR solution because of how simple it was to use and setup.  I figured that I owed it to them to at least try them out, and so they are now in my lab.\u003c/p\u003e","title":"Zerto for Disaster Recovery"},{"content":"\nI was fortunate enough to get to spend an hour with Dmitriy Sandler from Nimble Storage to see what all the fuss was about with their product and more specifically their Cache Accelerated Sequential Layout (CASL) File System.\nHardware Overview Let\u0026rsquo;s cover some of the basics before we dive into CASL. The storage array comes fully loaded with all the bells and whistles, out of the box. All the software features are included with this iSCSI array and include items such as:\nApp Aligned Block Sizes Inline Data Compression Replication Instant Snapshots Thin Provisioning Zero Copy Clones Non-Disruptive Upgrades Scale Out to Storage Cluster And this list goes on ad on. WAN replication for instance is very efficient due to the inline compression that is done during the writes.\nAdditional Specs can be seen below.\nI would like to point out that the Nimble Arrays use Active-Standby Controllers which seems like a bit of a waste of capacity, but is AWESOME if you have a failure. It was pointed out to me that many Active-Active Controller systems end up having a significant performance impact during a failure because the controllers are overloaded. This shouldn\u0026rsquo;t be the case with a Nimble Array.\nFile System So the features are nothing to sneeze at on their own right, but what differentiates Nimble from other storage arrays? The Nimble philosophy is that Hybrid storage is the right way to handle 95% of storage workloads. All flash arrays are expensive and the wear involved on SSDs is a limitation. All Spinning Disk arrays typically just don\u0026rsquo;t have that performance uumph that companies want these days. So what is the best way to use both SSDs and Spinning Disks?\n\u0026lsquo;Enter CASL\nCASL stands for Cache Accelerated Sequential Layout. The name says it all here. The file system is specifically written to make the most out of a hybrid design. Lets look at a typical write sequence first.\nWrites are sent to the device in multiple block sizes depending on the application using the device. Nimble arrays don\u0026rsquo;t care what the block sizes are and will accept any block sizes thrown at it. However, each volume can be automatically tuned for the specific application’s block size to optimize both performance and capacity efficiency. 
The blocks enter the PCIe NVRAM device on the array and are immediately copied to the Standby Controller across a 10Gb Bus. Once both controllers have the write in NVRAM, the write is acknowledged making for some very snappy response times and low latency for application writes.\nNow that the writes are in NVRAM they are individually compressed in memory before ever writing to any disks. The data is \u0026ldquo;serialized\u0026rdquo; into a 4.5MB stripe and is evenly laid out across the entire set of SAS disks. This sequential write is pretty quick due to the fact that the data is written sequentially which limits the seek times necessary on random writes. Whats really cool about this process is that the CASL algorithm looks at the origin of the disk writes and puts them next to each other on disk. During a read operation there is a high likelihood that these writes will be retrieved together as well which will help read performance.\nPretty neat huh? But wait a second, what about those SSD\u0026rsquo;s in the system? We skipped them during this process. Well, during the write to the SAS disks, the CASL algorithm looks to find \u0026ldquo;Cache Worthy\u0026rdquo; data and segments it into smaller stripes for the SSDs and writes a copy to them as well. A graphic of this process is found below.\nCourtesy of Nimble Storage\nReads, are done first from the NVRAM which is a nice add!. Many times data that is just written is often read right away so having NVRAM able to be read from is a fast way to handle reads. NVRAM can\u0026rsquo;t store a lot of data however so reads immediately are done based off the data in the SSDs. On the first cache miss, data is copied up from the spinning disks to the SSDs again, along with a prefetch of surrounding relevant blocks to accelerate subsequent application read requests.\nBecause CASL is built around SSDs, all writes are done in a full read/delete page, eliminating any write amplification. Since SSDs are just a cache, there is no need to waste any of them for hot-spares or RAID, offering a much higher usable capacity and lower $/GB. This also allows Nimble to use MLC drives whereas most systems are still bound to higher cost eMLC or SLC technology.\nGarbage Collection When I first heard of the CASL filesystem and how writes are done, I didn\u0026rsquo;t think that the writes to spinning disks were that much different from the Netapp Write Anywhere File Layout (WAFL) but digging into the garbage collection, the difference becomes clearer.\nWAFL opportunistically tries to dump NVRAM to disk in the available open blocks similarly to CASL. The problem becomes when blocks are modified in WAFL, the blocks become very fragmented like swiss cheese. CASL has the same challenge but during Garbage Collection, these blocks are pulled back into NVRAM and re-written sequentially which keeps the system running nice and smooth.\nUnlike WAFL, CASL is built as a fully Log Structured Filesystem (LFS). 
In other words, every time data is written down to disk, it’s done so in an optimally sized sequential stripe offering great write performance (thousands of IOPS from 7.2K drives) but also the ability to maintain performance over time as the system is filled up by intelligently leveraging the low-priority but always-on Garbage Collection engine.\n","permalink":"https://theithollow.com/2013/11/25/casl-nimble-storage/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/11/nimbleChassis.jpg\"\u003e\u003cimg alt=\"nimbleChassis\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/11/nimbleChassis.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eI was fortunate enough to get to spend an hour with \u003ca href=\"https://twitter.com/dmitriy_sandler\"\u003eDmitriy Sandler\u003c/a\u003e from \u003ca href=\"https://twitter.com/NimbleStorage\"\u003eNimble Storage\u003c/a\u003e to see what all the fuss was about with their product and more specifically their Cache Accelerated Sequential Layout (CASL) File System.\u003c/p\u003e\n\u003ch1 id=\"hardware-overview\"\u003eHardware Overview\u003c/h1\u003e\n\u003cp\u003eLet\u0026rsquo;s cover some of the basics before we dive into CASL.  The storage array comes fully loaded with all the bells and whistles, out of the box.  All the software features are included with this iSCSI array and include items such as:\u003c/p\u003e","title":"CASL with Nimble Storage"},{"content":"Hard drives are not the most fun thing to talk about, but it\u0026rsquo;s important to know some of the concepts when it comes to disk latency. Disk latency refers to the time delay between a request for data and the return of the data. It sounds like a simple thing, but this time can be critical to the performance of a system.\nWe should be surprised that traditional hard disks work at all when we consider that head designed to read minute magnetic fields sits 3 nanometers off a platter is spinning between 5400RPM and 15,000 RPM. Amazing when you stop to think about it huh?\nEven with all that, we worry about how fast we can return data to a system.\nThree specific calculations are used to determine the disk latency.\n1. Rotational Latency\n2. Seek Time\n3. Transfer Time\nRotational Latency Data is housed on the platters, and the platters spin. The readwrite head can\u0026rsquo;t be positioned on all of the data at the same time, so the platters spin around really fast to get the data under the readwrite head. The amount of time it take for the platters to spin the data under the head is the rotational latency. To calculate the maximum rotational latency = 60000/RPM (60 seconds in a minute * 1000 to get milliseconds / Revolutions Per minute)\nIn 2013 hard disks usually spin at a steady rate of: RPMsRotational Latency (ms)54001172008100006150004\nThe average rotational latency for a disk is one-half the amount of time it takes for the disk to make one revolution.\nSeek Time Seek time is the amount of time it take for the ReadWrite head to move between sectors on the disk. Some times data needs to be written on the outside of the platters and sometimes inside. The readwrite head needs to move back and forth to get this information. The amount of time this takes is the \u0026ldquo;seek time\u0026rdquo;.\nThe maximum seek time is the time in milliseconds that a head needs to travel from the outermost track to the innermost track. 
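Quick aside before finishing the seek time discussion: the rotational latency figures above are easy to reproduce from the 60000/RPM formula, with the average being half a revolution. A small PowerShell sketch:
[code language=powershell]
# Reproduce the rotational latency table: maximum latency is one full
# revolution (60000 ms per minute / RPM), average is half a revolution.
5400, 7200, 10000, 15000 | ForEach-Object {
    [pscustomobject]@{
        RPM       = $_
        MaxMs     = [math]::Round(60000 / $_, 1)
        AverageMs = [math]::Round(60000 / $_ / 2, 1)
    }
}
[/code]
That covers rotational latency; back to the maximum seek time.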
This time is determined from the manufacturer.\nThe average seek time is then one-third the maximum seek time. I would explain this in further detail, but it involves calculus. If you\u0026rsquo;d like to dive deeper, i recommend looking into page 10 of this excerpt from \u0026ldquo;Operating Systems: Three Easy Pieces\u0026rdquo; By Remzi Arpaci-Dusseau, Andrea Arpaci-Dusseau\nTransfer Time The speed of the disk components is only part of the struggle, there is also the amount of time it take for requests to get from the system to the disk.\nIn order to determine the transfer time we need to know the sustained transfer rate of the drive. This can be found from the manufacturer of the device.\nFrom here the transfer time will also depend on the block size. If we assume a 4K block size and a transfer rate of 151MB we can use the following equation\nTransfer time = blocksize / Sustained transfer rate\n.0325ms = 4 / 123\n","permalink":"https://theithollow.com/2013/11/18/disk-latency-concepts/","summary":"\u003cp\u003eHard drives are not the most fun thing to talk about, but it\u0026rsquo;s important to know some of the concepts when it comes to disk latency.  Disk latency refers to the time delay between a request for data and the return of the data.  It sounds like a simple thing, but this time can be critical to the performance of a system.\u003c/p\u003e\n\u003cp\u003eWe should be surprised that traditional hard disks work at all when we consider that head designed to read minute magnetic fields sits 3 nanometers off a platter is spinning between 5400RPM and 15,000 RPM.  Amazing when you stop to think about it huh?\u003c/p\u003e","title":"Disk Latency Concepts"},{"content":" I recently had the opportunity to check out a new product called Maxta. If you\u0026rsquo;re not familiar with the company yet, that\u0026rsquo;s ok as it\u0026rsquo;s just now hitting the market with some vigor.\nThe Generally Available (GA) version of Maxta is a software-centric solution that make the most out of storage all ready available on your servers.\nMaxta takes local storage on your ESXi hosts and creates a VMware datastore out of it. At first glance it\u0026rsquo;s hard not to immediately compare it to VMware new VSAN solution that is coming soon. Aside from the fact that Maxta is available right now, there are some other major differences.\nMaxta supports native snapshots and clones whereas VMware VSAN relies on VM Snapshots. Supports thin provisioning and compression out of the box, as well as inline deduplication if needed. Maxta has a Hyper-V solution coming soon whereas it\u0026rsquo;s expected taht VSAN will only be a VMware product. Can support both SSDs with traditional Spinning disks, or a combination of these. Maxta architecture makes copies of data to multiple hosts in order to protect against a host failure, but not all hosts must have local storage. The Maxta Storage Platform (MxSP) first looks to see if the local host has the data stored, if it does not, it looks to an adjacent host. A host without local storage can still participate and have VMs residing on a Maxta datastore but will look to an adjacent server for the data.\nI got an overview from CEO Yoram Novick and one of my first questions was, \u0026ldquo;What happens to the virtual machine disks during a vMotion?\u0026rdquo; Not surprisingly, the answer is , \u0026ldquo;Well, it depends.\u0026rdquo; The reason for this very standard IT answer is that the algorithm looks to try to determine why the VM moved in the first place. 
If it moved due to DRS, there is a good chance that a copy of the data is already on the host the VM was moved to so there is no issue. If a VM was moved because a host was put into \u0026ldquo;Maintenance Mode\u0026rdquo; the data may not be copied to the new host because there is a decent chance the VM will be moved back.\nThere is no user interface for the Maxta solution. The Interface is a plugin that integrates directly into the vSphere C# client or the Web Client.\nIf you\u0026rsquo;re wondering how the pricing works, its totally based off the amount of storage used by MxSB.\nI think one of the best things about this solution is the fact that Maxta doesn\u0026rsquo;t worry about managing the datastore, so much as the Virtual Machine itself. Many storage devices can take VM snapshots, but end up taking a Snap of the entire volume or LUN. Maxta focuses on the VM itself as the storage unit, similar to Tintri.\nChris Wahl has also written a recent post about the company and I invite you to check it out here.\nGo to their website and sign up for a demo or free trial and test it for yourself.\n","permalink":"https://theithollow.com/2013/11/13/maxta-stealth-mode/","summary":"\u003cp\u003e\u003ca href=\"maxta.com\"\u003e\u003cimg alt=\"Maxta logo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/11/Maxta-logo.jpg\"\u003e\u003c/a\u003e I recently had the opportunity to check out a new product called Maxta.  If you\u0026rsquo;re not familiar with the company yet, that\u0026rsquo;s ok as it\u0026rsquo;s just now hitting the market with some vigor.\u003c/p\u003e\n\u003cp\u003eThe Generally Available (GA) version of Maxta is a software-centric solution that make the most out of storage all ready available on your servers.\u003c/p\u003e\n\u003cp\u003eMaxta takes local storage on your ESXi hosts and creates a VMware datastore out of it.  At first glance it\u0026rsquo;s hard not to immediately compare it to VMware new VSAN solution that is coming soon.  Aside from the fact that Maxta is available right now, there are some other major differences.\u003c/p\u003e","title":"Maxta out of Stealth Mode"},{"content":"After watching Alan Renouf\u0026rsquo;s video about Open-VMConsoleWindow, I got excited about PowerShell again. In my current job I don\u0026rsquo;t get to do much scripting anymore but wanted to give building a form for PowerCLI a try. I\u0026rsquo;ve secretly wanted to be a programmer as long as I didn\u0026rsquo;t have to do it full time. :)\nUsing the video from Alan, and a Video from the LazyWinAdmin (included in this post) I created a fairly simple form that could run some commands on my home lab. My main goal was get a refresher on some PowerCLI and how to use Primal Forms.\nMy hypothetical goal was to build a small end user VMware console that I could give to a dev team so that they could manage parts of the VMware environment, without giving them the entire console.\nI first downloaded PrimalForms Community Edition so that I could put together a basic form for users. After that, I modified the buttons to run my PowerCLI code.\nDISCLAIMER: I am an amateur PoSH enthusiest and expect that this code could be better, more efficient and added to. 
It requires PowerCLI 5.5 and a test in your lab before actually using it in your environment.\nMy PowerCLI Code [code language=powershell] #Generated Form Function function GenerateForm { ######################################################################## # Code Generated By: SAPIEN Technologies PrimalForms (Community Edition) v1.0.10.0 # Generated On: 10/30/2013 11:04 AM # Generated By: Eric Shanks www.theITHollow.com # Code is provided as-is and no liability will be retained by the author. # All Code should be tested before being used in a production environment. ########################################################################\n#region Import the Assemblies [reflection.assembly]::loadwithpartialname(\u0026ldquo;System.Drawing\u0026rdquo;) | Out-Null [reflection.assembly]::loadwithpartialname(\u0026ldquo;System.Windows.Forms\u0026rdquo;) | Out-Null #endregion\n#region Control Helper Functions function Load-DataGridView { \u0026lt;# .SYNOPSIS This functions helps you load items into a DataGridView.\n.DESCRIPTION Use this function to dynamically load items into the DataGridView control.\n.PARAMETER DataGridView The ComboBox control you want to add items to.\n.PARAMETER Item The object or objects you wish to load into the ComboBox\u0026rsquo;s items collection.\n.PARAMETER DataMember Sets the name of the list or table in the data source for which the DataGridView is displaying data.\n#\u0026gt; Param ( [ValidateNotNull()] [Parameter(Mandatory=$true)] [System.Windows.Forms.DataGridView]$DataGridView, [ValidateNotNull()] [Parameter(Mandatory=$true)] $Item, [Parameter(Mandatory=$false)] [string]$DataMember ) $DataGridView.SuspendLayout() $DataGridView.DataMember = $DataMember\nif ($Item -is [System.ComponentModel.IListSource]` -or $Item -is [System.ComponentModel.IBindingList] -or $Item -is [System.ComponentModel.IBindingListView] ) { $DataGridView.DataSource = $Item } else { $array = New-Object System.Collections.ArrayList\nif ($Item -is [System.Collections.IList]) { $array.AddRange($Item) } else { $array.Add($Item) } $DataGridView.DataSource = $array }\n$DataGridView.ResumeLayout() }\n#region Generated Form Objects $form1 = New-Object System.Windows.Forms.Form $RefreshState = New-Object System.Windows.Forms.Button $DeleteAllSnapshots = New-Object System.Windows.Forms.Button $DeleteSnapshot = New-Object System.Windows.Forms.Button $dataGridView1 = New-Object System.Windows.Forms.DataGridView $RevertSnapshot = New-Object System.Windows.Forms.Button $CreateSnapshot = New-Object System.Windows.Forms.Button $VMToolsUpgrade = New-Object System.Windows.Forms.Button $ShutdownGuest = New-Object System.Windows.Forms.Button $PowerOff = New-Object System.Windows.Forms.Button $PowerON = New-Object System.Windows.Forms.Button $OpenConsole = New-Object System.Windows.Forms.Button $VMLabel = New-Object System.Windows.Forms.Label $textBox1vcenter = New-Object System.Windows.Forms.TextBox $ConnectvCenter = New-Object System.Windows.Forms.Button $InitialFormWindowState = New-Object System.Windows.Forms.FormWindowState #endregion Generated Form Objects\n#\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;- #Generated Event Script Blocks #\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;- #Provide Custom Code for events specified in 
PrimalForms.\n$RevertSnapshot_OnClick= { #Grab Selection from DataGrid $GRIDCell = $dataGridView1.currentrow.Cells[0].value #List Snapshots available $Snap2Revert = Get-VM $GRIDCell | Get-Snapshot | Out-GridView -outputmode single # Set VM to snapshot Set-VM -VM $GRIDCell -Snapshot $Snap2Revert -confirm:$false }\n$RefreshState_OnClick= { #Populate the list of VMs in the DataGrid on the Form $VMLIST = Get-VM | Select-Object Name, Powerstate Load-DataGridView -DataGridView $dataGridView1 -Item $VMLIST } $PowerOff_OnClick= { #Grab Selection from DataGrid $GRIDCell = $dataGridView1.currentrow.Cells[0].value #Check the State of the VM to see if it\u0026rsquo;s ON $VMState = get-vm -Name $GRIDCell $PowerState = $VMState.PowerState if ($PowerState -like \u0026ldquo;PoweredOn\u0026rdquo;) { Stop-VM $GRIDCell } }\n$CreateSnapshot_OnClick= { ##Grab Selection from DataGrid $GRIDCell = $dataGridView1.currentrow.Cells[0].value $today = Get-Date #$SNAPName = \u0026ldquo;USERConsoleSnap \u0026quot; + $today $SNAPName = Read-Host \u0026ldquo;Snapshot Name?\u0026rdquo; $SnapDescript = Read-Host \u0026ldquo;Snapshot Description?\u0026rdquo; #New-Snapshot -VM $GRIDCell -Name $SNAPName -Description $today New-Snapshot -VM $GRIDCell -Name $SNAPName -Description $SnapDescript }\n$OpenConsole_OnClick= { #Grab Selection from DataGrid $GRIDCell = $dataGridView1.currentrow.Cells[0].value #Check the State of the VM to see if it\u0026rsquo;s ON $VMState = get-vm -Name $GRIDCell $PowerState = $VMState.PowerState if ($PowerState -like \u0026ldquo;PoweredOn\u0026rdquo;) {\nvSphere 5.5 or higher should be used for the open-vmconsolewindow command to work. Open-VMConsoleWindow $GRIDCell } }\n$ShutdownGuest_OnClick= { #Grab Selection from DataGrid $GRIDCell = $dataGridView1.currentrow.Cells[0].value #Check the State of the VM to see if it\u0026rsquo;s ON $VMState = get-vm -Name $GRIDCell $PowerState = $VMState.PowerState if ($PowerState -like \u0026ldquo;PoweredOn\u0026rdquo;) { Shutdown-VMGuest -VM $GRIDCell } }\n$PowerON_OnClick= { #Grab Selection from DataGrid $GRIDCell = $dataGridView1.currentrow.Cells[0].value #Check the State of the VM to see if it\u0026rsquo;s ON $VMState = get-vm -Name $GRIDCell $PowerState = $VMState.PowerState if ($PowerState -like \u0026ldquo;PoweredOff\u0026rdquo;) { Start-VM $GRIDCell }\n}\n$DeleteAllSnapshots_OnClick= { #Grab Selection from DataGrid $GRIDCell = $dataGridView1.currentrow.Cells[0].value Get-VM $GRIDCell | Get-Snapshot | Remove-Snapshot -confirm:$false -RunAsync }\n$DeleteSnapshot_OnClick= { #Grab Selection from DataGrid $GRIDCell = $dataGridView1.currentrow.Cells[0].value $Snap2Delete = Get-VM $GRIDCell | Get-Snapshot | Out-GridView -outputmode single Get-Snapshot -VM $GRIDCell -name $Snap2Delete | Remove-Snapshot -ErrorAction SilentlyContinue }\n$VMToolsUpgrade_OnClick= { #Grab Selection from DataGrid $GRIDCell = $dataGridView1.currentrow.Cells[0].value Get-VM $GRIDCELL | Update-Tools –NoReboot }\n$ConnectvCenter_OnClick= { #Connect to a vCenter Server Connect-VIServer $textBox1vcenter.text #Populate the list of VMs in the DataGrid on the Form $VMLIST = Get-VM | Select-Object Name, Powerstate Load-DataGridView -DataGridView $dataGridView1 -Item $VMLIST }\n$OnLoadForm_StateCorrection= {#Correct the initial state of the form to prevent the .Net maximized form issue $form1.WindowState = $InitialFormWindowState #Load the VMware Snapin if not already loaded if ( (Get-PSSnapin -Name VMware.VimAutomation.Core -ErrorAction SilentlyContinue) -eq $null ) { Add-PsSnapin 
VMware.VimAutomation.Core } }\n#\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;\u0026mdash;- #region Generated Form Code $form1.BackColor = [System.Drawing.Color]::FromArgb(255,119,136,153) $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 512 $System_Drawing_Size.Width = 554 $form1.ClientSize = $System_Drawing_Size $form1.DataBindings.DefaultDataSourceUpdateMode = 0 $form1.Name = \u0026ldquo;form1\u0026rdquo; $form1.Text = \u0026ldquo;theITHollow User Console\u0026rdquo;\n$DeleteAllSnapshots.DataBindings.DefaultDataSourceUpdateMode = 0\n$System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 433 $System_Drawing_Point.Y = 351 $DeleteAllSnapshots.Location = $System_Drawing_Point $DeleteAllSnapshots.Name = \u0026ldquo;DeleteAllSnapshots\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 35 $System_Drawing_Size.Width = 110 $DeleteAllSnapshots.Size = $System_Drawing_Size $DeleteAllSnapshots.TabIndex = 15 $DeleteAllSnapshots.Text = \u0026ldquo;Delete All Snapshots\u0026rdquo; $DeleteAllSnapshots.UseVisualStyleBackColor = $True $DeleteAllSnapshots.add_Click($DeleteAllSnapshots_OnClick)\n$form1.Controls.Add($DeleteAllSnapshots)\n$DeleteSnapshot.DataBindings.DefaultDataSourceUpdateMode = 0\n$System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 274 $System_Drawing_Point.Y = 351 $DeleteSnapshot.Location = $System_Drawing_Point $DeleteSnapshot.Name = \u0026ldquo;DeleteSnapshot\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 35 $System_Drawing_Size.Width = 121 $DeleteSnapshot.Size = $System_Drawing_Size $DeleteSnapshot.TabIndex = 14 $DeleteSnapshot.Text = \u0026ldquo;DeleteSnapshot\u0026rdquo; $DeleteSnapshot.UseVisualStyleBackColor = $True $DeleteSnapshot.add_Click($DeleteSnapshot_OnClick)\n$form1.Controls.Add($DeleteSnapshot)\n$dataGridView1.DataBindings.DefaultDataSourceUpdateMode = 0 $System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 60 $System_Drawing_Point.Y = 102 $dataGridView1.Location = $System_Drawing_Point $dataGridView1.Name = \u0026ldquo;dataGridView1\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 300 $System_Drawing_Size.Width = 200 $dataGridView1.Size = $System_Drawing_Size $dataGridView1.TabIndex = 12\n$form1.Controls.Add($dataGridView1)\n$RevertSnapshot.DataBindings.DefaultDataSourceUpdateMode = 0\n$System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 433 $System_Drawing_Point.Y = 304 $RevertSnapshot.Location = $System_Drawing_Point $RevertSnapshot.Name = \u0026ldquo;RevertSnapshot\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 32 $System_Drawing_Size.Width = 110 $RevertSnapshot.Size = $System_Drawing_Size $RevertSnapshot.TabIndex = 11 $RevertSnapshot.Text = \u0026ldquo;RevertSnapshot\u0026rdquo; $RevertSnapshot.UseVisualStyleBackColor = $True $RevertSnapshot.add_Click($RevertSnapshot_OnClick)\n$form1.Controls.Add($RevertSnapshot)\n$CreateSnapshot.DataBindings.DefaultDataSourceUpdateMode = 0\n$System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 274 $System_Drawing_Point.Y = 304 $CreateSnapshot.Location = $System_Drawing_Point $CreateSnapshot.Name = \u0026ldquo;CreateSnapshot\u0026rdquo; 
$System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 32 $System_Drawing_Size.Width = 121 $CreateSnapshot.Size = $System_Drawing_Size $CreateSnapshot.TabIndex = 10 $CreateSnapshot.Text = \u0026ldquo;Create Snapshot\u0026rdquo; $CreateSnapshot.UseVisualStyleBackColor = $True $CreateSnapshot.add_Click($CreateSnapshot_OnClick)\n$form1.Controls.Add($CreateSnapshot)\n$VMToolsUpgrade.DataBindings.DefaultDataSourceUpdateMode = 0\n$System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 274 $System_Drawing_Point.Y = 224 $VMToolsUpgrade.Location = $System_Drawing_Point $VMToolsUpgrade.Name = \u0026ldquo;VMToolsUpgrade\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 32 $System_Drawing_Size.Width = 121 $VMToolsUpgrade.Size = $System_Drawing_Size $VMToolsUpgrade.TabIndex = 9 $VMToolsUpgrade.Text = \u0026ldquo;VMToolsUpgrade\u0026rdquo; $VMToolsUpgrade.UseVisualStyleBackColor = $True $VMToolsUpgrade.add_Click($VMToolsUpgrade_OnClick)\n$form1.Controls.Add($VMToolsUpgrade)\n$ShutdownGuest.DataBindings.DefaultDataSourceUpdateMode = 0\n$System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 433 $System_Drawing_Point.Y = 102 $ShutdownGuest.Location = $System_Drawing_Point $ShutdownGuest.Name = \u0026ldquo;ShutdownGuest\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 30 $System_Drawing_Size.Width = 106 $ShutdownGuest.Size = $System_Drawing_Size $ShutdownGuest.TabIndex = 8 $ShutdownGuest.Text = \u0026ldquo;ShutdownGuest\u0026rdquo; $ShutdownGuest.UseVisualStyleBackColor = $True $ShutdownGuest.add_Click($ShutdownGuest_OnClick)\n$form1.Controls.Add($ShutdownGuest)\n$PowerOff.DataBindings.DefaultDataSourceUpdateMode = 0\n$System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 345 $System_Drawing_Point.Y = 102 $PowerOff.Location = $System_Drawing_Point $PowerOff.Name = \u0026ldquo;PowerOff\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 30 $System_Drawing_Size.Width = 75 $PowerOff.Size = $System_Drawing_Size $PowerOff.TabIndex = 7 $PowerOff.Text = \u0026ldquo;PowerOff\u0026rdquo; $PowerOff.UseVisualStyleBackColor = $True $PowerOff.add_Click($PowerOff_OnClick)\n$form1.Controls.Add($PowerOff)\n$PowerON.DataBindings.DefaultDataSourceUpdateMode = 0\n$System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 264 $System_Drawing_Point.Y = 102 $PowerON.Location = $System_Drawing_Point $PowerON.Name = \u0026ldquo;PowerON\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 30 $System_Drawing_Size.Width = 75 $PowerON.Size = $System_Drawing_Size $PowerON.TabIndex = 6 $PowerON.Text = \u0026ldquo;PowerON\u0026rdquo; $PowerON.UseVisualStyleBackColor = $True $PowerON.add_Click($PowerON_OnClick)\n$form1.Controls.Add($PowerON)\n$OpenConsole.DataBindings.DefaultDataSourceUpdateMode = 0\n$System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 274 $System_Drawing_Point.Y = 170 $OpenConsole.Location = $System_Drawing_Point $OpenConsole.Name = \u0026ldquo;OpenConsole\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 34 $System_Drawing_Size.Width = 121 $OpenConsole.Size = $System_Drawing_Size $OpenConsole.TabIndex = 5 $OpenConsole.Text = \u0026ldquo;Open Console\u0026rdquo; $OpenConsole.UseVisualStyleBackColor = $True 
$OpenConsole.add_Click($OpenConsole_OnClick)\n$form1.Controls.Add($OpenConsole)\n$VMLabel.DataBindings.DefaultDataSourceUpdateMode = 0\n$System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 60 $System_Drawing_Point.Y = 73 $VMLabel.Location = $System_Drawing_Point $VMLabel.Name = \u0026ldquo;VMLabel\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 26 $System_Drawing_Size.Width = 200 $VMLabel.Size = $System_Drawing_Size $VMLabel.TabIndex = 3 $VMLabel.Text = \u0026ldquo;Virtual Machines\u0026rdquo; $VMLabel.TextAlign = 2 $VMLabel.add_Click($handler_VMLabel_Click)\n$form1.Controls.Add($VMLabel)\n$textBox1vcenter.DataBindings.DefaultDataSourceUpdateMode = 0 $System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 178 $System_Drawing_Point.Y = 30 $textBox1vcenter.Location = $System_Drawing_Point $textBox1vcenter.Name = \u0026ldquo;textBox1vcenter\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 20 $System_Drawing_Size.Width = 161 $textBox1vcenter.Size = $System_Drawing_Size $textBox1vcenter.TabIndex = 2 $textBox1vcenter.Text = \u0026ldquo;vCenter.hollow.local\u0026rdquo;\n$form1.Controls.Add($textBox1vcenter)\n$ConnectvCenter.DataBindings.DefaultDataSourceUpdateMode = 0\n$System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 60 $System_Drawing_Point.Y = 30 $ConnectvCenter.Location = $System_Drawing_Point $ConnectvCenter.Name = \u0026ldquo;ConnectvCenter\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 23 $System_Drawing_Size.Width = 100 $ConnectvCenter.Size = $System_Drawing_Size $ConnectvCenter.TabIndex = 1 $ConnectvCenter.Text = \u0026ldquo;Connect vCenter\u0026rdquo; $ConnectvCenter.UseVisualStyleBackColor = $True $ConnectvCenter.add_Click($ConnectvCenter_OnClick)\n$form1.Controls.Add($ConnectvCenter)\n$RefreshState.DataBindings.DefaultDataSourceUpdateMode = 0\n$System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 60 $System_Drawing_Point.Y = 437 $RefreshState.Location = $System_Drawing_Point $RefreshState.Name = \u0026ldquo;RefreshState\u0026rdquo; $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Height = 23 $System_Drawing_Size.Width = 200 $RefreshState.Size = $System_Drawing_Size $RefreshState.TabIndex = 16 $RefreshState.Text = \u0026ldquo;Refresh VM State\u0026rdquo; $RefreshState.UseVisualStyleBackColor = $True $RefreshState.add_Click($RefreshState_OnClick)\n$form1.Controls.Add($RefreshState)\n#endregion Generated Form Code\n#Save the initial state of the form $InitialFormWindowState = $form1.WindowState #Init the OnLoad event to correct the initial state of the form $form1.add_Load($OnLoadForm_StateCorrection) #Show the Form $form1.ShowDialog()| Out-Null\n} #End Function\n#Call the Function GenerateForm [/code] If you also would like to see some very brief but useful videos on how to do this, I\u0026rsquo;ve embedded them below as well as the VMware documentation on Open-VMConsoleWindow.\nI\u0026rsquo;m sure that many improvements can be made to this. 
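One easy tweak if you are running something newer than PowerCLI 5.5: later PowerCLI releases ship as modules rather than PSSnapins, so the Get-PSSnapin/Add-PSSnapin check in the OnLoad handler could be swapped for a module check along these lines (an untested sketch, adjust for your PowerCLI version):
[code language=powershell]
# Newer PowerCLI is module-based, so the snap-in check in
# $OnLoadForm_StateCorrection could be replaced with this instead.
if (-not (Get-Module -Name VMware.VimAutomation.Core)) {
    Import-Module VMware.VimAutomation.Core
}
[/code]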
Please share your tweaks, thoughts and how you\u0026rsquo;ve used this in the comments sections of this post!\nhttp://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.powercli.cmdletref.doc%2FOpen-VMConsoleWindow.html\nhttp://www.youtube.com/watch?feature=player_embedded\u0026amp;v=DfQQm25GqXs\nhttp://www.youtube.com/watch?feature=player_embedded\u0026amp;v=HmcWucxQeQE\n","permalink":"https://theithollow.com/2013/11/11/end-user-vmware-console-powercli/","summary":"\u003cp\u003eAfter watching Alan Renouf\u0026rsquo;s video about \u003ca href=\"http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.powercli.cmdletref.doc%2FOpen-VMConsoleWindow.html\"\u003eOpen-VMConsoleWindow\u003c/a\u003e, I got excited about PowerShell again.  In my current job I don\u0026rsquo;t get to do much scripting anymore but wanted to give building a form for PowerCLI a try.  I\u0026rsquo;ve secretly wanted to be a programmer as long as I didn\u0026rsquo;t have to do it full time.  :)\u003c/p\u003e\n\u003cp\u003eUsing the video from Alan, and a Video from the LazyWinAdmin (included in this post) I created a fairly simple form that could run some commands on my home lab.  My main goal was get a refresher on some PowerCLI and how to use Primal Forms.\u003c/p\u003e","title":"End User VMware Console with PowerCLI"},{"content":"Archived Networking Posts on theITHollow.com\n[catlist id=4]\n","permalink":"https://theithollow.com/archived-posts/networking-posts/","summary":"\u003cp\u003eArchived Networking Posts on theITHollow.com\u003c/p\u003e\n\u003cp\u003e[catlist id=4]\u003c/p\u003e","title":"Networking Posts"},{"content":"Archived Storage Posts on theITHollow.com\n[catlist id=8]\n","permalink":"https://theithollow.com/archived-posts/storage-posts/","summary":"\u003cp\u003eArchived Storage Posts on theITHollow.com\u003c/p\u003e\n\u003cp\u003e[catlist id=8]\u003c/p\u003e","title":"Storage Posts"},{"content":"Archived Microsoft Posts on theITHollow.com\n[catlist id=3]\n","permalink":"https://theithollow.com/archived-posts/microsoft-posts/","summary":"\u003cp\u003eArchived Microsoft Posts on theITHollow.com\u003c/p\u003e\n\u003cp\u003e[catlist id=3]\u003c/p\u003e","title":"Microsoft Posts"},{"content":"Posts about Virtualization on theITHollow\n[catlist id=11]\n","permalink":"https://theithollow.com/archived-posts/virtualization-posts/","summary":"\u003cp\u003ePosts about Virtualization on theITHollow\u003c/p\u003e\n\u003cp\u003e[catlist id=11]\u003c/p\u003e","title":"Virtualization Posts"},{"content":"\nThis is a Site Recovery Manager 5.5 Guide to help understand the design, installation, operation and architecture of setting up VMware SRM 5.5\nSRM 5.5 Architecture SRM 5.5 Installation SRM 5.5 Site Configuration SRM 5.5 VM Replication Configuration\nSRM 5.5 Array Replication Configuration\nSRM 5.5 Virtual Appliance Replication SRM 5.5 Protection Groups SRM 5.5 Recovery Plans SRM 5.5 Bulk IP Customizations SRM 5.5 Test Recovery SRM 5.5 Recovery SRM Gotchas\nOfficial Documentation Links SRM 5.5 Release Notes SRM 5.5 Compatibility Matrix SRM 5.5 Documentation Center SRM Port Numbers SRM Product Page\nAdditional Resources Follow these people on Twitter if you are looking for some great resources to learn more about SRM.\n@Mike_Laverick Author of \u0026quot; Administering VMware Site Recovery Manager\u0026quot; BLOG: mikelaverick.com @vmKen Senior Technical Marketing Director for VMware DRBC products BLOG: blogs.vmware.com/vsphere/uptime/ VMware Communities SRM\nHome Lab Its one thing to read how to do 
something, but another to get your hands on the technology. I have a post on this site dedicated to showing how you can build an SRM site all within a single host with nested ESXi hosts.\nPoor Mans SRM Lab\n","permalink":"https://theithollow.com/2013/11/04/vmware-site-recovery-manager-55-guide/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/5.5Guide.png\"\u003e\u003cimg alt=\"5.5Guide\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/5.5Guide.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThis is a Site Recovery Manager 5.5 Guide to help understand the design, installation, operation and architecture of setting up VMware SRM 5.5\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"/srm5-5architecture\"\u003eSRM 5.5 Architecture\u003c/a\u003e \u003ca href=\"/srm-5-5-installation\"\u003eSRM 5.5 Installation\u003c/a\u003e \u003ca href=\"/srm-5-5-site-configuration/\"\u003eSRM 5.5 Site Configuration\u003c/a\u003e \u003ca href=\"/srm-5-5-vm-replication-configuration\"\u003eSRM 5.5 VM Replication Configuration\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eSRM 5.5 Array Replication Configuration\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"/srm-5-5-virtual-appliance-replication\"\u003eSRM 5.5 Virtual Appliance Replication\u003c/a\u003e \u003ca href=\"/srm-5-5-protection-groups\"\u003eSRM 5.5 Protection Groups\u003c/a\u003e \u003ca href=\"/srm-5-5-recovery-plans\"\u003eSRM 5.5 Recovery Plans\u003c/a\u003e \u003ca href=\"/site-recovery-manager-5-5-guide/srm-5-5-bulk-ip-customization\"\u003eSRM 5.5 Bulk IP Customizations\u003c/a\u003e \u003ca href=\"/srm-5-5-test-recovery\"\u003eSRM 5.5 Test Recovery\u003c/a\u003e \u003ca href=\"/srm-5-5-recovery\"\u003eSRM 5.5 Recovery\u003c/a\u003e \u003ca href=\"/2012/06/vmware-srm-gotchas/\" title=\"VMware SRM Gotchas\"\u003eSRM Gotchas\u003c/a\u003e\u003c/p\u003e\n\u003ch2 id=\"official-documentation-links\"\u003e\u003cstrong\u003eOfficial Documentation Links\u003c/strong\u003e\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"http://www.vmware.com/support/srm/srm-releasenotes-5-5-0.html\"\u003eSRM 5.5 Release Notes\u003c/a\u003e \u003ca href=\"https://www.vmware.com/support/srm/srm-compat-matrix-5-5.html\"\u003eSRM 5.5 Compatibility Matrix\u003c/a\u003e \u003ca href=\"http://pubs.vmware.com/srm-55/index.jsp\"\u003eSRM 5.5 Documentation Center\u003c/a\u003e \u003ca href=\"http://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=1009562\"\u003eSRM Port Numbers\u003c/a\u003e \u003ca href=\"http://www.vmware.com/products/site-recovery-manager/\"\u003eSRM Product Page\u003c/a\u003e\u003c/p\u003e","title":"VMware Site Recovery Manager 5.5 Guide"},{"content":"Many times it\u0026rsquo;s not practical to modify the IP addresses of every individual VM as they are configured. Luckily VMware has provided a way to bulk upload IP addresses.\nFrom an SRM server, open a command prompt and change the working directory to: c:Program FilesVMwareVMware vCenter Site Recovery Managerbin\nNOTE: Path may be different depending on your install location.\nGenerate a .CSV file to edit your IP Addresses by running dr-ip-customizer.exe with the \u0026ndash;cfg, \u0026ndash;cmd \u0026ndash;vc -i \u0026ndash;out switches.\n\u0026ndash;cfg should be the location of the vmware-dr.xml file. 
\u0026ndash;cmd should be \u0026ldquo;Generate\u0026rdquo;, \u0026ndash;vc lists the vCenter server, and \u0026ndash;out lists the location to generate the .csv file.\nExample: dr-ip-customizer.exe \u0026ndash;cfg \u0026ldquo;C:Program filesVMwareVMware vCenter Site Recovery ManagerConfigvmware-dr.xml\u0026rdquo; \u0026ndash;cmd generate \u0026ndash;vc FQDNofvCenter -i \u0026ndash;out c:ipaddys.csv\nOpen the .csv file and fill out the information. Notice that there are two entries for the VM. This is because there are two vCenters and in order to do protection and fail back we need the IP Addresses for both sides.\nOnce the IP Address information is entered, run the customizer again with the \u0026ndash;cmd \u0026ldquo;Apply\u0026rdquo; and \u0026ndash;CSV file location.\nExample: dr-ip-customizer.exe \u0026ndash;cfg \u0026ldquo;C:Program filesVMwareVMware vCenter Site Recovery ManagerConfigvmware-dr.xml\u0026rdquo; \u0026ndash;cmd apply \u0026ndash;vc FQDNofvCenter -i \u0026ndash;cmdc:ipaddys.csv\n","permalink":"https://theithollow.com/site-recovery-manager-5-5-guide/srm-5-5-bulk-ip-customization/","summary":"\u003cp\u003eMany times it\u0026rsquo;s not practical to modify the IP addresses of every individual VM as they are configured.  Luckily VMware has provided a way to bulk upload IP addresses.\u003c/p\u003e\n\u003cp\u003eFrom an SRM server, open a command prompt and change the working directory to:  c:Program FilesVMwareVMware vCenter Site Recovery Managerbin\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eNOTE:\u003c/strong\u003e Path may be different depending on your install location.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/11/SRM-IPCustom1.png\"\u003e\u003cimg alt=\"SRM-IPCustom1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/11/SRM-IPCustom1.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eGenerate a .CSV file to edit your IP Addresses by running dr-ip-customizer.exe with the \u0026ndash;cfg, \u0026ndash;cmd \u0026ndash;vc -i \u0026ndash;out switches.\u003c/p\u003e","title":"SRM 5.5 Bulk IP Customization"},{"content":"Solid State drives are much faster than their spinning disk predecessors, but can also have performance degradation due to how they interact with Operating Systems.\nFlash consists of blocks of data and those blocks are full of smaller items called pages. A typical SSD might have block sizes of 512KB and 4KB pages.\nThere are 3 statuses that a healthy page could be in on a flash disk.\nWritten to: Data from the OS has been written to the page. Unwritten to: The page is free and available to be written to by the Operation System. Invalid: The page has data in it, but is available to be overwritten by the Operating System. Parts 1 and 2 probably don’t need any explanation, but part 3 requires a bit more knowledge about how disk controllers interact with the Operating System.\nBecause modifying a page, or deleting a page takes time, the Operating System usually updates the file system to say that the page is available to be overwritten, but never informs the disk controller. Since the disk controller never knows what pages have real data in them, and which pages are invalid, they are never deleted. Normally, this speeds up the process because you don’t have to go through a whole routine of deleting the actual data. 
The OS shows the amount of “Free Space” which is = (Total Unwritten Pages) + (Total Invalid Pages) * (Page Size).\nFlash Memory has three kinds of operations.\nPage-read: A page is read. Page-write: A page is written. Block-Erase: A block is erased. You might notice that one of those operations is not like the other. The fact that an SSD needs to erase entire blocks at a time causes some performance issues. Flash disks cannot erase a single page, they have to delete an entire block.\nOverwriting an invalid page is not as simple as you\u0026rsquo;d think it would be. It turns out that flash has to write to unwritten pages. So if a page is invalid, the data in it must be deleted first and then it can be written to again. Remember that the data in that invalid page still has data in it, the file system has marked that page as available to be re-written. The disk controller has not deleted the info in that page.\nLets look at an example of what needs to happen to overwrite a single page.\nHere we have a single block with all invalid pages, and two written pages. The OS file system tells us that we can write to any of the pages except the two that are written, but the disk controller needs to delete the invalid pages first.\nThe process of deleting a single page requires you to copy the entire block to memory, erase the block from disk, and then written pages back from memory. This is a lot of work to write a few pages to NAND cells that should already be available to write to.\nTRIM TRIM commands have now been added to some Operating Systems to let the SSD\u0026rsquo;s know ahead of time that data can be deleted. During the disk\u0026rsquo;s garbage collection cycle, it can go through and delete the blocks in advance of the OS needing to write to them again which helps alleviate this performance cliff.\n","permalink":"https://theithollow.com/2013/10/28/understanding-ssd-write-performance-cliff/","summary":"\u003cp\u003eSolid State drives are much faster than their spinning disk predecessors, but can also have performance degradation due to how they interact with Operating Systems.\u003c/p\u003e\n\u003cp\u003eFlash consists of blocks of data and those blocks are full of smaller items called pages.  A typical SSD might have block sizes of 512KB and 4KB pages.\u003c/p\u003e\n\u003cp\u003eThere are 3 statuses that a healthy page could be in on a flash disk.\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eWritten to:\u003c/strong\u003e Data from the OS has been written to the page.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eUnwritten to:\u003c/strong\u003e The page is free and available to be written to by the Operation System.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eInvalid:\u003c/strong\u003e The page has data in it, but is available to be overwritten by the Operating System.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/Pages.png\"\u003e\u003cimg alt=\"Pages\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/Pages.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"Understanding the SSD write performance cliff"},{"content":"If you plan to use VMware Replication to replicate your VMs, you\u0026rsquo;ll need to deploy a VR Appliance at each site. Click the \u0026ldquo;Deploy VR Appliance\u0026rdquo; link under the VMware replication tab. 
Deploy the OVF template to your desired ClusterHostnetwork and datastore.\nName your VR appliance and select the inventory location.\nChoose your cluster.\nChoose a resource pool, or just the cluster name if no resource pools exist. The cluster itself, acts as a resource pool.\nChoose your datastore.\nProvision your virtual disks.\nSelect your virtual network. This might be a good time to review you networks again. It may make sense to create a different network strictly for your replication traffic.\nChange the root password for the VR appliance as well as configuring your networking settings.\nSelect the vCenter Extension vService to connect to vCenter.\nReview the settings and click finish.\nWait for the replication appiance to deploy the disks and register with vCenter.\nOnce the appliance has been deployed, you can use the URL to make further modifications, or upgrade later on.\nCongratulations!\n","permalink":"https://theithollow.com/site-recovery-manager-5-5-guide/srm-5-5-vm-replication-configuration/","summary":"\u003cp\u003eIf you plan to use VMware Replication to replicate your VMs, you\u0026rsquo;ll need to deploy a VR Appliance at each site.  Click the \u0026ldquo;Deploy VR Appliance\u0026rdquo; link under the VMware replication tab. \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/VMReplicationSetup1.png\"\u003e\u003cimg alt=\"VMReplicationSetup1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/VMReplicationSetup1.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eDeploy the OVF template to your desired ClusterHostnetwork and datastore.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/VMReplicationSetup2.png\"\u003e\u003cimg alt=\"VMReplicationSetup2\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/VMReplicationSetup2.png\"\u003e\u003c/a\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/VMReplicationSetup3.png\"\u003e\u003cimg alt=\"VMReplicationSetup3\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/VMReplicationSetup3.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eName your VR appliance and select the inventory location.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/VMReplicationSetup4.png\"\u003e\u003cimg alt=\"VMReplicationSetup4\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/VMReplicationSetup4.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eChoose your cluster.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/VMReplicationSetup5.png\"\u003e\u003cimg alt=\"VMReplicationSetup5\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/VMReplicationSetup5.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eChoose a resource pool, or just the cluster name if no resource pools exist.  The cluster itself, acts as a resource pool.\u003c/p\u003e","title":"SRM 5.5 VM Replication Configuration"},{"content":"Once virtual replication appliances have been paired and configured, the virtual machines can be replicated. Click the hanger next to the VM to be replicated, click \u0026ldquo;All vSphere Replication Actions\u0026rdquo; then click \u0026ldquo;Configure Replication\u0026rdquo; NOTE: VMware is phasing out the C# client for vCenter, but hasn\u0026rsquo;t yet. 
In order to do SRM Configurations you still need to use this client which is why many of my examples are still using it. The act of replicating the VM I\u0026rsquo;ve chosen to use the web client because it adds extra features that the C# client is missing.\nIf you have a lot of replication happening, you may have more than one VR Server to handle the replication. Each VM will be assigned a VR server, and you can control which ones are used if needed. If you\u0026rsquo;re not sure, choose the Auto-assign.\nSelect the datastore the VM should be replicated to at the recovery site.\nSelect the individual disks that need to be replicated and to what datastore they should be replicated to. If multiple disks are attached to the VM, different datastores can be selected for replication.\nSelect if the guest operating system should be quiesed. This may be very useful for databases, exchange, etc.\nSet your Recovery Point Objective (RPO). The RPO is the amount of data loss that is acceptable during a disaster. Obviously no one wants to lose data, but the lowest RPO that can be configured by vSphere Replication is 15 minutes. If you need lower RPO than this, you\u0026rsquo;ll need to select a storage replication method, likely at a cost.\nAlso, one of the new features with vSphere Replication 5.5 is to use multiple point in time replications. This means that you can replicate your snapshots as well as the VM, but to do this you MUST use the web client. This option is not available with the C# client. Review the settings and choose finish.\nThe recent tasks will show the replication start after the initial configuration.\n","permalink":"https://theithollow.com/site-recovery-manager-5-5-guide/srm-5-5-virtual-appliance-replication/","summary":"\u003cp\u003eOnce virtual replication appliances have been paired and configured, the virtual machines can be replicated.  Click the hanger next to the VM to be replicated, click \u0026ldquo;All vSphere Replication Actions\u0026rdquo; then click \u0026ldquo;Configure Replication\u0026rdquo; \u003cstrong\u003eNOTE:\u003c/strong\u003e VMware is phasing out the C# client for vCenter, but hasn\u0026rsquo;t yet.  In order to do SRM Configurations you still need to use this client which is why many of my examples are still using it.  The act of replicating the VM I\u0026rsquo;ve chosen to use the web client because it adds extra features that the C# client is missing.\u003c/p\u003e","title":"SRM 5.5 Virtual Appliance Replication"},{"content":"Running a test recovery is an important step to take before having an actual disaster. One of the most valuable pieces of VMware Site Recovery Manager is that you can non-disruptively test the configurations.\nFind the recovery plan to test in the SRM module of the vSphere Client. Click the \u0026ldquo;Test\u0026rdquo; button.\nA warning message will appear before continuing on with your test. Also on this screen there is an option to run one last replication to make sure the VMs are up to date. This is useful if the plan is to use SRM to migrate VMs to a different datacenter, but as we all know, disasters aren\u0026rsquo;t going to wait for one more replication. 
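For instance, with vSphere Replication's 15 minute minimum RPO, a failover kicked off at 10:07 whose last completed sync finished at 10:00 brings up a copy missing roughly seven minutes of writes; the final-sync option exists to close that gap for planned migrations, not for surprise outages.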
If the goal is to test what happens during a disaster, leave this box unchecked to make it more realistic.\nReview the information and click \u0026ldquo;Start\u0026rdquo;.\nThe recovery test will start and information will start appearing in the SRM console about number of VMs in progress, finished and not started.\nOnce the recovery plan test is done, a message will appear letting you know that the test environment is setup. During this phase, the VMs should be available for running any application tests, connectivity tests etc that may need to be verified. Any errors here should be reviewed so that they can be fixed before running a real failover, or subsequent failover tests.\nThe example below shows that during the test recovery the VM named \u0026ldquo;OMGNO-VM\u0026rdquo; is operational at both the DR Datacenter and the Production Datacenter without issue.\nIf during the recovery plan setup, an \u0026ldquo;Auto\u0026rdquo; network was used for test recoveries, a bubble network would be created during the test failover. In the example below there is a new vSwitch created without any uplinks. This is where the VMs will be attached for testing purposes.\nAs you can see below, the OMGNO-VM is fully operational during the test failover.\nWhen the testing is done, click the \u0026ldquo;cleanup\u0026rdquo; link on the SRM screen. A warning message will appear asking you to confirm that this is what is intended. During this cleanup phase, the VMs will be powered back off, temporary datastores that were created will be destroyed, and any bubble networks will be removed.\nReview the information and click \u0026ldquo;Start\u0026rdquo;.\nA screen similar to the testing screen will be displayed showing what VMs have been cleaned up etc.\n","permalink":"https://theithollow.com/site-recovery-manager-5-5-guide/srm-5-5-test-recovery/","summary":"\u003cp\u003eRunning a test recovery is an important step to take before having an actual disaster.  One of the most valuable pieces of VMware Site Recovery Manager is that you can non-disruptively test the configurations.\u003c/p\u003e\n\u003cp\u003eFind the recovery plan to test in the SRM module of the vSphere Client.  Click the \u0026ldquo;Test\u0026rdquo; button.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/TestVM1.png\"\u003e\u003cimg alt=\"TestVM1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/TestVM1.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eA warning message will appear before continuing on with your test.  Also on this screen there is an option to run one last replication to make sure the VMs are up to date.  This is useful if the plan is to use SRM to migrate VMs to a different datacenter, but as we all know, disasters aren\u0026rsquo;t going to wait for one more replication.  If the goal is to test what happens during a disaster, leave this box unchecked to make it more realistic.\u003c/p\u003e","title":"SRM 5.5 Test Recovery"},{"content":"Once the SRM servers, vCenter Servers, SSO etc are all installed, you can finally begin the configuration. In the vSphere client go to the SRM icon to get started. The first requirement is to setup a connection between the protected site (production site) and the recovery site (disaster site). Click the \u0026ldquo;configure connection\u0026rdquo; link.\nEnter the recovery site information. 
This is the server running vCenter at the DR site.\nMake sure you have login credentials for the DR site\u0026rsquo;s vCenter server. Enter your login info.\nWhen you have finished entering in the information, SRM will finish the rest.\nNow you\u0026rsquo;ll see that both vCenter servers are listed under sites. Next you want to configure mappings. Mappings are necessary for the recovery site vCenter to know what to do with the virtual machines when a failover occurs. For instance there is a folder mapping that shows where to put VMs when they are failed over.\nNext, the network mappings should be done. For instance there may be different vlans, vSwitch names etc between the protected site and the recovery site. This allows you to change the network configs when they are failed over.\nLastly, you\u0026rsquo;ll need to setup a placeholder datastore. This is the datastore where the VM lives. This is only the .vmx file and not the .vmdk files (or virtual disks that are replicated).\n","permalink":"https://theithollow.com/site-recovery-manager-5-5-guide/srm-5-5-site-configuration/","summary":"\u003cp\u003eOnce the SRM servers, vCenter Servers, SSO etc are all installed, you can finally begin the configuration.  In the vSphere client go to the SRM icon to get started. \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/configSRM.png\"\u003e\u003cimg alt=\"configSRM\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/configSRM.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThe first requirement is to setup a connection between the protected site (production site) and the recovery site (disaster site).  Click the \u0026ldquo;configure connection\u0026rdquo; link.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/Configure-Connection1.png\"\u003e\u003cimg alt=\"Configure-Connection1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/Configure-Connection1.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eEnter the recovery site information.  This is the server running vCenter at the DR site.\u003c/p\u003e","title":"SRM 5.5 Site Configuration"},{"content":"Recovery Plans are the guts of the Disaster Recovery Scenario. The recovery plan determines what VMs get started, servers that need to be powered down, scripts to be run, startup order and the overall automated section of your failover.\nMultiple recovery plans can be created in order to handle failures of a single application like Exchange (Edge Transport, Mailbox, Hub Transport and CAS Servers). It can also handle an entire site failover in the event of a natural disaster or \u0026ldquo;smoking hole\u0026rdquo; event.\nChose the \u0026ldquo;Create Recovery Plan\u0026rdquo; from the SRM Module \u0026ndash;\u0026gt; Recovery Plan tab.\nSelect the protection groups you want to plan the recovery of.\nWhen the recovery plan networks are displayed, two networks should be mapped. The first network called the \u0026ldquo;Recovery Network\u0026rdquo; is where the VMs will start up in the event that you need to run recovery. The second network is called the \u0026ldquo;Test NetworK\u0026rdquo; and this is where the VMs will connect to in the event you just want to test the failovers.\nThe test network is very useful to be sure you don\u0026rsquo;t disrupt the production servers by having two identical machines on the same networks. 
The test network can be a \u0026ldquo;bubble\u0026rdquo; network where any VMs in the network can see each other, but nothing else. If left to \u0026ldquo;auto\u0026rdquo; a bubble network will be created during the test so that basic testing can be done during the test recovery.\nYou may choose to have your test network a predefined network of your choosing.\nName your recovery plan. This is a good opportunity to call it something like \u0026ldquo;Full Disaster Failure\u0026rdquo; or \u0026ldquo;Database Recovery\u0026rdquo;.\nReview your selections and the click finish.\nAt this point the recovery plan is ready to go. In most cases additional configuration would be necessary to customize how the VMs should be recovered.\nTo do this, open the Recovery plan that was just created and click the \u0026ldquo;Configure Recovery\u0026rdquo; link for the VMs.\nInside the Configure Recovery Menu a list of customizable settings will be available. A popular setting change is to modify the IP settings of the virtual machine. Many times moving a VM from one site to another site requires the IP address to change for it to communicate on the network. (There are ways around this if you\u0026rsquo;re wondering)\nSet the Priority Group. Here you can have a group of VMs in the Recovery plan, but you can then set priorities for VMs to be started. Maybe there are some highly mission critical servers that should be started in Priority Group 1 and some lab servers that can wait until everything else is up and running that can go in Group 5.\nA dependency can be set so that other VMs must be recovered first before the VM will start. This may be useful if a web server needs to wait for a database server to be available first.\nIf the customization needed, isn\u0026rsquo;t available in the menu, there is always the option to run a script at any point in the process.\nOnce the VMs are configured, take a look at the recover steps tab in SRM. This will show all of the steps that should take place during a recovery. Additional scripts can be added from this menu at the points needed.\n","permalink":"https://theithollow.com/site-recovery-manager-5-5-guide/srm-5-5-recovery-plans/","summary":"\u003cp\u003eRecovery Plans are the guts of the Disaster Recovery Scenario.  The recovery plan determines what VMs get started, servers that need to be powered down, scripts to be run, startup order and the overall automated section of your failover.\u003c/p\u003e\n\u003cp\u003eMultiple recovery plans can be created in order to handle failures of a single application like Exchange (Edge Transport, Mailbox, Hub Transport and CAS Servers).  It can also handle an entire site failover in the event of a natural disaster or \u0026ldquo;smoking hole\u0026rdquo; event.\u003c/p\u003e","title":"SRM 5.5 Recovery Plans"},{"content":"The day has come where something happened and Site Recovery Manager is required to save the day. Hopefully, this is a planned migration to a new datacenter and not due to some sort of unfortunate outage. In any event it should help to know that SRM is available to help fix the situation.\nGo to the Recovery Plan and choose the \u0026ldquo;Recovery\u0026rdquo; button. This may also be known as the \u0026ldquo;Big Red Button\u0026rdquo;.\nA warning message will appear letting you know that this is a serious situation. 
Also, if this is a planned migration the option to complete a final replication will be available.\nReview the information and click \u0026ldquo;Start\u0026rdquo;.\nFailover will occur and when done, the VMs should be powered on at the DR site. The example below shows the OMGNO-VM is now powered of in the DR Site. Since my example still has access to the production site (not a real disaster) the old OMGNO-VM is still there, but was powered off.\nOnce the recovery plan has been failed over, a \u0026ldquo;Recovery Complete\u0026rdquo; message shows up and will have a list of the recovered VMs, etc.\nThis screen is important because at some point the original datacenter may become operational again. If this happens, the \u0026ldquo;Reprotect\u0026rdquo; link will save a lot of time.\nReprotect will reverse the direction of replication and make the original datacenter site the new recovery site. When all of the replication is complete, a second \u0026ldquo;failover\u0026rdquo; can be initiated to get back to the production site.\nAfter selecting \u0026ldquo;Reprotect\u0026rdquo; a warning message will appear showing the new protection site and recovery sites.\nReview the information and click \u0026ldquo;Start\u0026rdquo;.\nRe-run a failover to get back to the original datacenter.\n","permalink":"https://theithollow.com/site-recovery-manager-5-5-guide/srm-5-5-recovery/","summary":"\u003cp\u003eThe day has come where something happened and Site Recovery Manager is required to save the day.  Hopefully, this is a planned migration to a new datacenter and not due to some sort of unfortunate outage.  In any event it should help to know that SRM is available to help fix the situation.\u003c/p\u003e\n\u003cp\u003eGo to the Recovery Plan and choose the \u0026ldquo;Recovery\u0026rdquo; button.  This may also be known as the \u0026ldquo;Big Red Button\u0026rdquo;.\u003c/p\u003e","title":"SRM 5.5 Recovery"},{"content":"Protection Groups house one to many virtual machines that you want to fail over together. \u0026ldquo;A protection group is a collection of virtual machines and templates that you protect together by using SRM\u0026rdquo; - http://pubs.vmware.com/srm-55/topic/com.vmware.ICbase/PDF/srm-admin-5-5.pdf\nOnce replication has been setup, a protection group can then be created. The below example is a protection group consisting of a single VM, but protection groups may consist of an entire application such as Exchange (CAS Server, HT Server, Mailbox Servers, Edge Transport Servers) or even an entire site.\nMy recommendation is to break these protection groups down into the smaller parts so if you are designing DR for an entire site with Exchange, SQL File Servers etc; create a protection group for each of them. You\u0026rsquo;ll see later that a recovery plan can then start all of these groups up to consist of the entire site.\nSelect which site is the protected site and what kind of replication is being used. NOTE: if you are using SAN replication, an entire datastore must be selected for the protection group instead of a single VM. This is important to note when making your design considerations.\nSince my example uses VR replication, the next step is to choose the VMs to put in the group.\nName the Protection group and give it a description. It would be good to name this with the type of application involved such as Oracle Databases, or Web Servers, etc.\nReview your selections and choose finish.\nOnce done, you may have a warning that the status is not configured. 
This may happen if you haven\u0026rsquo;t setup all of the mappings in the site configuration. You can click on the entire protection group or the individual VMs and select \u0026ldquo;Configure Protection\u0026rdquo;.\nNext, you\u0026rsquo;ll see that a mapping needs to be setup for the VM folders, Resource Pools, and networks. Click on each of them to set the mappings.\nSet Folder mappings.\nSet Resource Pool Mappings.\nSet Network Mappings.\nIf you receive an error such as the example below, be sure to check your mappings for the protection group.\n","permalink":"https://theithollow.com/site-recovery-manager-5-5-guide/srm-5-5-protection-groups/","summary":"\u003cp\u003eProtection Groups house one to many virtual machines that you want to fail over together.  \u0026ldquo;A protection group is a collection of virtual machines and templates that you protect together by using SRM\u0026rdquo;  - \u003ca href=\"http://pubs.vmware.com/srm-55/topic/com.vmware.ICbase/PDF/srm-admin-5-5.pdf\"\u003ehttp://pubs.vmware.com/srm-55/topic/com.vmware.ICbase/PDF/srm-admin-5-5.pdf\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eOnce replication has been setup, a protection group can then be created.  The below example is a protection group consisting of a single VM, but protection groups may consist of an entire application such as Exchange (CAS Server, HT Server, Mailbox Servers, Edge Transport Servers) or even an entire site.\u003c/p\u003e","title":"SRM 5.5 Protection Groups"},{"content":"Installling VMware Site Recovery Manager is a pretty simple task. For the most part the wizard asks some basic info about vCenter and IP addresses and takes you through the individual steps. It would be a good idea to turn off UAC on the server before you install though. Making changes after the install requires UAC to be off and can get messy when you do it after the fact.\nNOTE: Turning off UAC on Server 2012 requires editing the registry. https://social.technet.microsoft.com/wiki/contents/articles/13953.windows-server-2012-deactivating-uac.aspx SRM needs to register with the local vCenter server. Enter connection information for the vCenter that is in the same site as the SRM server.\nEnter a local site name as well as email address. You\u0026rsquo;ll also need to select the Local host which can be domain name or an IP address of the SRM server. You also have the opportunity to change ports if necessary.\nSelect your database client. I\u0026rsquo;ve used SQL in this instance. The configuration wizard requires an ODBC DSN to be created, but you can click the button on the screen to add one if you haven\u0026rsquo;t done it ahead of time. Make sure to enter credentials for the database connection.\n","permalink":"https://theithollow.com/site-recovery-manager-5-5-guide/srm-5-5-installation/","summary":"\u003cp\u003eInstallling VMware Site Recovery Manager is a pretty simple task.  For the most part the wizard asks some basic info about vCenter and IP addresses and takes you through the individual steps.  It would be a good idea to turn off UAC on the server before you install though.  
Making changes after the install requires UAC to be off and can get messy when you do it after the fact.\u003c/p\u003e","title":"SRM 5.5 Installation"},{"content":"VMware Site Recovery Manager consists of several different pieces that all have to fit together, let alone the fact that you are working with two different physical locations.\nThe following components will all need to be configured for a successful SRM implementation:\n2 or more sites 2 or more Single Sign On Servers 2 or more vCenter Servers 2 or more SRM Servers Storage - Either storage arrays with replication, or 2 or more Virtual Replication Appliances Networks It\u0026rsquo;s worth noting that SSO, vCenter, and SRM could all be installed on the same machine, but you\u0026rsquo;ll need this many instances of these components.\nAs of VMware Site Recovery Manager 5.5 you can do a traditional Protected to Recovery Site implementation like the one shown below. This can be a unidirectional setup with a warm site ready for a failover to occur, or it can be bi-directional where both sites are in use and a failure at either site could be failed over to the opposite site.\nEach site will require their own vCenter Server and SRM Server, as well as a method of replication such as a storage array.\nAlong with a 1 to 1 setup, SRM 5.5 can manage a many to one failover scenario where multiple sites could fail over to a single site. This would require an SRM instance for each of the protected sites as seen in the diagram below.\nThe configuration that is not available at the moment is a single site to multiple failover sites. *as of SRM 5.5\nSingle Sign-On After vSphere 5.1 was released, Single Sign-On needs to be a consideration. If you\u0026rsquo;re taking disaster recovery into consideration, it is important to keep SSO available at both sites.\nThe most logical choice is to install an SSO server at each site. This requires Active Directory sites to be setup correctly with multiple subnets and multiple replication sites. 
This ensures that one copy of the database is being housed across the entire environment, but either SSO server can operate if the other fails.\nFor more information I urge you to visit: http://blogs.vmware.com/vsphere/2013/02/linked-mode-with-sso-for-srm.html ","permalink":"https://theithollow.com/site-recovery-manager-5-5-guide/srm5-5architecture/","summary":"\u003cp\u003eVMware Site Recovery Manager consists of several different pieces that all have to fit together, let alone the fact that you are working with two different physical locations.\u003c/p\u003e\n\u003cp\u003eThe following components will all need to be configured for a successful SRM implementation:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003e2 or more sites\u003c/li\u003e\n\u003cli\u003e2 or more Single Sign On Servers\u003c/li\u003e\n\u003cli\u003e2 or more vCenter Servers\u003c/li\u003e\n\u003cli\u003e2 or more SRM Servers\u003c/li\u003e\n\u003cli\u003eStorage - Either storage arrays with replication, or 2 or more Virtual Replication Appliances\u003c/li\u003e\n\u003cli\u003eNetworks\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eIt\u0026rsquo;s worth noting that SSO, vCenter, and SRM could all be installed on the same machine, but you\u0026rsquo;ll need this many instances of these components.\u003c/p\u003e","title":"SRM 5.5 Architecture"},{"content":"I was asked to provide some twitter statistics for VMworld -Barcelona like I did for the VMworld San Francisco this year.\nHere were the results\n#VMWorld Tweets by time of day are as of US Central timezone. This is why there are so many at 2am. Central time is -7 hours behind Barcelona.\nMost Mentions Tweet Map #vExpert Most Mentions #VCDX Most Mentions ","permalink":"https://theithollow.com/2013/10/22/vmworld-eu-2013-twitter-statistics/","summary":"\u003cp\u003eI was asked to provide some twitter statistics for VMworld -Barcelona like I did for the \u003ca href=\"/2013/09/vmworld-twitter-statistics/\" title=\"VMworld twitter statistics\"\u003eVMworld San Francisco\u003c/a\u003e this year.\u003c/p\u003e\n\u003cp\u003eHere were the results\u003c/p\u003e\n\u003ch1 id=\"vmworld\"\u003e\u003cstrong\u003e#VMWorld\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter1.png\"\u003e\u003cimg alt=\"twitter1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter1.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter6.png\"\u003e\u003cimg alt=\"twitter6\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter6.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter7-vmworld.png\"\u003e\u003cimg alt=\"twitter7-vmworld\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter7-vmworld.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eTweets by time of day are as of US Central timezone.  This is why there are so many at 2am.  
Central time is -7 hours behind Barcelona.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter3.png\"\u003e\u003cimg alt=\"twitter3\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter3.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"most-mentions-twitter4\"\u003eMost Mentions \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter4.png\"\u003e\u003cimg alt=\"twitter4\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter4.png\"\u003e\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"tweet-map\"\u003eTweet Map\u003c/h1\u003e\n\u003ch1 id=\"twitter5\"\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter5.png\"\u003e\u003cimg alt=\"twitter5\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter5.png\"\u003e\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"vexpert\"\u003e#vExpert\u003c/h1\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter1-vexpert.png\"\u003e\u003cimg alt=\"twitter1-vexpert\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter1-vexpert.png\"\u003e\u003c/a\u003e  \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter3-vexpert.png\"\u003e\u003cimg alt=\"twitter3-vexpert\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter3-vexpert.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"twitter7-vexpert\"\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter7-vexpert.png\"\u003e\u003cimg alt=\"twitter7-vexpert\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter7-vexpert.png\"\u003e\u003c/a\u003e\u003c/h1\u003e\n\u003ch1 id=\"most-mentions\"\u003eMost Mentions\u003c/h1\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter4-vexpert.png\"\u003e\u003cimg alt=\"twitter4-vexpert\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter4-vexpert.png\"\u003e\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter6-vexpert.png\"\u003e\u003cimg alt=\"twitter6-vexpert\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter6-vexpert.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"vcdx\"\u003e\u003cstrong\u003e#VCDX\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter1-vcdx.png\"\u003e\u003cimg alt=\"twitter1-vcdx\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter1-vcdx.png\"\u003e\u003c/a\u003e  \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter3-vcdx.png\"\u003e\u003cimg alt=\"twitter3-vcdx\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/twitter3-vcdx.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"VMworld EU 2013 Twitter Statistics"},{"content":" I think the Olsen twins have been using FT longer than VMware has.\nAwesome! So you\u0026rsquo;ve got your brand new shiny VMware cluster all setup with HA and think, \u0026ldquo;Man, I\u0026rsquo;m in great shape now. Downtime is a thing of the past!\u0026rdquo;.\nWell, not so fast! 
VMware High Availability just means that if a physical host fails, the virtual machines can reboot on another host which LIMITS your downtime. What if your machines are so critical that you can\u0026rsquo;t have this reboot time in the case of a host failure? The answer might be VMware Fault Tolerance (FT).\nVMware implements FT by adding a second virtual machine (or a twin) that is in lockstep with the first. In essence, they\u0026rsquo;re twins. If something happens to the host one of the FT enabled VMs are running on, that VM may stop, but the twin will continue running and handling all of the operations for production use. Pretty Awesome Stuff.\nHow it works In order for the two machines to work in \u0026ldquo;vLockstep\u0026rdquo; there is a logging network setup. Much like a vMotion network being configured on a different vlan or subnet, the FT network should be setup on a separate network from production.\nTwo VMs are setup and forced onto different hosts. The primary VM will read all of the non-deterministic data such as mouse clicks, network info, disk reads ect, and send them to the secondary VM on the FT logging network. The secondary VM will then replay those logs so that the two VMs seem identical.\nWhen a failure of the host of the Primary VM occurs, the secondary VM will take over and a new secondary VM would then be created in order to keep FT current.\nHow to Configure First, you need to make sure you have your network setup. I won\u0026rsquo;t go into this in detail, but create your portgroup and make sure you select the \u0026ldquo;Use this virtual adapter for Fault Tolerance logging\u0026rdquo; option. Obviously this will need to be setup on all of your hosts. Distributed switches makes this easier.\nNext you can right click your virtual machine and select Fault Tolerance \u0026ndash;\u0026gt;Turn on Fault Tolerance. You should get a warning message that you can\u0026rsquo;t use thin provisioning and the VMs won\u0026rsquo;t use DRS and a memory reservation will be set. Choose Yes to continue. FT will start up and create the secondary VM, which you\u0026rsquo;ll be able to see in the VMs and Templates view. Problems My setup was not without an issue or two. I needed to manually change the disks from thin to Eager zeroed thick and the monitor mode wasn\u0026rsquo;t compatible with my hosts in my lab. This was easily resolved from the VMware KB http://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=2000589 Demo time Here, I\u0026rsquo;ve opened the console on both the Primary and Secondary VM. You can see that a ping is running on both, the task manager is identical on both machines and the up times are the same. Also, notice that the secondary VM is listed as read-only.\nNext, I\u0026rsquo;ve simulated a failure on one of the hosts. You can see that one of the VM\u0026rsquo;s keeps right on humming, while the other one goes blank.\nWhen the host recovers and both VMs are up again, notice the uptime looks identical again.\nRestrictions So FT solves everything\u0026hellip; not quite. FT comes with quite a few restrictions. The full list is here: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=1010631\nThe biggest problem is that virtual SMP is not supported. 
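If you want a quick way to see which of your VMs would even qualify, a pre-check along these lines can help. This is a hypothetical sketch in Python that works on plain dictionaries you fill in yourself (it does not talk to the vSphere API); it only flags the blockers called out in this post, a vCPU count above one and disks that are not eager zeroed thick.

def ft_blockers(vm):
    # Collect reasons a VM cannot be protected with FT, per the restrictions above.
    blockers = []
    if vm["vcpus"] > 1:
        blockers.append("virtual SMP is not supported, FT needs a single vCPU")
    for disk in vm["disks"]:
        if disk["provisioning"] != "eagerZeroedThick":
            blockers.append(disk["name"] + " must be converted to eager zeroed thick")
    return blockers

# Example inventory data, entirely made up for illustration.
vms = [
    {"name": "sql01", "vcpus": 4, "disks": [{"name": "sql01.vmdk", "provisioning": "thin"}]},
    {"name": "dc01",  "vcpus": 1, "disks": [{"name": "dc01.vmdk",  "provisioning": "eagerZeroedThick"}]},
]

for vm in vms:
    issues = ft_blockers(vm)
    print(vm["name"], "looks FT-eligible" if not issues else issues)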
I would guess in many cases the most critical virtual machine in your organization is possible one of the most resource intensive (think SQL Server).\nIf you\u0026rsquo;ve got a VM with a single vCPU and need very high uptime, try FT and see how it goes!\n","permalink":"https://theithollow.com/2013/10/21/vmware-fault-tolerance-ft/","summary":"\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/olsontwins-300x277.png\"\n         alt=\" I think the Olsen twins have been using FT longer than VMware has.\" width=\"300\"/\u003e \u003cfigcaption\u003e\n            \u003cp\u003eI think the Olsen twins have been using FT longer than VMware has.\u003c/p\u003e\n        \u003c/figcaption\u003e\n\u003c/figure\u003e\n\n\u003cp\u003eAwesome!  So you\u0026rsquo;ve got your brand new shiny VMware cluster all setup with HA and think, \u0026ldquo;Man, I\u0026rsquo;m in great shape now.  Downtime is a thing of the past!\u0026rdquo;.\u003c/p\u003e\n\u003cp\u003eWell, not so fast!  VMware High Availability just means that if a physical host fails, the virtual machines can reboot on another host which LIMITS your downtime.  What if your machines are so critical that you can\u0026rsquo;t have this reboot time in the case of a host failure?  The answer might be VMware Fault Tolerance (FT).\u003c/p\u003e","title":"VMware Fault Tolerance (FT)"},{"content":" Today, VMware announced the new and much improved VMware vCenter Operations Management 5.8.\nThe new features of 5.8 are poised to make this version of vCOps much more useful to hybrid environments that use both ESXi and Hyper-V, or hybrid cloud environments that utilize vCloud Director and Amazon Web Services.\nExtensibilty VMware has taken the approach that vCOps needs to be able to change quickly so that multiple types of platforms can be monitored from a the application. Being able to quickly handle different types of environments will likely increase the adoption of a VMware application, but it\u0026rsquo;s good for everyone.\nVMware has a marketplace for \u0026ldquo;Management Packs\u0026rdquo; which allows different plugins to be added to several of their platforms including vCOps. Please see the https://solutionexchange.vmware.com/store/category_groups/cloud-management for more information about the packs, and vendors associated.\nApplication Monitoring A management pack for Microsoft Exchange as well as Microsoft SQL server will be available. Now, not only can vCOps monitor the virtual machines, but the running applications as well. How many times have your VMs been running fine, but something broke your Exchange Database Availability Group (DAG) or your SQL cluster? With vCOps 5.8 you\u0026rsquo;ll be able to drill down into your VM to the application level to find out where a problem exists.\nHyper-V integration\nIf you\u0026rsquo;ve run a Microsoft Hyper-V and VMware vSphere shop, you\u0026rsquo;ve possibly had to use several management tools to view the health of your environment. vCOps 5.8 is taking a step towards aggregating the system information of those two technologies so that they can be viewed in a single application. Much like with a vSphere environment, the relationship between Hyper-V hosts \u0026ndash;\u0026gt; Hyper-V VMs \u0026ndash;\u0026gt; Guest OS will be shown in the vCOps interface.\nYou\u0026rsquo;ll be able to download a management pack for Microsoft System Center Operations Manager (SCOM) or the Hyperic Management Pack. 
Hyperic will require an agent to be installed. In either case, the dashboards shown in vCOps will look identical.\nAmazon Web Services Not all virtual machines live inside your private cloud environment. Some of them are housed in a public cloud such as Amazon Web Services. vCOps 5.8 will be able to monitor your public cloud as well with the AWS Management Pack.\nThe AWS pack will allow you to view your EC2 instances, Elastic Block Store (EBS) Volumes, Elastic Map Reduce (EMR), Elastic Load Balancing (ELB) and Auto Scaling Group (ASG) to name a few. This data is all pulled from AWS Cloudwatch which uses the REST API\nLicensing If you want to know the versions available and what you\u0026rsquo;ll need, the chart is below, but always refer to the official VMware documentation in case things change.\n","permalink":"https://theithollow.com/2013/10/15/vmware-vcops-5-8-announced/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/Health.png\"\u003e\u003cimg alt=\"Health\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/Health.png\"\u003e\u003c/a\u003e Today, VMware announced the new and much improved VMware vCenter Operations Management 5.8.\u003c/p\u003e\n\u003cp\u003eThe new features of 5.8 are poised to make this version of vCOps much more useful to hybrid environments that use both ESXi and Hyper-V, or hybrid cloud environments that utilize vCloud Director and Amazon Web Services.\u003c/p\u003e\n\u003ch1 id=\"extensibilty\"\u003e\u003cstrong\u003eExtensibilty\u003c/strong\u003e\u003c/h1\u003e\n\u003cp\u003eVMware has taken the approach that vCOps needs to be able to change quickly so that multiple types of platforms can be monitored from a the application.  Being able to quickly handle different types of environments will likely increase the adoption of a VMware application, but it\u0026rsquo;s good for everyone.\u003c/p\u003e","title":"VMware vCOps 5.8 Announced"},{"content":" The annual Chicago VMUG Users Conference will be held on Tuesday October 22nd and you will not want to miss this one. With the recent announcements at VMworld, this will be a great opportunity to learn about the new offerings that will affect the industry. Oh, and by the way, there is an opportunity to win a brand new VMware home lab valued at $4000.\nJay Cuthrell is a thought leader within the Office of the CTO at VCE (A company formed by Cisco and EMC with investments from VMware, and Intel) working with service providers, systems integrators, ISVs, and media \u0026amp; entertainment companies to deliver converged infrastructure. He is a frequent industry speaker currently based on the West Coast of the United States. Previously, as a strategic technology consultant with cuthrell.com he worked with service providers, startup companies, and investment groups in addition to writing for ReadWrite and Telecompetitor. He has held CTO, VP, and GM roles at Digitel and NeoNova (an Azure Capital and Bridgescale Partners portfolio company) and infrastructure consulting roles working domestically and internationally for Fortune 500 clients. He also served at Scient (formerly iXL now Publicis), Nortel, Analysts International, IBM, and NCSU College of Engineering. He holds a BS in Materials Science and Engineering from North Carolina State University and grew up in Beaufort, NC. 
His blog can be found at fudge.org Symantec will be delivering the lunch time keynote on \u0026ldquo;Overcoming the Challenges to Virtualizing Tier 1 Business Critical Applications\u0026rdquo;.\n\u0026ldquo;Symantec was founded in 1982 by visionary computer scientists. The company has evolved to become one of the world’s largest software companies with more than 18,500 employees in more than 50 countries. We provide security, storage and systems management solutions to help our customers – from consumers and small businesses to the largest global organizations – secure and manage their information-driven world against more risks at more points, more completely and efficiently than any other company. \u0026quot;\nSolutions Exchange More than 70 VMware partners will be at the event handing out swag and answering questions. There will be free time to mingle with vendors, meet industry experts and see product demos that you may not have seen before. The full list can be found here: http://www.vmug.com/p/cm/ld/fid=902\nGive Aways The Chicago VMUG team always has stuff to give away. Usually we have books, jackets, shirts etc and a VMUG Advantage membership.\nThis year, we\u0026rsquo;ve got some great sponsors like VMware and American Digital to donate some hardware and licenses for a new home lab!\nSocial Media Room This year we will have a dedicated room for bloggers, pod-casters and just about anyone who wants to hang out and chat. Below is a list of the social media personalities that you might see if you (and we encourage you too) stop by!\nLauren Malhoit - AdaptingIT Podcast - Lauren is a well known vExpert and EMC Elect who has amazing discussions with women in technology. Kasia Lorenc – Tom’s IT Pro – Kasia is the Managing Editor for Tom’s IT Pro, a site focused on all sorts of technical goodies. She also has an excellent video series called Top 5 in Tech that covers a wide variety of weekly news. If you have a passion for writing about tech, make sure to make an introduction! Andy McCaskey – SDRNews – Andy is the mind behind the magic at SDRNews, a very popular podcast and broadcasting show that covers technology, the enterprise, and is often found at major tech conferences. Check out his channel on Roku here. Brian Suhr – VirtualizeTips – Brian has been contributing to the community as a blogger and vExpert for years, and recently acquired his VCDX at VMworld. Come chat with one of vSphere-Land’s top 50 virtualization bloggers. Chris Wahl – WahlNetwork – Chris needs very little introduction. A Chicago VMUG leader, VCDX and vExpert. Chris was recently listed as 12th on the vSphere-Land\u0026rsquo;s top 50 bloggers list. I invite you to check out his blog and on twitter. Eric Shanks – TheITHollow – TheITHollow.com will be in attendance to cover the event, send out tweets and take pictures to cover the event. Stop by and grab a button! Get Certified This year we will have an available room where you can get your VMware Certified Associate credentials. If you think you have what it takes, come on in and try it out. Maybe you\u0026rsquo;ll pick up a certification.\nREGISTER!!!!! 
The only thing you need to do before the event is block off your calendar and register here: http://www.vmug.com/p/cm/ld/fid=1887\nWe\u0026rsquo;ll see you at the event!\n","permalink":"https://theithollow.com/2013/10/09/chicago-vmug-user-conference-oct-22nd/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/09/VMUG.png\"\u003e\u003cimg alt=\"VMUG\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/09/VMUG-300x95.png\"\u003e\u003c/a\u003e The annual Chicago VMUG Users Conference will be held on Tuesday October 22nd and you will not want to miss this one.  With the recent announcements at VMworld, this will be a great opportunity to learn about the new offerings that will affect the industry.  Oh, and by the way, there is an opportunity to win a brand new VMware home lab valued at $4000.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/10/JayCuthrell.png\"\u003e\u003cimg alt=\"JayCuthrell\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/10/JayCuthrell.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eJay Cuthrell is a thought leader within the Office of the CTO at \u003ca href=\"http://vce.com/\"\u003eVCE\u003c/a\u003e (A company formed by Cisco and EMC with investments from VMware, and Intel) working with service providers, systems integrators, ISVs, and media \u0026amp; entertainment companies to deliver converged infrastructure. He is a frequent industry speaker currently based on the West Coast of the United States. Previously, as a strategic technology consultant with \u003ca href=\"http://cuthrell.com/\"\u003ecuthrell.com\u003c/a\u003e he worked with service providers, startup companies, and investment groups in addition to writing for ReadWrite and Telecompetitor. He has held CTO, VP, and GM roles at Digitel and NeoNova (an Azure Capital and Bridgescale Partners portfolio company) and infrastructure consulting roles working domestically and internationally for Fortune 500 clients. He also served at Scient (formerly iXL now Publicis), Nortel, Analysts International, IBM, and NCSU College of Engineering. He holds a BS in Materials Science and Engineering from North Carolina State University and grew up in Beaufort, NC. His blog can be found at \u003ca href=\"http://fudge.org/\"\u003efudge.org\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/09/symantec.png\"\u003e\u003cimg alt=\"symantec\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/09/symantec.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"Chicago VMUG User Conference Oct 22nd"},{"content":"Hewlett-Packard is unveiling a new product called HP OneView which is designed to give you a one stop shop to managing all of your data center management responsibilities.\nIf you\u0026rsquo;re familiar with the HP System Insight Management and Insight Control, you are probably aware of the difficult setup procedures required to get everything setup correctly and running smoothly. The products are really useful in having a single place to manage the data center, but the setup process is a bit tedious. 
My take on HP OneView is that it is a complete revamp of the HP SIM design which should be a welcomed site to those of you familiar with the old process.\nHP OneView will be able to be your one stop shop for managing the following datacenter objects:\nVirtual Connect Management - Logical VC profiles can be setup ahead of time to be deployed over an over again.\niLO Advanced Management - A single point to access all of your integrated Lights Out processes.\nEnvironmental Management - You can combine the following three features to give you a full view of the datacenter.\n3D data center thermal mapping\nPower Discovery Services\nLocation Discovery Services\nSystem Health - You must be able to see any system errors in one location. The solution wouldn\u0026rsquo;t be complete without this. Server Provisioning - Data Centers are quickly changing. We\u0026rsquo;ve gone from deploying servers in weeks, to minutes. Now we need to be able to deploy servers through a service catalog in many different flavors. Security - OneView will allow a single place for Role Based Access Controls (RBAC) and Single Sign On (SSO). If an administrator left the organization and needed to be removed from thousands of iLO, Virtual Connect Modules, Servers, etc this could be very difficult to manage. OneView will allow a single point to change permissions across the infrastructure. HP has abandoned the HP SIM model of TreeBranch and moved towards a searching methodology. The thought being that with large data centers an Internet like search method may be more useful than a drill down approach.\nDashboard provides capacity and health information at your fingertips Map View allows you to visualize the relationships between your devices, up to the highest levels of your datacenter infrastructure Smart Search instantly gets you the information you want for increased productivity, with search support for all the elements in your inventory (e.g. search for alerts) Activity View allows you to display and filter all system tasks and alerts Group management enables administration, troubleshooting, and monitoring of large enterprise as though they were a fraction of their size Mobile access using a scalable, modern user interface based on HTML5 ","permalink":"https://theithollow.com/2013/09/30/hp-oneview/","summary":"\u003cp\u003eHewlett-Packard is unveiling a new product called HP OneView which is designed to give you a one stop shop to managing all of your data center management responsibilities.\u003c/p\u003e\n\u003cp\u003eIf you\u0026rsquo;re familiar with the HP System Insight Management and Insight Control, you are probably aware of the difficult setup procedures required to get everything setup correctly and running smoothly.  The products are really useful in having a single place to manage the data center, but the setup process is a bit tedious.  My take on HP OneView is that it is a complete revamp of the HP SIM design which should be a welcomed site to those of you familiar with the old process.\u003c/p\u003e","title":"HP OneView"},{"content":"vSphere 5.5 went GA on Sept 22nd 2013. If you\u0026rsquo;re ready to upgrade your home or work environment, here are some suggestions:\nUpgrade order\n1. Take a Backup! You want to have a backup of your vCenter database in case the unthinkable happens. You will probably want to backup your SSL certificate folder as well. 
It can be located here: %ALLUSERSPROFILE%VMWareVMware VirtualCenter\nIt may also be worth your time to save an updated host profile so that you can re-apply it if needed after your upgrade. Perhaps your host upgrade fails and you re-install from scratch. Now you\u0026rsquo;ve got all your configs that can be re-applied quickly.\n2. vCenter. Upgrade your vCenter first. vCenter 5.5 can manage older ESXi hosts, so it makes sense to install this first. This way if it takes more time to get your hosts updated, it\u0026rsquo;s ok. vCenter can manage your 5.5 hosts and your 5.x hosts.\nI\u0026rsquo;ve never been a fan of the \u0026ldquo;Simple Install\u0026rdquo; because I\u0026rsquo;d like to see what is happening. This is especially useful if there are errors. The 5.5 vCenter installer will show you the correct order to install the components in.\nSo you must install SSO \u0026ndash;\u0026gt; Web Client \u0026ndash;\u0026gt; Inventory Service \u0026ndash;\u0026gt; vCenter Server\nAfter this is complete, you can update the vSphere client (not necessary as now you\u0026rsquo;ll likely be using the Web Client) and the Support tools. If you are going to upgrade your hosts via Update Manager, you\u0026rsquo;ll need to install this, and I highly recommend installing the Host Agent Pre-Upgrade Checker. There is nothing worse than finding out in the middle of an install that their is something wrong. VMware gives you the Upgrade Checker so please USE IT!\n3. Upgrade your ESXi hosts. This can be done from Update Manager, AutoDeploy, or a rebuild of the hosts, but one way or another you\u0026rsquo;re going to upgrade your hosts to 5.5\n4. Upgrade your Distributed Switches to the latest version.\n5. Upgrade your VMware Tools. I prefer to leave this to be updated during the next reboot. Since the VMs must be turned off to apply this update, I have always done it during a \u0026ldquo;patch\u0026rdquo; window to avoid multiple reboots.\n6. Upgrade the virtual machine hardware to take advantage of all the new stuff!\n7. Make sure to update your documentation, change management logs etc.\n","permalink":"https://theithollow.com/2013/09/24/vmware-5-5-upgrade-tips/","summary":"\u003cp\u003evSphere 5.5 went GA on Sept 22nd 2013.  If you\u0026rsquo;re ready to upgrade your home or work environment, here are some suggestions:\u003c/p\u003e\n\u003cp\u003eUpgrade order\u003c/p\u003e\n\u003cp\u003e1.  \u003ca href=\"http://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=1023985\"\u003eTake a Backup\u003c/a\u003e!  You want to have a backup of your vCenter database in case the unthinkable happens.  You will probably want to backup your SSL certificate folder as well.  It can be located here:  %ALLUSERSPROFILE%VMWareVMware VirtualCenter\u003c/p\u003e\n\u003cp\u003eIt may also be worth your time to save an updated host profile so that you can re-apply it if needed after your upgrade.  Perhaps your host upgrade fails and you re-install from scratch.  Now you\u0026rsquo;ve got all your configs that can be re-applied quickly.\u003c/p\u003e","title":"VMware 5.5 Upgrade Tips"},{"content":"If you\u0026rsquo;ve got a some hardware lying around for your lab, Windows Server 2012 may be a great solution for a home storage device. You can now do both block (iSCSI) and NAS (NFS) on the same server, as well as having an OS to install some management apps on it. 
In my lab, I use this management server to run Veeam for my backups, PRTG network monitor for bandwidth tracking, as well as using this server for both iSCSI targets and NFS mounts.\nHome Lab Setup\nInstallation Installing the necessary components on the server is very simple.\nGo to the Server Manager and choose the \u0026ldquo;Add roles and features\u0026rdquo; link.\nChoose a role-based or feature-based installation.\nSelect your localhost if your installing from the same machine, or use the remote host option if installing from a group member.\nScroll down the list of roles and add:\niSCSI Target Server iSCSI Target Storage Provider (VDS and VSS hardware) Server for NFS Confirm your selections and click install\nYou\u0026rsquo;ve got the services installed now. The next steps are to setup an iSCSI target and and an NFS Mount. Obviously both of these are not necessary, but if you can setup both, why not do both.\nNFS Setup NFS is pretty easy to get setup for a VMware ESXi lab. Create a folder in the Windows file system; this is where your NFS mount point will be set. Right click on that folder and choose properties. Here you\u0026rsquo;ll see a tab called \u0026ldquo;NFS Sharing\u0026rdquo;. You can click on the \u0026ldquo;Manage NFS Sharing\u0026hellip;\u0026rdquo; button to setup the properties.\nClick the box to \u0026ldquo;Share this folder\u0026rdquo;. Please note that this does not share the folder for Server Message Block (SMB) like you may be used to doing for standard Windows File Shares. This is only the NFS Sharing setup. If you want, you can share this as a Windows share as well but I recommend keeping your NFS shares separate to avoid accidentally deleting files used by your ESXi lab.\nClick the \u0026ldquo;Permissions\u0026rdquo; button to allow readwrite access to the NFS Share. Change the \u0026ldquo;Type of access\u0026rdquo; box to \u0026ldquo;Read-Write\u0026rdquo; and be sure to tick the check box to \u0026ldquo;Allow Root Access\u0026rdquo;\nDONE! You can now setup your vSphere environment to access this NFS mount.\niSCSI Setup To setup the iSCSI Targets, go to the \u0026ldquo;File and Storage Services\u0026rdquo; section of the Server Manager. Click on the Hyperlink to create an iSCSI virtual disk.\nChoose the server and physical disk location that you\u0026rsquo;d like to create a virtual disk on. This virtual disk will be the iSCSI target so be sure to place it on the right sized and speed disk you\u0026rsquo;d like to use. In my lab i\u0026rsquo;ve got a pair of mounts, one on an SSD and one on a traditional spinning disk.\nName the virtual disk. This will end up being the file name.\nGive the virtual disk a size. Obviously you\u0026rsquo;re restricted to the size of the physical disk that it\u0026rsquo;s on.\nNow you need to create an iSCSI Target. Click the \u0026ldquo;New Target\u0026rdquo; section if it\u0026rsquo;s your first time running through this setup.\nGive the target a descriptive name.\nNow we need to list the initiators that will have access to this target. We don\u0026rsquo;t want to allow all machines to access this iSCSI disk, so we\u0026rsquo;ll limit it to our ESXi host initiators. Click the \u0026ldquo;Add\u0026rdquo; button.\nAdd the iSCSI initiators at the bottom. If you\u0026rsquo;ve run through this before, you can look at the initiators that are cached in the middle section.\nWhen you\u0026rsquo;re done you\u0026rsquo;ll see all of the initiators that have access. 
Click \u0026ldquo;Next\u0026rdquo; to continue on.\nIf you are interested in setting up additional security, you can enable CHAP andor Reverse CHAP. My lab is super secure so I\u0026rsquo;ve left it disabled. :)\nConfirm your selections and click \u0026ldquo;Create\u0026rdquo;.\nThe creation process will complete.\nDONE! Now you have an iSCSI Target on the Windows 2012 Server. Add this target to your vSphere environment and you\u0026rsquo;ve now got block storage for your hosts.\n","permalink":"https://theithollow.com/2013/09/24/windows-server-2012-as-a-storage-device-for-vsphere-home-lab/","summary":"\u003cp\u003eIf you\u0026rsquo;ve got a some hardware lying around for your lab, Windows Server 2012 may be a great solution for a home storage device.  You can now do both block (iSCSI) and NAS (NFS) on the same server, as well as having an OS to install some management apps on it.  In my lab, I use this management server to run Veeam for my backups, PRTG network monitor for bandwidth tracking, as well as using this server for both iSCSI targets and NFS mounts.\u003c/p\u003e","title":"Windows Server 2012 as a Storage Device for vSphere Home Lab"},{"content":"Windows 8.1 is set to be released on October 17th but the Release Preview is available for download and testing as of right now.\nSome of the biggest criticisms of Windows 8 was the new MetroUI and lack of a start menu. Windows 8.1 isn\u0026rsquo;t abandoning these new features, but have tweaked them up a bit to make them slightly more user friendly. While Windows 8 may great for a tablet, normal PC users have been frustrated with the learning curve.\nStart Button Returns\nYou can see that instead of empty space in the bottom left hand corner of your desktop, you now have a windows logo. It doesn\u0026rsquo;t say \u0026ldquo;Start\u0026rdquo; so if you\u0026rsquo;re giving instructions to someone over the phone, saying \u0026ldquo;start\u0026rdquo; may be confusing, but at least there is an icon there.\nStart Screen\nThe Windows start screen hasn\u0026rsquo;t changed too much with the exception that you can use different sized objects. The start screen is also a bit more customizable where you can group your apps into categories.\nOne thing that some users didn\u0026rsquo;t realize is that in windows 8.0 you could open your start screen and just start typing the name of an app you wanted. In Windows 8.1 they\u0026rsquo;ve added a search box at the top to make this a bit more obvious. You can still just start typing without having to click in the search field though.\nCustomizations\nYou can modify your PC settings by hovering your mouse in the bottom right hand corner of the desktop. This is the same as in windows 8.0. If you decide to customize your desktop, you have an additional tab for the taskbar and navigation properties menu. Now you can do things like going straight to the desktop on boot instead of the start screen, and bypassing the start screen to go directly to the apps menu.\nThe biggest annoyance for me hasn\u0026rsquo;t been fixed. Some apps still don\u0026rsquo;t allow you to close them without a shortcut key. Notice that there is no \u0026ldquo;Big Red X\u0026rdquo; to close a PDF. Obviously installing your favorite PDF reader may fix this, but if you\u0026rsquo;re using the default reader built into windows you\u0026rsquo;ll still have to put up with this. 
","permalink":"https://theithollow.com/2013/09/17/windows-8-1-review/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/Navigation1.png\"\u003eWindows 8.1 is set to be released on October 17th but the Release Preview is available for download and testing as of right now.\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eSome of the biggest criticisms of Windows 8 was the new MetroUI and lack of a start menu.  Windows 8.1 isn\u0026rsquo;t abandoning these new features, but have tweaked them up a bit to make them slightly more user friendly.  While Windows 8 may great for a tablet, normal PC users have been frustrated with the learning curve.\u003c/p\u003e","title":"Windows 8.1 review"},{"content":"\nWhile I was at VMworld this year in San Francisco, there was a lot of buzz about this company called PernixData. Maybe the buzz was just from some of the superstars that built this company such as Co-Founders Satyam Vaghani (better known as the father of VMFS) and Poojan Kumar (also co-founder of Exadata).\nGiven the smart minds that have been around the company since the start, I thought I better stop by their booth and at least say \u0026ldquo;hi\u0026rdquo;.\nThe main goal of PernixData\u0026rsquo;s main product FVP is to allow storage performance to be increased without the need for adding additional capacity.\nWhat is PernixData FVP? PernixData FVP allows customers to add local SSD or flash drives to a server to be used as a local cache. The main premise is that any time a virtual machine needs to read from disk, it has to traverse the storage fabric and access a SANNAS. The SAN usually isn\u0026rsquo;t full of SSD drives due to price, so it has a variety of slower speed spinning disks.\nThis is a basic version of what happens to a normal virtual machine doing a disk read. Virtual machine utilizes the hypervisor to then access a shared disk device.\nPernixData utilizes a spare SSD drive on the hypervisor to be used as a read cache. Imagine how much better the disk reads could be if the data was all stored locally.\nPernixData uses that data stream and caches this data locally on the SSD Drive. This process could actually slow down the storage performance in a few specific situations. Frank Denneman describes this process and the \u0026ldquo;False Writes\u0026rdquo; on his blog.\nOnce this data has been \u0026ldquo;warmed up\u0026rdquo; (caching enough data on the local drive to be usable) the future reads can be done directly from the local Solid State Drive instead of traveling all the way to the storage array. These reads should be much faster since there is a shorter path and a higher speed disk. It also has the added benefit of taking load off the storage array. Less reads on the array means more time for the array to handle other requests.\nPernixData Installation PernixData Server install\nThe installation requires a virtual machine to handle the management of the flash caching. This can be done on a standard windows server. The software install is a pretty straight forward process resulting in a next .. next .. finish type wizard which will ask for the vCenter config info during the process. My screenshots are below.\nHost VIBS Each ESXi host will also need to add a new vib so that you can claim an SSD drive for use by PernixData FVP to do the caching. In order to do this you need to SSH into the host while it\u0026rsquo;s in \u0026ldquo;Maintenance Mode\u0026rdquo;. 
Once SSH\u0026rsquo;d into the server you can run the below command substituting the location of the PernixData vib.\nConfiguration Configuration of the PernixData Cluster was a breeze. There is a guided setup on the vCenter server to get you started. Click on the Cluster and then the PernixData tab. Click on the \u0026ldquo;Get started\u0026rdquo; button to learn how to use it.\nCreate a PernixData Cluster. (this consists of a name)\nSelect the Local SSD\u0026rsquo;s that will be used for the caching. Select the datastores that you want to be accelerated. NOTE: PernixData is VM aware, meaning that you can actually accelerate a specific VM without doing an entire datastore if you\u0026rsquo;d prefer.\nThat\u0026rsquo;s it! Reap the rewards of local cache!\nYou can look at the usage and performance tabs to see how your IO is doing and how much of your data is being read from cache and how fast your data is being evicted for new data to be cached.\nWrite Back vs Write Through PernixData has two types of caching. Write Back and Write through.\nWrite Through will take any writes from a virtual machine and write them to the shared storage device. During this process the data is also written to the local flash device by PernixData FVP. The diagram below should show what a Write through policy looks like (it\u0026rsquo;s the same graphic from above)\nWrite Back on the other hand will write directly to the local flash device and then in the background commit those writes from the flash device to the shared storage. This method allows for lower latency but also introduces a risk where a system fails after data is written to the local flash device, but before it could be committed to the shared storage. In order to combat this, FVP gives you the ability to use replicas which then write the data to a sibling host\u0026rsquo;s SSD device in case of failure.\nPerformance I ran some performance tests on my home lab with the free 60 day trial. My home lab uses a Synology DS411+ slim and I have 2 hosts each with a local 60GB SSD for the flash cache.\nI didn\u0026rsquo;t want to use IOmeter to test the performance because it\u0026rsquo;s not a real world test. Running IOmeter 1 time wouldn\u0026rsquo;t cache any data and would cause the \u0026ldquo;False Writes\u0026rdquo; issue and not show any performance gain. Instead I used a free trial of LoginVSI which simulates a number of Terminal Server sessions, complete with copies, pastes, web browser sessions and some Outlook and Excel procedures. This was the closest thing I could find to a real world testing scenario.\nBelow are two screenshots of the performance of my Synology. The first half of the graphs show the activity during a LoginVSI (10 RDP Sessions) and PernixData FVP off. The right side of the graph shows a new LoginVSI test with PernixData FVP on.\nTo be fair, the performance doesn\u0026rsquo;t look that much better, but I think in a real life scenario maybe doing database reads, you would see much better performance.\nYou can see from the PernixData tab in vCenter that reads are being cached from local storage and in my tests, none of the data is being evicted. Again, a longer real world test should show more accurate information. The fact that there are no evictions during this test shows that I don\u0026rsquo;t have a large enough testing footprint to show what would happen in real life.\nOverall I was impressed with PernixData FVP and how easy it was to get setup and configured. 
You can turn the caching on and off very simply and set it by VM, datastore, or both and have a few options about replicas. If you\u0026rsquo;re storage array has additional capacity but you\u0026rsquo;re not getting the performance you were hoping from it, consider trying out PernixData FVP and see if this relieves your pain points.\n","permalink":"https://theithollow.com/2013/09/11/pernix-data-in-the-lab/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/09/SAM_0112.jpg\"\u003e\u003cimg alt=\"SAMSUNG CSC\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/09/SAM_0112-300x200.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eWhile I was at \u003ca href=\"/2013/09/vmworld-2013-recap/\"\u003eVMworld this year\u003c/a\u003e in San Francisco, there was a lot of buzz about this company called \u003ca href=\"http://www.pernixdata.com/\"\u003ePernixData\u003c/a\u003e.  Maybe the buzz was just from some of the superstars that built this company such as Co-Founders Satyam Vaghani (better known as the father of VMFS) and Poojan Kumar (also co-founder of Exadata).\u003c/p\u003e\n\u003cp\u003eGiven the smart minds that have been around the company since the start, I thought I better stop by their booth and at least say \u0026ldquo;hi\u0026rdquo;.\u003c/p\u003e","title":"PernixData in the Lab"},{"content":"\nVMworld 2013 is now over and its time for a rundown of what went on during this years show. As always, this is THE event that virtualization junkies must attend.\nThis year there was no shortage of things to do. Keynotes, Hands on Labs, Solutions Exchange, VMware Education Services, Blogger hang out, parties, meetings and social gatherings. I wear a Jawbone UP device that tracks my steps and it was not uncommon to hit 20,000 steps each day. Reminders for anyone going next year, that comfortable footwear is a must.\nKeynotes The keynotes obviously super charged everything going on at VMworld. The announcements about vSphere 5.5, NSX and VSAN were heavily talked about subjects. NSX took over 2 of the \u0026ldquo;Ask the Experts\u0026rdquo; panel sessions that I went too, and the VMware NSX booth was always packed at the solutions Exchange.\nSAMSUNG CSC\nSessions I didn\u0026rsquo;t plan well enough ahead and wasn\u0026rsquo;t able to get into any of the technical deep-dive sessions on NSX (shame on me) but had plenty of opportunities to sit at some really great sessions. Industry experts (not all VMware employees) presented on a variety of subjects about how technology is changing, and how we can get the most out of what we\u0026rsquo;re doing on a daily basis.\nSAMSUNG CSC\nAsk the Expert vBloggers session with (left to right) Rick Scherer, Scott Lowe, Duncan Epping, William Lam, Vaughn Stewert\nMr. 
Luk Dekens explaining PowerCLI to a packed house.\nThe top 10 sessions are listed below:\nVSVC4944 - PowerCLI Best Practices - A Deep Dive Speakers: Luc Dekens - Systems Engineer, Eurocontrol | Alan Renouf - Sr Technical Marketing Architect, VMware\nBCO5129 - Protection for All - vSphere Replication \u0026amp; SRM Technical Update\nSpeakers: Lee Dilworth, Ken Werneburg - VMware\nSTO5715-S - Software-defined Storage - The Next Phase in the Evolution of Enterprise Storage\nSpeakers: Vijay Ramachandran, Alberto Farronato - VMware\nPHC5605-S - Everything You Want to Know About vCloud Hybrid Service - But Were Afraid to Ask.\nSpeakers: Mathew Lodge - VMware | Christopher Rence - Digital River, Inc\nVCM7369-S - Uncovering the Hidden Truth in Log Data With vCenter Log Insight\nSpeakers: Tim Russell - NetApp | Manesh Kumar, Jon Herlocker - VMware\nVAPP4679 - Software-Defined Datacenter Design Panel for Monster VM\u0026rsquo;s: Taking the Technology to the Limits for High Utilisation, High Performance Workloads\nSpeakers: Frank Dennemean - Pernix Data | Andrew Mitchell, Mark Achtemichuck, Mostafa Khalil, Michael Webster - VMware\nEUC7370-S - The Software-Defined Data Center Meets End User Computer\nSpeakers: Scott Davis, Frank Nydam, Mike Coleman - VMware\nOPT5194 - Moving Enterprise Application Dev/Test to VMware’s Internal Private Cloud- Operations Transformation Speakers: Kurt Milne, Venkat Gopalakrishn - VMware\nSEC5893 - Changing the Economics of Firewall Services in the Software-Defined Center – VMware NSX Distributed Firewall\nSpeaker: Srinivas Nimmagadda, Anirban Sengupta - VMware\nSolutions Exchange This is the huge area where just about every vendor you can think of was out promoting their products and in many cases explaining how well they work with VMware vSphere. I mean it\u0026rsquo;s VMworld right?\nVMware gives out awards to these companies based off of community buzz and their showing. This year the following companies won the best of breed awards.\nCategory: Storage and Backup for Virtualized Environments\nSimpliVity, OmniCube\nCategory: Security and Compliance for Virtualization\nAFORE Solutions, CloudLink Secure VSA\nCategory: Virtualization Management\nEaton, Eaton Intelligent Power Manager 1.3\nCategory: Networking and Virtualization\nPLUMgrid, PLUMgrid Platform\nCategory: Desktop Virtualization and End-User Computing\nLakeside Software, SysTrack Resolve 6.1\nCategory: Private Cloud Computing Technologies\nNutanix, NX-6270\nCategory: Public/Hybrid Cloud Computing Technologies\nEmbotics, vCommander 5.0\nCategory: New Technology\nNeverfail Group, Neverfail IT Continuity Architect\nCategory: Best of Show\nEaton, Eaton Intelligent Power Manager 1.3\nJudges’ Choice – recognition for a product not nominated for the Best of VMworld 2013 Awards\nNVIDIA, NVIDIA GRID\nHands on Labs In 2013 there were a few technical issues with the VMware Hands on Labs, but this year they seemed to be very smooth and fast. I went to three sessions and walked right in each time. No waiting. Hats off to Doug Baer @dobaer (also known as Trevor) and the Technical Marketing team at VMware on a great job. The labs should be available online to all attendees at a later date, but during the conference I know that more than 9000 labs and 80,000 virtual machines were deployed in the 5 days it ran. Amazing.\nVMware Education Services VMware was allowing discounted certifications during the week so people were taking exams all week and trying to \u0026ldquo;level up\u0026rdquo; their qualifications. 
Josh Andrews also ran a small area were attendees could test their metal against a VCAP-DCA question. The fastest solutions were awarded with prizes.\nMy prize was a signed copy of the \u0026ldquo;VCDX Boot Camp\u0026rdquo;, autographed by VCDX001 himself John Yani Arrasjid.\nSocial Gatherings\nThis was also a great time to get together with colleagues that you may not see face to face very often. VMware threw a great party on Thursday night at the AT\u0026amp;T ballpark to unwind after a long week of learning. Guest bands \u0026ldquo;Imagine Dragons\u0026rdquo; and \u0026ldquo;Train\u0026rdquo; performed and I think everyone had a great time.\nimaginedragons\nOverall the conference was a great time, a great opportunity to network with peers, and a good chance to learn from industry experts.\n","permalink":"https://theithollow.com/2013/09/04/vmworld-2013-recap/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/SAM_00721.jpg\"\u003e\u003cimg alt=\"SAMSUNG CSC\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/SAM_00721-300x200.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eVMworld 2013 is now over and its time for a rundown of what went on during this years show.  As always, this is THE event that virtualization junkies must attend.\u003c/p\u003e\n\u003cp\u003eThis year there was no shortage of things to do.  Keynotes, Hands on Labs, Solutions Exchange, VMware Education Services, Blogger hang out, parties, meetings and social gatherings.  I wear a Jawbone UP device that tracks my steps and it was not uncommon to hit 20,000 steps each day.  Reminders for anyone going next year, that comfortable footwear is a must.\u003c/p\u003e","title":"VMworld 2013 Recap"},{"content":"I\u0026rsquo;ve been interested in using twitter\u0026rsquo;s API to do some analytic analysis of things lately. If you\u0026rsquo;re interested in this as well, there are several sites that can help you do you\u0026rsquo;re own queries, or use the Microsoft Office analytic tool. Here are some interesting stats about VMworld.\n#VMWORLD stats #vExpert Stats I also thought it might be worth looking at where the vExperts call home (assuming geolocation was turned on and accurate, Yes, I\u0026rsquo;m talking to you Josh Andrews)\n#VCDX stats Disclaimer: The VCDX Stats were taken on the day before the new VCDX\u0026rsquo;s were added. I\u0026rsquo;m sure if I\u0026rsquo;d ran this query 2 days later, the stats would be vastly different.\n","permalink":"https://theithollow.com/2013/09/02/vmworld-twitter-statistics/","summary":"\u003cp\u003eI\u0026rsquo;ve been interested in using twitter\u0026rsquo;s API to do some analytic analysis of things lately.  If you\u0026rsquo;re interested in this as well, there are several sites that can help you do you\u0026rsquo;re own queries, or use the Microsoft Office analytic tool.  
Here are some interesting stats about VMworld.\u003c/p\u003e\n\u003ch1 id=\"vmworld-stats\"\u003e#VMWORLD stats\u003c/h1\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVMWord1.png\"\u003e\u003cimg alt=\"hashVMWord1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVMWord1.png\"\u003e\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVMWord2.png\"\u003e\u003cimg alt=\"hashVMWord2\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVMWord2.png\"\u003e\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVMWord3.png\"\u003e\u003cimg alt=\"hashVMWord3\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVMWord3.png\"\u003e\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVMWord5.png\"\u003e\u003cimg alt=\"hashVMWord5\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVMWord5.png\"\u003e\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVMWord6.png\"\u003e\u003cimg alt=\"hashVMWord6\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVMWord6.png\"\u003e\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVMWord7.png\"\u003e\u003cimg alt=\"hashVMWord7\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVMWord7.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"vexpert-stats\"\u003e#vExpert Stats\u003c/h1\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashvexpert1.png\"\u003e\u003cimg alt=\"hashvexpert1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashvexpert1.png\"\u003e\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashvexpert3.png\"\u003e\u003cimg alt=\"hashvexpert3\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashvexpert3.png\"\u003e\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashvexpert4.png\"\u003e\u003cimg alt=\"hashvexpert4\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashvexpert4.png\"\u003e\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVexpert5.png\"\u003e\u003cimg alt=\"hashVexpert5\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashVexpert5.png\"\u003e\u003c/a\u003e \u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashvexpert7.png\"\u003e\u003cimg alt=\"hashvexpert7\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/hashvexpert7.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eI also thought it might be worth looking at where the vExperts call home (assuming geolocation was turned on and accurate, Yes, I\u0026rsquo;m talking to you Josh Andrews)\u003c/p\u003e","title":"VMworld twitter statistics"},{"content":" VMware announced their new product called VSAN this week at VMworld in San Francisco CA. The VSAN is a new offering that will allow customers to provision “shared” storage by using locally direct attached disks. 
Traditionally, in order to use the features like vMotion, customers had to have an external NAS or SAN device to house the virtual machines. VMware isn’t abandoning the idea of SAN or NAS, but they now have a lower cost offering that can help smaller businesses get more out of their capital investment. Consider disaster recovery scenarios where a company might not want to spend the upfront cost of a SAN that may never (hopefully) be used. This will allow a basic DR plan with less cost.\nVSAN has several requirements to get started.\n- 3 or more hosts\n- Each host has 1 or more SSDs and 1 spinning disk\n- A pass through Raid controller\n- A dedicated 1 Gbps Network card (10 Gbps preferred)\n- A dedicated network for VSAN traffic (similar to vMotion)\n- A VSAN Port Group on every host\nHow does it work? VSAN uses the local disks of the servers to create an active disk, one or more passive disks depending on the number of host failures you’ve chosen to tolerate, and possibly a witness which is used for a tie-breaker.\nData is written to the Solid State Disks first where they are then used for read and write caching, and then offloaded to the spinning disks.\nIn order to make sure that a single host failure doesn\u0026rsquo;t destroy the virtual machine and all of the associated data along with it, it’s replicated to the passive copies over the new VSAN VMkernel port. This port should be setup like a vMotion port and its own dedicated network.\nYou can see from the diagram below that all of the hosts are connected to the VSAN network in order to handle data requests. Also notice that one of the hosts doesn\u0026rsquo;t have a copy of the data. Not all hosts in your cluster have to participate in the VSAN if you don’t want them too.\nYou might be wondering what happens when I vMotion the virtual machine to a host that isn\u0026rsquo;t participating in VSAN. Moving a virtual machine around is no problem. The VM can still access all of the data via the VSAN Network. In fact the location of the virtual machine does not matter with regards to VSAN. As long as there is network connectivity the storage should still be available.\nVSAN may be a new thing offering from VMware but it’s arguable that this technology has been around for a while. Hyperconverged companies like Nutanix and Simplivity have been doing this for a while, where their solutions don’t require a dedicated SAN.\nIf you’d like to find out more, or sign up for the beta of vSAN visit: http://www.vmware.com/products/virtual-san/features.html\n","permalink":"https://theithollow.com/2013/08/29/vmware-vsan/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/VSANbooth.png\"\u003e\u003cimg alt=\"VSANbooth\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/VSANbooth-300x261.png\"\u003e\u003c/a\u003e VMware announced their new product called VSAN this week at VMworld in San Francisco CA.  The VSAN is a new offering that will allow customers to provision “shared” storage by using locally direct attached disks.  Traditionally, in order to use the features like vMotion, customers had to have an external NAS or SAN device to house the virtual machines.  VMware isn’t abandoning the idea of SAN or NAS, but they now have a lower cost offering that can help smaller businesses get more out of their capital investment.  Consider disaster recovery scenarios where a company might not want to spend the upfront cost of a SAN that may never (hopefully) be used.  
This will allow a basic DR plan with less cost.\u003c/p\u003e","title":"VMware VSAN"},{"content":" This week at VMworld 2013, VMware\u0026rsquo;s CEO Pat Gelsinger announced the new features of vSphere 5.5. The entire list of updates can be found in the \u0026quot; What\u0026rsquo;s New?\u0026quot; file from VMware but here are some of the highlights.\nSingle Sign on was completely re-written. I would bet that the #1 reason that users didn\u0026rsquo;t adopt vSphere 5.1 release was due to issues with single sign on. VMware re-wrote this code not only fix the bugs, but make the entire experience better. This feature was a necessity for VMware to move forward with the vSphere platform. Additional GPU Support. This may be a big deal for some companies who are afraid to switch to a VDI infrastructure because of limited graphics processors. 62 TB VMDK\u0026rsquo;s now supported. Bigger is always better\u0026hellip; right? This could be a very big deal. I know several clients who got into a jam when they created their 2TB vmdks only to find out that they couldn\u0026rsquo;t snapshot them or expand the disks any further. 62TBs should suffice for now! :) Flash Read Cache. VMware now natively supports using SSD\u0026rsquo;s as read cache for specific VMDK files. In the past vSphere could use local SSDs for host cache. This was used to mitigate the issue of swapping to disk. If you have to swap to disk SSD is at least better than spinning disks right? Well now you can use local host SSDs as a read cache for an entire VM or maybe just a single disk. vSphere vCenter virtual Appliance can now support up to 5000 virtual machines. I\u0026rsquo;m having fewer and fewer reasons to build out an entire VM now. This makes me want to just deploy the vApp and be done with the whole process. Application HA. vSphere has been able to provide virtual machine high availability for a while now, but with the release of 5.5 they can also take action against guest services as well. Additional Announcements VMware NSX will be taking over for vCNS (vCloud Networking and Security). I was told that vCNS will still be available in 5.5 but future iterations would be inside the NSX Product. NSX uses the VXLAN protocols to virtualize the physical infrastructure. ESXi hosts will now be able to manage internal routing, switching and firewalls. Virtual SAN is a new product that will allow you to take advantage of ESXi host local storage. It requires a new vmkernal port which acts much like FT or vMotion does. The SAN Traffic is then dedicated to that vmkernal port. This is a nice feature for companies who want some virtualization benefits without the cost of a SAN, or decide they want to try out one of the new Micron Cards in their servers instead of going to a traditional shared storage approach. ","permalink":"https://theithollow.com/2013/08/26/vsphere-5-5-announced/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/SAM_0072.jpg\"\u003e\u003cimg alt=\"SAMSUNG CSC\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/SAM_0072-300x200.jpg\"\u003e\u003c/a\u003e This week at VMworld 2013, VMware\u0026rsquo;s CEO Pat Gelsinger announced the new features of vSphere 5.5. 
The entire list of updates can be found in the \u0026quot; \u003ca href=\"http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf\"\u003eWhat\u0026rsquo;s New\u003c/a\u003e?\u0026quot; file from VMware but here are some of the highlights.\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eSingle Sign on was completely re-written.  I would bet that the #1 reason that users didn\u0026rsquo;t adopt vSphere 5.1 release was due to issues with single sign on.  VMware re-wrote this code not only fix the bugs, but make the entire experience better.  This feature was a necessity for VMware to move forward with the vSphere platform.\u003c/li\u003e\n\u003cli\u003eAdditional GPU Support.  This may be a big deal for some companies who are afraid to switch to a VDI infrastructure because of limited graphics processors.\u003c/li\u003e\n\u003cli\u003e62 TB VMDK\u0026rsquo;s now supported.  Bigger is always better\u0026hellip; right?  This could be a very big deal.  I know several clients who got into a jam when they created their 2TB vmdks only to find out that they couldn\u0026rsquo;t snapshot them or expand the disks any further.  62TBs should suffice for now! :)\u003c/li\u003e\n\u003cli\u003eFlash Read Cache.  VMware now natively supports using SSD\u0026rsquo;s as read cache for specific VMDK files.  In the past vSphere could use local SSDs for host cache.  This was used to mitigate the issue of swapping to disk.  If you have to swap to disk SSD is at least better than spinning disks right?  Well now you can use local host SSDs as a read cache for an entire VM or maybe just a single disk.\u003c/li\u003e\n\u003cli\u003evSphere vCenter virtual Appliance can now support up to 5000 virtual machines.  I\u0026rsquo;m having fewer and fewer reasons to build out an entire VM now.  This makes me want to just deploy the vApp and be done with the whole process.\u003c/li\u003e\n\u003cli\u003eApplication HA.  vSphere has been able to provide virtual machine high availability for a while now, but with the release of 5.5 they can also take action against guest services as well.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch1 id=\"additional-announcements\"\u003eAdditional Announcements\u003c/h1\u003e\n\u003cul\u003e\n\u003cli\u003eVMware NSX will be taking over for vCNS (vCloud Networking and Security).  I was told that vCNS will still be available in 5.5 but future iterations would be inside the NSX Product.  NSX uses the VXLAN protocols to virtualize the physical infrastructure.  ESXi hosts will now be able to manage internal routing, switching and firewalls.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003e\u003cimg loading=\"lazy\" src=\"https://lh3.googleusercontent.com/-WhgtL1zaRhI/UhvqklxVJYI/AAAAAAAAEEw/aLGPTF6dnI8/w1064-h709-no/SAM_0063.JPG\"\u003e\u003c/p\u003e","title":"vSphere 5.5 announced"},{"content":" In the event that you\u0026rsquo;re heading to VMworld 2013 in San Francisco, this post should help to prepare you for what to expect.\nPacking This is a five-day event that will consist of a ton of walking, some bouts of sitting, social engagements, labs, and fun.\nPacking rule #1 - wear comfortable shoes. Walking back and forth from your hotel, to the conference center for sessions, to the solutions exchange and general moving about will destroy your feet.\nPacking rule #2 - leave room in your suitcase. If you plan on picking up any swag (and there will be lots of it) you will want some extra space in your bag to get it home. 
You CAN get away with only bringing a shirt or two because I guarantee you\u0026rsquo;ll receive a few before the week is over. Below is a picture of the swag I brought home from last year\u0026rsquo;s conference.\nScheduling Plan your days out ahead of time and then expect those plans to change. My suggestion is to over book your days and then decide what you\u0026rsquo;re doing when the time comes.\nVMware Sessions - VMworld.com has a schedule builder that is useful in signing up for any sessions that you might want to sit through. Sessions can be a great way to learn about VMware concepts that you\u0026rsquo;re weak on or want a better understanding of.\nSocial Engagements - There will be opportunities to go to plenty of parties, tweetups, sporting activities and vBeers. The full official list is here: http://www.vmworld.com/community/gatherings but you can find other listings with additional info all over the web. Hans DeLeenHeer has posted a great article about where to be at. If you can do it, you\u0026rsquo;ll want to be at the Hall Crawl which is open to everyone, and the very difficult to get tickets for, the vMunderground Party.\nSolutions Exchange - Plan to spend a decent amount of time at the solutions exchange. If you want to learn about all the products that are influencing the virtualization market then go to the solutions Exchange. There are so many industry experts to talk to, you\u0026rsquo;ll find it hard to leave. If you want to see the list of exhibitors, VMworld.com has it covered as well. http://www.vmworld.com/community/conference/us/sponsors/list\nCome and Find Me! I\u0026rsquo;ve got swag as well to give out and would love to have a chinwag with any readers of my blog. I\u0026rsquo;ll have buttons, stickers, and business cards while supplies last! :)\nI\u0026rsquo;ll also have stickers for any bloggers who were on the http://vlp.vsphere-land.com/ list of top 50 bloggers for 2013. Please stop me and ask me for one if you\u0026rsquo;re on the list.\nI\u0026rsquo;ll be playing v0dgeball on Sunday, and will be at the vExpertVCDX party, VMunderground, vBacon tweetup and will certainly be in the solutions exchange and the blogger lounge. If you see me at the conference, please stop by and say hi. If you hate the blog, stop and tell me that too!\nNeed to get in touch with me? Find me on instagram, twitter, facebook, linkedin or Google Plus.\nGood luck, learn lots, and have fun!\n","permalink":"https://theithollow.com/2013/08/21/vmworld-2013-preparation/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/vmworld2013prep.png\"\u003e\u003cimg alt=\"vmworld2013prep\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/vmworld2013prep.png\"\u003e\u003c/a\u003e In the event that you\u0026rsquo;re heading to VMworld 2013 in San Francisco, this post should help to prepare you for what to expect.\u003c/p\u003e\n\u003ch1 id=\"packing\"\u003ePacking\u003c/h1\u003e\n\u003cp\u003eThis is a five-day event that will consist of a ton of walking, some bouts of sitting, social engagements, labs, and fun.\u003c/p\u003e\n\u003cp\u003ePacking rule #1 - wear comfortable shoes.  Walking back and forth from your hotel, to the conference center for sessions, to the solutions exchange and general moving about will destroy your feet.\u003c/p\u003e","title":"VMworld 2013 Preparation"},{"content":" I\u0026rsquo;ve been a huge fan of Microsoft Exchange ever since I\u0026rsquo;ve been involved in managing email servers. 
Exchange has been a topic of several of my posts this year because let\u0026rsquo;s face it, Exchange is the 100 lb gorilla of mail servers and has been for some time. But I\u0026rsquo;ve seen a fair number of colleges using a new mail system from Zimbra so I thought it was my duty to try it out. After all, there is a free 60 day trial of a VMware appliance available so what did I have to lose?\nThe installation was very simple. Go to the Zimbra free trial site and enter the obligatory contact information to retrieve your installation bits and trial license. The installation comes in either a server edition or a VMware OVF file. I chose the OVF file for my VMware home lab since it was easy :)\nAfter I downloaded and deployed the OVF file there was a very simple setup process. I took the IP Address of the appliance and threw it in my favorite web browser with the 5480 port added. Enter in the username: root and password: vmware.\nThe install gives you some options based on scalability. I only wanted to try it out so I picked the single server option. But, if you were deploying this for a larger environment, the components could be split up.\nEnter a hostname, IP Address, a new password (not vmware) and your domain.\nThen point the configuration at your license. You should receive a trial license if you\u0026rsquo;ve gone through the Zimbra website.\nYou\u0026rsquo;ll see a variety of things happening in the background. Grab a cup of coffee and wait for the setup to finish. I will mention that I did have an issue with this process the first time I went through it and found that I needed to be sure to have an A record and an MX record setup to point to the appliance before this step.\nWhen the setup finishes, you can go to the admin console to setup some users, mailboxes, email addresses etc. Be sure to log in with your new password, and not the default vmware password.\nI found the admin portal to be very intuitive. There are sections to add accounts, administrators, aliases, email addresses, distribution groups and seemed to be much cleaner than the Exchange interfaces I\u0026rsquo;ve been accustomed to.\nWhen you\u0026rsquo;re all done setting up users, you can go to the https:// address of the appliance to get your webmail.\nA very clean look for sending emails.\nThe webclient has folders, search folders, a preview pane and search functions.\nFor those of you that aren\u0026rsquo;t super excited about a webclient, zimbra desktop is a thick client that can be installed on your machine much like Microsoft Outlook.\nIf you\u0026rsquo;re looking to deploy an on premises mail server (as opposed to a SAAS option like Office 365 or Google Apps) and aren\u0026rsquo;t satisfied with the old standby of Microsoft Exchange, check out Zimbra. If you\u0026rsquo;re still not convinced, show the pricing to your CIO and let him decide. It\u0026rsquo;s a pretty decent alternative and the pricing may just make better sense for your organization.\n","permalink":"https://theithollow.com/2013/08/19/zimbra-offers-great-alternative-for-microsoft-exchange/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/08/email.jpg\"\u003e\u003cimg alt=\"email\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/08/email.jpg\"\u003e\u003c/a\u003e I\u0026rsquo;ve been a huge fan of Microsoft Exchange ever since I\u0026rsquo;ve been involved in managing email servers.  
\u003ca href=\"/2013/04/microsoft-exchange-2010-to-exchange-2013-transition-part-1/\"\u003eExchange has been a topic of several of my posts this year\u003c/a\u003e because let\u0026rsquo;s face it, Exchange is the 100 lb gorilla of mail servers and has been for some time.  But I\u0026rsquo;ve seen a fair number of colleges using a new mail system from \u003ca href=\"zimbra.com\"\u003eZimbra\u003c/a\u003e so I thought it was my duty to try it out.  After all, there is a free 60 day trial of a VMware appliance available so what did I have to lose?\u003c/p\u003e","title":"Zimbra Offers Great Alternative for Microsoft Exchange"},{"content":" This is a series of posts designed to help readers understand how the Internet works. This specific post looks directly at how devices know what machines are on their network segment.\nIn previous posts, we looked at how machines communicate on the same network by utilizing frames, and how machines on different network segments use packets. The next logical question is, \u0026ldquo;How do machines know if these machines are on the same network or not?\u0026rdquo; The answer to this question is subnetting.\nBinary In order subnet, we need to first be able to convert a 32 bit decimal notation IP address into binary. For our example we\u0026rsquo;ll use the IP 192.168.1.10. Look at the first \u0026ldquo;octet\u0026rdquo; 192.\nLet\u0026rsquo;s break the number 192 out into expanded notation. (Remember from grade school where you\u0026rsquo;d write out a number based on the location of the digits)\nThe brief refresher of expanded notation should make our transition to binary easier. For binary, there are only two numbers available for each column instead of 10. (0-9). So we instead have something that looks like this:\nSo after 2 quick calculations we have shown how to take a decimal and convert it into binary. 192 = 11000000\nIf we take the rest of the IP address and convert it to binary we get: 11000000.10101000.00000001.00001010\n\u0026ldquo;AND\u0026rdquo; Operations Lastly, before we can determine what network segment we are on, we need to understand how an \u0026ldquo;AND\u0026rdquo; operation is complete. An \u0026ldquo;AND\u0026rdquo; operation takes two inputs and creates one output. A binary \u0026ldquo;AND\u0026rdquo; operations must take 0s or 1s and the result is a 0 or a 1.\nThere are four scenarios when running an \u0026ldquo;AND\u0026rdquo; operation on binary numbers. 0AND0**-\u0026gt;00AND1****-\u0026gt;01AND0****-\u0026gt;01AND1****-\u0026gt;****1**Subnet Masks\nA subnet mask get\u0026rsquo;s get\u0026rsquo;s applied to an IP address in an \u0026ldquo;AND\u0026rdquo; operation to determine the network segment. A pretty common mask is 255.255.255.0. Converting this address to binary is an easy calculation:\n11111111.11111111.11111111.00000000\nLet\u0026rsquo;s \u0026ldquo;AND\u0026rdquo; the subnet mask above with our original IP address of 192.168.1.10\nOnce the \u0026ldquo;AND\u0026rdquo; operation has been completed, we receive a network segment.\nNow once a machine tries to determine where a destination machine is, it just checks the IP address of the destination machine to see if it has the same network segment as the source machine. If it does, the frames are sent directly to the destination machine. 
If the destination machine is on a different segment, the source machine passes the frame to the default gateway to be routed.\n","permalink":"https://theithollow.com/2013/08/12/internetworking-101-series-subnets/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/07/Chalkboard.png\"\u003e\u003cimg alt=\"Chalkboard\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/07/Chalkboard-300x161.png\"\u003e\u003c/a\u003e This is a series of posts designed to help readers understand how the Internet works.  This specific post looks directly at how devices know what machines are on their network segment.\u003c/p\u003e\n\u003cp\u003eIn previous posts, we looked at how \u003ca href=\"/2013/07/internetworking-101-series-frames-data-link-layer/\"\u003emachines communicate on the same network by utilizing frames\u003c/a\u003e, and \u003ca href=\"/2013/07/internetworking-101-series-packets-network-layer/\"\u003ehow machines on different network segments use packets.\u003c/a\u003e  The next logical question is, \u0026ldquo;How do machines know if these machines are on the same network or not?\u0026rdquo;  The answer to this question is subnetting.\u003c/p\u003e","title":"Internetworking 101 series - Subnets"},{"content":"\nThis is a series of posts designed to help readers understand how the Internet works. This specific post looks directly at collision domains.\nEthernet uses a process called \u0026ldquo;Carrier Sense Multiple Access with Collision Detection\u0026rdquo; or CSMA/CD for short. This is a very long way of explaining the process of how network adapters can share the same media to communicate. Think about it if you have 10 machines on a network that are all sharing the same wires or devices, how can any of the devices understand anything with all those frames?\nDevices on the same collision domains share media but need to make sure that there is no traffic on the lines before transmitting data. The devices will listen on the line for traffic and when it believes it\u0026rsquo;s safe to transmit, will send the data frames out on the line. If another machine transmits at the same time, the electrical signals will \u0026ldquo;collide\u0026rdquo; and transmissions won\u0026rsquo;t go through. If this happens, they will each wait a random period of time and try to re-transmit. This process is standardized in IEEE 802.3 http://www.ieee802.org/3/.\nNow that you know how collisions work, it\u0026rsquo;s easy to see why adding more and more devices to a collision domain can cause problems. Below we\u0026rsquo;ll explain a bit how we can overcome these issues.\nHubs A network hub is a form of a repeater in the sense that any incoming transmissions get repeated out all of the rest of the ports. Hubs operate at Layer 1 of the OSI model so they are essentially like a multi-port network interface card (NIC).\nHere is what a collision domain would look like if you had four machines connected to a network hub.\nSwitches A network switch is similar to a hub because it has multiple ports, but this device works at layer 2 of the OSI model (data link layer) so these devices understand MAC addresses. Since these devices know MAC Addresses, they can determine what devices are connected to each port. So, when they receive a frame destined for another machine, they only need to send out the frame on that single port. 
This dramatically increases the number of collision domains and decreases the number of collisions.\nAfter looking at the two diagrams, it should be easy to see why switches can increase network performance even if the port speeds are the same as a switch.\n","permalink":"https://theithollow.com/2013/08/05/internetworking-101-series-collision-domains/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/07/Chalkboard.png\"\u003e\u003cimg alt=\"Chalkboard\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/07/Chalkboard-300x161.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThis is a series of posts designed to help readers understand how the Internet works.  This specific post looks directly at collision domains.\u003c/p\u003e\n\u003cp\u003eEthernet uses a process called \u0026ldquo;Carrier Sense Multiple Access with Collision Detection\u0026rdquo; or CSMA/CD for short.  This is a very long way of explaining the process of how network adapters can share the same media to communicate.  Think about it if you have 10 machines on a network that are all sharing the same wires or devices, how can any of the devices understand anything with all those frames?\u003c/p\u003e","title":"Internetworking 101 series – Collision Domains"},{"content":" This is a series of posts designed to help readers understand how the Internet works. This specific post looks directly at how machines on different network segments communicate.\nIn my previous post, we looked at how two machines on the same network segment exchange information by using frames. So what happens when two machines on different segments need to communicate?\nEncapsulation Before we get too involved in the discussion, we should take a peak at what an IP packet looks like. IP Packets relate to Ethernet frames much like one nesting doll relates to the rest in the set.\nAn IP Packet isn\u0026rsquo;t useful on it\u0026rsquo;s own. It needs an Ethernet frame in order to travel to it\u0026rsquo;s destination. So the IP Packet is encapsulated inside of an Ethernet frame for travel along a network segment.\nIPackets contain a source and a destination, as well as the data it\u0026rsquo;s trying to send (called the payload). IP Packets will also contain other things such as priority, checksums, time to live etc, but to keep it simple, just think of it as the source, destination and payload with a few other things thrown in.\nRouting In the below example I\u0026rsquo;ve revisited a scenario where two computers are trying to exchange information, and are on different network segments. In order for different networks to communicate we need to have a layer three device. In this case our layer three device is a router (you could have a layer three switch but in this case we have layer 2 switches to illustrate the point).\nWhen Computer A attempts to send information to Computer B, it first checks to see if the destination IP address is in the same network segment. In the above example we find that the two machines are on different networks so instead of sending the frame to Computer B, it is sent to the default gateway which has a MAC address of CC:CC:CC:CC:CC:CC. Once the Router receives the frame on the CC:CC:CC:CC:CC:CC physical adapter, it will break down the Ethernet Frame and read the IP Packet which is encapsulated inside the frame. The router can then compare the source and destination IP addresses in the packet and compare them to it\u0026rsquo;s routing table. 
For our example we\u0026rsquo;ll assume there is a route to computer B. The router will then encapsulate the IP Packet in a new frame and send it to Computer B.\nVisually it looks like this: (Please excuse the crude diagram. There is more to a frame and a packet than shown, but this illustrates how the encapsulation works.)\nFrame is sent to the default gateway with a destination IP Address of 192.168.50.1.\nFrame is delivered to the router. The router strips the frame apart to reveal the Packet. The packet can then be compared against the routing table for a next hop.\nThe router makes a routing decision and forwards a new frame to Computer B.\nNotice that the packet information didn\u0026rsquo;t change, but the source MAC and the destination MAC in the new frame is different.\n","permalink":"https://theithollow.com/2013/07/29/internetworking-101-series-packets-network-layer/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/07/Chalkboard.png\"\u003e\u003cimg alt=\"Chalkboard\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/07/Chalkboard-300x161.png\"\u003e\u003c/a\u003e This is a series of posts designed to help readers understand how the Internet works.  This specific post looks directly at how machines on different network segments communicate.\u003c/p\u003e\n\u003cp\u003eIn my previous post, we looked at how two machines on the same network segment exchange information by using frames.  So what happens when two machines on different segments need to communicate?\u003c/p\u003e\n\u003ch1 id=\"encapsulation\"\u003eEncapsulation\u003c/h1\u003e\n\u003cp\u003eBefore we get too involved in the discussion, we should take a peak at what an IP packet looks like.  IP Packets relate to Ethernet frames much like one nesting doll relates to the rest in the set.\u003c/p\u003e","title":"Internetworking 101 series - Packets (Network Layer)"},{"content":" This is a series of posts designed to help readers understand how the Internet works. This specific post looks directly at how machines on the same network segment communicate with each other.\nWe\u0026rsquo;ll look at the concept of a network segment in a future post, but for now all you need to think about is how two computers communicate on a Local Area Network.\nMAC Addresses Before we discuss how machines on the same LAN segment communicate, we need to understand what Media Access Control (MAC) addresses are. A MAC address is the physical address of a network adapter. These are 48 bit addresses that are expressed as 12 digit hexadecimal notation and are unique to each network adapter. Each manufacturer has an assigned range that they are to use for their first 24 bits, this is known as the Organizationally Unique Identifier (OUI). The second 24 bits are known as the Network Interface Controller (NIC) specific and must be unique within that vendor\u0026rsquo;s range. Keeping these ID\u0026rsquo;s unique is imperative for successful LAN communication.\nBelow you can see an Ethernet Address (MAC Address).\nIf you want to find your own MAC Address, you can do so by running ipconfig /all from a command prompt in Windows.\nYou can also look these OUI numbers from a variety of sites. http://www.macvendorlookup.com/\nSending a Frame Now that we\u0026rsquo;ve reviewed MAC Addresses, lets look at how they are used. Below is an example scenario that has two machines connected by a switch. 
For the purposes of this example, Computer A will be the sending computer and Computer B will be the receiving computer.\nComputer A will do a DNS lookup on the name \u0026ldquo;Computer B\u0026rdquo; to determine the IP Address 192.168.1.2. At this point Computer A will check to see if this IP Address is on the same broadcast domain or a different one. (We\u0026rsquo;ll explore this more in a future post.) Once Computer A has determined it\u0026rsquo;s on the same broadcast domain, it will check it\u0026rsquo;s arp table.\nARP stands for Address Resolution Protocol and is responsible for mapping IP Addresses to physical MAC Addresses. On a windows machine, you can view the arp table by running \u0026ldquo;arp -a\u0026rdquo; in a command prompt.\nBack to our example, Computer A now did a lookup on the arp table to find Computer B\u0026rsquo;s MAC Address. At this point one of two things happens.\nIf the MAC Address shows up in the arp table, Computer A will then send frames with a destination address of BB-BB-BB-BB-BB-BB out on the network. Computer B will respond to these frames because the destination MAC Address is the same as its local MAC Address.\nIf the MAC Address does not show up in the arp table, Computer A will send a frame with a destination address of FF-FF-FF-FF-FF-FF which is a broadcast address. This address is accepted by all machines, but will request a reply from the machine with IP Address 192.168.1.2. When the machine gets the return frame from Computer B, it will then have the physical address and can do step one.\nIn our example we would have a frame that look similar to the one below.\n","permalink":"https://theithollow.com/2013/07/22/internetworking-101-series-frames-data-link-layer/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/07/Chalkboard.png\"\u003e\u003cimg alt=\"Chalkboard\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/07/Chalkboard-300x161.png\"\u003e\u003c/a\u003e This is a series of posts designed to help readers understand how the Internet works.  This specific post looks directly at how machines on the same network segment communicate with each other.\u003c/p\u003e\n\u003cp\u003eWe\u0026rsquo;ll look at the concept of a network segment in a future post, but for now all you need to think about is how two computers communicate on a Local Area Network.\u003c/p\u003e\n\u003ch1 id=\"mac-addresses\"\u003eMAC Addresses\u003c/h1\u003e\n\u003cp\u003eBefore we discuss how machines on the same LAN segment communicate, we need to understand what Media Access Control (MAC) addresses are.  A MAC address is the physical address of a network adapter.  These are 48 bit addresses that are expressed as 12 digit hexadecimal notation and are unique to each network adapter.  Each manufacturer has an assigned range that they are to use for their first 24 bits, this is known as the Organizationally Unique Identifier (OUI).  The second 24 bits are known as the Network Interface Controller (NIC) specific and must be unique within that vendor\u0026rsquo;s range.  Keeping these ID\u0026rsquo;s unique is imperative for successful LAN communication.\u003c/p\u003e","title":"Internetworking 101 series - Frames (Data Link Layer)"},{"content":"\nHP announced a new virtual storage appliance (VSA) recently at their annual HP Discover conference. This is a virtual appliance based on the StoreOnce line (formerly known as D2D) of hardware appliances that HP has sold for a long time. 
These appliances have the catalyst software which allows for deduplication of all your backup data, hence the term StoreOnce.\nThese devices have allowed administrators to switch from the older tape based backups to a virtual tape library (VTL) or a NAS type backup solution. They have replication options which allow for deduplicated data to be migrated or copied without re-hydrating the backups and wasting valuable bandwidth. They also allow for federated backups and, when matched with HP Data Protector 8 (also newly released), can throttle bandwidth during backup operations in order to prevent production slowdowns.\nObviously the coolest thing about this announcement is the fact that it\u0026rsquo;s a virtual appliance, meaning that you don\u0026rsquo;t have to rack, cool, or cable any new devices and, better yet, can continue to use some older storage appliances that you have lying around. Do you have an older storage array that doesn\u0026rsquo;t have replication capabilities but works just fine for storing data? Maybe this is your new solution?\nI\u0026rsquo;ve included some of the specs below from the StoreOnce Backup Datasheet, and am a little disappointed by the fact that it can handle 10TB of storage, but does it in 1TB virtual disk chunks. I don\u0026rsquo;t really want to have ten 1TB disks sitting around for my backup appliance, but I\u0026rsquo;ve heard of worse.\nAccording to the HP folks I talked to at Discover, there should be a free 30 day trial coming out soon, much like their StoreVirtual VSA which is based on the P4000 series (Lefthand) storage that\u0026rsquo;s been around for a while now.\nDavid Scott announcing the new StoreOnce VSA at HP Discover\nFrom the HP StoreOnce Backup Datasheet:\nHP StoreOnce VSA 10 TB Backup\nModel differentiator: Virtual appliance running vSphere 5.0 or later\nDrive description: Supported drives dependent on VMware environment\nDrive type: 0 included, (1) vDisk per TB of usable capacity supported\nCapacity: Supports up to 10 TB (configured in 1 TB increments) in a VMware virtual appliance\nTransfer rate: 500 GB/hr maximum supported; depending on VSA configuration\nDeduplication: HP StoreOnce deduplication\nStorage expansion options: Dependent on VMware environment\nHost interface: 1GbE vNIC, (2) ports per controller minimum supported\nReplication support: Replication license included\nTarget for backup applications: HP StoreOnce Catalyst, Virtual Tape Library and NAS (CIFS only)\nNumber of Virtual Tape Libraries and NAS Targets: 4\nNumber of Virtual Tape Cartridges emulated: 768 maximum, depending on VMware environment\nMaximum number of source appliances: 1\nForm factor: Dependent upon VMware environment\nWarranty (parts-labor-onsite): HP warrants only ","permalink":"https://theithollow.com/2013/07/16/hp-storeonce-vsa/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/06/VSA-Logo.jpg\"\u003e\u003cimg alt=\"VSA Logo\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/06/VSA-Logo.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eHP announced a new virtual storage appliance (VSA) recently at their annual HP Discover conference.  This is a virtual appliance based on the StoreOnce line (formerly known as D2D) of hardware appliances that HP has sold for a long time.  
These appliances have the catalyst software which allows for deduplication of all your backup data, hence the term StoreOnce.\u003c/p\u003e\n\u003cp\u003eThese devices have allowed administrators to switch from the older tape based backups to a virtual tape library (VTL) or a NAS type backup solution.  It has replication options in it which allow for deduplicated data to be migrated or copied without re-hydrating the backups and wasting valuable bandwidth.  It also allows for federated backups and when matched with HP Data Protector 8 (also newly released) can throttle bandwidth during backup operations in order to prevent production slow downs.\u003c/p\u003e","title":"HP StoreOnce VSA"},{"content":" I\u0026rsquo;ve never paid too much attention to the different types of RAM (Random Access Memory) during my tenure as a Systems Engineer but wonder how much time it would have saved me in troubleshooting. This post is not only an attempt to educate other technicians, but an opportunity to refine my own knowledge of the subject.\nThere are three main categories that I want to review. Ecc vs Non-ECC, Unbuffered vs Registered memory, and Memory Rank.\nECC vs Non-ECC Memory ECC stands for Error Correcting Code and for good reason. ECC uses parity by using an XOR function much like you would use RAID. More Information\nIt\u0026rsquo;s usually easy to spot ECC memory because you can look at the number of chips on the side of a stick of RAM and if you can evenly divide the number of banks by three then it\u0026rsquo;s usually ECC. The reason for this is RAM manufacturers will add an additional chip to handle the parity information. (Wouldn\u0026rsquo;t it be nice to get disk manufacturers to do the same!)\nECC RAM with 9 chips\nSo ECC has error correcting and should be used in all cases right? Well, of course there is still a use for non-ecc memory. Non-ECC memory is usually cheaper and can be useful in desktops. The worst case is an app may crash or a desktop may need restarted. Servers, especially those that are virtualization hosts, should use ECC in order to prevent application crashes due to memory errors.\nECC also has a 2% performance impact due to the parity checks. This is a small price for stability.\nUnbuffered vs Registered ECC Memory These days, the memory controller is built into the CPU. This means that the CPU needs to manage the memory and there are a couple of ways this can happen.\nUnbuffered memory requires that the CPU manage all of the chips, as shown in the picture below.\nRegistered memory helps to take some of the load off the CPU by putting the information into the registers.\nunbuffered memory configs go directly from the controller to the memory module.\nThe main benefit of using registered memory is that there is less for the CPU to manage and it allows for larger amounts of RAM to be put into a computer.\nThe downside is that Registered memory is a bit slower because it takes an additional clock cycle to retrieve information for the processor to execute.\nMemory Rank Memory Rank refers to the number of 64-bit data blocks. (72-bit data blocks if the memory is ECC) If two 64-bit data blocks are on the same chip select [bus] they would be considered dual rank.\nRanks are a good way for manufacturers to jam more memory into a system, but these ranks will slow down the memory in doing so. For this reason, single rank memory is usually the fastest.\nCAS Latency You\u0026rsquo;ve probably heard about CAS latency as well, but maybe didn\u0026rsquo;t realize what it was. 
I doubt this was a problem because as the name suggests, the lower the amount of latency the better.\nCAS latency stands for Column Address Strobe Latency. What\u0026rsquo;s important to know is that it\u0026rsquo;s the amount of time between when a request is made and when the data is output.\nIn the example below a read request is made of memory and it takes seven clock cycles for the data to be retrieved. To put numbers into perspective, CAS latency of 9 would take an extra two clock cycles to complete the same operation, assuming the same speed processors were used.\nSDRAM vs DDR In the CAS Latency example above I showed how read requests are dependent upon clock cycles. The main difference between SDRAM and DDR is that DDR can read on both the uptick and downtick of the clock cycle whereas SDRAM can only read on the uptick. DDR stands for Double Data Rate for this reason.\nDDR2 and DDR3 are newer concoctions of the DDR standard and can combine multiple banks of memory to obtain additional bandwidth.\n","permalink":"https://theithollow.com/2013/07/09/understanding-ram/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/06/RAM1.jpg\"\u003e\u003cimg alt=\"RAM1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/06/RAM1.jpg\"\u003e\u003c/a\u003e I\u0026rsquo;ve never paid too much attention to the different types of RAM (Random Access Memory) during my tenure as a Systems Engineer but wonder how much time it would have saved me in troubleshooting.  This post is not only an attempt to educate other technicians, but an opportunity to refine my own knowledge of the subject.\u003c/p\u003e\n\u003cp\u003eThere are three main categories that I want to review.  Ecc vs Non-ECC, Unbuffered vs Registered memory, and Memory Rank.\u003c/p\u003e","title":"Understanding RAM"},{"content":" I recently purchased a Synology DS411slim NAS device for my home lab in order to quiet down my rack and start using up less power. Obviously to accomplish this I would need to fill it up with Solid State drives which have the added benefit of a large number of IOPS :)\nI screwed my four 480GB OCZ SSD\u0026rsquo;s into the drive cages and slipped them into the chassis. Really my only con for this device was getting the drives into the chassis. I did have to do a bit of wiggling to get them seated correctly.\nI powered up the device and plugged the NIC into my core switch and ran the installation utility. The first thing the utility did was run a discovery on the device.\nOnce the device was found, I pointed it to the DSM which I downloaded from the Synology site.\nAs you might hope, the wizard asks you to change the default password.\nEnter the new IP information for the device. It will check for an IP address conflict before actually changing it.\nPatiently wait for all of the tasks to complete.\nWhen I was all done, I was able to access the NAS via a web browser. A quickstart is available to hep you find your way.\nOnce it was setup, I found that I got about 8000 IOPS while writing to disk. 
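(Looping back to the CAS latency comparison in the RAM post above: translating CAS cycles into nanoseconds makes the CAS 7 versus CAS 9 difference easier to see. The 800 MHz memory clock used below is purely an illustrative assumption, not a figure from the post.)

```python
# Sketch: convert CAS latency (in clock cycles) to nanoseconds.
def cas_latency_ns(cas_cycles: int, clock_mhz: float) -> float:
    cycle_time_ns = 1000.0 / clock_mhz      # length of one clock cycle, in nanoseconds
    return cas_cycles * cycle_time_ns

for cl in (7, 9):
    print(f"CAS {cl} at 800 MHz = {cas_latency_ns(cl, 800):.2f} ns")
# CAS 9 needs two extra cycles, so at this clock it is 2 * 1.25 ns = 2.5 ns slower.
```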
Not bad for such a tiny device.\n","permalink":"https://theithollow.com/2013/07/01/synology-ds411slim-review/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/06/20130608_194909.jpg\"\u003e\u003cimg alt=\"synology\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/06/20130608_194909-300x225.jpg\"\u003e\u003c/a\u003e I recently purchased a Synology DS411slim NAS device for my home lab in order to quiet down my rack and start using up less power.  Obviously to accomplish this I would need to fill it up with Solid State drives which have the added benefit of a large number of IOPS :)\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/06/20130608_203157.jpg\"\u003e\u003cimg alt=\"20130608_203157\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/06/20130608_203157-1024x768.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eI screwed my four 480GB OCZ SSD\u0026rsquo;s into the drive cages and slipped them into the chassis.  Really my only con for this device was getting the drives into the chassis.  I did have to do a bit of wiggling to get them seated correctly.\u003c/p\u003e","title":"Synology DS411slim Review"},{"content":"This post was a direct result of a request from one of my readers. I hope that this post will explain VMware networks a bit more and how they fit into a production network.\nTo begin I\u0026rsquo;d like to review how a VMware ESXi server might have its virtual switches and port groups setup to connect to a physical switch. Here is a list of networks that we\u0026rsquo;ll be working with.\nBelow, I show the port groups for each network, the physical adapters that might be setup, and a physical server switch that the ESXi host might connect to. (You should always have multiple physical switches for redundancy but for brevity I\u0026rsquo;ve omitted this in the design,)\nThe last virtual switch contains two port groups. Production and Lab port groups contain only virtual servers and no vmkernal ports.\nNow that we\u0026rsquo;ve got an idea how how a single ESXi hosts has been setup, we can reproduce the same configuration for multiple hosts so I won\u0026rsquo;t go into details on that. Instead, let\u0026rsquo;s look at what the network might look like with clients and some physical servers. I\u0026rsquo;ve omitted some of the ESXi information we just covered.\nFrom the diagram we can still see the Production and Lab port groups but I\u0026rsquo;ve added the network they are on as well, in order to show that they are on the same LAN segment as two physical servers \u0026ldquo;Production SQL Server\u0026rdquo; and \u0026ldquo;LAB SQL Server\u0026rdquo;.\nThe fact that some of the production servers are virtual and some are physical will have little to do with the network design. If you were to P2V the Production SQL Server, you could put it directly in the Production port group and keep the same IP Address.\nWhy use different networks at all? You could create a single network such as 192.168.1.0/24 and put management, Fault Tolerance, vMotion, iSCSI, production, lab and client computers in it. This is possible to do, but shouldn\u0026rsquo;t be done for performance reasons. The more devices on a segment, the more chance for collisions which will cause re-transmits to occur and will slow down your network.\nAnother reason to not use a single network segment would be for security purposes. 
If different services such as iSCSI and management traffic are on different networks, it makes it easier to firewall the traffic based off the network ID.\n","permalink":"https://theithollow.com/2013/06/24/an-overview-of-vmware-virtual-networks/","summary":"\u003cp\u003eThis post was a direct result of a request from one of my readers.  I hope that this post will explain VMware networks a bit more and how they fit into a production network.\u003c/p\u003e\n\u003cp\u003eTo begin I\u0026rsquo;d like to review how a VMware ESXi server might have its virtual switches and port groups setup to connect to a physical switch.  Here is a list of networks that we\u0026rsquo;ll be working with.\u003c/p\u003e","title":"An Overview of [VMware] Virtual Networks"},{"content":"I was recently interviewed by Andrew McCaskey from SDR News about the HP Discover conference. The interview is below\nhttp://www.youtube.com/watch?feature=player_detailpage\u0026amp;v=IUM-sDiHD18\n","permalink":"https://theithollow.com/2013/06/21/sdr-news-interview-2/","summary":"\u003cp\u003eI was recently interviewed by Andrew McCaskey from SDR News about the HP Discover conference.  The interview is below\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://www.youtube.com/watch?feature=player\"\u003ehttp://www.youtube.com/watch?feature=player\u003c/a\u003e_detailpage\u0026amp;v=IUM-sDiHD18\u003c/p\u003e","title":"SDR News Interview for HP Discover"},{"content":"A few months ago there was an annual vote to determine the top 50 virtualization bloggers according to vsphere-land.com. These top 50 bloggers are then added to the vLaunchPad where they will remain cataloged for a year until the next year\u0026rsquo;s voting. Kudos to Eric Siebert for running this whole process!\nTheITHollow.com was lucky enough to make the top 50 in it\u0026rsquo;s first year of existence and to celebrate I asked my good friends (well, family really) over at www.whateverinspires.com to create a top 50 badge to put on my site. They were happy to oblige and even created a top 25 and also a top 10.\nIf you were in the list of top blogs, please feel free to download and use these awards on your own site, free of charge.\nIn addition to the blog award, they also created my logo and have some great graphics skills. If you\u0026rsquo;re looking for some additions to your own site, Feel free to stop over to whateverinspires.com or drop me a line and I\u0026rsquo;ll get you some direct contact information.\n","permalink":"https://theithollow.com/2013/06/18/2013-top-50-blogs-awards-vsphere-land/","summary":"\u003cp\u003eA few months ago there was an annual vote to determine the top 50 virtualization bloggers according to \u003ca href=\"http://vsphere-land.com/news/2013-top-vmware-virtualization-blog-voting-results.html\"\u003evsphere-land.com\u003c/a\u003e.  These top 50 bloggers are then added to the \u003ca href=\"http://vlp.vsphere-land.com/\"\u003evLaunchPad\u003c/a\u003e where they will remain cataloged for a year until the next year\u0026rsquo;s voting.  Kudos to \u003ca href=\"https://twitter.com/ericsiebert\"\u003eEric Siebert\u003c/a\u003e for running this whole process!\u003c/p\u003e\n\u003cp\u003eTheITHollow.com was lucky enough to make the top 50 in it\u0026rsquo;s first year of existence and to celebrate I asked my good friends (well, family really) over at \u003ca href=\"http://whateverinspires.com\"\u003ewww.whateverinspires.com\u003c/a\u003e to create a top 50 badge to put on my site.  
They were happy to oblige and even created a top 25 and also a top 10.\u003c/p\u003e","title":"2013 Top 50 Blogs Awards (vSphere Land)"},{"content":"Only a few years removed from when HP announced they were ditching the workstation part of their product line, they\u0026rsquo;ve announced a new set of desktops this week.\nThe EliteOne 800 G1 The EliteOne 800 G1 is an all in one solution for companies that include the newer Intel 8 Series Q85 chipset which includes the Intel vPro technology. I haven\u0026rsquo;t been able to get my hands on playing with this yet, but essentially it uses the trusted execution environment in order to allow access to the PC even if it\u0026rsquo;s powered off, or if it has a virus. Perhaps this doesn\u0026rsquo;t seem like a big deal to end users, but the IT group that has to support these devices will certainly take notice. I liken it to having an iLO processor on every desktop so that troubleshooting can be done remotely by support. This will even allow off site or third party support to gain access in order to troubleshoot issues.\nIt\u0026rsquo;s difficult to roll out big hard drives with the performance users want. I know I\u0026rsquo;d never be able to replace my desktops that have SSDs in them without users screaming at me that it takes too long to boot. The new HP Desktops have a hybrid drive of up to 1TB which uses a smaller SSD drive for caching in a single drive enclosure.\nAs one might expect, these devices optionally come as a touch screen loaded with Windows 8.\nThe EliteDesk 800 G1\nThe EliteDesk 800 G1 also has the Intel vPro technology (just get used to saying that. I\u0026rsquo;m pretty sure it\u0026rsquo;s going to be a standard option on all desktops soon) as well as having Intel Wireless Display ( WIDI). This technology will allow you to connect your desktop to a projector or monitor without the use of those pesky cables. This might not be the greatest achievement ever for the IT admins, but it will certainly make the brass at your company happy when they walk into the conference room and wirelessly project their presentations. A little bit of the \u0026ldquo;WOW\u0026rdquo; factor here, I suppose.\nYou have the options for SSD drives or not on these machines and they\u0026rsquo;ll hold 32GB of RAM and some pretty high end graphics cards for a business desktop. NO GAMING AT WORK!!!!\nThe ProOne 600 G1 and the ProDesk 600 G1\nThe final two desktops seem to be the \u0026ldquo;Security Focused\u0026rdquo; desktops. They offer self encrypting hard drives as well as some additional software that permanently wipes data instead of the traditional delete operations. The ProDesk 600 G1 is the higher end model while the ProDesk is a slightly cheaper version.\nLook for more information about these devices this week as HP Discover is going on in Las Vegas, NV.\nFull Disclosure: HP sponsored a trip for me to come to Discover this year to cover the event but did not pay for any blog articles or have any say in the editorial process. 
The thoughts on this site are my own.\n","permalink":"https://theithollow.com/2013/06/13/new-hp-business-desktops/","summary":"\u003cp\u003eOnly a few years removed from when HP announced they were ditching the workstation part of their product line, they\u0026rsquo;ve announced a new set of desktops this week.\u003c/p\u003e\n\u003ch2 id=\"the-eliteone-800-g1\"\u003eThe EliteOne 800 G1\u003c/h2\u003e\n\u003cp\u003eThe EliteOne 800 G1 is an all in one solution for companies that include the newer Intel 8 Series Q85 chipset which includes the \u003ca href=\"http://www.intel.com/content/www/us/en/architecture-and-technology/vpro/vpro-technology-general.html\"\u003eIntel vPro technology\u003c/a\u003e.  I haven\u0026rsquo;t been able to get my hands on playing with this yet, but essentially it uses the trusted execution environment in order to allow access to the PC even if it\u0026rsquo;s powered off, or if it has a virus.  Perhaps this doesn\u0026rsquo;t seem like a big deal to end users, but the IT group that has to support these devices will certainly take notice.  I liken it to having an iLO processor on every desktop so that troubleshooting can be done remotely by support.  This will even allow off site or third party support to gain access in order to troubleshoot issues.\u003c/p\u003e","title":"New HP Business Desktops"},{"content":" HP has a new MicroServer out that would be perfect for anyone who is looking for a solid home lab server. The MicroServer G7 was a fairly popular server for home computing enthusiasts and HP decided to add upon that line.\nThe original reason for this server was for small businesses that wanted a small but stable server with features such as integrated Lights Out (iLO), but as it so happens, this line was pretty useful for those bloggers and certification junkies that wanted to take some HP Servers home with them.\nI got to see this server up close at HP Discover this week and it looks great. HP has added an Intel processor to the Gen8 server as well as a Smart Array Controller.\nIf you\u0026rsquo;re worried about how this server will look in your rack, never fear; the Gen8 bezels are available for this server and they come in several colors as well as the silver.\nIf you\u0026rsquo;re interested in taking some HP ExpertOne exams and need to get some hands on experience with an HP Server, give this little guy a try.\nProcessor: Intel® Celeron® G1610T (2 core, 2.3 GHz, 2MB, 35W)\nNumber of processors: 1\nProcessor cores: 2\nStandard memory: 2GB (1x2GB) UDIMM\nMaximum memory: 16GB\nMemory slots: 2 DIMM slots\nMemory protection: Unbuffered ECC\nExpansion slots: (1) PCIe (for detailed descriptions reference the QuickSpecs)\nNetwork controller: 1Gb 332i Ethernet Adapter, (2) ports per controller\nPower supply: (1) 150W non-hot plug, non-redundant power supply kit, multi-output\nStorage controller: (1) Dynamic Smart Array B120i/ZM\nFull Disclosure: HP sponsored a trip for me to come to Discover this year to cover the event but did not pay for any blog articles or have any say in the editorial process. 
The thoughts on this site are my own.\n","permalink":"https://theithollow.com/2013/06/12/hp-proliant-microserver-gen8/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/06/hpe_US_EN_TSG_SMB_Microserver_20130610.jpg\"\u003e\u003cimg alt=\"hpe_US_EN_TSG_SMB_Microserver_20130610\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/06/hpe_US_EN_TSG_SMB_Microserver_20130610-300x124.jpg\"\u003e\u003c/a\u003e HP has a new MicroServer out that would be perfect for the anyone who is looking for a solid home lab server.  The Microserver G7 was a fairly popular server for home computing enthusiasts and HP decided to add upon that line.\u003c/p\u003e\n\u003cp\u003eThe original reason for this server was for small businesses that wanted a small but stable server with features such as integrated Lights Out (iLO) but as it so happens, this line was pretty useful for those bloggers and certification junkies that wanted to take some HP Servers home with them.\u003c/p\u003e","title":"HP Proliant MicroServer Gen8"},{"content":" HP officially announced the new HP 3PAR StoreServ 7450 today to some oohs and ahhs at HP Discover.\nThe new 3PAR is an all flash array that can be utilized for small, medium or large business needs. HP is touting that some of their competitors are having difficulty with the flash hurdle because their systems were optimized for spinning disks. And other competitors who designed their arrays specifically with flash in mind from the ground up are not proven arrays from a reliability standpoint.\nHP\u0026rsquo;s 3PAR ASICs give them a unique opportunity to use what they\u0026rsquo;re already built to then add on solid state drives to add performance. The 3PAR ASIC will allocate the writes to disk evenly to eliminate wear on these SSDs which tend to wear out much more quickly than the traditional spinning-disks.\nIf you\u0026rsquo;re looking for some specs on the new device, you can expect around 554,000 IOPS at under .7 ms latency. (I usually don\u0026rsquo;t pay too much attention to the IOPS count because it\u0026rsquo;s usually inflated due to it not being under a typical production workload. )\nTo me this announcement doesn\u0026rsquo;t come as too much of a shock since every major array vendor wants to have a flash array in their arsenal these days. The bigger announcement was the updated 3PAR software capabilities that went with this.\n3PAR Priority Optimization is a new feature that creates a QOS type setup with the array so that specific workloads will have priority access to the array if necessary. This comes in handy with their 3PAR units especially since they are multi-tenant.\n3PAR StoreServe data at rest encryption is another new feature that was announced and will likely help HP with the financial and medical customers who have compliance regulations about making sure the data is encrypted at all times.\nFull Disclosure: HP sponsored a trip for me to come to Discover this year to cover the event but did not pay for any blog articles or have any say in the editorial process. 
The thoughts on this site are my own.\n","permalink":"https://theithollow.com/2013/06/11/new-hp-3par-storeserv-7450-announced/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/06/StoreServ-e1370973358646.jpg\"\u003e\u003cimg alt=\"StoreServ\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/06/StoreServ-e1370973358646-225x300.jpg\"\u003e\u003c/a\u003e HP officially announced the new HP 3PAR StoreServ 7450 today to some oohs and ahhs at \u003ca href=\"http://h30614.www3.hp.com/Discover/OnDemand/LasVegas2013\"\u003eHP Discover.\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThe new 3PAR is an all flash array that can be utilized for small, medium or large business needs.  HP is touting that some of their competitors are having difficulty with the flash hurdle because their systems were optimized for spinning disks.  And other competitors who designed their arrays specifically with flash in mind from the ground up are not proven arrays from a reliability standpoint.\u003c/p\u003e","title":"New HP 3PAR StoreServ 7450 Announced"},{"content":"\nI want to address a concern that many HP Virtual Connect customers have had about monitoring their Blade Chassis. A question I’ve received was “How do I know if I have sufficient uplinks for my traffic?”\nDepending on the size of the organization and their familiarity with their networking equipment, they could be monitoring the available metrics on their switches. If they are not necessarily that network savvy or don’t have the proper monitoring tools in place, they can use the throughput statistics tools within Virtual Connect. These tools only give a simplistic view to the amount of traffic that is going across your uplinks, and doesn’t show the traffic going out each blade but it does get you some great high level information.\nOpen Virtual connect, choose Tools\u0026ndash;\u0026gt; Throughput Statistics.\nAdd the selected uplinks you’d like to take a look at.\nSelect the statistics you’d like to see.\nReview your data in a nice easy to read graph.\nYou can then zoom in on what area you want to see by clicking and dragging.\nIf you need to modify your view so you can see a longer period of time, or would like to change how often the metrics are save, you can also modify them in the Ethernet Settings section.\nThis is a pretty simple “How-to” article but I’m not sure how many Virtual Connect Customers know about the throughput statistics option. I hope it helps!\n","permalink":"https://theithollow.com/2013/06/05/hp-virtual-connect-throughput/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/05/truckoverload.png\"\u003e\u003cimg alt=\"truckoverload\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/05/truckoverload-300x175.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eI want to address a concern that many HP Virtual Connect customers have had about monitoring their Blade Chassis.  A question I’ve received was “How do I know if I have sufficient uplinks for my traffic?”\u003c/p\u003e\n\u003cp\u003eDepending on the size of the organization and their familiarity with their networking equipment, they could be monitoring the available metrics on their switches.  If they are not necessarily that network savvy or don’t have the proper monitoring tools in place, they can use the throughput statistics tools within Virtual Connect.  
These tools only give a simplistic view to the amount of traffic that is going across your uplinks, and doesn’t show the traffic going out each blade but it does get you some great high level information.\u003c/p\u003e","title":"HP Virtual Connect Throughput"},{"content":"\nOnce per year VMware takes time to present the VMware vExpert distinction to members of the community Evangelizing, teaching, helping and speaking about VMware techniques.\nThe annual VMware vExpert title is given to individuals who have significantly contributed to the community of VMware users over the past year. The title is awarded to individuals (not employers) for their commitment to sharing their knowledge and passion for VMware technology above and beyond their job requirements.\nThis was my first year being honored with the title vExpert and I can tell you that it is nice to be noticed for the time spend blogging, speaking at VMUGs, answering questions on the VMware Communities and helping others learn virtualization. I hope that in some way you\u0026rsquo;ve gotten something worth while from my time as well.\nI urge others who are in the VMware trenches to pass along their knowledge no matter what level of expertise you may have. I\u0026rsquo;ve found that spending time writing blog posts has helped me to refine my skills and learn a ton in the process. I\u0026rsquo;ve also met some really great people along the way.\nIf you\u0026rsquo;d like to see this years vExperts please see the link from the VMware website. http://blogs.vmware.com/vmtn/2013/05/vexpert-2013-awardees-announced.html\nAlso, a special thanks to John Troyer ( @jtroyer) and his team\u0026rsquo;s work on the vExpert program. His time should not go unnoticed either.\n","permalink":"https://theithollow.com/2013/06/03/vexperts-2013/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2012/02/vexpert.jpg\"\u003e\u003cimg alt=\"vexpert\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2012/02/vexpert.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eOnce per year VMware takes time to present the VMware vExpert distinction to members of the community Evangelizing, teaching, helping and speaking about VMware techniques.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003eThe annual VMware vExpert title is given to individuals who have significantly contributed to the community of VMware users over the past year. The title is awarded to individuals (not employers) for their commitment to sharing their knowledge and passion for VMware technology above and beyond their job requirements.\u003c/p\u003e","title":"vExperts 2013"},{"content":" If you find yourself in an unfamiliar network and want to understand how the networks are connected, it would certainly be nice to be able to tell what is connected to each other. Luckily there are a couple of protocols that are responsible for just that.\nCisco Discovery Protocol (CDP) As you can probably guess from the name, the Cisco Discovery Protocol is a proprietary protocol from Cisco Systems.\nHow does it work? Every 60 seconds (by default) the Cisco device will send an announcement to the multicast address 01-00-0c-cc-cc-cc on every interface. The information in this announcement may contain any configured IP Addresses, the OS version, Hostname of the device, and the port name that the announcement came from. 
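Here is a minimal sketch of what a receiver of these announcements might do with them, using the 60-second advertisement interval above and the 180-second holdtime described just below. The function and field names are illustrative assumptions, not from any real switch implementation:

```python
import time

HOLDTIME_SECONDS = 180          # default holdtime for cached announcements
neighbors = {}                  # keyed by (device_id, local_port)

def receive_announcement(device_id, local_port, remote_port, mgmt_ip):
    # Each announcement (sent every 60 seconds by default) refreshes the entry
    # and restarts its holdtime countdown.
    neighbors[(device_id, local_port)] = {
        "remote_port": remote_port,
        "mgmt_ip": mgmt_ip,
        "expires_at": time.time() + HOLDTIME_SECONDS,
    }

def live_neighbors():
    # Entries that have not been refreshed within the holdtime are aged out.
    now = time.time()
    return {k: v for k, v in neighbors.items() if v["expires_at"] > now}

receive_announcement("core-sw-1", "Gi0/1", "Gi1/24", "10.0.0.2")
print(live_neighbors())
```

The output of this cache is essentially what a command like "Show CDP Neighbors" is reading back to you.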
It may also send some vlan trunking protocol (VTP) information as well because Cisco uses CDP in setting up VTP.\nAny devices that can understand CDP will then store this information for up to 180 seconds (by default) so that they can then determine what device is directly connected to them and what ports they’re connected to.\nTo determine this information on a Cisco switch you can run “Show CDP Neighbors”\nIf you want to show your CDP Settings you may run “Show CDP”\nLink Layer Discovery Protocol (LLDP) Link Layer Discovery Protocol (LLDP) is very similar to CDP but is formalized in the IEEE 802.1AB standard. Since LLDP is a formalized standard, it is used by a variety of vendors including Cisco.\nHow does it work? LLDP works in a very similar fashion to CDP but used more than one multicast address. These frames are not able to be forwarded by a router whereas CDP can be used in a layer 3 network. In fact CDP is used by Cisco to do some routing advertisements.\nLLDP does advertise the hostname, management IP Address, port name and description just as CDP does.\nNeighbor Discovery Protocol (NDP) If you haven’t heard about Neighbor Discovery Protocol (NDP) yet it’s probably because you haven’t had to delve into the depths of IPv6 yet.\nNDP is used on more than just switches and routers. Any computers that have the IPv6 network stack on them should be doing some sort of NDP. This discovery protocol is used to not only discover the neighboring devices, but also the networks they’re on, path selection, DNS Server addresses, Gateways and IP address duplicate prevention! This is a pretty robust protocol that combine’s IPv4’s ARP and ICMP requests.\nI would think that this protocol might replace LLDP and CDP since it is necessary for stateless IP address auto configuration.\n","permalink":"https://theithollow.com/2013/05/28/discovery-protocols/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/05/discovery.jpg\"\u003e\u003cimg alt=\"discovery\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/05/discovery.jpg\"\u003e\u003c/a\u003e If you find yourself in an unfamiliar network and want to understand how the networks are connected, it would certainly be nice to be able to tell what is connected to each other.  Luckily there are a couple of protocols that are responsible for just that.\u003c/p\u003e\n\u003ch1 id=\"cisco-discovery-protocol-cdp\"\u003eCisco Discovery Protocol (CDP)\u003c/h1\u003e\n\u003cp\u003eAs you can probably guess from the name, the Cisco Discovery Protocol is a proprietary protocol from Cisco Systems.\u003c/p\u003e","title":"Discovery Protocols"},{"content":"One of the new features in the latest version of Windows Server is the ability to create server groups. When you open the server manager you\u0026rsquo;ll see some server group options on the dashboard. You can add other servers to manage, or create a new group. Also, pay attention to the \u0026ldquo;Roles and Server Groups\u0026rdquo; section at the bottom of the screen which shows some of the server groups that were already set up.\nIf you click the \u0026ldquo;Create a server group\u0026rdquo; link, a new dialog box will open and show you what servers are currently in the server pool. 
Slide over to the Active Directory Tab.\nHere, I just hit the \u0026ldquo;Find Now\u0026rdquo; button which showed me all of my servers in my domain, but you can filter this by name if you\u0026rsquo;d like by entering it into the CN box.\nI highlighted some servers that I wanted to put into a group, then hit the \u0026ndash;\u0026gt; button. Lastly, I gave this group the name \u0026ldquo;HollowGroup\u0026rdquo;.\nClick OK and you\u0026rsquo;ll have a new group created. If you look under the \u0026ldquo;Roles and Groups\u0026rdquo; section of the dashboard, you\u0026rsquo;ll notice that I have a new card listed \u0026ldquo;HollowGroup\u0026rdquo;. That group also shows some alarms. If you click on the newly created card, you\u0026rsquo;ll get some more detailed information about the servers in that group. In my case, you\u0026rsquo;ll notice I have an alarm listed and that\u0026rsquo;s because the server manager can\u0026rsquo;t communicate with a couple of my servers in the group (they\u0026rsquo;re 2008R2 servers and not Server 2012). To use Windows Server 2008R2 servers in your group, you want to make sure that WinRM 3.0 is installed and running.\nIf you run the Add Roles and Features wizard from the dashboard, you\u0026rsquo;ll also notice that you now have the ability to add these roles and features to any member of the group, without having to log in directly.\n","permalink":"https://theithollow.com/2013/05/20/windows-server-2012-server-groups/","summary":"\u003cp\u003eOne of the new features in the latest version of Windows Server is the ability to create server groups.  When you open the server manager you\u0026rsquo;ll see some server group options on the dashboard.  You can add other servers to manage, or create a new group.  Also, pay attention to the \u0026ldquo;Roles and Server Groups\u0026rdquo; section at the bottom of the screen which shows some of the server groups that were already set up.\u003c/p\u003e","title":"Windows Server 2012 Server Groups"},{"content":"I\u0026rsquo;ve almost always preferred HP laptops for work purposes based on how stable they\u0026rsquo;ve been for me. But while shopping for my last laptop, I decided to try out the Samsung Series 9.\nTo be honest, the biggest reason I decided to look at this laptop was the sleek design. I knew that I would be traveling a lot with my new position and having a lightweight laptop was certainly preferable. In addition, an SSD drive made me feel better about jostling the laptop around without damaging it.\nSpecs CPU: Intel(R) Core(TM) i7-3517U CPU @ 1.90 GHz (2.40 GHz)\nMemory: 8 GB\nGPU: Intel(R) HD Graphics 4000\nDisk: LITEONIT LMT-256M3m (256 GB SSD)\nIntel(R) Centrino(R) Advanced-N 6235 (Wireless N Adapter)\nPerformance As you can imagine, with an i7 processor, 8GB of RAM and an SSD drive, the performance is pretty solid so far. Time will tell what kind of longevity I\u0026rsquo;ll get out of this, but for right now I\u0026rsquo;m more than happy with the giddy-up.\nBooting up the laptop takes less than 10 seconds. It\u0026rsquo;s fast enough that it may limit your coffee breaks because you don\u0026rsquo;t have that 5 minute reboot period.\nBattery Life I\u0026rsquo;ve found the battery life of this laptop to be very good so far, but I\u0026rsquo;ve seen in the past where the battery life will gradually stop performing so well as time goes on.\nSamsung does have a setting that will allow you to stop charging the battery when it hits 80% so that you don\u0026rsquo;t overcharge the battery and cause possible damage. 
I\u0026rsquo;m not sure how many people will be interested in limiting their battery run time though. It would also beg the question, \u0026ldquo;If we can limit the charge to 80%, why can\u0026rsquo;t we limit it to 100%?\u0026rdquo;\nCons The laptop is too thin to have certain ports available on it without a dongle. There is no VGA port, no DVD drive, or even an Ethernet port. An Ethernet dongle is included with the laptop though.\nI also had an issue where the mouse would stop working, but it seems to have been fixed once I updated all of the firmware.\n","permalink":"https://theithollow.com/2013/05/13/samsung-series-9-laptop-review/","summary":"\u003cp\u003eI\u0026rsquo;ve almost always preferred HP laptops for work purposes based on how stable they\u0026rsquo;ve been for me.  But while shopping for my last laptop, I decided to try out the Samsung Series 9.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/05/samsung91.jpg\"\u003e\u003cimg alt=\"samsung91\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/05/samsung91.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eTo be honest, the biggest reason I decided to look at this laptop was the sleek design.  I knew that I would be traveling a lot with my new position and having a lightweight laptop was certainly preferable.  In addition, an SSD drive made me feel better about jostling the laptop around without damaging it.\u003c/p\u003e","title":"Samsung Series 9 Laptop Review"},{"content":"\nUntil recently, I never paid too much attention to flow control. I knew that it was used in networking, and that it was a setting that sometimes needed to be modified when I would PuTTY or HyperTerminal into a device, but that pretty much ended my knowledge of the matter.\nAs the name suggests, \u0026ldquo;Flow Control\u0026rdquo; will limit the amount of data across a network interface. It\u0026rsquo;s a pretty simple concept, but typically we\u0026rsquo;re not trying to slow down our network, but rather speed it up. Flow control can be used to slow traffic down rather than dropping frames.\nHalf Duplex Flow Control When Half Duplex was commonplace, Cisco used \u0026ldquo;back pressure\u0026rdquo; to control how fast data was coming in. Cisco would essentially just send frames down the line to create collisions.\nHalf Duplex Flow Control by using \u0026lsquo;Back Pressure\u0026rsquo;\nOf course this doesn\u0026rsquo;t work on full duplex links because you can send and receive at the same time. So, a new method needed to be devised to handle the communication speed.\nFull Duplex Example\nPause Frames When a network device needs to slow down a sender, it can issue a \u0026ldquo;pause frame\u0026rdquo; which tells the sender to stop transmitting frames for a period of time which is included in the pause frame. Pause frames are implemented at Layer 2 of the OSI stack, below IP and TCP, which means they have very basic functionality and really limits their usability.\nProblems with Pause Frames Let\u0026rsquo;s look at a couple more examples using flow control. Here, we see an issue where three clients: A, B, and C are all trying to reach theITHollow.com using 50% of their line capacity. 
The receiver then is overloaded and can\u0026rsquo;t handle all of the requests.\nNo Flow Control\nWe could implement flow control so that the senders will get \u0026ldquo;pause\u0026rdquo; frames and slow down their transmissions so that they don\u0026rsquo;t overload the receiver any more.\nFlow Control Implemented\nAn issue remains that since these pause frames are implemented at layer 2 of the OSI model, all Ethernet traffic on the interfaces that receive the pause frame are impacted. In the example below, we see that flow control is implemented and slows down the transmissions of the clients A,B and C in order to keep the receiver from being overloaded. However if clients A and C have additional network capacity and want to communicate with each other, they are also \u0026ldquo;paused\u0026rdquo;\nAll Traffic Limited due to Flow Control\n","permalink":"https://theithollow.com/2013/05/07/flow-control-explained/","summary":"\u003cp\u003e\u003cimg alt=\"dam\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/04/dam-300x225.jpg\"\u003e\u003c/p\u003e\n\u003cp\u003eUntil recently, I never paid too much attention to flow control.  I knew that it was used in networking, and that it was a setting that sometimes needed modified when I would puttyhyperterminal into a device, but that pretty much ended my knowledge of the matter.\u003c/p\u003e\n\u003cp\u003eAs the name suggests, \u0026ldquo;Flow Control\u0026rdquo; will limit the amount of data across a network interface.  It\u0026rsquo;s a pretty simple concept but typically we\u0026rsquo;re not trying to slow down our network, but rather speed it up.  Flow control can be used to slow traffic down rather than dropping frames.\u003c/p\u003e","title":"Flow Control Explained"},{"content":" Microsoft has made the Exchange 2013 transition from Exchange 2010 a bit easier than it was in the past. This article should help to explain the process.\nPrerequisites Before you begin with this endeavor:\nMake sure that your Exchange 2010 infrastructure has been patched to Exchange Service Pack 3, this includes Edge transport servers, Client Access Servers, Hub Transport Servers and Mailbox Servers. This service pack is required for the coexistence period with Exchange 2013 as noted in the Exchange Team\u0026rsquo;s Blog. Say goodbye to Exchange 2003. You can not have Exchange 2003 in your organization any longer. Check your DNS Server and Event logs for errors. It\u0026rsquo;s unlikely that you had DNS errors before an upgrade that you didn\u0026rsquo;t already know about but it\u0026rsquo;s certainly worth taking a look just to check. A few minutes of discovery is well worth not having hours of troubleshooting afterwards. Plan your Exchange 2013 infrastructure. This article only explains the transition steps, but you should research and understand what your infrastructure should look like before you start a migration. Do you have multiple sites that need High Availability? Do you need multiple Exchange servers in a Database Availability Group? Do you need to separate your Client Access Server from your Mailbox Server for performance or management reasons, or can you put them on the same box? How many different Mailbox databases should you have? These are important design considerations. Licensing There are two flavors of Exchange 2013. 
Standard allows for up to five mailbox databases, and Enterprise allows for up to 50.\nYou don\u0026rsquo;t need to download one type or another, the same install works for both editions, but entering a product key unlocks the additional features. There is also a 120 day evaluation period that can be utilized and according to Technet when this evaluation period expires, no functionality is lost so it can be used for labs and non-production equipment. It sounds very much like this is on the honor system.\nClient Access Licenses (CALS) must also be purchased and also come in Standard or Enterprise editions. However, you should note that just because you have an Enterprise edition of Exchange Server, doesn\u0026rsquo;t mean you need an Enterprise CAL. Standard CALs can be used to achieve basic mail functionality.\nInstall Exchange 2013 In this design I will be standing up a single CAS server and a single Mailbox server. I have one existing Exchange 2010 server that I will be transitioning from.\nYou should be using the latest installer, but at the time of this post Exchange 2013 CU1 is required for coexistence with Exchange 2010, so that is a minimum. Also, make sure that the user that is logged in to install Exchange is a member of Schema Admins, because the installer will need permission to modify Active Directory Schema.\nAs a time saver, you might take a look at the errors I received when doing the install. When I ran setup it politely explained to me that I needed to install some stuff. I\u0026rsquo;ve added my errorswarnings so you can install this ahead of time.\nError:\nThis computer requires the Microsoft Unified Communications Managed API 4.0, Core Runtime 64-bit. Please install the software from http://go.microsoft.com/fwlink/?LinkId=260990.\nWarning:\nThis computer requires the Microsoft Office 2010 Filter Packs - Version 2.0. Please install the software from http://go.microsoft.com/fwlink/?LinkID=191548.\nWarning:\nThis computer requires the Microsoft Office 2010 Filter Packs - Version 2.0 - Service Pack 1. Please install the software from http://go.microsoft.com/fwlink/?LinkId=262358.\nError:\nThe Windows component RSAT-Clustering-CmdInterface isn\u0026rsquo;t installed on this computer and needs to be installed before Exchange Setup can begin.\nI also needed the RSAT tools for Active Directory when I installed CU1.\nDownload the latest installer and run the setup.exe. You\u0026rsquo;ll be presented with a familiar wizard. Once nice thing is Exchange can connect to the Internet to get the latest updates. This is especially nice based off of how new Exchange 2013 is. If there are bug fixes, you can get them at install time.\nThe Introduction screen gives you access to the Exchange 2013 Deployment Assistant and some helpful links for your new server. Click next.\nRead every word of the EULA, consult your attorneys and then choose \u0026ldquo;I accept\u0026rdquo; and move on.\nIt\u0026rsquo;s really up to you how much data you want to send to Microsoft. Pick your pill and move along.\nChoose the server roles you plan on deploying. I\u0026rsquo;ve chosen the Mailbox role only for this install as I will install a CAS server later. You may be wondering where the Hub Transport option is, it doesn\u0026rsquo;t exist in Exchange 2013. Microsoft has slimed down the number of roles needed. (Unified Communications is considered a sub component)\nPick your Exchange server install directory.\nChoose your malware settings. 
This used to be a product called Microsoft Forefront that is now included in Exchange 2013.\nNow the install process actually starts. You\u0026rsquo;ll see an Organization prep happen which handles the schema changes. If you don\u0026rsquo;t have sufficient permissions you\u0026rsquo;ll get an error.\nWhen all of the magic stuff is done, you\u0026rsquo;ll see the following screen.\nRepeat the same steps on your CAS Server, except this time choose only the CAS role.\nPart 2 will cover migrating mailboxes and some of the news Exchange 2013 things to be aware of.\nMicrosoft Exchange 2010 to Exchange 2013 Transition Part 2 Microsoft Exchange 2010 to Exchange 2013 Transition Part 3 Microsoft Exchange 2010 to Exchange 2013 Transition Part 4\n","permalink":"https://theithollow.com/2013/04/29/microsoft-exchange-2010-to-exchange-2013-transition-part-1/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/04/R2D2Mailbox.jpg\"\u003e\u003cimg alt=\"R2D2Mailbox\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/04/R2D2Mailbox-150x150.jpg\"\u003e\u003c/a\u003e   Microsoft has made the Exchange 2013 transition from Exchange 2010 a bit easier than it was in the past.  This article should help to explain the process.\u003c/p\u003e\n\u003ch1 id=\"prerequisites\"\u003ePrerequisites\u003c/h1\u003e\n\u003cp\u003eBefore you begin with this endeavor:\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eMake sure that your Exchange 2010 infrastructure has been patched to \u003ca href=\"http://www.microsoft.com/en-us/download/details.aspx?id=36768\"\u003eExchange Service Pack 3\u003c/a\u003e, this includes Edge transport servers, Client Access Servers, Hub Transport Servers and Mailbox Servers.  This service pack is required for the coexistence period with Exchange 2013 as noted in the \u003ca href=\"http://blogs.technet.com/b/exchange/archive/2013/02/12/released-exchange-server-2010-sp3.aspx\"\u003eExchange Team\u0026rsquo;s Blog\u003c/a\u003e.\u003c/li\u003e\n\u003cli\u003eSay goodbye to Exchange 2003.  You can not have Exchange 2003 in your organization any longer.\u003c/li\u003e\n\u003cli\u003eCheck your DNS Server and Event logs for errors.  It\u0026rsquo;s unlikely that you had DNS errors before an upgrade that you didn\u0026rsquo;t already know about but it\u0026rsquo;s certainly worth taking a look just to check.  A few minutes of discovery is well worth not having hours of troubleshooting afterwards.\u003c/li\u003e\n\u003cli\u003ePlan your Exchange 2013 infrastructure.  This article only explains the transition steps, but you should research and understand what your infrastructure should look like before you start a migration.  Do you have multiple sites that need High Availability?  Do you need multiple Exchange servers in a Database Availability Group?  Do you need to separate your Client Access Server from your Mailbox Server for performance or management reasons, or can you put them on the same box?  How many different Mailbox databases should you have?  These are important design considerations.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch1 id=\"licensing\"\u003eLicensing\u003c/h1\u003e\n\u003cp\u003eThere are two flavors of Exchange 2013.  
Standard allows for up to five mailbox databases, and Enterprise allows for up to 50.\u003c/p\u003e","title":"Microsoft Exchange 2010 to Exchange 2013 Transition (part 1)"},{"content":"Microsoft Exchange 2010 to Exchange 2013 Transition part 1 Microsoft Exchange 2010 to Exchange 2013 Transition part 3 Microsoft Exchange 2010 to Exchange 2013 Transition part 4\nI assume you\u0026rsquo;ve reached this page because you finished ready part 1 and are now ready to dive into your newly installed Exchange 2013 server.\nExchange Admin Center Normally your first instinct at this point would be to fire up the Exchange Management Console and poke around. Unfortunately, Exchange 2013 does not have the traditional GUI. Don\u0026rsquo;t worry though, for all of you System Administrators that aren\u0026rsquo;t cozy with Exchange Powershell yet, there is a web console where most of the heavy lifting can be done.\nYou can reach the EAC by going to https://EXCHANGE2013SERVERNAME/ecp under normal circumstances to get an OWA type login.\nI say under normal circumstances because if your login\u0026rsquo;s mailbox doesn\u0026rsquo;t exist on the Exchange 2013 infrastructure yet, you will receive an error message. You may need to force yourself to the Exchange 2013 server by going to: https://EXCHANGE2013SERVERNAME/ecp?ExchClientVer=15.\nOnce you\u0026rsquo;re logged in, you\u0026rsquo;ll probably have no problem getting around, but it is kind of nice to have an example of what each of the menus are called. It\u0026rsquo;s easier than saying the menu on the right or the top middle menu.\nAt this point you could start moving mailboxes over to your new server, but I would suggest verifying some of your settings before you do that to make sure everything is how you like it.\nOne of the first things I checked was the receive connectors. Exchange 2010 didn\u0026rsquo;t give anonymous users access to the receive connectors by default so your new Exchange Server wouldn\u0026rsquo;t receive Internet Email. That wasn\u0026rsquo;t very useful for most people and in my install I found that Microsoft has changed this default setting.\nBy clicking the pencil edit button on the toolbar you can modify the receive connectors. I made no changes to the existing connections.\nSet your permissions on the receive connector.\nSet the IP Addresses for the receive connector.\nYou might have guessed that checkingsetting your send connectors would be the next logical step.\nSetup your SMTP connector to either use DNS or send everything through a smarthost.\nSet your address spaces! The below settings are default and should work in most cases.\nCheck your permissions! You should make sure that another account has access to the EAC to make sure at least two people have access. This is a good management practice.\nIf you are not sure about what permissions should be modified, take a look at the following article from Microsoft Technet. 
http://technet.microsoft.com/en-us/library/dd638105(v=exchg.150).aspx\nAlso, these roles can be modified via Active Directory now as long as your organization doesn\u0026rsquo;t want to do Exchange Split Permissions.\nMicrosoft Exchange 2010 to Exchange 2013 Transition part 1 Microsoft Exchange 2010 to Exchange 2013 Transition part 3 Microsoft Exchange 2010 to Exchange 2013 Transition part 4 Microsoft Exchange 2010 to Exchange 2013 Transition (part 1)\n","permalink":"https://theithollow.com/2013/04/29/microsoft-exchange-2010-to-exchange-2013-transition-part-2/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/04/R2D2Mailbox.jpg\"\u003e\u003cimg alt=\"R2D2Mailbox\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/04/R2D2Mailbox-150x150.jpg\"\u003e\u003c/a\u003e\u003ca href=\"/2013/04/microsoft-exchange-2010-to-exchange-2013-transition-part-1/\" title=\"Microsoft Exchange 2010 to Exchange 2013 Transition (part 1)\"\u003eMicrosoft Exchange 2010 to Exchange 2013 Transition part 1\u003c/a\u003e \u003ca href=\"/2013/04/microsoft-exchange-2010-to-exchange-2013-transition-part-3/\" title=\"Microsoft Exchange 2010 to Exchange 2013 Transition (part 3)\"\u003eMicrosoft Exchange 2010 to Exchange 2013 Transition part 3\u003c/a\u003e \u003ca href=\"/2013/04/microsoft-exchange-2010-to-exchange-2013-transition-part-4/\" title=\"Microsoft Exchange 2010 to Exchange 2013 Transition (part 4)\"\u003eMicrosoft Exchange 2010 to Exchange 2013 Transition part 4\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eI assume you\u0026rsquo;ve reached this page because you finished ready \u003ca href=\"/2013/04/microsoft-exchange-2010-to-exchange-2013-transition-part-1/\" title=\"Microsoft Exchange 2010 to Exchange 2013 Transition (part 1)\"\u003epart 1\u003c/a\u003e and are now ready to dive into your newly installed Exchange 2013 server.\u003c/p\u003e","title":"Microsoft Exchange 2010 to Exchange 2013 Transition (part 2)"},{"content":"Microsoft Exchange 2010 to Exchange 2013 Transition part 1 Microsoft Exchange 2010 to Exchange 2013 Transition part 2 Microsoft Exchange 2010 to Exchange 2013 Transition part 4\nMigrate Mailboxes You\u0026rsquo;re ready to migrate your mailboxes! Go to the Recipient link, mailbox category and choose the mailbox(es) you want to migrate. I\u0026rsquo;ll be migrating Ferb@hollow.lab to the new servers. Click \u0026ldquo;To another database\u0026rdquo; action on the lower right hand side of the menu.\nPick what mailbox database you want to move to. Notice that you will see both of your mailbox servers listed even though they are different versions. In previous incantations of Exchange, routing group connectors needed to be created between servers. Things have changed a bit with 2013 in terms of routing which is outside the scope of this post. See http://technet.microsoft.com/en-us/library/aa998825(v=exchg.150).aspx for more details on this subject.\nCreate a migration batch by entering a name. This is a nice new feature where you can create all your moves ahead of your migration and schedule them for whenever you want. You also have the option to move archive mailboxes along with your primary mailbox.\nSetup an email address to mail a move report when it\u0026rsquo;s done, and you can start the batch you just created, right away. In order to show the migration batches, I set it to be manual.\nNow if we look at the migration tab we can see the Migration-1 job that I created and it has 1 mailbox listed. 
Hit the Play button in the toolbar with your job selected and it will begin the job.\nOnce it\u0026rsquo;s done you\u0026rsquo;ll see it in your batches, along with a report.\nIf you have a large number of mailboxes that you plan on moving and want to do them all at once, I suggest using the Exchange Shell instead:\nGet-Mailbox -Database 2010Databasename -ResultSize Unlimited | New-MoveRequest -TargetDatabase 2013Databasename Microsoft Exchange 2010 to Exchange 2013 Transition part 1 Microsoft Exchange 2010 to Exchange 2013 Transition part 2 Microsoft Exchange 2010 to Exchange 2013 Transition part 4\n","permalink":"https://theithollow.com/2013/04/29/microsoft-exchange-2010-to-exchange-2013-transition-part-3/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/04/R2D2Mailbox.jpg\"\u003e\u003cimg alt=\"R2D2Mailbox\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/04/R2D2Mailbox-150x150.jpg\"\u003e\u003c/a\u003e\u003ca href=\"/2013/04/microsoft-exchange-2010-to-exchange-2013-transition-part-1/\" title=\"Microsoft Exchange 2010 to Exchange 2013 Transition (part 1)\"\u003eMicrosoft Exchange 2010 to Exchange 2013 Transition part 1\u003c/a\u003e \u003ca href=\"/2013/04/microsoft-exchange-2010-to-exchange-2013-transition-part-2/\" title=\"Microsoft Exchange 2010 to Exchange 2013 Transition (part 2)\"\u003eMicrosoft Exchange 2010 to Exchange 2013 Transition part 2\u003c/a\u003e \u003ca href=\"/2013/04/microsoft-exchange-2010-to-exchange-2013-transition-part-4/\" title=\"Microsoft Exchange 2010 to Exchange 2013 Transition (part 4)\"\u003eMicrosoft Exchange 2010 to Exchange 2013 Transition part 4\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"migrate-mailboxes\"\u003eMigrate Mailboxes\u003c/h1\u003e\n\u003cp\u003eYou\u0026rsquo;re ready to migrate your mailboxes!  Go to the Recipient link, mailbox category and choose the mailbox(es) you want to migrate.  I\u0026rsquo;ll be migrating \u003ca href=\"mailto:Ferb@hollow.lab\"\u003eFerb@hollow.lab\u003c/a\u003e to the new servers.  Click \u0026ldquo;To another database\u0026rdquo; action on the lower right hand side of the menu.\u003c/p\u003e","title":"Microsoft Exchange 2010 to Exchange 2013 Transition (part 3)"},{"content":"Microsoft Exchange 2010 to Exchange 2013 Transition Part 1 Microsoft Exchange 2010 to Exchange 2013 Transition Part 2 Microsoft Exchange 2010 to Exchange 2013 Transition Part 3\nI want to take a second to explain that this series of posts on how to migrate to Exchange 2013 didn\u0026rsquo;t come without it\u0026rsquo;s share of difficulties.\nUpon the installation of Exchange 2013 I was able to connect to the Exchange Admin Console without issue. I documented all of the procedures from the first three posts. However after I restarted my Exchange Server I was unable to login anymore. I would continually get a blank page which I\u0026rsquo;ve seen others having the same issues.\nhttp://social.technet.microsoft.com/Forums/en-US/exchangesvrclients/thread/fb105a41-48df-4b85-ac07-ba94d767b9e8/ http://nethop.wordpress.com/2013/02/07/cannot-access-exchange-contorl-panel-ecp-in-exchange-server-2013/\nI was unable to fix this issue using any of the methods described and built a new Exchange 2013 server thinking I did something wrong. After the second Exchange Server did the exact same thing, I\u0026rsquo;ve decided that Microsoft needs to add a patch for whatever problem is happening. 
Maybe it\u0026rsquo;s a bug with IIS, Server 2012, or Exchange 2013, but I have not made any modifications to the IIS virtual server directories, so I\u0026rsquo;m concluding that it\u0026rsquo;s a Microsoft bug.\nIf anyone has suggestions for other users, please post them below so that we can get this solved. I\u0026rsquo;ll update this post if I ever find a solid solution to this puzzle.\n","permalink":"https://theithollow.com/2013/04/29/microsoft-exchange-2010-to-exchange-2013-transition-part-4/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/04/R2D2Mailbox.jpg\"\u003e\u003cimg alt=\"R2D2Mailbox\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/04/R2D2Mailbox-225x300.jpg\"\u003e\u003c/a\u003e\u003ca href=\"/2013/04/microsoft-exchange-2010-to-exchange-2013-transition-part-1/\" title=\"Microsoft Exchange 2010 to Exchange 2013 Transition (part 1)\"\u003eMicrosoft Exchange 2010 to Exchange 2013 Transition Part 1\u003c/a\u003e \u003ca href=\"/2013/04/microsoft-exchange-2010-to-exchange-2013-transition-part-2/\" title=\"Microsoft Exchange 2010 to Exchange 2013 Transition (part 2)\"\u003eMicrosoft Exchange 2010 to Exchange 2013 Transition Part 2\u003c/a\u003e \u003ca href=\"/2013/04/microsoft-exchange-2010-to-exchange-2013-transition-part-3/\" title=\"Microsoft Exchange 2010 to Exchange 2013 Transition (part 3)\"\u003eMicrosoft Exchange 2010 to Exchange 2013 Transition Part 3\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eI want to take a second to explain that this series of posts on how to migrate to Exchange 2013 didn\u0026rsquo;t come without its share of difficulties.\u003c/p\u003e","title":"Microsoft Exchange 2010 to Exchange 2013 Transition (part 4)"},{"content":" VMware Resource Pools are not something that should be thrown into every vSphere implementation! I considered not writing this post, because of all of the blogs I\u0026rsquo;ve seen that have written about this already. If you don\u0026rsquo;t believe me, check out a few of these:\nhttp://www.ntpro.nl/blog/archives/1512-dont-add-resource-pools-for-fun,-theyre-dangerous.html http://wahlnetwork.com/2012/02/01/understanding-resource-pools-in-vmware-vsphere/ http://www.yellow-bricks.com/2009/11/13/resource-pools-and-shares/ http://frankdenneman.nl/2010/05/18/resource-pools-memory-reservations/\nUnfortunately, I continue to hear resource pools being misunderstood. Don\u0026rsquo;t get me wrong, these are great tools and have a place in your arsenal, but they are used for a very specific reason.\nI’m rarely asked about resource pools because they seem very straightforward. The VMware customers I’ve worked with usually think it’s a good idea to put in a rule to handle situations when resource contention occurs. This usually means a pair of resource pools: one for Production and a second one for Lab/Dev/Test, etc.\nProblem: In the event that I have resource contention, I want my production machines to have more access to the physical machine hardware.\nWitnessed solution: Two resource pools are created, one of which has the production servers in it. The production resource pool is given a higher number of shares than the secondary resource pool.\nProblem with the witnessed solution: A resource pool share setting stays the same no matter what number of virtual machines are housed in it. 
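Before we get to the numbers, here is a rough PowerCLI check you can run against your own environment to see how many shares each VM in a pool actually ends up with (my addition, not from the post; it assumes you are already connected to vCenter with Connect-VIServer, and it uses the same simplified shares-divided-by-VM-count math as the example below):
# Every pool except the hidden root pool ("Resources"), with share values divided by VM count
Get-ResourcePool | Where-Object { $_.Name -ne "Resources" } | ForEach-Object {
    $vmCount = (Get-VM -Location $_).Count
    [PSCustomObject]@{
        Pool           = $_.Name
        VMs            = $vmCount
        CpuSharesPerVM = [math]::Round($_.NumCpuShares / [math]::Max(1, $vmCount))
        MemSharesPerVM = [math]::Round($_.NumMemShares / [math]::Max(1, $vmCount))
    }
} | Format-Table -AutoSize
If the per-VM numbers for your "high" pool come out lower than the ones for your "low" pool, you have the exact problem the example below walks through.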
Let me provide a very simplistic example.\nPhysical Resources:\n- 10 CPUs\n- 100 GBs of RAM\nResource Pools\n- Production (High Shares of CPU and Memory – 4000)\n- Lab (Low Shares of CPU and Memory – 1000)\nVirtual Machines\n- 100 virtual machines in production resource pool\n- 1 virtual machine in the lab resource pool\nThe production resource pool has 10 CPUs * 4000 shares = share value of 40000 and 100 GBs * 4000 shares = share value of 400000.\nThe lab resource pool has 10 CPUs * 1000 shares = share value of 10000 and 100 GBs * 1000 shares = share value of 100000.\nSo to break this down we have:\n- Production Resource Pool: 40000 CPU shares, 400000 memory shares\n- Lab Resource Pool: 10000 CPU shares, 100000 memory shares\nMany customers at this point don’t see the issue yet. But look what happens when you divide the shares by the number of virtual machines in each pool!\n- Production Resource Pool: 40000 / 100 VMs = 400 CPU shares per VM, and 400000 / 100 VMs = 4000 memory shares per VM\n- Lab Resource Pool: 10000 / 1 VM = 10000 CPU shares per VM, and 100000 / 1 VM = 100000 memory shares per VM\nNet results show that each virtual machine in the production resource pool gets 400 shares of CPU and 4,000 shares of memory, compared to the Lab resource pool virtual machine that gets 10,000 shares of CPU and 100,000 shares of memory! In other words, the single lab VM is entitled to 25 times the CPU and memory shares of any single production VM during contention.\nThis is not what customers are intending to do.\nThe answer to how to fix this can be found on Chris Wahl\u0026rsquo;s site http://wahlnetwork.com/2012/02/01/understanding-resource-pools-in-vmware-vsphere/ where Chris expertly demonstrates the correct way to figure out the math on resource pools and, better yet, shows how to dynamically keep these pools in check by using a PowerCLI script.\nMy point is, be sure you\u0026rsquo;re not just throwing in resource pools without doing some calculations beforehand.\n","permalink":"https://theithollow.com/2013/04/24/resource-pools-are-not-for-everyone/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/04/resourcepool.png\"\u003e\u003cimg alt=\"resourcepool\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/04/resourcepool-300x173.png\"\u003e\u003c/a\u003e VMware Resource Pools are not something that should be thrown into every vSphere implementation!  I considered not writing this post, because of all of the blogs I\u0026rsquo;ve seen that have written about this already.  If you don\u0026rsquo;t believe me, check out a few of these:\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://www.ntpro.nl/blog/archives/1512-dont-add-resource-pools-for-fun,-theyre-dangerous.html\"\u003ehttp://www.ntpro.nl/blog/archives/1512-dont-add-resource-pools-for-fun,-theyre-dangerous.html\u003c/a\u003e \u003ca href=\"http://wahlnetwork.com/2012/02/01/understanding-resource-pools-in-vmware-vsphere/\"\u003ehttp://wahlnetwork.com/2012/02/01/understanding-resource-pools-in-vmware-vsphere/\u003c/a\u003e \u003ca href=\"http://www.yellow-bricks.com/2009/11/13/resource-pools-and-shares/\"\u003ehttp://www.yellow-bricks.com/2009/11/13/resource-pools-and-shares/\u003c/a\u003e \u003ca href=\"http://frankdenneman.nl/2010/05/18/resource-pools-memory-reservations/\"\u003ehttp://frankdenneman.nl/2010/05/18/resource-pools-memory-reservations/\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eUnfortunately, I continue to hear resource pools being misunderstood.  
Don\u0026rsquo;t get me wrong, these are great tools and have a place in your arsenal, but they are used for a very specific reason.\u003c/p\u003e","title":"Resource Pools are NOT for Everyone"},{"content":"When I was a Systems Administrator, one of the things we wanted to know was if there were issues with our Active Directory environment. Things like directory health, stale computers, and if someone had modified the domain admins group were at the top of that list.\nThe scripts below were built in an attempt to give a quick overview of the Active Directory. These plugins were built on top of Alan Renouf\u0026rsquo;s vCheck ( @alanrenouf) which provides a great framework for the building of your own scripts. Check out his site if you haven\u0026rsquo;t already!\nTo install the vCheck, simply extract the contents of the zip file to a directory on your computer. If you are running this against a Server 2008 R2 server you\u0026rsquo;ll need to enable WinRM and install the AD Web Services. I ran my tests against a Server 2012 domain controller with no issues.\nIf you haven\u0026rsquo;t modified your signing settings, you may need to change your powershell options to \u0026ldquo;RemoteSigned. http://technet.microsoft.com/library/hh847748.aspx\nSet-ExecutionPolicy -ExecutionPolicy RemoteSigned Once you\u0026rsquo;ve extracted the components, simple run the vcheck.ps1 script which will ask you some questions to customize the script for your own environment.\nThe following items will be checked and give you a report:\nThe Current Flexible Single Master Operators Role locations Any user accounts that are currently locked out A quick count of Users and Computers The list of users in the Domain Admins group User accounts that are set with \u0026ldquo;Account does not expire\u0026rdquo; Stale Computers DCDiag That\u0026rsquo;s it! You can then sit back and run your report anytime you want need an update, or schedule the script and run it daily! Here is an example report.\nDownload the Active Directory vCheck Plugin here: ADvCheck.zip\nIt\u0026rsquo;s pretty easy to add additional plugins to suit your own needs as well, just write your PowerShell scripts and throw them in the plugins directory.\nThanks to Alan Renouf as well as PowerCLI man ( @PowerCliMan) for the tool!\n","permalink":"https://theithollow.com/2013/04/16/active-directory-vcheck/","summary":"\u003cp\u003eWhen I was a Systems Administrator, one of the things we wanted to know was if there were issues with our Active Directory environment.  Things like directory health, stale computers, and if someone had modified the domain admins group were at the top of that list.\u003c/p\u003e\n\u003cp\u003eThe scripts below were built in an attempt to give a quick overview of the Active Directory.  These plugins were built on top of \u003ca href=\"http://www.virtu-al.net\"\u003eAlan Renouf\u0026rsquo;s vCheck\u003c/a\u003e ( \u003ca href=\"http://www.twitter.com/alanrenouf\"\u003e@alanrenouf\u003c/a\u003e) which provides a great framework for the building of your own scripts.  Check out his site if you haven\u0026rsquo;t already!\u003c/p\u003e","title":"Active Directory vCheck"},{"content":"QLogic has introduced a new product that combines their already reliable Fibre Channel host bus adapters with solid state storage in order to do caching. Think Fusion-IO cards with a Fibre Channel HBA as well. (Yes I know that\u0026rsquo;s an over simplification)\nThe new QLogic cards come in 2 flavors. 
A 200GB SSD option and a 400GB SSD option, both of which are 8Gb Fibre Channel. I\u0026rsquo;ve been told that 8Gb was used to get started with this concept because it was already proven and solid, where as the 16Gb Fibre is much newer. I\u0026rsquo;m sure these cards will be a hit and 16Gb Fibre cards are in the works with even larger capacities.\nThe adapters work much like you would expect, where split writes are created, one to the storage device, and one to the SSD housed on the daughter card. Then read operations that are still in the SSD cache don\u0026rsquo;t have to be sent to the SAN. I\u0026rsquo;ve been told that these adapters can also be setup as targets so that they can share their cache between adapters and in the future may be able to mirror their cache which would be beneficial for virtualized environments.\nTraditionally, caching has been done on the SAN which requires the controllers to still do some work to fetch the data. Using the HBA to do the caching, you can eliminate the controller from having to retrieve this data leaving more room on the SAN for other things. Also, the caching done on the SAN might be caching for 100 servers. The data that is stored in that cache may not be useful to some of the servers in your environment so they don\u0026rsquo;t get any performance benefit out of it. Caching on the HBA can guarantee that a specific server is getting the cache benefits.\nIf caching isn\u0026rsquo;t done on the SAN and you need extremely high IOPS, Fusion-IO boards have been used in the past to great success. Unfortunately, this is like adding direct attached storage so it can only benefit one machine at a time unless you\u0026rsquo;re using some additional replication software.\nThese QLogic cards can give benefits of both direct attached storage as well as traditional SAN caching at the same time, which is clearly a nice benefit.\nCons Even though QLogic is advertising over 300,000 IOPS with these cards, they do have a few downsides as of right now, probably because they are so new.\nThey require a rack mount server to be installed in. If you have blades these cards won\u0026rsquo;t work for you. In addition, these cards require side by side PCI-Express ports available for the HBA and the daughter card which is attached by ribbon cable. This will limit the number of systems that are good candidates to try these out. Lastly, even though these cards are dual port, they become a single point of failure because both ports are on a single card. You could get two of them, but you\u0026rsquo;re looking at 4 PCI-Express slots being available and I\u0026rsquo;m guessing this will be pretty unlikely for most people. The good news is that since these are just used for caching, the data is still available on the SAN so you haven\u0026rsquo;t lost data because of a card failure.\nThoughts I think this is a great idea for the future, but probably isn\u0026rsquo;t going to be mainstream for a while until some of the limitations can be overcome. Look for Fusion-IO to do something similar to add Fibre Channel functionality to their cards as well.\nIf this is something interesting to you, check out QLogic\u0026rsquo;s site and request a demo. www.qlogic.com\n","permalink":"https://theithollow.com/2013/04/15/qlogic-10000-series-adapters/","summary":"\u003cp\u003eQLogic has introduced a new product that combines their already reliable Fibre Channel host bus adapters with solid state storage in order to do caching.  Think Fusion-IO cards with a Fibre Channel HBA as well.  
(Yes I know that\u0026rsquo;s an over simplification)\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/04/10000series.jpg\"\u003e\u003cimg alt=\"10000series\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/04/10000series.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThe new QLogic cards come in 2 flavors.  A 200GB SSD option and a 400GB SSD option, both of which are 8Gb Fibre Channel.  I\u0026rsquo;ve been told that 8Gb was used to get started with this concept because it was already proven and solid, where as the 16Gb Fibre is much newer.  I\u0026rsquo;m sure these cards will be a hit and 16Gb Fibre cards are in the works with even larger capacities.\u003c/p\u003e","title":"QLogic 10000 Series Adapters"},{"content":"My configuration is listed below if anyone is interested in the details.\nSimilar designs have been done before by both Chris Wahl @Chriswahl and prior to that by Phillip Jaenke @RootWyrm who called them \u0026quot; Baby Dragons\u0026quot;. I used their base config and made a few tweaks of my own based on pricing, part availability etc.\nPart List ESXi Servers - Quantity 2 Case: Lian Li PC-V351B MicroATX PSU: SeaSonic Platinum SS-400FL2 Fanless 400W RAM: Kingston 16GB (4 X 8GB) 240-Pinn DDR3 Unbufferred ECC Motherboard: Supermicro MBD-X9SCM-F-O CPU: Intel Xeon E3-1230 V2 Ivy Bridge 3.3GHz NICs: Intel EXPI9301CTBLK 1000Mbps PCI-Express, SuperMicro Dual Port Gigabit Card Boot: Kingston DataTraveler 101 G2 8GB USB 2.0 Local SSD: 64 GB Intel SSD Flex Server - Quantity 1 (Used for a Hyper-V server, VSA or 3rd ESXi Host) Case: HP Gen8 Microserver Storage: 4 480GB SSD\u0026rsquo;s from OCZ Storage Array Synology Array: 1- Synology DS1513+ Hard Drives: 5 1 TB Wester Digital Blue 7200 3.5 inch hard drives Networking Equipment Layer 3 Switch: Cisco WS03750G-24T Switch Firewall: Cisco ASA Wireless Router: Dlink Wireless N+ Router ESXi Server Notes: I\u0026rsquo;m not going to lie, when I saw these cases on Chris Wahl\u0026rsquo;s lab and had to have them. They look crazy sharp and I love the pull out Motherboard mounting option. There are other components in common but I have a feeling these were copied because of a similar taste for components and budget rather than lust! :)\nThe power supplies were a must for me. Fanless keeps things quiet, but if you are going with this design please look at Phillip Jaenke\u0026rsquo;s post about his Baby Dragon II build. The fan must be mounted so that it vents heat up (duh) but the Lian Li case doesn\u0026rsquo;t seem to want to mount it that way. Luckily the PSU is mesh and can be screwed in upside down pretty snuggly with no issues. Be aware that you\u0026rsquo;ll want good air flow.\nThe Motherboard is awesome and provides dual 1Gb Nics as well as an IPMI connection! I was skeptical that I wanted to pay extra for a server motherboard with IPMI but I am not sorry. It\u0026rsquo;s great and has an internal USB port for the Kingston USB boot device.\nI added an Intel Gigabit Nic which did not work out of the box. I needed to add a driver which was easy enough to do. There are some instructions for this on Virtual-Drive.in In retrospect I should have added a second one, but I got cheap. 
I\u0026rsquo;ll probably add another one later on.\nHP Gen8 MicroServer Actually used as a Provider vDC, a Hyper-V Server 2012 R2 server, or a NAS device.\nStorage Storage consists of a Synology DS411 slim full of 480GB SSDs and a Windows Server 2012 that is running both \u0026ldquo;Server for NFS\u0026rdquo; as well as \u0026ldquo;Microsoft iSCSI Software Target\u0026rdquo;. This allows for additional storage as well as an opportunity to test out some NFS and iSCSI methodologies.\nWindows Server 2012Whitebox Server iSCSI Share 240 GB SSD iSCSI Share 2TB 7200 RPM SATA NFS Share 3 TB 5400 RPM SATA Networking For networking, I have a WS-C3750G-24T that is used as my core switch and router. I\u0026rsquo;ve broken it up into Management, Storage, IPMI, vMotion, FT, Virtual Machine Vlans and then this switch bridges to a Cisco ASA5505.\nMy ASA then uplinks to my home Internet connection. If I want to bypass the pix and get straight into my lab I have a Wireless-N D-link router.\nLayout ","permalink":"https://theithollow.com/2013/04/09/new-baby-dragon-home-lab/","summary":"\u003cp\u003eMy configuration is listed below if anyone is interested in the details.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/04/lab2-pic.jpg\"\u003e\u003cimg alt=\"lab2-pic\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/04/lab2-pic.jpg\"\u003e\u003c/a\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/04/Lab2-rack.jpg\"\u003e\u003cimg alt=\"Lab2-rack\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/04/Lab2-rack.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eSimilar designs have been done before by both Chris Wahl \u003ca href=\"https://twitter.com/chriswahl\"\u003e@Chriswahl\u003c/a\u003e and prior to that by Phillip Jaenke \u003ca href=\"https://twitter.com/RootWyrm\"\u003e@RootWyrm\u003c/a\u003e who called them \u0026quot; \u003ca href=\"http://rootwyrm.us.to/2011/09/better-than-ever-its-the-babydragon-ii/\"\u003eBaby Dragons\u003c/a\u003e\u0026quot;.  
I used their base config and made a few tweaks of my own based on pricing, part availability etc.\u003c/p\u003e\n\u003ch1 id=\"part-list\"\u003e\u003cstrong\u003ePart List\u003c/strong\u003e\u003c/h1\u003e\n\u003ch2 id=\"esxi-servers---quantity-2\"\u003eESXi Servers - Quantity 2\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eCase:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B004TO4CJ8/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B004TO4CJ8\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eLian Li  PC-V351B MicroATX\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003ePSU:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B003ZWQXUQ/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B003ZWQXUQ\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eSeaSonic Platinum SS-400FL2 Fanless 400W\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eRAM:\u003c/strong\u003e \u003ca href=\"http://www.newegg.com/Product/Product.aspx?Item=N82E16820239117\"\u003eKingston 16GB (4 X 8GB) 240-Pinn DDR3 Unbufferred ECC\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eMotherboard:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B004WKRDA4/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B004WKRDA4\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eSupermicro MBD-X9SCM-F-O\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eCPU:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B0085MQUTU/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B0085MQUTU\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eIntel Xeon E3-1230 V2 Ivy Bridge 3.3GHz\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eNICs:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B001CY0P7G/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B001CY0P7G\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eIntel EXPI9301CTBLK 1000Mbps PCI-Express\u003c/a\u003e, \u003ca href=\"http://www.amazon.com/gp/product/B001D4JYE0/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B001D4JYE0\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eSuperMicro Dual Port Gigabit Card\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eBoot:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B004TS1J18/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B004TS1J18\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eKingston DataTraveler 101 G2 8GB USB 2.0\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eLocal SSD:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B004W2JKWG/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B004W2JKWG\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003e64 GB Intel SSD\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"flex-server---quantity-1-used-for-a-hyper-v-server-vsa-or-3rd-esxi-host\"\u003eFlex Server - Quantity 1 (Used for a Hyper-V server, VSA or 3rd ESXi Host)\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eCase:\u003c/strong\u003e \u003ca 
href=\"http://www.amazon.com/gp/product/B00DDXS936/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B00DDXS936\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eHP Gen8 Microserver\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eStorage:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B00566FEUO/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B00566FEUO\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003e4 480GB SSD\u0026rsquo;s from OCZ\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"storage-array\"\u003eStorage Array\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eSynology Array:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B00CM9K7E6/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B00CM9K7E6\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003e1- Synology DS1513+\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eHard Drives:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B0088PUEPK/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B0088PUEPK\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003e5 1 TB Wester Digital Blue 7200 3.5 inch hard drives\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"networking-equipment\"\u003eNetworking Equipment\u003c/h2\u003e\n\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003eLayer 3 Switch:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B0000A043Y/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B0000A043Y\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eCisco WS03750G-24T Switch\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eFirewall:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B000JVTTPW/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B000JVTTPW\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eCisco ASA\u003c/a\u003e\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003eWireless Router:\u003c/strong\u003e \u003ca href=\"http://www.amazon.com/gp/product/B000LIFB7S/ref=as_li_ss_tl?ie=UTF8\u0026amp;camp=1789\u0026amp;creative=390957\u0026amp;creativeASIN=B000LIFB7S\u0026amp;linkCode=as2\u0026amp;tag=theithollowco-20\"\u003eDlink Wireless N+ Router\u003c/a\u003e\u003c/li\u003e\n\u003c/ul\u003e\n\u003cblockquote\u003e\n\u003c/blockquote\u003e\n\u003ch2 id=\"esxi-server-notes\"\u003e\u003cstrong\u003eESXi Server Notes:\u003c/strong\u003e\u003c/h2\u003e\n\u003cp\u003eI\u0026rsquo;m not going to lie, when I saw these cases on Chris Wahl\u0026rsquo;s lab and had to have them.  They look crazy sharp and I love the pull out Motherboard mounting option.  There are other components in common but I have a feeling these were copied because of a similar taste for components and budget rather than lust! :)\u003c/p\u003e","title":"Baby Dragon Home Lab"},{"content":"\nToday HP announced their new initiative called Project Moonshot.\nThis initiative takes converged infrastructure and puts it on steroids. Hewlett Packard identified that the amount of compute, power and cooling that is necessary to continue providing resources for Big Data, and mobile platforms is unsustainable at the current rate. 
It just isn\u0026rsquo;t feasible with the current technology to continue to throw the same servers into data centers without optimizing.\nHP\u0026rsquo;s new server line called Moonshot can now operate with 89% less power, 80% less space, 97% less complexity and is 77% cheaper.\nI assume that in order to really get to that 77% cheaper number, you would need to have a full chassis :)\nThe new 1500 chassis holds 45 hot-swappable servers in a 4.3U space. I\u0026rsquo;m not sure what you do with the extra .7U but I\u0026rsquo;m sure you can think of something :) The servers are right-sized for applications so that they provide only the amount of power they will need. They use the new Intel Atom S1200 chips, but additional vendors and models will be added in the future.\nThis is an interesting move for HP because it will mean that many of their current servers are now out of date and customers will no longer be willing to purchase them. But the benefits of such high compute resources with such a small footprint are compelling.\nIf you want to learn more, check out HP\u0026rsquo;s site for more information.\nhttp://h17007.www1.hp.com/us/en/enterprise/servers/products/moonshot/index.aspx#tab=TABproducts\n","permalink":"https://theithollow.com/2013/04/08/new-from-hp-project-moonshot/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/04/moonshot0.jpg\"\u003e\u003cimg alt=\"moonshot0\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/04/moonshot0.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eToday HP announced their new initiative called Project Moonshot.\u003c/p\u003e\n\u003cp\u003eThis initiative takes converged infrastructure and puts it on steroids.  Hewlett Packard identified that the amount of compute, power and cooling that is necessary to continue providing resources for Big Data, and mobile platforms is unsustainable at the current rate.  It just isn\u0026rsquo;t feasible with the current technology to continue to throw the same servers into data centers without optimizing.\u003c/p\u003e","title":"New from HP:  Project Moonshot"},{"content":"I see a good number of IT shops in my job, and in most cases the largest priority is system uptime. I might be there to install, troubleshoot, etc. but in the front of my mind is the idea that everything must stay up and running.\nIT departments are adding redundant WAN connections, server clusters, fault tolerance, failover devices, disaster recovery sites and redundancies at every level. But in some cases these departments are forgetting a pretty integral part of continuous uptime.\nMeet the forgotten single point of failure.\nDon’t forget about the System Engineers or Administrators. These guys and girls are there on almost a daily basis making sure that the redundancies are working correctly, running processes and monitoring the systems. Many companies can’t do without their “IT guy” for more than a day or two. This isn’t fair to the employee, and can be a serious risk to a company’s infrastructure.\nThink about it, how many people know how to fail over your production site to the disaster recovery site if a disaster happens? Do you want to leave such an important role in the hands of one person? Especially DR, since there is a chance that in a disaster, this person might not be able to do work stuff. 
People have families and homes that might take priority over work depending on the disaster.\nLet’s face it, I’m sure that these companies would like to have additional employees but it comes down to a cost thing. If this is the case, documentation is an absolute must. All procedures should be documented so that anyone can run the IT shop. Then again, if the engineer is spending this much time on level of detail, you might need another engineer in the first place!\nGood Engineers can be expense, but how much more expensive is it to lose data or business because of this single point of failure.\n","permalink":"https://theithollow.com/2013/04/01/biggest-single-point-of-failure/","summary":"\u003cp\u003eI see a good number of IT shops with my job and in most cases the largest priority is system uptime.  I might be there to install, troubleshoot, etc. but in the front off my mind is the idea that everything must stay up and running.\u003c/p\u003e\n\u003cp\u003eIT departments are adding redundant WAN connections, server clusters, fault tolerance, failover devices, disaster recovery sites and redundancies at every level.  But in some cases these departments are forgetting a pretty integral part of continuous uptime.\u003c/p\u003e","title":"Biggest Single Point of Failure"},{"content":"I\u0026rsquo;m often asked about how to provision virtual machine disks. This almost always comes down to, \u0026ldquo;Should I use thick or thin disks?\u0026rdquo; and then \u0026ldquo;Should I do thin provisioning on the array or on the hypervisor?\u0026rdquo;\nSo here we go: Thin vs Thick\nThin provisioning: Thin provisioned disks don\u0026rsquo;t allocate all of the space during the provisioning of the storage. Instead, they allocate the space on demand. This is a great way to get more bang for you buck out of your storage. Let\u0026rsquo;s take a closer look with an example.\nThe above picture shows a 100GB virtual disk, and 20GB of it is actually being used by the virtual machine. In a thinly provisioned disk, the hypervisor will only show 20GB of disks space used. The virtual machine on the other hand will still show a 100GB disk that is available to be used.\nPros:\nObviously the main reason to use thinly provisioned disks is to cut down on your storage costs. In the example we used earlier, we could create four more virtual machines each of which has 100GB available to them, and is only using a total of 20GB X 4 = 80GB.\nAlso, think about what happens if you start to do full clones. Now you\u0026rsquo;re only increasing your disk space based off what\u0026rsquo;s actually being used.\nCons:\nNow say that we did create four more virtual machines and we\u0026rsquo;re sitting at 80GB of used disk space. Each of those machines could grow to 100GB. If they all did grow unexpectedly, you could fill up the datastore and cause an outage.\nThick Provisioning: Thick provisioning comes in two flavors. Eager Zeroed and Lazy Zeroed.\nEager Zeroed allocates all of the disk space when you provision it and chews up the blocks it\u0026rsquo;s been assigned almost right away. It takes a short period of time during creation in order to write zeroes in all of the assigned blocks. This time has been dramatically reduced with the VAAI primitives.\nTo give a very simplified example of this, the below diagram shows twelve blocks. 
Four of them have data on them, but the rest are allocated and have 0\u0026rsquo;s written on them.\nLazy Zeroed allocates all of the disk space immediately in the vmfs file system, but doesn\u0026rsquo;t actually start using the disk blocks on the storage system until they are requested by the virtual machine. There is a small performance hit to zero the blocks before they can be written to.\nPros:\nThick provisioning will keep you from over provisioning your datastores and assure you dont\u0026rsquo; cause downtime. Thick provisioned Eager Zeroed will also have the best performance since all of the blocks will be pre-zeroed so they don\u0026rsquo;t have to be during normal operations.\nCons:\nThis type of disk will eat of your storage must faster and will likely waste disk space on empty blocks of data.\nWhat about Array Thin Provisioning? It gets a little more complex when you\u0026rsquo;re considering thin provisioning your storage array as well as your VMware datastores.\nThe best thing to do is to realize that the array doesn\u0026rsquo;t know what VMFS is doing on it. The array can just tell if blocks are empty or not.\nLet\u0026rsquo;s look at Thick Provisioned Eager Zeroed disks on a thin provisioned LUN. We look at the same blocks from our previous diagram only this time we put them on top of a LUN. This LUN is larger than the virtual disk size that was provisioned. If the array thin provisions, it must be at least as large as all of our blocks. Here, the arrow shows the disk savings the a LUN could gain by moving from a full sized LUN to a thin provisioned LUN.\nThin provisioned virtual disks on a thinly provisioned LUN can reduce the size by much more. This example shows the same four blocks that have data on them, but remember that thinly provisioned virtual disks don\u0026rsquo;t pre-allocate the rest of the space. So if we took the same size LUN that was full sized, and then thin provisioned it, we\u0026rsquo;d gain a lot more space.\nThick provisioned Lazy zeroed disks actually behave much like thinly provisioned disk in this instance. Remember that we pre-allocate all of the disk space, but we don\u0026rsquo;t zero it out. This means that our VMFS datastores won\u0026rsquo;t be over provisioned, but the storage array could be. Below we see the four data blocks and the additional allocated blocks, but since they\u0026rsquo;re not zeroed, the array doesn\u0026rsquo;t know anything is allocated. Remember, the array doesn\u0026rsquo;t know what VMware disks are doing.\nConclusions\nSo the answer to the question of \u0026ldquo;Are you thin or thick?\u0026rdquo; and \u0026ldquo;Where at?\u0026rdquo; is\u0026hellip; It depends. But at least now you hopefully understand the differences and can decide for yourself.\n","permalink":"https://theithollow.com/2013/03/26/are-you-thin-or-thick-where-at/","summary":"\u003cp\u003eI\u0026rsquo;m often asked about how to provision virtual machine disks.  
This almost always comes down to, \u0026ldquo;Should I use thick or thin disks?\u0026rdquo; and then \u0026ldquo;Should I do thin provisioning on the array or on the hypervisor?\u0026rdquo;\u003c/p\u003e\n\u003cp\u003eSo here we go: Thin vs Thick\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/03/thinvsthick.png\"\u003e\u003cimg alt=\"thinvsthick\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/03/thinvsthick.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003ch2 id=\"thin-provisioning\"\u003eThin provisioning:\u003c/h2\u003e\n\u003cp\u003eThin provisioned disks don\u0026rsquo;t allocate all of the space during the provisioning of the storage.  Instead, they allocate the space on demand.  This is a great way to get more bang for you buck out of your storage.  Let\u0026rsquo;s take a closer look with an example.\u003c/p\u003e","title":"Are you thin or thick?  Where at?"},{"content":"I\u0026rsquo;ve recently had to label more network cables than I care to discuss, but found my mind wondering over the best method to label these cables. I\u0026rsquo;ve come up with three different ways to label networking cables and wanted to get some thoughts from other Engineers about how they go about this.\nMethod 1: Same label on both sides This method creates 2 labels that are identical and puts one label on each side of the cable. This give the advantage that if you\u0026rsquo;re running multiple batches of cables all at once, you can determine exactly which cable you\u0026rsquo;re working with.\nMethod 2: Label the side it\u0026rsquo;s plugged in on In this method, you put a label on each end for where it\u0026rsquo;s plugged in. To me this seems to be a silly way to label Ethernet cables because you can obviously see where it\u0026rsquo;s plugged in. The only reason I can fathom to do it this way is if you\u0026rsquo;re moving equipment around a lot and want to remember where each cable goes when you need to plug it back in.\nMethod 3: Label opposite sides This is my preferred way to label, but I\u0026rsquo;m flexible :)\nPut a label on each side of an Ethernet cable describing where the other side of the cable is plugged in. This adds the benefit of being able to immediately tell from looking at one end, where the other end of the cable is. I like it better than Method 1 because once everything is plugged in, you may have to go hunting for the other label just to find out where it\u0026rsquo;s plugged in at. This also has the benefit that if you\u0026rsquo;re unplugging your devices, you can still tell where the cables should be plugged back in, although you must think a little and do everything backwards.\nWhat do you think? I\u0026rsquo;d love to hear some feedback on this!\n[poll id=\u0026ldquo;2\u0026rdquo;]\n","permalink":"https://theithollow.com/2013/03/21/how-should-network-cables-be-labled/","summary":"\u003cp\u003eI\u0026rsquo;ve recently had to label more network cables than I care to discuss, but found my mind wondering over the best method to label these cables.  I\u0026rsquo;ve come up with three different ways to label networking cables and wanted to get some thoughts from other Engineers about how they go about this.\u003c/p\u003e\n\u003ch2 id=\"method-1-same-label-on-both-sides\"\u003eMethod 1:  Same label on both sides\u003c/h2\u003e\n\u003cp\u003eThis method creates 2 labels that are identical and puts one label on each side of the cable.  
This give the advantage that if you\u0026rsquo;re running multiple batches of cables all at once, you can determine exactly which cable you\u0026rsquo;re working with.\u003c/p\u003e","title":"How should Network Cables be Labeled?"},{"content":"One of the benefits of using HP Virtual Connect in C-class blade Chassis is the ability to have MAC Addresses and WWNs set on a server bay as opposed to the physical server. I\u0026rsquo;m sure you\u0026rsquo;re aware that each device that has a network card has a Media Access Control (MAC) address which is a burned in identifier that makes that NIC unique.\nHP decided that it might be nice to control those MAC Addresses in their blade chassis. Before you setup any server profiles, you have the option to choose \u0026ldquo;Virtual Connect Assigned MAC Addresses\u0026rdquo;. These are addresses that are assigned to each server bay so that no matter what blade is put into the bay, the MAC addresses will stay the same. You might find this very useful in the case of a failed blade. If you receive a new blade from HP and throw it into the same bay, it will retain all of the same MAC Addresses and thus look the same to your switches.\nIn the screenshot below you can see that I\u0026rsquo;ve chosen the static factory default MAC Addresses and have deployed server profiles already so I\u0026rsquo;m not allowed to change the setting.\nIf you choose the \u0026ldquo;Virtual Connect Assigned MAC Addresses\u0026rdquo; option you will need to make sure that no other blade chasses are using the same range. Obviously network switches will not like having multiple devices on their network with the same MAC Addresses. To handle this, HP requires that you pic a range of addresses. You have 64 pre-defined ranges to choose from and they are listed below. 
(from the HP Support Docs) HP Pre-Defined MAC RangesMAC StartMAC EndHP Defined 100-17-A4-77-00-0000-17-A4-77-03-FFHP Defined 200-17-A4-77-04-0000-17-A4-77-07-FFHP Defined 300-17-A4-77-08-0000-17-A4-77-0B-FFHP Defined 400-17-A4-77-0C-0000-17-A4-77-0F-FFHP Defined 500-17-A4-77-10-0000-17-A4-77-13-FFHP Defined 600-17-A4-77-14-0000-17-A4-77-17-FFHP Defined 700-17-A4-77-18-0000-17-A4-77-1B-FFHP Defined 800-17-A4-77-1C-0000-17-A4-77-1F-FFHP Defined 900-17-A4-77-20-0000-17-A4-77-23-FFHP Defined 1000-17-A4-77-24-0000-17-A4-77-27-FFHP Defined 1100-17-A4-77-28-0000-17-A4-77-2B-FFHP Defined 1200-17-A4-77-2C-0000-17-A4-77-2F-FFHP Defined 1300-17-A4-77-30-0000-17-A4-77-33-FFHP Defined 1400-17-A4-77-34-0000-17-A4-77-37-FFHP Defined 1500-17-A4-77-38-0000-17-A4-77-3B-FFHP Defined 1600-17-A4-77-3C-0000-17-A4-77-3F-FFHP Defined 1700-17-A4-77-40-0000-17-A4-77-43-FFHP Defined 1800-17-A4-77-44-0000-17-A4-77-47-FFHP Defined 1900-17-A4-77-48-0000-17-A4-77-4B-FFHP Defined 2000-17-A4-77-4C-0000-17-A4-77-4F-FFHP Defined 2100-17-A4-77-50-0000-17-A4-77-53-FFHP Defined 2200-17-A4-77-54-0000-17-A4-77-57-FFHP Defined 2300-17-A4-77-58-0000-17-A4-77-5B-FFHP Defined 2400-17-A4-77-5C-0000-17-A4-77-5F-FFHP Defined 2500-17-A4-77-60-0000-17-A4-77-63-FFHP Defined 2600-17-A4-77-64-0000-17-A4-77-67-FFHP Defined 2700-17-A4-77-68-0000-17-A4-77-6B-FFHP Defined 2800-17-A4-77-6C-0000-17-A4-77-6F-FFHP Defined 2900-17-A4-77-70-0000-17-A4-77-73-FFHP Defined 3000-17-A4-77-74-0000-17-A4-77-77-FFHP Defined 3100-17-A4-77-78-0000-17-A4-77-7B-FFHP Defined 3200-17-A4-77-7C-0000-17-A4-77-7F-FFHP Defined 3300-17-A4-77-80-0000-17-A4-77-83-FFHP Defined 3400-17-A4-77-84-0000-17-A4-77-87-FFHP Defined 3500-17-A4-77-88-0000-17-A4-77-88-FFHP Defined 3600-17-A4-77-8C-0000-17-A4-77-8F-FFHP Defined 3700-17-A4-77-90-0000-17-A4-77-93-FFHP Defined 3800-17-A4-77-94-0000-17-A4-77-97-FFHP Defined 3900-17-A4-77-98-0000-17-A4-77-9B-FFHP Defined 4000-17-A4-77-9C-0000-17-A4-77-9F-FFHP Defined 4100-17-A4-77-A0-0000-17-A4-77-A3-FFHP Defined 4200-17-A4-77-A4-0000-17-A4-77-A7-FFHP Defined 4300-17-A4-77-A8-0000-17-A4-77-AB-FFHP Defined 4400-17-A4-77-AC-0000-17-A4-77-AF-FFHP Defined 4500-17-A4-77-B0-0000-17-A4-77-B3-FFHP Defined 4600-17-A4-77-B4-0000-17-A4-77-B7-FFHP Defined 4700-17-A4-77-B8-0000-17-A4-77-BB-FFHP Defined 4800-17-A4-77-BC-0000-17-A4-77-BF-FFHP Defined 4900-17-A4-77-C0-0000-17-A4-77-C3-FFHP Defined 5000-17-A4-77-C4-0000-17-A4-77-C7-FFHP Defined 5100-17-A4-77-C8-0000-17-A4-77-CB-FFHP Defined 5200-17-A4-77-CC-0000-17-A4-77-CF-FFHP Defined 5300-17-A4-77-D0-0000-17-A4-77-D3-FFHP Defined 5400-17-A4-77-D4-0000-17-A4-77-D7-FFHP Defined 5500-17-A4-77-D8-0000-17-A4-77-DB-FFHP Defined 5600-17-A4-77-DC-0000-17-A4-77-DF-FFHP Defined 5700-17-A4-77-E0-0000-17-A4-77-E3-FFHP Defined 5800-17-A4-77-E4-0000-17-A4-77-E7-FFHP Defined 5900-17-A4-77-E8-0000-17-A4-77-EB-FFHP Defined 6000-17-A4-77-EC-0000-17-A4-77-EF-FFHP Defined 6100-17-A4-77-F0-0000-17-A4-77-F3-FFHP Defined 6200-17-A4-77-F4-0000-17-A4-77-F7-FFHP Defined 6300-17-A4-77-F8-0000-17-A4-77-FB-FFHP Defined 6400-17-A4-77-FC-0000-17-A4-77-FF-FF\nMore useful (in my opinion) than being able to set MAC Addresses is the ability to set your WWNs from pre-defined ranges. This is great if you\u0026rsquo;re a consultant because you can have your zoning script ready to go before you even show up. See the example at the bottom of this post.\nIn the same manner, HP has pre-defined WWN ranges for your fibre channel connections. Those are also listed below. 
(from the HP Support Docs) HP Pre-Defined WWN RangesWWN StartWWN EndHP Defined 150:06:0B:00:00:C2:62:0050:06:0B:00:00:C2:65:FFHP Defined 250:06:0B:00:00:C2:66:0050:06:0B:00:00:C2:69:FFHP Defined 350:06:0B:00:00:C2:6A:0050:06:0B:00:00:C2:6D:FFHP Defined 450:06:0B:00:00:C2:6E:0050:06:0B:00:00:C2:71:FFHP Defined 550:06:0B:00:00:C2:72:0050:06:0B:00:00:C2:75:FFHP Defined 650:06:0B:00:00:C2:76:0050:06:0B:00:00:C2:79:FFHP Defined 750:06:0B:00:00:C2:7A:0050:06:0B:00:00:C2:7D:FFHP Defined 850:06:0B:00:00:C2:7E:0050:06:0B:00:00:C2:81:FFHP Defined 950:06:0B:00:00:C2:82:0050:06:0B:00:00:C2:85:FFHP Defined 1050:06:0B:00:00:C2:86:0050:06:0B:00:00:C2:89:FFHP Defined 1150:06:0B:00:00:C2:8A:0050:06:0B:00:00:C2:8D:FFHP Defined 1250:06:0B:00:00:C2:8E:0050:06:0B:00:00:C2:91:FFHP Defined 1350:06:0B:00:00:C2:92:0050:06:0B:00:00:C2:95:FFHP Defined 1450:06:0B:00:00:C2:96:0050:06:0B:00:00:C2:99:FFHP Defined 1550:06:0B:00:00:C2:9A:0050:06:0B:00:00:C2:9D:FFHP Defined 1650:06:0B:00:00:C2:9E:0050:06:0B:00:00:C2:A1:FFHP Defined 1750:06:0B:00:00:C2:A2:0050:06:0B:00:00:C2:A5:FFHP Defined 1850:06:0B:00:00:C2:A6:0050:06:0B:00:00:C2:A9:FFHP Defined 1950:06:0B:00:00:C2:AA:0050:06:0B:00:00:C2:AD:FFHP Defined 2050:06:0B:00:00:C2:AE:0050:06:0B:00:00:C2:B1:FFHP Defined 2150:06:0B:00:00:C2:B2:0050:06:0B:00:00:C2:B5:FFHP Defined 2250:06:0B:00:00:C2:B6:0050:06:0B:00:00:C2:B9:FFHP Defined 2350:06:0B:00:00:C2:BA:0050:06:0B:00:00:C2:BD:FFHP Defined 2450:06:0B:00:00:C2:BE:0050:06:0B:00:00:C2:C1:FFHP Defined 2550:06:0B:00:00:C2:C2:0050:06:0B:00:00:C2:C5:FFHP Defined 2650:06:0B:00:00:C2:C6:0050:06:0B:00:00:C2:C9:FFHP Defined 2750:06:0B:00:00:C2:CA:0050:06:0B:00:00:C2:CD:FFHP Defined 2850:06:0B:00:00:C2:CE:0050:06:0B:00:00:C2:D1:FFHP Defined 2950:06:0B:00:00:C2:D2:0050:06:0B:00:00:C2:D5:FFHP Defined 3050:06:0B:00:00:C2:D6:0050:06:0B:00:00:C2:D9:FFHP Defined 3150:06:0B:00:00:C2:DA:0050:06:0B:00:00:C2:DD:FFHP Defined 3250:06:0B:00:00:C2:DE:0050:06:0B:00:00:C2:E1:FFHP Defined 3350:06:0B:00:00:C2:E2:0050:06:0B:00:00:C2:E5:FFHP Defined 3450:06:0B:00:00:C2:E6:0050:06:0B:00:00:C2:E9:FFHP Defined 3550:06:0B:00:00:C2:EA:0050:06:0B:00:00:C2:ED:FFHP Defined 3650:06:0B:00:00:C2:EE:0050:06:0B:00:00:C2:F1:FFHP Defined 3750:06:0B:00:00:C2:F2:0050:06:0B:00:00:C2:F5:FFHP Defined 3850:06:0B:00:00:C2:F6:0050:06:0B:00:00:C2:F9:FFHP Defined 3950:06:0B:00:00:C2:FA:0050:06:0B:00:00:C2:FD:FFHP Defined 4050:06:0B:00:00:C2:FE:0050:06:0B:00:00:C3:01:FFHP Defined 4150:06:0B:00:00:C3:02:0050:06:0B:00:00:C3:05:FFHP Defined 4250:06:0B:00:00:C3:06:0050:06:0B:00:00:C3:09:FFHP Defined 4350:06:0B:00:00:C3:0A:0050:06:0B:00:00:C3:0D:FFHP Defined 4450:06:0B:00:00:C3:0E:0050:06:0B:00:00:C3:11:FFHP Defined 4550:06:0B:00:00:C3:12:0050:06:0B:00:00:C3:15:FFHP Defined 4650:06:0B:00:00:C3:16:0050:06:0B:00:00:C3:19:FFHP Defined 4750:06:0B:00:00:C3:1A:0050:06:0B:00:00:C3:1D:FFHP Defined 4850:06:0B:00:00:C3:1E:0050:06:0B:00:00:C3:21:FFHP Defined 4950:06:0B:00:00:C3:22:0050:06:0B:00:00:C3:25:FFHP Defined 5050:06:0B:00:00:C3:26:0050:06:0B:00:00:C3:29:FFHP Defined 5150:06:0B:00:00:C3:2A:0050:06:0B:00:00:C3:2D:FFHP Defined 5250:06:0B:00:00:C3:2E:0050:06:0B:00:00:C3:31:FFHP Defined 5350:06:0B:00:00:C3:32:0050:06:0B:00:00:C3:35:FFHP Defined 5450:06:0B:00:00:C3:36:0050:06:0B:00:00:C3:39:FFHP Defined 5550:06:0B:00:00:C3:3A:0050:06:0B:00:00:C3:3D:FFHP Defined 5650:06:0B:00:00:C3:3E:0050:06:0B:00:00:C3:41:FFHP Defined 5750:06:0B:00:00:C3:42:0050:06:0B:00:00:C3:45:FFHP Defined 5850:06:0B:00:00:C3:46:0050:06:0B:00:00:C3:49:FFHP Defined 5950:06:0B:00:00:C3:4A:0050:06:0B:00:00:C3:4D:FFHP Defined 
6050:06:0B:00:00:C3:4E:0050:06:0B:00:00:C3:51:FFHP Defined 6150:06:0B:00:00:C3:52:0050:06:0B:00:00:C3:55:FFHP Defined 6250:06:0B:00:00:C3:56:0050:06:0B:00:00:C3:59:FFHP Defined 6350:06:0B:00:00:C3:5A:0050:06:0B:00:00:C3:5D:FFHP Defined 6450:06:0B:00:00:C3:5E:0050:06:0B:00:00:C3:61:FF\nVirtual Connect Enterprise Manager uses a different set of MAC Addresses and WWNs. It has the ability to hand out over 131,000 addresses and manages more than one Virtual Connect Domain. (from the HP Support Docs) VCEM Defined MAC RangeMAC StartMAC EndVCEM Defined00-21-5A-9B-00-0000-21-5A-9C-FF-FF\nVCEM Defined WWN RangeWWN StartWWN EndVCEM Defined50:01:43:80:02:A3:00:0050:01:43:80:02:A4:FF:FF\nExample Zoning config (Cisco) switch(config)#fcalias name Blade1 vsan 100 switch(config-fcalias)# member wwnn 50:06:0B:00:00:C2:62:00 switch(config-fclias)# exit\nswitch(config)#fcalias name Blade2 vsan 100 switch(config-fcalias)# member wwnn 50:06:0B:00:00:C2:62:02 switch(config-fclias)# exit\nswitch(config)# zone name Bay1-SP1 vsan 100 switch(config-zone)# member fcalias Blade1 switch(config-zone)# member fcalias StorageProcessor1 switch(config-zone)# exit switch(config)#\nswitch(config)# zoneset BladeSet vsan 100 switch(config-zoneset)# member Bay1-SP1 switch(config-zoneset)# member Bay2-SP1 switch(config-zoneset)# exit\nswitch(config)# zoneset activate name BladeSet vsan 100\nswitch(config)# write mem\n","permalink":"https://theithollow.com/2013/03/18/hp-virtual-connect-mac-addresses-and-wwns/","summary":"\u003cp\u003eOne of the benefits of using HP Virtual Connect in C-class blade Chassis is the ability to have MAC Addresses and WWNs set on a server bay as opposed to the physical server.  I\u0026rsquo;m sure you\u0026rsquo;re aware that each device that has a network card has a Media Access Control (MAC) address which is a burned in identifier that makes that NIC unique.\u003c/p\u003e\n\u003cp\u003eHP decided that it might be nice to control those MAC Addresses in their blade chassis.  Before you setup any server profiles, you have the option to choose \u0026ldquo;Virtual Connect Assigned MAC Addresses\u0026rdquo;.  These are addresses that are assigned to each server bay so that no matter what blade is put into the bay, the MAC addresses will stay the same.  You might find this very useful in the case of a failed blade.  If you receive a new blade from HP and throw it into the same bay, it will retain all of the same MAC Addresses and thus look the same to your switches.\u003c/p\u003e","title":"HP Virtual Connect MAC Addresses and WWNs"},{"content":"\nOne of the new features I really wanted to check out in Server 2012 was the ability to setup a highly available DHCP server.\nPrior to Windows 2012 if you wanted to setup a highly available DHCP solution, you only had a couple of options.\n1. You could setup up a split scope, which required you to setup identical DHCP scopes on two servers, and then adding exclusion ranges on each of them so they didn\u0026rsquo;t both hand out the same IP Addresses. Usually this was done in an 80/20 fashion.\n2. Introduce windows clustering, which required shared storage and sharing IP Addressses, let alone the additional licenses that would need to be purchased for an Enterprise version of Windows Server.\n3. Create a standby server, where the DHCP configurations were the same on two servers, but the standby server didn\u0026rsquo;t have an activated scope. 
During a failure, the standby server could quickly be activated.\nIntroducing Server 2012 DHCP Failover\nNow with Server 2012 you can setup multiple DHCP Servers very quickly, and set them up in either a load balanced configuration or a hot standby config. In both configurations, the two DHCP servers are sharing their database updates so that they are both up to date and in sync!\nDHCP Load Balancing If you setup load balancing, you can configure something similar to split scopes, where each DHCP server is handing out a percentage of the IP Addresses. The difference between this setup and old split scope setup, is this uses a hash value. So if the MAC address of the client machine requesting an IP, has a hash value of 1 Server 1 hands it an IP address and similarly for Server 2.\nIn the event that one of the DHCP Servers is down, this hash value is no longer used and the available server hands out the IP Addresses for all clients.\nDHCP Standby The DHCP Hot Standby configuration works just like you\u0026rsquo;d expect it to. You specify one server as the Active server and the other as a Passive server. The Active server hands out all of the IP Addresses unless a failure occurs. If the active server is down, the passive server begins handing out all of the IP addresses.\nSetup After you configure your normal scope settings, right click on either the IPv4 or IPv6 server and choose \u0026ldquo;Configure Failover\u0026hellip;\u0026rdquo;\nIf you have multiple scopes, you can either choose all of them or only some of them.\nAdd the server that will serve as your secondary DHCP Server.\nChoose what kind of failover mode you want to use. In my example we\u0026rsquo;re using load balancing with a 50/50 split.\nAs always with wizards, review your settings and click finish.\nWhen the setup is finished, you\u0026rsquo;ll get a list of things that were done and if any of them failed.\nIf you look at your scope settings now, you can modify your settings for failover if necessary.\nThat\u0026rsquo;s it! Very easy to configure.\nIf you for whatever reason need to undo your DHCP failover, all you need to do is choose the \u0026ldquo;Deconfigure Failover\u0026rdquo; option and it undoes everything. It\u0026rsquo;s very simple.\nThis is a very good way to copy your DHCP settings to a new server as well. It beats backing up and restoring from the old days.\nKudos Microsoft.\n","permalink":"https://theithollow.com/2013/03/11/windows-server-2012-dhcp-high-availability/","summary":"\u003cp\u003e\u003cimg alt=\"169\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/03/169.png\"\u003e\u003c/p\u003e\n\u003cp\u003eOne of the new features I really wanted to check out in Server 2012 was the ability to setup a highly available DHCP server.\u003c/p\u003e\n\u003cp\u003ePrior to Windows 2012 if you wanted to setup a highly available DHCP solution, you only had a couple of options.\u003c/p\u003e\n\u003cp\u003e1.  You could setup up a split scope, which required you to setup identical DHCP scopes on two servers, and then adding exclusion ranges on each of them so they didn\u0026rsquo;t both hand out the same IP Addresses.  Usually this was done in an 80/20 fashion.\u003c/p\u003e","title":"Windows Server 2012 DHCP High Availability"},{"content":"I often hear Port Address Translation (PAT)referred to as Network Address Translation (NAT). Its a pretty common to hear this and is really not a big deal because the two are similar and I know what is meant. 
But to clear things up I decided to put together a quick post.\nNetwork Address Translation NAT is the process of \u0026ldquo;translating\u0026rdquo; an IP Address in a router or firewall. This is most commonly done to present a private IP Address into a Public IP Address that is accessible on the Internet. For instance, you may want to have your E-mail server have a public address so that it can route mail.\nHow does it work?\nThe router will have a mapping of the internal and external IP Address Mappings. When any traffic from the inside interface travels over the outside interface, the router changes the source IP Address. When the return traffic gets back to the router the destination address will be the outside IP Address and will then be \u0026ldquo;translated\u0026rdquo; back to the internal IP Address.\nOne of the obvious limitations to this is that you have to have a static IP Address on your servers, and you must have a NAT address for each one of them. This doesn\u0026rsquo;t do you a whole lot of good if you\u0026rsquo;re trying to save on Public IP Addresses that you need to register. In comes Port Address Translation.\nPort Address Translation PAT works in a very similar manner to the description of NAT above. The difference being that all of the internal machines can share a single translation address.\nWhen an inside machine sends traffic to the router, the router builds a table with the inside IP Address, the source port and then uses a translated address and a new port ID. This way it can track where the traffic came from, so when the destination machine returns the traffic, the router knows where to send it.\nDynamic NAT Dynamic NAT uses a pool of Public IP Addresses for translation. This allows you to \u0026ldquo;overcommit\u0026rdquo; your IP Addresses because you aren\u0026rsquo;t using all of them at the exact same time. An very simplistic example might be having 5 PCs inside your network, and having only 3 Public Addresses available.\nHere, each time one of the inside machines wanted to traverse the router, the router would look to see what Public IP Addresses are available and assign an unused one.\nIn the below example there are 2 machines on the inside of the network. When they traverse the firewall, a translation is done and the firewall marks the 209.252.1.1 address as in use. If the 192.1.1.2 machine needed to traverse the firewall as well during this time, only the other two IP Addresses in the NAT Pool could be used.\nObviously this could create an issue if have more traffic than your NAT Pool can handle, as well as causing issues because your servers keep changing their outside IP Addresses. This might not be a good technique to use for a mail server.\nSome of the translation methods mentioned in this article may be used simultaneously. It\u0026rsquo;s not uncommon for a company to use NAT for specific servers such as Email, Terminal Services Gateways etc, and PAT for their desktops.\nAlso, each of these methods has it\u0026rsquo;s own little purpose\u0026hellip;until we\u0026rsquo;re on IPv6 that is.\n","permalink":"https://theithollow.com/2013/03/05/nat-vs-pat/","summary":"\u003cp\u003eI often hear Port Address Translation (PAT)referred to as Network Address Translation (NAT).  Its a pretty common to hear this and is really not a big deal because the two are similar and I know what is meant.  
But to clear things up I decided to put together a quick post.\u003c/p\u003e\n\u003ch2 id=\"network-address-translation\"\u003eNetwork Address Translation\u003c/h2\u003e\n\u003cp\u003eNAT is the process of \u0026ldquo;translating\u0026rdquo; an IP Address in a router or firewall.  This is most commonly done to present a private IP Address into a Public IP Address that is accessible on the Internet.  For instance, you may want to have your E-mail server have a public address so that it can route mail.\u003c/p\u003e","title":"NAT vs PAT"},{"content":"\nI was recently integrating Veeam Backups with HP Data Protector for a backup project when I found a great Powershell command that I didn\u0026rsquo;t know about. Invoke-Command -comp [computername] –scriptblock {script}\nIf you’re familiar with PSExec.exe, this is an equivalent powershell command, but if you’re not, this command will allow you to execute something on another machine.\nVeeam has the ability to call a script when a backup job completes, but I needed a different server to execute that script.\nSo here was my solution:\nIn my backup job, under the advanced tab, I had Veeam execute a script upon completion.\nC:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -noninteractive -file \u0026ldquo;C:\scripts\backupscript.ps1\u0026rdquo;\nThis script executes a powershell console which then calls a script that I have on my Veeam server.\nThe Veeam server script runs the magic! This script calls the Invoke-command –comp [computername] which runs whatever is in the –scriptblock {} section on the remote computer!\nInvoke-Command -comp Dataprotector.hollow.lab -scriptblock {\\dataprotector\scripts\backupscript.bat}\nSo the script I had on the Dataprotector server ran a backup to tape of my Veeam backups. But you could use this for anything you needed to.\nI should mention that in order for this to work correctly, you need to have the WinRM service running on your remote server, and you need to enable the Remote Powershell options.\nLog into the Remote server and run Enable-PSRemoting –force\nPlease see the Microsoft TechNet article about this if you’re running this on servers that are not part of the same domain. http://technet.microsoft.com/en-us/magazine/ff700227.aspx\nI hope this information is useful to you, and a big thanks to Veeam Support for pointing me in the right direction on this!\n","permalink":"https://theithollow.com/2013/02/26/invoke-posh/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/02/powercli.png\"\u003e\u003cimg alt=\"powercli\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/02/powercli.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eI was recently integrating Veeam Backups with HP Data Protector for a backup project when I found a great Powershell command that I didn\u0026rsquo;t know about.  \u003cstrong\u003e\u003cem\u003eInvoke-Command -comp [computername] –scriptblock {script}\u003c/em\u003e\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eIf you’re familiar with PSExec.exe, this is an equivalent powershell command, but if you’re not, this command will allow you to execute something on another machine.\u003c/p\u003e\n\u003cp\u003eVeeam has the ability to call a script when a backup job completes, but I needed a different server to execute that script.\u003c/p\u003e","title":"Invoke PoSH"},{"content":"It\u0026rsquo;s hard to believe but theITHollow.com is now 1 year old! 
The first year has been great and I feel like I\u0026rsquo;ve probably learned more writing it than the readers have learned from reading it. Thank you for a great first year and if you keep reading it, I\u0026rsquo;ll keep writing it.\n","permalink":"https://theithollow.com/2013/02/25/happy-1-year-birthday-to-theithollow-com/","summary":"\u003cp\u003eIt\u0026rsquo;s hard to believe but theITHollow.com is now 1 year old!  The first year has been great and I feel like I\u0026rsquo;ve probably learned more writing it than the readers have learned from reading it.  Thank you for a great first year and if you keep reading it, I\u0026rsquo;ll keep writing it.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/02/cake.png\"\u003e\u003cimg alt=\"cake\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/02/cake.png\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"Happy 1 Year Birthday to theITHollow.com"},{"content":"There was some exciting news released today so I wanted to get it in a post in case you hadn\u0026rsquo;t heard about it yet.\nThe HP Global Partner Conference is going on in Vegas and they release some new gear to accentuate their converged infrastructure model.\nA new BladeSystem Platinum was announced which will include the options for infiniband, 16Gb Fibre Channel, and 40Gb Ethernet! If you\u0026rsquo;d like more information about the new BladeSystem Platinum please check out HP\u0026rsquo;s video.\nhttp://www.youtube.com/watch?feature=player_detailpage\u0026amp;v=5T9ojeEYvu8\nIn the networking arena, HP released a new 2920 switch which combines both wired and wireless networking.\nAnd in the storage sector, HP released two new StoreVirtual (lefthand) devices. I urge you to check out Calvin Zito\u0026rsquo;s blog if you want more information about these two devices. Calvin is always \u0026ldquo;in the know\u0026rdquo; on the HP storage devices. Follow him on twitter at @HPStorageGuy. The basics of the new StoreVirtual devices are in the video from Calvin\u0026rsquo;s site.\nhttp://www.youtube.com/watch?feature=player_embedded\u0026amp;v=Av_Qc7gMSdY\nAlong with the new StoreVirtual devices, HP released StoreSystem Storage which give HP the ability to do File and Block storage, which combines their 3PAR thin provisioning as well as the StoreOnce Deduplication to their storage portfolio\nNetapp today also announced their new EF540 FlashRay array. The flash array market has thus far been dominated by smaller startup companies so this new array released by Netapp finally may give companies the confidence to switch to flash based storage. You may be thinking, \u0026ldquo;Flash isn\u0026rsquo;t new to the big storage guys?\u0026rdquo;, but adding flash disks to an existing array is kind of like putting lipstick on a pig. 
\u0026ldquo;It\u0026rsquo;s still a pig.\u0026rdquo; The new Netapp array has been built from the ground up to be used for flash disks.\n","permalink":"https://theithollow.com/2013/02/19/february-19th-2013-announcements/","summary":"\u003cp\u003eThere was some exciting news released today so I wanted to get it in a post in case you hadn\u0026rsquo;t heard about it yet.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/02/images.jpg\"\u003e\u003cimg alt=\"images\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/02/images.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThe HP Global Partner Conference is going on in Vegas and they release some new gear to accentuate their converged infrastructure model.\u003c/p\u003e\n\u003cp\u003eA new BladeSystem Platinum was announced which will include the options for infiniband, 16Gb Fibre Channel, and 40Gb Ethernet!  If you\u0026rsquo;d like more information about the new BladeSystem Platinum please check out HP\u0026rsquo;s video.\u003c/p\u003e","title":"February 19th 2013 Announcements"},{"content":"If you\u0026rsquo;re an engineer and you\u0026rsquo;re trying to get more experience with a variety of different storage devices, you might find yourself in a bit of a pickle. Most customers settle one one or two storage vendors and that\u0026rsquo;s it. So if you work for one of these companies you can learn EMC or Netapp, etc. I highly doubt your company would be interested in purchases a few different types of storage devices so that you can learn them as they are quite expensive.\nIf you want to test out your skills in a non-production environment or just want to see how things work, I suggest downloading simulators to get some hands on experience. This might be a good idea if you\u0026rsquo;re considering purchasing a product as well.\nBelow is a list of simulators and where you can find them. I\u0026rsquo;ve also added a few server and network simulators as well in case you are interested. Happy Testing!\nMay require a PowerLink account\nVNX Celerra Clariion CX (requires a form filled out) Navisphere (requires a form filled out) May require a NOW account\nONTAP 8.X HP EVA P6000 StoreVirtualLefthandP4000 - 60dayTrial HP Virtual Connect Simulator - (For Channel Partners only. Login required) Cisco UCS Simulator(CCO Login required) ZFS GNS3- A great CCNA study tool! If you have any other resources for simulators, please post them in the comments! I\u0026rsquo;m sure there are other great simulators out there that will be useful to the community.\n","permalink":"https://theithollow.com/2013/02/19/storage-simulators/","summary":"\u003cp\u003e\u003cimg alt=\"sims\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/02/sims.jpg\"\u003eIf you\u0026rsquo;re an engineer and you\u0026rsquo;re trying to get more experience with a variety of different storage devices, you might find yourself in a bit of a pickle.  Most customers settle one one or two storage vendors and that\u0026rsquo;s it.  So if you work for one of these companies you can learn EMC or Netapp, etc.  I highly doubt your company would be interested in purchases a few different types of storage devices so that you can learn them as they are quite expensive.\u003c/p\u003e","title":"Virtual Simulators"},{"content":"\nOh Noes! 
I sense lolcats in this post.\nI\u0026rsquo;ve been seeing Category 6a cable if a few datacenters recently and thought it might be a good idea to review when and why we would use this type of cabling.\nWiring The Category 6a cabling is wired the same as Category 5e at 1000BaseTX speeds. Note: that you can get away with splitting two sets of pairs off of Cat5e, but this only allows 100BaseT Ethernet.\nPinouts come in either T568A or T568B. B seems to be more widely used.\nBorrowed directly from Wikipedia. :)\nHow Does Cat6a Provide 10GBaseT? You might be wondering, \u0026ldquo;How can Cat6a provide higher bandwidth than Cat5e if the pinouts and twisted pairs are the same?\u0026rdquo;\nThe answer is found in how the cable is prepared. Category 6a wire uses 22AWG wire size as opposed to the Cat5e size of 24AWG. The additional size allows for less of a loss of signal and in Ethernet terms, this means an increase in speed. Category 5e reaches transmitions of 100MHz where Category 6a can reach 500 MHz. (Cat6 reaches the 250 MHz which limits the distance it can run to less than the 100 Meters that we\u0026rsquo;ve become accustomed to)\nThe second property that makes Cat6a different is the amount of insulation around each wire. Cat6a wires have a longitudinal separator. This separator insulates each of the four wire pairs from crosstalk with the other twisted pairs. By eliminating the noise from other twisted pairs, a much cleaner electrical signal can reach it\u0026rsquo;s destination.\nWhen should I use Cat6a? Being an engineer that\u0026rsquo;s excited about newer technologies, I would always want to work with the latest standards so I can get the largest benefit. But when it comes to Category 6a cabling, this isn\u0026rsquo;t really a good practice. There are several factors that might make you consider using the older Cat5e cabling.\nPrice- Category 6a cabling is much more expensive than category 5e. Especially if you don\u0026rsquo;t see your network growing to 10Gb speeds any time in the near future, stick with old reliable.\nDistance would calculate into this equation as well. Longer runs means more materials, so perhaps your long runs should be Cat5e and short runs like server to switch or switch to switch could be Cat6a.\nCorners- Cat6a has more insulation which makes the cables thicker. This makes the cables more rigid and more difficult to bend. The additional bending difference may mean you should stick with Cat5e.\nTie Downs- Be careful if you\u0026rsquo;ve got minimal space to hold your cables such as a small area of a server rack. Many datacenter engineers like to use tie downs to secure cables to the sides of the server rack which makes them look nice, but part of why Cat6a provides higher speeds is because of the extra insulation. Crimping these cables down with a tie down can negate some of that insulation making it less effective.\nCurrent Network equipment- Obviously if your switches are only cabable of 1000BaseT then Cat6a cables may be a waste of money.\n","permalink":"https://theithollow.com/2013/02/12/when-to-use-cat-6a/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/02/IMG_20130202_152607.jpg\"\u003e\u003cimg alt=\"Oh Noes!  I sense lolcats in this post.\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/02/IMG_20130202_152607.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eOh Noes! 
I sense lolcats in this post.\u003c/p\u003e\n\u003cp\u003eI\u0026rsquo;ve been seeing Category 6a cable if a few datacenters recently and thought it might be a good idea to review when and why we would use this type of cabling.\u003c/p\u003e\n\u003ch2 id=\"wiring\"\u003eWiring\u003c/h2\u003e\n\u003cp\u003eThe Category 6a cabling is wired the same as Category 5e at 1000BaseTX speeds.  Note: that you can get away with splitting two sets of pairs off of Cat5e, but this only allows 100BaseT Ethernet.\u003c/p\u003e","title":"When to use Cat 6a"},{"content":" Disaster Recovery has never been easier to manage than it is right now. Virtualization has given engineers a tremendous tool to allow us to almost effortlessly move workloads between datacenters. Now that we’re virtualizing workloads, we’re now capable of standing up exact copies of our servers in two offices and have them up and running in very short RTOs.\nIn the past year we’ve seen two major storms hit the East Coast causing severe power outages as well as making commutes difficult or impossible for users to get to work. Thanks to the cloud we have many more mobile users than we used to and even if they’re not considered mobile, their servers may not be located in their office. Cloud presents some great options for disaster recovery that should be taken advantage of, no matter what your geographic location. If you’re in a SMB and you don’t have a DR plan, GET ONE NOW!\nYou don’t have to use old tape backups anymore. VMware is leveraging the inherent abilities that cloud provides to allow you to do quick migrations of mission critical servers. VMware Site Recovery Manager is a favorite tool of mine and not only allows you to build most of your DR plan in it, but test it during production hours without affecting your business.\nVMware vCloud Connector now allows you to join multiple clouds together. If you have a main office on the East coast, setup a second private cloud on the West coast. If you’re worried about a big storm, move the workloads and keep on humming. What about a smaller SMB that can’t afford to have a second office? Try a public cloud provider such as HPcloud, Amazon, or even vcloud.vmware.com (beta). You can get a second network setup relatively cheap and only pay for what you’re using. This will allow you to get your DR Site up and running for a minimal of cost.\nHow much will it really cost you if you have a disaster and the business can’t function for a prolonged period of time?\n","permalink":"https://theithollow.com/2013/02/11/1584-2/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/02/soggycat.png\"\u003e\u003cimg alt=\"soggycat\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/02/soggycat.png\"\u003e\u003c/a\u003e Disaster Recovery has never been easier to manage than it is right now.  Virtualization has given engineers a tremendous tool to allow us to almost effortlessly move workloads between datacenters.  Now that we’re virtualizing workloads, we’re now capable of standing up exact copies of our servers in two offices and have them up and running in very short RTOs.\u003c/p\u003e\n\u003cp\u003eIn the past year we’ve seen two major storms hit the East Coast causing severe power outages as well as making commutes difficult or impossible for users to get to work.  Thanks to the cloud we have many more mobile users than we used to and even if they’re not considered mobile, their servers may not be located in their office.  
Cloud presents some great options for disaster recovery that should be taken advantage of, no matter what your geographic location.  If you’re in a SMB and you don’t have a DR plan, GET ONE NOW!\u003c/p\u003e","title":"Are you Prepared for Disaster?"},{"content":"\nVMware slot sizes are an important topic if you\u0026rsquo;re concerned with how many ESXi hosts are required to run your environment.\nWhat is a Slot? To begin this post, we need to understand what a slot is. A slot is the minimum amount of CPU and memory resources required for a single VM in an ESXi cluster. Slot size is an important concept because it affects admission control.\nA VMware ESXi cluster needs a way to determine how many resources need to be available in the event of a host failure. This slot calculation gives the cluster a way to reserve the right amount of resources.\nHow are Slots Sized? The slot has two parts, the CPU component and the memory component. Each of them has its own calculation. If there are no virtual machine resource reservations in the cluster, then the slot size (for ESXi 5 at least) is 32 Mhz for CPU and 0 MBs + overhead for memory. (I\u0026rsquo;ve used 80 MBs as my memory overhead in the examples)\nOn to an incredibly simplistic diagram\u0026hellip;\nIn the example below we have 2 ESXi hosts that have the same amount of resources available for virtual machines. There are different sized VMs, but none of them have a reservation. Doing a quick calculation we can determine that 384 slots are available on each host.\nCPU Component: 4 X 3.0 GHz / 32 MHz = 384 slots\nMemory Component: 49 GBs / 80 MBs = 627 slots\nWe take the lower value between the CPU slot size and the memory slot size to determine the number of virtual machines that can be started up under admission control. So therefore we could safely start 384 machines on these ESXi hosts, have one fail, and have the other host start all of them.\n(I should mention that it\u0026rsquo;s unlikely that you could get 384 vms on one of these hosts. That would be a great consolidation ratio.)\nProblem Scenario\nWhat if you have a single large VM with a reservation, but the rest of the virtual machines are relatively small.\nLet\u0026rsquo;s look at the same environment, but this time let\u0026rsquo;s make the larger VM have a reservation on it.\nCPU Component: 4 X 3.0 GHz / 2000 MHz = 6 slots\nMemory Component: 49 GBs / 4024 MBs = 12 slots\nAdmission control is going to tell us that only 6 slots are available on host B, so it will only allow 6 VMs on host A to be powered on. Since I\u0026rsquo;m using a simplistic diagram with only two hosts, we know that these VMs will still fit on the host but since we use the largest slot size to determine how much we can fail over admission control will stop us from powering on VMs.\nWhat are our options? Option 1 - Don\u0026rsquo;t use reservations unless their is a good reason to do so.\nOption 2 - We can manually configure the slot size on the cluster.\nNavigate to the cluster settings and go to the HA Section, Click Edit and you\u0026rsquo;ll have the option of modifying the slot size. Note that if you do this, some of your VMs will require multiple slots to run. For instance the large VM we used in our example might take more than 1 slot depending on what size you make it. The button below the slot size configuration may help you determine how many VMs will be affected by this change. View Your Slot Size If you\u0026rsquo;re curious about what the slot size is on your system, look at your cluster summary. 
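If you prefer PowerCLI to clicking around the client, the HA runtime info exposes the same numbers. This is only a rough sketch and assumes a cluster named Production (swap in your own name) that uses the host failures admission control policy; the property names can vary a bit between vSphere versions, so treat it as a starting point.
# Pull the HA slot details for a cluster (Production is a placeholder name)
$dasInfo = (Get-Cluster -Name 'Production' | Get-View).RetrieveDasAdvancedRuntimeInfo()
$dasInfo.SlotInfo      # CPU (MHz) and memory (MB) that make up a single slot
$dasInfo.TotalSlots    # total slots the cluster has calculated
Back in the client, the cluster summary view is still the quickest place to confirm it.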
There will be an item listed for slot size.\nSummary If you\u0026rsquo;re in a situation where you think you need to add extra ESXi hosts to your cluster because you can\u0026rsquo;t power on virtual machines without exceeding your admission control rules, take a look at your slot sizes first. It may save you some money on a host you don\u0026rsquo;t really need.\nDo you want more information on the subject? Take a look at either Frank Denneman or Duncan Epping\u0026rsquo; s blogs, or their book\n","permalink":"https://theithollow.com/2013/02/05/slotsize/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/01/slots.jpg\"\u003e\u003cimg alt=\"slots\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/01/slots.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eVMware slot sizes are an important topic if you\u0026rsquo;re concerned with how many ESXi hosts are required to run your environment.\u003c/p\u003e\n\u003ch2 id=\"what-is-a-slot\"\u003e\u003cstrong\u003eWhat is a Slot?\u003c/strong\u003e\u003c/h2\u003e\n\u003cp\u003eTo begin this post, we need to understand what a slot is.  A slot is the minimum amount of CPU and memory resources required for a single VM in an ESXi cluster.  Slot size is an important concept because it affects admission control.\u003c/p\u003e\n\u003cp\u003eA VMware ESXi cluster needs a way to determine how many resources need to be available in the event of a host failure.  This slot calculation gives the cluster a way to reserve the right amount of resources.\u003c/p\u003e","title":"Understanding VMware Slot Sizes"},{"content":"I recently decided to give VMware Horizon a shot and found the install to be a little confusing so this gives me a good opportunity to lay it all out so that others can try it out for themselves. A big \u0026ldquo;thank you\u0026rdquo; goes out to Raj Jethnani for a helping hand with this post. If you\u0026rsquo;d like to follow him on twitter his link is here: @rajtech\nFor those of you who don\u0026rsquo;t know, Horizon is a nifty SAAS platform for you to present thinapp applications too. I could see many organizations benefit from this technology in the near future.\nBefore you start Download the Horizon Connector, Horizon Service appliance, and Horizon Agent from the VMware site. If you plan to use thinapps (and you should really do this) you\u0026rsquo;ll also want to download this as well, but it\u0026rsquo;s outside of the scope of this guide.\nDecide on two IP addresses that will be used for the Connector and the Horizon Service appliances.\nCreate two DNS records (one for each appliance). You should have both (A) and (PTR) records for each appliance as well. For the connector I use something generic like (horizonconnector.hollow.lab) but for the service appliance I would use an organization name like ITHollow.hollow.lab.\nCreate a couple of users, a group, and maybe even an OU in Active Directory. The Connector uses AD for authentication and will need an account to bind to AD to do lookups. A group to put users in is helpful as well. The OU is just nice for housekeeping.\nMake sure any users that will use the Horizon service have a First name, Last name and email address configured in Active Directory. This goes for the Bind user as well. 
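If you have more than a handful of users to check, a quick query saves a lot of clicking through Active Directory Users and Computers. Here is a rough PowerShell sketch; it assumes the ActiveDirectory module is installed and that your Horizon users live in a group called HorizonUsers, so adjust the names for your own environment.
Import-Module ActiveDirectory
# List any group members missing a first name, last name or email address
Get-ADGroupMember -Identity 'HorizonUsers' |
  Get-ADUser -Properties GivenName,Surname,EmailAddress |
  Where-Object { -not $_.GivenName -or -not $_.Surname -or -not $_.EmailAddress } |
  Select-Object SamAccountName,GivenName,Surname,EmailAddress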
If any of this information is missing, Horizon won\u0026rsquo;t work.\nImport the OVF Files I\u0026rsquo;m not going to go through the process of importing OVF files into vSphere, but if you need some assistance you can take a look at http://pubs.vmware.com/vsphere-4-esx-vcenter/index.jsp?topic=/com.vmware.vsphere.vmadmin.doc_41/vsp_vm_guide/working_with_ovf_templates/t_import_a_virtual_appliance.html\nHorizon Service Appliance Setup Power on the Horizon Service Appliance and open a VM Console. Set a password for both the root and sshuser. Then configure your networking.\nThis next step is crucial. Configure your hostname and it MUST match the organizational name you created in DNS earlier. Don\u0026rsquo;t think you can leave the hostname as is and just point the DNS record to this server. That won\u0026rsquo;t work, not that I did that or anything.\nSet the Time Zone on your appliance. This was another step I skipped and then later realized that time is obviously a crucial element in authentication. Go into the Time Zone settings and follow the prompts.\nHorizon Connector Appliance Setup The Horizon Connector Appliance is setup in a very similar fashion to the Service Appliance. Power on the appliance and then open a VM Console to begin the setup.\nCreate a root password (I know at this point you\u0026rsquo;re using the same password you used in the appliance but I won\u0026rsquo;t tell on you!)\nConfigure the networking settings just as you did for the Service Appliance (with a different IP Address of course). Be sure to enter the correct hostname here as well. Remember that you created the DNS records already.\nJust like you did with the Horizon Service Appliance, you must set the Time Zone when you are done.\nThe Appliances have now been setup. The next step is configuring them and you can find these instructions in my second post. /2012/01/vmware-horizon-install-guide-part-2 VMware Horizon Install Guide (part 2) VMware Horizon Install Guide (part 3) VMware Horizon Install Guide (part 4)\n","permalink":"https://theithollow.com/2013/01/28/vmware-horizon-install-guide-part-1/","summary":"\u003cp\u003eI recently decided to give VMware Horizon a shot and found the install to be a little confusing so this gives me a good opportunity to lay it all out so that others can try it out for themselves.  A big \u0026ldquo;thank you\u0026rdquo; goes out to Raj Jethnani for a helping hand with this post.  If you\u0026rsquo;d like to follow him on twitter his link is here: \u003ca href=\"https://twitter.com/rajtech\"\u003e@rajtech\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eFor those of you who don\u0026rsquo;t know, Horizon is a nifty SAAS platform for you to present thinapp applications too.  I could see many organizations benefit from this technology in the near future.\u003c/p\u003e","title":"VMware Horizon Install Guide (part 1)"},{"content":"VMware Horizon Install Guide (part 1) VMware Horizon Install Guide (part 3) VMware Horizon Install Guide (part 4)\nConfigure the Horizon Service Now that the appliances are setup, it\u0026rsquo;s time to get busy configuring them. Go to the web address of the Horizon service that you configured (from part 1)\nIn our case this was http://theithollow.hollow.lab\nThe first page isn\u0026rsquo;t very interesting, just begin the wizard.\nThe second page is almost less interesting, because you have to put in your license key that cost you money. It has to be done, so enter it here.\nCreate the organization. 
The name is going to match the DNS record we setup earlier for the service appliance. Create a password and choose a logo if you wish.\nThe next screen requires a token to be generated. There is an option to skip this and do it later, but my strong suggestion is to do this now and copy the token to notepad for use later. I was unable to find where I needed to generate a token when I skipped this step.\nBe sure to save this activation token someplace where you can retrieve it. It will be needed to correctly communicate with the connector appliance.\nThe initial service setup is complete! Well, kind of anyway.\nYou can now login to the operator dashboard.\nThe user is operator and the password, you setup in part 1. We\u0026rsquo;ll be moving on here though so this is a step for part 3.\nConfigure the Horizon Connector Let\u0026rsquo;s login to the Horizon Connector web portal to configure this. http://horizonconnector.hollow.lab in our case.\nSet a new password.\nRemember that activation code we got from the horizon service setup? Enter that activation code here and click next. Don\u0026rsquo;t select the use SSL box on this screen. Adding SSL certs is outside the scope of this post.\nNow we setup the connection to our Active Directory. Choose the directory type, and either an IP address or the FQDN of a Domain Controller. Depending on your setup you may need to select SSL here, but by default you don\u0026rsquo;t need it. If you\u0026rsquo;re unsure, leave it unchecked.\nThe Search attribute should be sAMAccountName.\nThe Base DN is where you want to start the directory searches from. In my case I used the top of my domain tree dc=hollow,dc=lab\nNext up we need to choose a Bind DN. This is an account that is able to read Active Directory. This does not need to be an administrator account, but it is in my case. You might use a service account to keep password changes from breaking this.\nBind password, is the password for the Bind account.\nWe\u0026rsquo;ll begin the Setup wizard in the next post!\nVMware Horizon Install Guide (part 3)\n","permalink":"https://theithollow.com/2013/01/28/vmware-horizon-install-guide-part-2/","summary":"\u003cp\u003e\u003ca href=\"/2013/01/VMware-horizon-install-guide-part-1\"\u003eVMware Horizon Install Guide (part 1)\u003c/a\u003e \u003ca href=\"/2013/01/vmware-horizon-install-guide-part-3\"\u003eVMware Horizon Install Guide (part 3)\u003c/a\u003e \u003ca href=\"/2013/01/vmware-horizon-install-guide-part-4\"\u003eVMware Horizon Install Guide (part 4)\u003c/a\u003e\u003c/p\u003e\n\u003ch2 id=\"configure-the-horizon-service\"\u003eConfigure the Horizon Service\u003c/h2\u003e\n\u003cp\u003eNow that the appliances are setup, it\u0026rsquo;s time to get busy configuring them.  Go to the web address of the Horizon service that you configured (from part 1)\u003c/p\u003e\n\u003cp\u003eIn our case this was \u003ca href=\"http://theithollow.hollow.lab\"\u003ehttp://theithollow.hollow.lab\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThe first page isn\u0026rsquo;t very interesting, just begin the wizard.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"/?attachment_id=1436\"\u003e\u003cimg alt=\"service4\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/01/service4.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThe second page is almost less interesting, because you have to put in your license key that cost you money.  
It has to be done, so enter it here.\u003c/p\u003e","title":"VMware Horizon Install Guide (part 2)"},{"content":"VMware Horizon Install Guide (part 1) VMware Horizon Install Guide (part 2) VMware Horizon Install Guide (part 4)\nIn the last part we configured the Horizon Connector, now we\u0026rsquo;re going to run the setup wizard in order to assign applications and select users.\nHorizon Connector Setup Wizard Let\u0026rsquo;s begin the setup wizard\nJoin the domain. Enter the domain name and enter the username and password that has permissions to add a machine to the domain.\nChoose to enable Windows Authentication.\nDo not select use SSL. Enter the Internal name for the connector. We used horizonconnector.hollow.lab\nNow we enter the external name. We used theithollow.hollow.lab. Note that his isn\u0026rsquo;t really an external name, but it does match the DNS entries we created in Part1. If you are using a real external name, make sure the DNS entries match this name.\nOnce you enter the external host name, click the generate SSL certificate. Once that\u0026rsquo;s done you will want to copy the SSL Certificate information. Save it to notepad and save it as a .cer file. You will need this in part 4 of this post.\nEnable Windows apps and point the path to a file share with the thinapps. Note: Your apps need to be in individual folders. Don\u0026rsquo;t throw all the apps into the root directory.\nYou can click next on this screen. This is why your horizon users must have a first name, last name and email address listed in active directory.\nUser selection is done next. Enter a DN or multiple DNs that contain your horizon users.\nYou can skip this screen if you want, or add some groups.\nConfigure how often you want to sync with Active Directory.\nView your users and click save.\nYour setup is complete. Now we can login to the horizon.\n","permalink":"https://theithollow.com/2013/01/28/vmware-horizon-install-guide-part-3/","summary":"\u003cp\u003e\u003ca href=\"/2013/01/VMware-horizon-install-guide-part-1\"\u003eVMware Horizon Install Guide (part 1)\u003c/a\u003e \u003ca href=\"/2013/01/vmware-horizon-install-guide-part-2\"\u003eVMware Horizon Install Guide (part 2)\u003c/a\u003e \u003ca href=\"/2013/01/vmware-horizon-install-guide-part-4\"\u003eVMware Horizon Install Guide (part 4)\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eIn the last part we configured the Horizon Connector, now we\u0026rsquo;re going to run the setup wizard in order to assign applications and select users.\u003c/p\u003e\n\u003ch2 id=\"horizon-connector-setup-wizard\"\u003eHorizon Connector Setup Wizard\u003c/h2\u003e\n\u003cp\u003eLet\u0026rsquo;s begin the setup wizard\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"/?attachment_id=1417\"\u003e\u003cimg alt=\"connector4\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/01/connector4.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eJoin the domain.  
Enter the domain name and enter the username and password that has permissions to add a machine to the domain.\u003c/p\u003e","title":"VMware Horizon Install Guide (part 3)"},{"content":"VMware Horizon Install Guide (part 1) VMware Horizon Install Guide (part 2) VMware Horizon Install Guide (part 3)\nIn the last part, we finished setting up the basics of Horizon Connector and the Service Portal.\nNow we can login to the Service Portal to assign our applications.\nApplication Prerequisites A couple of more prerequisites should be done before we log in.\nFirst, install the Horizon agent on all (or a few test machines) clients that will be using the thinapp packages that are published with Horizon.\nDuring the install make sure to enter the horizon service portal url (http://theithollow.hollow.lab)\nYou\u0026rsquo;ll see the agent on the bottom right hand corner of your task bar.\nNext up, make sure that you have added both the horizon connector and the horizon service urls to your local intranet sites in Internet Explorer.\nLastly, since we\u0026rsquo;re not using a trusted certificate authority in this post, we\u0026rsquo;ll need to import the Horizon.cer file that we created earlier. Right click on the .cer file and choose \u0026ldquo;Install Certificate\u0026rdquo;. Add this to the Enterprise Trust keystore.\nManage Users and Apps Login to the Service Portal that was configured in the first 3 parts. http://theithollow.hollow.lab in our example.\nYou will notice our sweet logo, but also that you have no applications listed. Since you\u0026rsquo;ve logged in with an Administrative Account, you can click Admin at the top left corner.\nGo to the Users and Group tab at the top of the page. You\u0026rsquo;ll notice that your users have no application enrollments. Click Add on the right side of the page.\nChoose which thinapp package to add. The deployment can either be automatic or user defined. I\u0026rsquo;ve chosen automatic so that the apps are immediately shown to users.\nIf we go back and login again, we\u0026rsquo;ll now see our thinapp package on the front page. Any users who\u0026rsquo;ve had this app published to them can then stream the app.\nFrom here, you can open your apps from the Horizon Agent as well. A folder should be put on the desktop of the clients and their apps should appear. Users may think they\u0026rsquo;re launching local applications. Pretty sweet!\nYou can even publish your apps to mobile devices such as Android and iPhones.\nhttps://play.google.com/store/apps/details?id=com.vmware.horizon.android\u0026amp;hl=en https://itunes.apple.com/us/app/vmware-horizon-workspace/id582810532?mt=8\nTroubleshooting I had several issues getting all of this to work correctly on my first install. Hopefully some of these tips will help out.\nBe sure to always use the DNS names of your appliances. IP addresses can get you into trouble. Check the time synchronization on your appliances. If users are missing applications, make sure they have permissions to the thinapps repository that you pointed Horizon too. If you want everyone to be able to run these, add \u0026ldquo;Authenticated Users\u0026rdquo; to the share. Only read permission should be necessary. If apps are still not appearing, right click on the Horizon Agent on the task bar and choose \u0026ldquo;Sync Now\u0026rdquo; Make sure the Bind User is in the list of Horizon Users that can use the site. 
Make sure the URLs are in the trusted sites list Make sure to import the certificate into the Enterprise Trust keystore I hope this post was useful to you. I think it could be a very valuable product to distribute apps to end users!\n","permalink":"https://theithollow.com/2013/01/28/vmware-horizon-install-guide-part-4/","summary":"\u003cp\u003e\u003ca href=\"/2013/01/VMware-horizon-install-guide-part-1\"\u003eVMware Horizon Install Guide (part 1)\u003c/a\u003e \u003ca href=\"/2013/01/vmware-horizon-install-guide-part-2\"\u003eVMware Horizon Install Guide (part 2)\u003c/a\u003e \u003ca href=\"/2013/01/vmware-horizon-install-guide-part-3\"\u003eVMware Horizon Install Guide (part 3)\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eIn the last part, we finished setting up the basics of Horizon Connector and the Service Portal.\u003c/p\u003e\n\u003cp\u003eNow we can login to the Service Portal to assign our applications.\u003c/p\u003e\n\u003ch2 id=\"application-prerequisites\"\u003eApplication Prerequisites\u003c/h2\u003e\n\u003cp\u003eA couple of more prerequisites should be done before we log in.\u003c/p\u003e\n\u003cp\u003eFirst, install the Horizon agent on all (or a few test machines) clients that will be using the thinapp packages that are published with Horizon.\u003c/p\u003e","title":"VMware Horizon Install Guide (part 4)"},{"content":"\nMany times I see new virtualization admins add too many vCPUs to virtual machines after they\u0026rsquo;ve converted their physical machines. I believe the reason for this is a simple misunderstanding that more is not always better in this case.\nWith physical servers, the more is better approach seems to work fine. If you have a quad core processor it\u0026rsquo;s better than a dual core and if you have a dual processor server it\u0026rsquo;s better than a single socket. When it comes to virtual machines extra processors can actually make a VM perform worse than having too few processors.\nTo explain why this is the case, I\u0026rsquo;ll use some graphics.\nFirst lets look at what happens when you have a virtual machine with a single virtual CPU on a physical host with multiple processors or cores.\nOn the right side we have physical CPUs. The CPUs in the blue box are not currently being used for any workloads. The CPUs in the red box are actively being used by another process. The virtual machine on the left side has 1 vCPU and when it needs to access the processor, can request processor time from either of the two available processors.\nThe next example shows the same scenario, except this time the virtual machine has two vCPUs. When it tries to request processor time, it can utilize both of the two available physical processors.\nThe next scenario shows where the problem occurs. Now we have the same virtual machine as the above example, with two vCPUs but now three of the four physical processors are already busy. Now when the virtual machine goes to request resources it has to wait for the physical processors to be available, even though there is an unused physical processor. This VM requires two processors to be available simultaneously. 
If we had a virtual machine with only one vCPU, it could run a process, but not a VM with two vCPUs.\nIf you were to run ESXTOP on the host that this was occurring, you might notice a high %RDY time.\nI guess the moral of this story is that sometimes you need additional processing power, but if you don\u0026rsquo;t need it, don\u0026rsquo;t assign it.\n","permalink":"https://theithollow.com/2013/01/21/the-effect-of-too-many-virtual-cpus/","summary":"\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2013/01/128874905223940199.jpg\"\u003e\u003cimg alt=\"128874905223940199\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/01/128874905223940199.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eMany times I see new virtualization admins add too many vCPUs to virtual machines after they\u0026rsquo;ve converted their physical machines.    I believe the reason for this is a simple misunderstanding that more is not always better in this case.\u003c/p\u003e\n\u003cp\u003eWith physical servers, the more is better approach seems to work fine.  If you have a quad core processor it\u0026rsquo;s better than a dual core and if you have a dual processor server it\u0026rsquo;s better than a single socket.  When it comes to virtual machines extra processors can actually make a VM perform worse than having too few processors.\u003c/p\u003e","title":"The Effect of Too Many Virtual CPUs"},{"content":"I decided to check out the new HP Performance Viewer and found it to be pretty useful. The appliance comes as an OVF so it\u0026rsquo;s great for importing into your vSphere environment. Once it\u0026rsquo;s installed you can go to the management URL and all you have to do is provide the name of the vCenter and login credentials. That\u0026rsquo;s all for the configuration!\nAt this point I\u0026rsquo;d give the appliance some time to gather statistics, but if you just can\u0026rsquo;t wait I\u0026rsquo;ll give you some of the details from my install.\nThe first screen I looked at showed my ESXi hosts based on the total amount of memory. (This screen reminded me of a VMTurbo Appliance I once download)\nYou can drill down into any one of those hosts to get configuration summaries and performance information.\nThese performance statistics can be shown on virtual machines, resources pools, clusters, data stores and hosts. I actually used it to troubleshoot some storage issues recently.\nIf you\u0026rsquo;d like to download the HP Performance Viewer, you can find it here: HP Performance Viewer Download\n","permalink":"https://theithollow.com/2013/01/14/hp-performance-viewer/","summary":"\u003cp\u003eI decided to check out the new HP Performance Viewer and found it to be pretty useful.  The appliance comes as an OVF so it\u0026rsquo;s great for importing into your vSphere environment.  Once it\u0026rsquo;s installed you can go to the management URL and all you have to do is provide the name of the vCenter and login credentials.  
That\u0026rsquo;s all for the configuration!\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://shanksnet.files.wordpress.com/2013/01/hp-perf1.png\"\u003e\u003cimg alt=\"hp-perf1\" loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2013/01/hp-perf1.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eAt this point I\u0026rsquo;d give the appliance some time to gather statistics, but if you just can\u0026rsquo;t wait I\u0026rsquo;ll give you some of the details from my install.\u003c/p\u003e","title":"HP Performance Viewer"},{"content":"I created this page to give quick links to various resources that are often used as a quick reference. Feel free to bookmark this page if it\u0026rsquo;s useful to you.\nVMware Reference Material ESXTOP Metrics and Usage - http://www.yellow-bricks.com/esxtop/ ESXCLI Commands - http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vcli.ref.doc_50%2Fesxcli_software.html PowerCLI Quick Reference - http://virtu-al.net/Downloads/PowerCLIQuickReference.pdf\nNetwork Reference Materials IPv4 Subnetting Chart - IPv4 Subnet Chart from Cisco IPv6 Subnetting Chart - IPv6 Subnet Chart from Crucial Cisco Command Reference - Cisco IOS Fundamentals Command reference Cisco Security Command Reference - Cisco IOS Security Command Reference HP Switch Command Line Reference - HP Switch CLI Reference\n","permalink":"https://theithollow.com/reference-material/","summary":"\u003cp\u003eI created this page to give quick links to various resources that are often used as a quick reference.  Feel free to bookmark this page if it\u0026rsquo;s useful to you.\u003c/p\u003e\n\u003ch1 id=\"vmware-reference-material\"\u003eVMware Reference Material\u003c/h1\u003e\n\u003cp\u003e\u003cstrong\u003eESXTOP Metrics and Usage\u003c/strong\u003e - \u003ca href=\"http://www.yellow-bricks.com/esxtop/\"\u003ehttp://www.yellow-bricks.com/esxtop/\u003c/a\u003e \u003cstrong\u003eESXCLI Commands\u003c/strong\u003e - \u003ca href=\"http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vcli.ref.doc_50%2Fesxcli_software.html\"\u003ehttp://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vcli.ref.doc_50%2Fesxcli_software.html\u003c/a\u003e \u003cstrong\u003ePowerCLI Quick Reference\u003c/strong\u003e - \u003ca href=\"http://virtu-al.net/Downloads/PowerCLIQuickReference.pdf\"\u003ehttp://virtu-al.net/Downloads/PowerCLIQuickReference.pdf\u003c/a\u003e\u003c/p\u003e\n\u003ch1 id=\"network-reference-materials\"\u003eNetwork Reference Materials\u003c/h1\u003e\n\u003cp\u003e\u003cstrong\u003eIPv4 Subnetting Chart\u003c/strong\u003e - \u003ca href=\"https://www.google.com/url?sa=t\u0026amp;rct=j\u0026amp;q=\u0026amp;esrc=s\u0026amp;source=web\u0026amp;cd=12\u0026amp;ved=0CHgQFjAL\u0026amp;url=https%3A%2F%2Flearningnetwork.cisco.com%2Fservlet%2FJiveServlet%2Fdownload%2F43496-7572%2FSubNet_Chat.doc\u0026amp;ei=gwzyUL38LOT42gXV6oCgDQ\u0026amp;usg=AFQjCNH3TQjj3sd9XLLGG_TD3XSMQf6E6w\u0026amp;sig2=0NLXvP9KLK5q7ONRQ4s8GQ\u0026amp;bvm=bv.1357700187,d.b2I\"\u003eIPv4 Subnet Chart from Cisco\u003c/a\u003e \u003cstrong\u003eIPv6 Subnetting Chart\u003c/strong\u003e - \u003ca href=\"http://www.crucial.com.au/blog/2011/04/15/ipv6-subnet-cheat-sheet-and-ipv6-cheat-sheet-reference/\"\u003eIPv6 Subnet Chart from Crucial\u003c/a\u003e \u003cstrong\u003eCisco Command Reference\u003c/strong\u003e - \u003ca href=\"http://www.cisco.com/en/US/docs/ios/fundamentals/command/reference/cf_cr.pdf\"\u003eCisco IOS Fundamentals Command reference\u003c/a\u003e \u003cstrong\u003eCisco Security Command 
Reference\u003c/strong\u003e - \u003ca href=\"http://www.cisco.com/en/US/docs/ios/security/command/reference/sec_book.html\"\u003eCisco IOS Security Command Reference\u003c/a\u003e \u003cstrong\u003eHP Switch Command Line Reference\u003c/strong\u003e - \u003ca href=\"http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00292569/c00292569.pdf\"\u003eHP Switch CLI Reference\u003c/a\u003e\u003c/p\u003e","title":"Reference Material"},{"content":" I recently took the VCAP5 - DCD exam and wanted to share my experience for anyone who is preparing for this exam as well.\nStudy In my opinion this is a fairly difficult exam to do any sort of preparation for. Most of my preparation was just every day design that I\u0026rsquo;ve acquired over the years. I think the biggest trick for a lot of administrators is to switch from a mode of thinking about things in a breakfix method, but rather as a holistic design methodology. I prefer to look at a design as a \u0026ldquo;pie in the sky\u0026rdquo; approach where I put all the best solutions I can come up with to meet a design requirement, and then start to modify those based on any constraints that might be known.\nIf you want some good study material for this Certification I would clearly start with the blueprint. VCAP5-DCD blueprint.\nIf you need more instruction about any of the specific areas or just find that you need a refresher I would suggest Nick Marshall\u0026rsquo;s site: VirtualNetworkDesign.com\nFormat I read a little bit about the format before my exam and found conflicting info about it. At the time of my exam, once you finish a question, you are NOT allowed to go back. There is no opportunity to mark questions for review and go back later. This does make the exam a bit difficult but I believe they changed the format to avoid some confusion with the design questions.\nMy test had 100 questions total and I had about 4 hours to complete them. Time was a factor which I\u0026rsquo;ll talk about later. Six of the questions were design questions which were a Visio type exercise. I highly recommend looking at a demo before the exam so you don\u0026rsquo;t spend time on learning the formats during the test. VCAP5-DCD EXAM UI Demo\nThe rest of the questions were a fairly even mix of multiple choice and drag and drop questions. These drag and drop questions would of course leave the caveat that not all of the answer may be used, and some may be used more than once.\nMy Tips Get a good nights sleep before the exam, and use the restroom before the test. This is a four hour exam and I needed most of that time to finish.\nCaffeine: Did I mention it\u0026rsquo;s four hours?\nExam Opinions I found the exam to be too long. The exam does make you think about design situations but it could be shortened and still be able to determine whether or not you know enough to posses a certification. By the time I was finishing the exam, I think I was clicking answers without really thinking too much about the questions.\nGood Luck to anyone sitting this exam. 
Hopefully my experience was useful to you.\n","permalink":"https://theithollow.com/2013/01/10/my-vmware-certified-advanced-professional-5-datacenter-design-experience/","summary":"\u003cp\u003e\u003ca href=\"/2013/01/my-vmware-certified-advanced-professional-5-datacenter-design-experience/vcap5-dcd/\"\u003e\u003cimg alt=\"VCAP5-DCD\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2013/01/VCAP5-DCD.jpg\"\u003e\u003c/a\u003e I recently took the VCAP5 - DCD exam and wanted to share my experience for anyone who is preparing   for this exam as well.\u003c/p\u003e\n\u003ch1 id=\"study\"\u003eStudy\u003c/h1\u003e\n\u003cp\u003eIn my opinion this is a fairly difficult exam to do any sort of preparation for.  Most of my preparation was just every day design that I\u0026rsquo;ve acquired over the years.  I think the biggest trick for a lot of administrators is to switch from a mode of thinking about things in a breakfix method, but rather as a holistic design methodology.  I prefer to look at a design as a \u0026ldquo;pie in the sky\u0026rdquo; approach where I put all the best solutions I can come up with to meet a design requirement, and then start to modify those based on any constraints that might be known.\u003c/p\u003e","title":"My VMware Certified Advanced Professional 5 - Datacenter Design Experience"},{"content":"I used to love the fact that with my old Netapp FAS2040 that I\u0026rsquo;d get a phone call about replacing a failed drive almost before I received the alert about the drive in the first place. Phone home seemed genius to me and as it turns out, Hewlett Packard has this capability for their equipment.\nFull disclosure: As many of you know, I currently work for an HP Partner so my advice may be a bit biased. I can tell you that I wouldn\u0026rsquo;t put a product on this site which I didn\u0026rsquo;t like so please don\u0026rsquo;t think that I\u0026rsquo;m just trying to push HP products. You may see more HP related articles from me, only because I encounter them more frequently than others.\nThe HP Phone home product is called \u0026ldquo;HP Insight Remote Support\u0026rdquo; and is free for download. This product really doesn\u0026rsquo;t do a whole lot in your environment, but does exactly what you\u0026rsquo;d expect it to do. It scans your environment, monitors your equipment for errors and uploads that data to HP. All of your serial numbers are uploaded so that HP can determine if the equipment is under contract or warranty and could automatically create a ticket for you if you wish.\nThe install is very straight forward; just download the product from https://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=REMOTESUPPORT and it\u0026rsquo;s a typical next, next finish install. Once the install is finished it will create a shortcut on your desktop to launch the tool in a browser menu.\nLog in with a user that has Admin rights on the server. This could be a domain account as long as it\u0026rsquo;s an admin on this particular machine.\nThe first time you log in, you can go through the wizards, but you can always skip these and do it manually if you\u0026rsquo;d rather.\nAdd some credentials that will be used to log into the equipment. You\u0026rsquo;ll notice that you have options for SANiQ (lefthand OS), SNMP, Telnet, http(s), Command view etc. Make sure to add credentials for iLO Remote Insight Board Command Language (RIBCL). 
These are the credentials used to log into iLO which are important to get the serial numbers and health of your systems. More than one credential could be used for these and you can specify an order to which you\u0026rsquo;d like to have them tried during the discovery process.\nThe next step is to determine what machines to scan. You can use Active Directory or IP Addresses.\nWait for the discovery process to finish finding your equipment.\nThe second wizard will be to setup the host. This includes things like making sure you have connectivity to HP\u0026rsquo;s site.\nThe next screen will want to know contact information. This will be used for ticketing and communication in the event of a hardware failure. You will also be asked to setup a site. You can have more than one site with different addresses. The addresses obviously are used for shipping information.\nNext you can decide to register your host with HP, and whether or not you want to be able to view your data on HP\u0026rsquo;s \u0026ldquo;Insight Online\u0026rdquo; portal.\nAn optional step is to add HP Partners to the portal, so in the event that warranties etc. are about to expire, the channel partners can get involved. If you\u0026rsquo;d rather, you can ignore this and leave only HP in the loop.\nWhen you\u0026rsquo;re finished, you\u0026rsquo;ll be able to see the list of devices and their status. The Gen8 servers are being monitored without any additional configuration. The older servers I needed to configure additional SNMP credentials before the \u0026ldquo;Monitoring and Collections\u0026rdquo; was working correctly.\nSome devices such as a P4800 will need additional contract information added. This is because the devices are all part of a package. If you go into the device and edit the warranty and Contract information, you can modify the SAID or CarePackID and it will begins showing the correct information again.\nLastly, now that you\u0026rsquo;re up and running, you can logon to the \u0026ldquo;Insight Online\u0026rdquo; portal and see the status of your devices as well.\nHP Insight Remote Support collects basic configuration information from your devices to help HP resolve problems more quickly and accurately. No business information is collected and the data is managed according to the HP Data Privacy policy. Configuration data collected includes:\n1. Server model\n2. Processor model \u0026amp; speed\n3. Storage capacity \u0026amp; speed\n4. Memory capacity \u0026amp; speed\n5. Firmware/BIOS\n6. Operating System\n7. HP Integrated Lights-Out (iLO) presence\n8. Power management\n","permalink":"https://theithollow.com/2013/01/07/hp-insight-remote-support/","summary":"\u003cp\u003eI used to love the fact that with my old Netapp FAS2040 that I\u0026rsquo;d get a phone call about replacing a failed drive almost before I received the alert about the drive in the first place.  Phone home seemed genius to me and as it turns out, Hewlett Packard has this capability for their equipment.\u003c/p\u003e\n\u003cp\u003eFull disclosure: As many of you know, I currently work for an HP Partner so my advice may be a bit biased.  I can tell you that I wouldn\u0026rsquo;t put a product on this site which I didn\u0026rsquo;t like so please don\u0026rsquo;t think that I\u0026rsquo;m just trying to push HP products.  
You may see more HP related articles from me, only because I encounter them more frequently than others.\u003c/p\u003e","title":"HP Insight Remote Support"},{"content":"In my last post I explained a memory reclamation technique called Transparent Page Sharing. This post is dedicated to the Balloon driver method.\nThe first thing to be clear about is that Memory Ballooning is a technique that is only engaged when the host is running low on physical memory. If you have a host with 60 GB of physical memory available and the virtual machines are only allocated a total of 30GB of memory, then you may never need to know what memory ballooning is all about. However if you are over committing your hosts then this is an important topic to review.\nMemory that is allocated to a virtual machine might not all be actively used. Think about it, if 4 GB is assigned to a machine, the applications may only be using 2GB of it actively. As far as an ESXi host is concerned though, 4GB of memory is basically off limits because it\u0026rsquo;s assigned it to a VM. VMware ballooning basically consists of the host asking for some of that memory back.\nRemember that one of the things we like most about virtualization is that the host doesn\u0026rsquo;t know what the guest OS is doing. At the same time, the guest OS doesn\u0026rsquo;t realize that it\u0026rsquo;s running inside of a virtual machine either. In order for the host to request memory back from the guest OS it needs to use the balloon driver (vmmemctl.sys) to communicate this information.\nWhen the ESXi host runs low on memory it uses the balloon driver to determine what memory the virtual machines can give up to prevent the host from paging to disk.\nFor more information about memory ballooning please check out the Memory Resource Management document put out by VMware. http://www.vmware.com/files/pdf/perf-vsphere-memory_management.pdf\n","permalink":"https://theithollow.com/2012/12/26/vmware-ballooning-explained/","summary":"\u003cp\u003eIn my last post I explained a memory reclamation technique called \u003ca href=\"/2012/12/memory-de-duplication-in-vmware/\"\u003eTransparent Page Sharing\u003c/a\u003e.  This post is dedicated to the Balloon driver method.\u003c/p\u003e\n\u003cp\u003eThe first thing to be clear about is that Memory Ballooning is a technique that is only engaged when the host is running low on physical memory.  If you have a host with 60 GB of physical memory available and the virtual machines are only allocated a total of 30GB of memory, then you may never need to know what memory ballooning is all about.  However if you are over committing your hosts then this is an important topic to review.\u003c/p\u003e","title":"VMware Ballooning explained"},{"content":"One of the companies I worked for got a Netapp filer and I loved the fact that it would dedupe the data that was sitting on disk. I got over 40% more storage just by having that sweet little feature on. I was thinking, \u0026ldquo;How awesome would it be to dedupe my memory?\u0026rdquo; Getting more memory out of my servers would be a nice thing. Well as it turns out, VMware does this already, but they call it \u0026ldquo;Transparent Page Sharing.\u0026rdquo;\nVMware uses a technique called Transparent Page Sharing, which looks at each of the virtual machines on a host and creates a hash on each of the memory pages. 
If the pages have an identical hash value as another page, from any virtual machine, the pages are de-duplicated by pointing the virtual machine pages at the same memory location on the physical host.\nFrom the picture above you can see that VMs have memory pages in the Host\u0026rsquo;s memory, as you would imagine, but the pages in blue share the same hash value and can then be shared. This is one of the mechanisms that VMware uses to allow you to over commit the memory. For instance it\u0026rsquo;s possible that the host in the above example has 4GB of RAM and each VM has 2GB of RAM. If 2GB of the RAM have identical hash values, it\u0026rsquo;s possible that the host\u0026rsquo;s memory could be overcommited by 2GB and still not swap. This is a pretty extreme example though. I think it would be quite difficult to reach the equivalent of the example that I provided.\nIn case you\u0026rsquo;re wondering what happens when one of the VMs needs to change the page that it\u0026rsquo;s sharing with other VMs; VMware uses a \u0026ldquo;Copy on Write\u0026rdquo; procedure where the memory location would be copied to a private memory location for the VM to write to.\n","permalink":"https://theithollow.com/2012/12/17/memory-de-duplication-in-vmware/","summary":"\u003cp\u003eOne of the companies I worked for got a Netapp filer and I loved the fact that it would dedupe the data that was sitting on disk.  I got over 40% more storage just by having that sweet little feature on.  I was thinking, \u0026ldquo;How awesome would it be to dedupe my memory?\u0026rdquo;  Getting more memory out of my servers would be a nice thing.  Well as it turns out, VMware does this already, but they call it \u0026ldquo;Transparent Page Sharing.\u0026rdquo;\u003c/p\u003e","title":"Memory De-duplication in VMware"},{"content":"Jumbo frames can be useful to optimize IP networks, especially in storage networking. This post should help to explain why using jumbo frames can be useful.\nI\u0026rsquo;m not Jumbo, I\u0026rsquo;m just big boned!\nFirst, let\u0026rsquo;s define what we mean by the term jumbo frame. As you can imagine it\u0026rsquo;s bigger than a normal frame.\nA Jumbo frame simply means any frame with an MTU larger than 1500 bytes. What exactly does that mean? To really understand that we need to look at an Ethernet frame. The diagram below shows a hastily thrown together Ethernet frame and most of the frame we\u0026rsquo;re not concerned with for this topic. Parts of the frame are used for determining where the frame is headed, where it came from and to make sure it arrived intact. The section we\u0026rsquo;re looking at is the \u0026ldquo;Data\u0026rdquo; or \u0026ldquo;Payload\u0026rdquo; section of the frame.\nNow we can see that every single frame is going to have the same structure, so they will all have a preamble, SFD, Destination, etc\u0026hellip; All of this is considered overhead in order to deliver the Data. So if a standard Ethernet frame is 1538 bytes and 1500 of those are the payload, we can calculate the efficiency.\nEfficiency = Data Size / Frame Size\nEfficiency = 1500 bytes / 1538 bytes\nEfficiency = 97.5%\nA Jumbo frame still has to have all of the same segments of an Ethernet frame, but we can increase the size of the Data section. 
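(If you want to try jumbo frames on an ESXi 5.x host before looking at the math, raising the MTU is only a couple of commands. This is a rough sketch, not a full procedure: vSwitch1, vmk1, and the target IP are example names, and every physical switch port in the path needs the larger MTU as well.)
[sourcecode language="PowerShell"]
# Raise the MTU on a standard vSwitch and a VMkernel port (names here are examples)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify end to end with a do-not-fragment ping; 8972 = 9000 minus IP and ICMP headers
vmkping -d -s 8972 192.168.10.50
[/sourcecode]
With the plumbing out of the way, the efficiency math shows why the bigger payload matters.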
If we change the Data section to 9000 bytes instead of 1500, we should get more efficiency.\nEfficiency = Data Size / Frame Size\nEfficiency = 9000 bytes / 9038 bytes\nEfficiency = 99.5%\nSo now it\u0026rsquo;s fairly easy to see that a larger payload can give you some benefits with large data transfers for things like storage networking. It should be noted that all devices in the same layer 2 network path need to be configured for the same size MTU. If one device, such as a switch, is using a 1500 MTU frame and your server sending frames to it at 9000 MTU the switch will drop the frame.\nAlso, if you are using jumbo frames accross networks via a router, you\u0026rsquo;ll want to have the same size MTU or you will have IP Fragmentation. IP Fragmentation in this case would take the 9000 MTU frame sent by the server and break it into smaller pieces in order to send the frame from the switch at 1500 MTU.\nIP fragmentation would cause something like the example below, where a single frame has to be broken up into smaller frames to reach it\u0026rsquo;s destination.\nThis IP fragmentation only happens inside of a router. Remember that all devices on the same layer 2 network must have the same MTU size. So the server and the router on the same Layer 2 network need to be the same size MTU, and the router to say another router need to have the same size MTU. That example would be below.\n","permalink":"https://theithollow.com/2012/12/11/jumbo-frames/","summary":"\u003cp\u003eJumbo frames can be useful to optimize IP networks, especially in storage networking.  This post should help to explain why using jumbo frames can be useful.\u003c/p\u003e\n\u003cfigure\u003e\n    \u003cimg loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2012/12/jumbo0.png\"\n         alt=\" I\u0026#39;m not Jumbo, I\u0026#39;m just big boned!\" width=\"354\"/\u003e \u003cfigcaption\u003e\n            \u003cp\u003eI\u0026rsquo;m not Jumbo, I\u0026rsquo;m just big boned!\u003c/p\u003e\n        \u003c/figcaption\u003e\n\u003c/figure\u003e\n\n\u003cp\u003eFirst, let\u0026rsquo;s define what we mean by the term jumbo frame.  As you can imagine it\u0026rsquo;s bigger than a normal frame.\u003c/p\u003e\n\u003cp\u003eA Jumbo frame simply means any frame with an MTU larger than 1500 bytes.  What exactly does that mean?  To really understand that we need to look at an Ethernet frame.   The diagram below shows a hastily thrown together Ethernet frame and most of the frame we\u0026rsquo;re not concerned with for this topic.  Parts of the frame are used for determining where the frame is headed, where it came from and to make sure it arrived intact.  The section we\u0026rsquo;re looking at is the \u0026ldquo;Data\u0026rdquo; or \u0026ldquo;Payload\u0026rdquo; section of the frame.\u003c/p\u003e","title":"Jumbo Frames"},{"content":"HP Enterprise class storage has just entered the mid range market. Today HP announced the HP 3PAR StoreServ 7000 class which includes two devices; the HP 3PAR 7200 and the HP 3PAR 7400. The 7200 starts at $25k for the 2U device and the 7400 (seen below) is less than $40K for a 4U device.\nI\u0026rsquo;m very excited about this announcement because now HP has a storage device with the features that everybody wants and it\u0026rsquo;s now affordable for a smaller sized organization. HP has seemingly targeted one of it\u0026rsquo;s own devices with this announcement (the HP EVA) since it has been very popular with the mid-range business. 
They\u0026rsquo;ve even included some tools to migrate data from the EVA to the new 3PAR. I seriously doubt that the EVA will entirely go away, but the new big brother is going to steal some of their thunder.\nIf you\u0026rsquo;re unfamiliar with some of the 3PAR features, it does thin provisioning via hardware enabled ASICs, which HP is touting can save 10X is storage space (I assume over non-thinned provisioned data. If you switch from an array that already thin provisions, I doubt this type of space savings will still be achievable.)\nThe coolest feature to me is the ability to \u0026ldquo;autonomically\u0026rdquo; move data around between disk types based on the data most utilized. It\u0026rsquo;s nice to have a storage system that performance tune all by itself. Now you can still setup your storage tiers, but don\u0026rsquo;t have to worry about watching utilization and moving it around all the time. HP 3PAR will do this for you.\nIf you\u0026rsquo;re in the market for some new storage, the HP 3PAR 7000 is certainly worth a look.\nTo get more information about the new devices please check out HP\u0026rsquo;s annoucement page, Calvin Zito\u0026rsquo;s blog or on twiter: @HPStorageGuy.\n","permalink":"https://theithollow.com/2012/12/03/hp-3par-for-midrange-business/","summary":"\u003cp\u003eHP Enterprise class storage has just entered the mid range market.  Today HP announced the HP 3PAR StoreServ 7000 class which includes two devices;  the HP 3PAR 7200 and the HP 3PAR 7400.   The 7200 starts at $25k for the 2U device and the 7400 (seen below) is less than $40K for a 4U device.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"/2012/12/03/hp-3par-for-midrange-business/attachment/7400/\"\u003e\u003cimg alt=\"7400\" loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2012/12/7400.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eI\u0026rsquo;m very excited about this announcement because now HP has a storage device with the features that everybody wants and it\u0026rsquo;s now affordable for a smaller sized organization.  HP has seemingly targeted one of it\u0026rsquo;s own devices with this announcement (the HP EVA) since it has been very popular with the mid-range business.  They\u0026rsquo;ve even included some tools to migrate data from the EVA to the new 3PAR.  I seriously doubt that the EVA will entirely go away, but the new big brother is going to steal some of their thunder.\u003c/p\u003e","title":"HP 3PAR for midrange business"},{"content":"When you team NICs together in ESXi 5 you can pick from a variety of load balancing techniques to determine how traffic should flow over the adapters. You might think that setting up software iSCSI initiators in ESXi would be done in a similar manner. Add a VMkernel to a vSwitch, add a couple of adapters and set a teamingfailover policy. It turns out that this is not the case. You could setup a software iSCSI initiator this way, but it won\u0026rsquo;t provide you the teaming or failover you\u0026rsquo;ve intended.\nWhen the software iSCSI initiator does a discovery it will use the first adapter it finds. Once it\u0026rsquo;s discovered a storage array, it continues to use this adapter for all traffic. If you want to have any load balancing, you\u0026rsquo;ll need to setup some port bindings.\nThe first thing to do is setup your vSwitch with a pair (or more) of network adapters. Next, you have to create not one VMkernel but two. Once done, it should look something similar to the picture below. 
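(Side note: if you prefer the ESXi 5.x shell, the binding steps in the rest of this walkthrough can also be scripted with esxcli. The sketch below is only an outline and assumes the software initiator shows up as vmhba33 and the two VMkernel ports are vmk2 and vmk3; substitute your own names.)
[sourcecode language="PowerShell"]
# Bind each VMkernel port to the software iSCSI adapter (adapter and port names are examples)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3

# Confirm the bindings, then rescan the adapter
esxcli iscsi networkportal list --adapter=vmhba33
esxcli storage core adapter rescan --adapter=vmhba33
[/sourcecode]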
I\u0026rsquo;m using a distributed switch so your setup might look slightly different.\nNext we need to modify the uplinks for each VMkernel port group. You must have one and only one active uplink for each iSCSI port group. I tend to leave my teaming policy alone on the storage vSwitch so I can use it for NFS or something else and still have standard failover, but you must override the failover settings for each of the iSCSI adapters.\nNotice that for the Active Uplinks I only have dvUplink1. The same thing needs to be done to the second VMkernel port group only this time it should have the Active Uplink set to dvUplink2 instead.\nOnce you\u0026rsquo;ve set your uplinks the next step is to bind the software iSCSI initiator to a port group. Right click on the Software iSCSI initiator and choose properties. GO into the Network Configuration Tab and click the Add\u0026hellip; button.\nYou can then select the port group to bind too. If you\u0026rsquo;ve setup the teaming incorrectly, you won\u0026rsquo;t be able to do this successfully.\nRescan the adapters and you\u0026rsquo;re ready to roll.\n","permalink":"https://theithollow.com/2012/12/03/software-iscsi-load-balancing-in-esxi-5/","summary":"\u003cp\u003eWhen you team NICs together in ESXi 5 you can pick from a variety of load balancing techniques to determine how traffic should flow over the adapters.  You might think that setting up software iSCSI initiators in ESXi would be done in a similar manner.  Add a VMkernel to a vSwitch, add a couple of adapters and set a teamingfailover policy.  It turns out that this is not the case.  You could setup a software iSCSI initiator this way, but it won\u0026rsquo;t provide you the teaming or failover you\u0026rsquo;ve intended.\u003c/p\u003e","title":"Software iSCSI load balancing in ESXi 5"},{"content":"I just found out that I\u0026rsquo;ve passed the VMware Certified Advanced Professional 5 - Datacenter Administration exam and wanted to share my experience.\nWhen I first set out to take on this exam, I was apprehensive about it because of the number of possible questions that could be asked on it. The blueprint was quite large and covered basically everything related to vSphere. I got some helpful advice from a friend who told me that instead of worrying about if I could pass the exam, think about it like vSphere Olympics. It\u0026rsquo;s a chance to show off how much you know. It was a subtle change, but a different mindset really helped me.\nThe exam is different from most of the multiple choice, drag and drop, choose the correct answer exams that I\u0026rsquo;ve been accustomed to scheduling. This is a live lab setup so that question 1 requires configurations to an actual vSphere environment and question 2 may be impacted by how you answer question 1. I must mention, that it is possible to break the lab environment so that you can\u0026rsquo;t perform certain tasks later on in the test. I made a mistake during my exam and thought that it would surely cost me a second try, but was able to score enough points anyway.\nOne of the other parts that I really liked about this exam format was that the official VMware documentation is available to be referenced during the test. I\u0026rsquo;ve often thought that most certifications test how much you can memorize and those questions don\u0026rsquo;t really test a real world scenario. Many times, if I know where to look up the information, I don\u0026rsquo;t bother memorizing it. 
The exam doesn\u0026rsquo;t allow for much time for each question however so you won\u0026rsquo;t be able to reference the documentation very often and still have time to answer the questions. It is nice to have it there in case you get stuck though.\nStudy Hard I used a variety of study guides when preparing for this test. Chris Wahl has an excellent checklist to use to organize your thoughts and determine your weak areas. http://wahlnetwork.com/2012/07/02/the-vcap5-dca-study-sheet/\nPatrick Kremer and Tim Antonowicz have some excellent suggestions on how to attack the questions. Things like number all the questions and categorizing them so that you can get the maximum amount of points. Tim Antonowicz\u0026rsquo;s blog - http://whiteboardninja.wordpress.com/2012/09/19/vcap5-dca-testing-strategy-and-tips/ and Patrick Kremer\u0026rsquo;s blog http://www.patrickkremer.com/?p=565\nThe guys over at the vBrownbag have provided some really great content for the VCAP5-DCA. I referenced several of the brownbags when I was unsure about content. This might be a great place to start studying each section. I also recommend checking out the brownbags on Wednesday nights.\nLastly I\u0026rsquo;ll mention that having a lab to test different scenarios in is a must. This exam requires a good deal of real world or at least lab experience to do well on. Just having book knowledge will likely mean that it will take you too long to accomplish the goals of each question. If you are looking for a fairly cheap way to build a lab, check out the post on my home lab.\nGood Luck to you! I really recommend trying this exam. The real world scenarios are very useful to a VMware administrator, and the knowledge that you gain while preparing to take the exam will undoubtedly make you a better technician. To me certifications are part of the job and a way to prove you know what you\u0026rsquo;re talking about, and at the same time studying for these exams helps to increase the depth of your knowledge.\n","permalink":"https://theithollow.com/2012/11/21/my-vcap5-dca-experience/","summary":"\u003cp\u003eI just found out that I\u0026rsquo;ve passed the VMware Certified Advanced Professional 5 - Datacenter Administration exam and wanted to share my experience.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://shanksnet.files.wordpress.com/2012/11/vcap5dca.jpg\"\u003e\u003cimg loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2012/11/vcap5dca.jpg\"\u003e\u003c/a\u003e When I first set out to take on this exam, I was apprehensive about it because of the number of possible questions that could be asked on it.  The \u003ca href=\"http://mylearn.vmware.com/register.cfm?course=139202\"\u003eblueprint\u003c/a\u003e was quite large and covered basically everything related to vSphere.  I got some helpful advice from a friend who told me that instead of worrying about if I could pass the exam, think about it like vSphere Olympics.  It\u0026rsquo;s a chance to show off how much you know.  It was a subtle change, but a different mindset really helped me.\u003c/p\u003e","title":"My VCAP5-DCA Experience"},{"content":"Over the past few weeks, I\u0026rsquo;ve been hearing a lot of customers having issues logging into vCenter after upgrading to vSphere 5.1. I upgraded the lab and had some issues as well, but was able to fix the issues and wanted to share what I\u0026rsquo;ve learned. As you may know version 5.1 of vSphere requires the SSO service to be installed before vCenter can be upgraded. 
SSO is required for this version and cannot be skipped.\nAfter my upgrade, I was unable to log into the vSphere client using my Active Directory credentials and received an error message that stated: \u0026ldquo;You do not have permission to login to the server\u0026rdquo;. If I went to the new vSphere web client at https://vcentername:9443/vsphere-client I wasn\u0026rsquo;t able to login either.\nTo fix this issue I logged into the vSphere web client again, but this time used the user: admin@system-domain. No, this is not a variable like admin@system-vmware.com or something. I then specified the SSO password that I entered during the installation of SSO for 5.1.\nOnce you\u0026rsquo;re logged in, you can go to SignOn and Discovery \u0026ndash;\u0026gt; Configuration.\nYou\u0026rsquo;ll need to add your identity source to the list.\nOnce you can see your Active Directory source listed, you can then add your groups that are allowed to login from AD.\nAt this point you should be able to select the SSO Users and Groups and then add the accounts that should be allowed to login to the clients.\nAs for why the issue happened to me in the first place, I\u0026rsquo;m not quite sure. Chris Wahl has a post about upgrading that explains that it\u0026rsquo;s very important for your PTR records to be setup correctly for your domain and the VMware best practices guide mentions this as well as time sync with the domain. In my case, neither of these seemed to be my issue. There may be another common cause for this that is affecting users, but I\u0026rsquo;m not aware of what it is. Hopefully this post explains how to work around the issue if it is affecting you. If you have more information about these SSO issues and how you fixed them, please post info in the comments.\n","permalink":"https://theithollow.com/2012/11/13/vsphere-5-1-sso-issues/","summary":"\u003cp\u003eOver the past few weeks, I\u0026rsquo;ve been hearing a lot of customers having issues logging into vCenter after upgrading to vSphere 5.1.  I upgraded the lab and had some issues as well, but was able to fix the issues and wanted to share what I\u0026rsquo;ve learned.  As you may know version 5.1 of vSphere requires the SSO service to be installed before vCenter can be upgraded.  SSO is required for this version and cannot be skipped.\u003c/p\u003e","title":"vSphere 5.1 SSO issues"},{"content":"I recently tried out the HP Insight Control plugin for vCenter and was very pleased about the added functionality that was provided in my vSphere client.\nhttp://h18013.www1.hp.com/products/servers/management/integration.html\nThis plugin gives you additional control of your HP servers and storage that are being used by your vSphere environment. Like other storage vendors, the install will configure your VASA plugin, and will also allow you to do things such as create datastores and snapshots on the storage array from the vSphere Client.\nThe best feature that is added with this plugin is end to end mapping of the networking. From portgroup to physical switch, including the HP Virtual Connect Modules if they exist. One of the biggest questions I hear from customers is how the virtual NIC\u0026rsquo;s map to a physical upstream switch using Virtual Connect Modules. This plugin does a nice job of mapping that information.\nIn addition, you can update firmware and get a full picture of what your ESXi host looks like.\nYou\u0026rsquo;ll also see that the number of sensors that are displayed dramatically increases. 
You should see details about all the little pieces of your hosts.\nThe overview page, will have little blue buttons at the bottom of the page that will allow you to connect directly to the iLO port of the host, the HP System Insight Management server, Onboard Administrator and any other services you may have.\nYou just have to ignore the fact that it displays \u0026ldquo;VMWare\u0026rdquo; instead of \u0026ldquo;VMware\u0026rdquo;. If that\u0026rsquo;s not a deal breaker for you, then check it out.\n","permalink":"https://theithollow.com/2012/11/05/hp-insight-control-for-vcenter/","summary":"\u003cp\u003eI recently tried out the HP Insight Control plugin for vCenter and was very pleased about the added functionality that was provided in my vSphere client.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://h18013.www1.hp.com/products/servers/management/integration.html\"\u003ehttp://h18013.www1.hp.com/products/servers/management/integration.html\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThis plugin gives you additional control of your HP servers and storage that are being used by your vSphere environment.  Like other storage vendors, the install will configure your VASA plugin, and will also allow you to do things such as create datastores and snapshots on the storage array from the vSphere Client.\u003c/p\u003e","title":"HP Insight Control for vCenter"},{"content":" I\u0026rsquo;ve written posts in the past regarding LUN masking on a storage array, but it is possible to mask a path directly from your vSphere environment. I feel that if at all possible the masking should be handled at array level because the array is closest to the disk. Let\u0026rsquo;s face it, if vSphere shouldn\u0026rsquo;t see a LUN for one reason or another, then why is the array presenting it in the first place?\nBut for the purposes of this post we\u0026rsquo;ll assume that we\u0026rsquo;re unable to contact the storage team or the array itself is somehow unable to mask a path. vSphere can mask a path for you in the Pluggable Storage Architecture.\nLet\u0026rsquo;s take a look at how this can be accomplished. In my home lab you can see that I have a small datastore named TEST_PSA which we\u0026rsquo;ll mask.\nIf we go into the properties we\u0026rsquo;ll see the runtime name, which is how I\u0026rsquo;ll identify this LUN to be masked. Please take note that there are more ways to identify devices other than the runtime name. You could for instance mask all paths that are presented from an EMC array by using a vendor identifier.\nYou can also gather more information from your devices by using the esxcli commands provided by VMware.\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] esxcli storage core device list [/sourcecode]\nNow we need to find an open claimrule to be used.\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] esxcli storage core claimrule list [/sourcecode]\nThe list shown above is a set of rules on what to do with storage devices that are found by ESXi. Since we want to mask a device, we need to add a new rule. 
I can see from this list that rule 102 is not being used so let\u0026rsquo;s use this.\nSo now let\u0026rsquo;s create our rule.\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] esxcli storage core claimrule add -r 102 -t location -C 0 -T 3 -L 0 -P MASK_PATH [/sourcecode]\nTo breakdown this command we added rule 102 (-r 102) type location (-t location) device C0:T3:L0 (-C 0 -T 3 -L0) and plugin MASK_PATH\nOnce the command has been run we can view the claimrules again and we\u0026rsquo;ll notice that a new rule has been added.\nThis is not the end of the configuration however, the class is of type \u0026ldquo;file\u0026rdquo;. This means the rule has been created, but not yet enabled. To get it up and working, we need to load the rules and then run them.\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] esxcli storage core claimrule load [/sourcecode]\nNow we can see that the class is of type \u0026ldquo;runtime\u0026rdquo; so it\u0026rsquo;s now active.\nRun the claimrules\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] esxcli storage core claimrule run [/sourcecode]\nI rebooted my ESXi host and when it came back up, it no longer claimed the device TEST_PSA as we had hoped.\n","permalink":"https://theithollow.com/2012/10/30/vmware-path-masking/","summary":"\u003cp\u003e\u003ca href=\"http://shanksnet.files.wordpress.com/2012/10/images.jpg\"\u003e\u003cimg loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2012/10/images.jpg\"\u003e\u003c/a\u003e  I\u0026rsquo;ve written posts in the past regarding \u003ca href=\"/2012/03/12/lun-masking-vs-zoning/\" title=\"Lun Masking vs Zoning\"\u003eLUN masking\u003c/a\u003e on a storage array, but it is possible to mask a path directly from your vSphere environment.  I feel that if at all possible the masking should be handled at array level because the array is closest to the disk.  Let\u0026rsquo;s face it, if vSphere shouldn\u0026rsquo;t see a LUN for one reason or another, then why is the array presenting it in the first place?\u003c/p\u003e","title":"VMware Path Masking"},{"content":"As you may well know, when installing VMware ESXi on an HP server, it is best practice to get a specific image of the hypervisor with the vendor\u0026rsquo;s drivers included. This will prevent issues such as having missing network cards once you\u0026rsquo;ve installed ESXi. But what about keeping the server up to date?\nMany companies update their servers on a monthly basis for compliance reasons or just best practices. It has been my experience that VMware patches are usually deployed at the same time. VMware Update Manager (VUM) can push updates to the ESXi hosts with the latest patches from VMware, but did you know that you can also use it to patch HP Drivers and CIM providers?\nOpen the VMware Update Manager plugin and go to the download settings. Choose \u0026ldquo;Add Download Source\u0026hellip;\u0026rdquo;\nAdd the Source URL http://vibsdepot.hp.com/index.xml and give it a description.\nWhen you\u0026rsquo;re done, you\u0026rsquo;ll see an additional download source added. You can then choose to download the updates immediately by hitting the download now button, or simply wait for the next scheduled download.\nObtaining the Updates is done. 
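As a quick aside, a host that isn't attached to Update Manager can usually pull the same HP bundles straight from that depot with esxcli. A rough sketch follows; it assumes the host has HTTP access to vibsdepot.hp.com, that the same index URL works as an esxcli depot on your build, and that the host is already in maintenance mode.
[sourcecode language="PowerShell"]
# List the HP VIBs published in the online depot (run from the ESXi shell)
esxcli software sources vib list --depot=http://vibsdepot.hp.com/index.xml

# Update any HP VIBs already installed on this host from that same depot
esxcli software vib update --depot=http://vibsdepot.hp.com/index.xml
[/sourcecode]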
Now, if you want to assign these updates to ESXi hosts, you\u0026rsquo;ll need to create a new baseline.\nWe would choose a dynamic baseline so that we can patch hosts on a monthly basis with any new patches.\nChoose the Hewlett-Packard Company and whatever VMware product\nChoose any patches to exclude from the baseline.\nAdd any additional patches.\nNow once you\u0026rsquo;ve created the baseline, you\u0026rsquo;ll need to attach the baseline to a host or cluster.\nIn case you\u0026rsquo;re wondering, yes you can use these vibs with a custom VMware Image. Chris Wahl has recently posted about doing just that.\nhttp://wahlnetwork.com/2012/10/22/utilizing-vsphere-esxi-image-builder-with-partner-software-depots/\n","permalink":"https://theithollow.com/2012/10/22/updating-hp-esxi-hosts-with-vum/","summary":"\u003cp\u003eAs you may well know, when installing VMware ESXi on an HP server, it is best practice to get a specific image of the hypervisor with the vendor\u0026rsquo;s drivers included.  This will prevent issues such as having missing network cards once you\u0026rsquo;ve installed ESXi.  But what about keeping the server up to date?\u003c/p\u003e\n\u003cp\u003eMany companies update their servers on a monthly basis for compliance reasons or just best practices.  It has been my experience that VMware patches are usually deployed at the same time.  VMware Update Manager (VUM) can push updates to the ESXi hosts with the latest patches from VMware, but did you know that you can also use it to patch HP Drivers and CIM providers?\u003c/p\u003e","title":"Updating HP ESXi Hosts with VUM"},{"content":"I\u0026rsquo;d been hearing a lot of buzz about PHD Virtual after seeing them at VMworld and the Chicago VMUG Conference and thought that I’d try them out. I was quite pleased with their product and recommend that you check them out if you’re looking for virtual backup solutions. I know that the big player in the market seems to be Veeam so if you\u0026rsquo;d like a comparison of features, check out this information from ITComparison.com to get a non-biased opinion.\nPHD was very easy to install. A standard installer executable for the machine you’ll be using to manage the backups, and then you need to import an OVF file. When you launch your PHD client for the first time, you’ll need to enter credentials for your vCenter server and a location for your backups to be stored. This can be a virtual disk attached to your virtual appliances, an NFS share, or a CIFS share.\nOnce your initial setup is done, you can start creating backups. Go to Jobs and click backup, which will take you through a wizard that will setup your backups.\nChoose your VM(s)\nChoose your appliance. (yes this is so that you can load balance between appliances)\nChoose either a Full backup, or incremental.\nSet your schedule\nAdd additional options such as adding retention policies.\nQuiesce vss aware applications andor truncate logs. 
This is a very nice feature if you need to back up a database application!\nReview your summary, and complete.\nYou can see your backups running and what kind of dedupe ratio you’re getting.\nThere are some other great features of this product such as replication, instant VM recovery which mounts the VM directly from backup storage, file recovery etc.\nWhat I found useful is the ability to migrate your settings from one appliance to another, by either using the migration wizard, or through a standard export/import process.\nIt’s a nice product and worth a look if you’re shopping for backup software.\n","permalink":"https://theithollow.com/2012/10/15/phd-virtual-review/","summary":"\u003cp\u003eI\u0026rsquo;d been hearing a lot of buzz about PHD Virtual after seeing them at VMworld and the Chicago VMUG Conference and thought that I’d try them out.  I was quite pleased with their product and recommend that you check them out if you’re looking for virtual backup solutions.  I know that the big player in the market seems to be Veeam so if you\u0026rsquo;d like a comparison of features, check out this information from \u003ca href=\"http://www.itcomparison.com/Backup/VirtualPHDvsVeeam/VirtualPHDvsVeeam.htm\" title=\"ITComparison.com\"\u003eITComparison.com\u003c/a\u003e to get a non-biased opinion.\u003c/p\u003e","title":"PHD Virtual Review"},{"content":"I know the subject of Microsoft licensing makes most administrators want to crawl under a desk and hide when the topic comes up, but it\u0026rsquo;s important to understand a few things if you\u0026rsquo;re going to be standing up a VMware View deployment, or any VDI project.\nDuring the install of a Windows 7 operating system you of course have to enter a license key. Once the OS has been installed and booted up, it needs to activate. Product activation is necessary so that Microsoft can make sure that the software is only installed on the number of PCs that were licensed to use it. This product activation can be done via a network connection or via telephone. When you enter your assigned volume license keys during installation you have two types of keys that you can enter:\nMultiple Activation Key (MAK)\nKey Management Server Keys (KMS)\nMultiple Activation Keys activate by communicating directly with Microsoft either over the Internet or via telephone. Microsoft then keeps track of the number of licenses that have been activated. The other option involves standing up your own Key Management Server which will handle the activation for you. A Key Management Server can be very useful if you don\u0026rsquo;t have Internet access for your desktops to activate with Microsoft. KMS is the only supported way to do product activations for a View deployment. http://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=1026556\nSpinning up VMs over and over again will require product activations, which will likely throw up a flag with Microsoft due to the number of activations coming from a single product key. MAK licensing does have limits.\nFrom the Microsoft volume licensing product activation FAQ: http://www.microsoft.com/Licensing/existing-customers/product-activation-faq.aspx\nAre there usage limits on MAKs? Yes. MAKs allow a predetermined number of activations. This number depends on the type of agreement you have.
The number of activations can be revised (at the request of the customer or of Microsoft) to accommodate your regular usage.\nYou can find the number of activations remaining on a MAK by going to the VLSC, or by using the Volume Activation Management Tool (VAMT). If the existing activation limit on your MAK is inadequate for your deployment, contact the Microsoft Activation Center.\nSetting up the KMS server is beyond the scope of this post, but it should be mentioned that once you\u0026rsquo;ve got the activation server you just need to enter your KMS code into the license key section during the Windows installation. If you\u0026rsquo;re looking for the KMS key for your operating system, you can find them on the technet site. http://technet.microsoft.com/en-us/library/ff793406.aspx\nIf you\u0026rsquo;ve just found this article and you\u0026rsquo;ve already created your golden image, there is good use. You can use slmgr to modify your product key.\nFirst check to see if your OS is using MAK by opening a command prompt and run slmgr /dli\nTo change your product key run slmgr /ipk KEY12-KEY12-KEY12-KEY12-KEY12\n","permalink":"https://theithollow.com/2012/10/08/microsoft-licensing-with-vmware-view-composer/","summary":"\u003cp\u003eI know the subject of Microsoft licensing makes most administrators want to crawl under a desk and hide when the topic comes up, but it\u0026rsquo;s important to understand a few things if you\u0026rsquo;re going to be standing up a VMware View deployment, or any VDI project.\u003c/p\u003e\n\u003cp\u003eDuring the install of a Windows 7 operating system you of course have to enter a license key.  Once the OS has been installed and booted up, it needs to activate.  Product activation is necessary so that Microsoft can make sure that the software is only installed on the number of PCs that were licensed to use it.  This product activation can be done via a network connection or via telephone.  When you enter your assigned volume license keys during installation you have two types of keys that you can enter:\u003c/p\u003e","title":"Microsoft Licensing with VMware View Composer"},{"content":"Recently my friend Raj (@rajtech) and I presented two sessions at the Chicago VMware Users Conference in Schaumburg, IL.\nI\u0026rsquo;ve promised several people that attended the event, that I\u0026rsquo;d post the slides. They are below.\nSRM Presentation Chicago VMUG Conference 2012 DS Presentation Chicago VMUG Conference 2012 ","permalink":"https://theithollow.com/2012/10/02/chicago-vmug-conference-2012-presentations/","summary":"\u003cp\u003eRecently my friend Raj (@rajtech) and I presented two sessions at the Chicago VMware Users Conference in Schaumburg, IL.\u003c/p\u003e\n\u003cp\u003eI\u0026rsquo;ve promised several people that attended the event, that I\u0026rsquo;d post the slides.  
They are below.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://shanksnet.files.wordpress.com/2012/10/new-merged-srm-session-presentation-v31.pdf\"\u003eSRM Presentation Chicago VMUG Conference 2012\u003c/a\u003e \u003ca href=\"http://shanksnet.files.wordpress.com/2012/10/srm.png\"\u003e\u003cimg loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2012/10/srm.png?w=300\"\u003e\u003c/a\u003e \u003ca href=\"http://shanksnet.files.wordpress.com/2012/10/new-vds-session-presentation-final.pdf\"\u003eDS Presentation Chicago VMUG Conference 2012\u003c/a\u003e \u003ca href=\"http://shanksnet.files.wordpress.com/2012/10/vds.png\"\u003e\u003cimg loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2012/10/vds.png?w=300\"\u003e\u003c/a\u003e\u003c/p\u003e","title":"Chicago VMUG Conference 2012 Presentations"},{"content":" I signed up for the VMware Users Group last year at VMworld. I don\u0026rsquo;t remember, why I decided to do it but I assume that it had something to do with a free T-shirt. Since then, I\u0026rsquo;ve been to several meetings, all at my local Chicago VMUG chapter www.chicagovmug.com . At first I was pretty skeptical, but it turns out it\u0026rsquo;s been one of the best things I\u0026rsquo;ve done for my career.\nObviously, VMUG is a great way to get some insight on how your colleagues use VMware to accomplish their work goals. It\u0026rsquo;s also a pretty good way to see some newer technologies and meet with vendors that you might not otherwise have an opportunity to see. I\u0026rsquo;ve seen some great training sessions as well, which has helped to expand my expertise in certain subjects.\nIt turns out however, that the most useful things I learned were when I really got involved in the meetings. I\u0026rsquo;ve spoken at a group meeting, and will be presenting two more sessions at the Chicago VMware Users Group Conference this week. I don\u0026rsquo;t necessarily enjoy speaking in front of large groups, but at a certain point I felt that I should share what I knew. (I just hope the other attendees find that information useful.) Speaking takes up a fair amount of time when you consider any research you do, and fact checking your slide deck, but I\u0026rsquo;ve learned a lot more from preparing to speak at a session, than listening to one.\nI\u0026rsquo;ve met some great people in the VMUG program that have plenty of experiences to share. I created this blog after some encouragement from some VMUG members. I can say for sure, that I\u0026rsquo;ve learned all kinds of things from posting on this blog. (gramar and speling excludeded) I recently switched jobs and could not believe the support and assistance I received from members of my local users groups.\nI guess the point of this post was to encourage others to join their local VMUG chapter, and not only be a member, but get involved. There are some really great benefits to these groups, and most of them are intangible. Put yourself out there and see what happens. You might learn something wonderful about yourself.\nTo get involved, go to http://www.vmug.com/l/pw/rs.\nIf you\u0026rsquo;re in the Chicago Area this week, make sure to register for the Chicago VMUG Conference, and come say hi. If you\u0026rsquo;ve gotten that far, make sure to check out the Session on SRM with Raj Jethnani and myself. 
\u0026ndash; No heckling.\n","permalink":"https://theithollow.com/2012/09/22/vmug-benefits/","summary":"\u003cp\u003e\u003ca href=\"http://shanksnet.files.wordpress.com/2012/09/vmuglogo.png\"\u003e\u003cimg loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2012/09/vmuglogo.png\"\u003e\u003c/a\u003e I signed up for the VMware Users Group last year at VMworld.  I don\u0026rsquo;t remember, why I decided to do it but I assume that it had something to do with a free T-shirt.  Since then, I\u0026rsquo;ve been to several meetings, all at my local Chicago VMUG chapter \u003ca href=\"http://www.chicagovmug.com\"\u003ewww.chicagovmug.com\u003c/a\u003e .  At first I was pretty skeptical, but it turns out it\u0026rsquo;s been one of the best things I\u0026rsquo;ve done for my career.\u003c/p\u003e","title":"VMUG Benefits"},{"content":"I recently got my hands on a pair of HP P4300s in the lab and wanted to see how the performance was with Network RAID. One of the most read posts on this site is on Understanding RAID Penalty and I was curious to see how Network RAID played into this equation.\nBasic Setup I have 2 HP P4300s, each with eight 15k SAS drives in a RAID 5 configuration. This means that I should have a total of 1400 RAW IOPS (8 disks * 175 IOPS) on each lefthand node. Since I have 2 of them, I\u0026rsquo;m calculating 2800 RAW IOPS. In order to get some real world functional IOPS, we\u0026rsquo;ll assume that we have 50% Reads and 50% Writes, and don\u0026rsquo;t forget to take out the RAID Penalty for the RAID 5. Let\u0026rsquo;s plug this into our Functional IOPS equation to get:\n2 Nodes * (1400 Raw IOPS * .5 / 4) + 2 Nodes * (1400 * .5) = 1750 IOPS\nSo now we\u0026rsquo;ve found what the total amount of IOPS we should get for a RAID 5 implementation of 2 HP Lefthand Nodes. I can assume that adding Network RAID 0 to this should be about this number, because traditional RAID 0 has a write penalty of 1. My quick ninja math tells me that dividing anything by 1 returns me the same number so we\u0026rsquo;re set.\nWhat about using Network RAID 10? Traditional RAID 10 has a write penalty of 2 so we must modify our calculations a bit to account for an additional write penalty.\n2 Nodes * (1400 Raw IOPS * .5 / 4 / 2 ) + 2 Nodes * (1400 * .5) = 1575 IOPS\nUp until this point, it\u0026rsquo;s all been theory. I was looking for some research on this and couldn\u0026rsquo;t find any, so I may be completely wrong here. If you have any different findings or data please feel free to share it. If I find out I\u0026rsquo;m posting bogus data here, I\u0026rsquo;ll remove this post.\nResults So I pulled out IOMETER and and set my worker thread to 64 outstanding IOs, a Maximum 8000000 Sectors (4Gb), and a 4k 50% Read scenario.\nNetwork RAID 0 Network RAID 10 Now clearly I\u0026rsquo;m getting more IOPS than I\u0026rsquo;d calculated, but that appears to be due to the controller cache on the P4300. You can however see that there was a pretty big difference in the number of IOPS that we got when using Network RAID 10 vs what we got when using Network RAID 0.\nIt\u0026rsquo;s seemed logical to me that the IOPS would go down, but I thought I\u0026rsquo;d put it in writing in case someone else was looking for similar information.\nI did graph out what several different RAID configuration should look like while using Network RAID. Having a traditional RAID 0 with any Network RAID has the biggest impact on IOPS. 
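If you want to reproduce numbers like these for your own node count and RAID levels (including the RAID 6 case mentioned next), the whole functional IOPS formula fits in a one-liner. This is just a sketch using the same 50/50 read/write assumption as above; plug in your own raw IOPS and penalties.
[sourcecode language="PowerShell"]
# Functional IOPS = nodes * (raw * write% / local RAID penalty / network RAID penalty) + nodes * (raw * read%)
# Example: two P4300 nodes, 1400 raw IOPS each, RAID 5 locally (penalty 4), Network RAID 10 (penalty 2)
awk 'BEGIN { nodes=2; raw=1400; rd=0.5; wr=0.5; lp=4; np=2; print nodes*(raw*wr/lp/np) + nodes*(raw*rd) }'
# Prints 1575, matching the Network RAID 10 estimate above
[/sourcecode]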
On the other hand, Traditional RAID 6 with Network RAID doesn\u0026rsquo;t affect performance as much.\nThe Y axis is IOPS, and the X Axis is the traditional RAID Penalty.\n","permalink":"https://theithollow.com/2012/09/16/network-raid-penalty/","summary":"\u003cp\u003eI recently got my hands on a pair of HP P4300s in the lab and wanted to see how the performance was with Network RAID.  One of the most read posts on this site is on \u003ca href=\"/2012/03/21/understanding-raid-penalty/\"\u003eUnderstanding RAID Penalty\u003c/a\u003e and I was curious to see how Network RAID played into this equation.\u003c/p\u003e\n\u003ch2 id=\"basic-setup\"\u003eBasic Setup\u003c/h2\u003e\n\u003cp\u003eI have 2 HP P4300s, each with eight 15k SAS drives in a RAID 5 configuration.  This means that I should have a total of 1400 RAW IOPS (8 disks * 175 IOPS) on each lefthand node.  Since I have 2 of them, I\u0026rsquo;m calculating 2800 RAW IOPS.  In order to get some real world functional IOPS, we\u0026rsquo;ll assume that we have 50% Reads and 50% Writes, and don\u0026rsquo;t forget to take out the RAID Penalty for the RAID 5.  Let\u0026rsquo;s plug this into our Functional IOPS equation to get:\u003c/p\u003e","title":"Network RAID Penalty"},{"content":"I have been recently thrown into the world of HP Storage, and have been trying to learn all of the storage techniques that are in the HP product line. I noticed that I couldn\u0026rsquo;t find anything that really did a compare and contrast of each of the products so I started to put one together. Anything I couldn\u0026rsquo;t understand, I asked a great guy named Calvin Zito (@hpstorageguy) to give me a hand with. He was more than gracious so follow him on Twitter.\nHere were the results. (sorry for the poor resolution. Click on the picture or the link to see it in a viewable webpage.\nHPStorageComparision @HPStorageGuy\u0026rsquo;s video blog \u0026ldquo;Around the Storage Block\u0026rdquo; on the X5000\nhttp://www.youtube.com/watch?v=x59luwirFKA\nAgain, thanks to Calvin for his help with this. Visit his blog at http://www.hp.com/storage/blogs ","permalink":"https://theithollow.com/2012/09/10/hp-storage-comparisons-sept-2012/","summary":"\u003cp\u003eI have been recently thrown into the world of HP Storage, and have been trying to learn all of the storage techniques that are in the HP product line.  I noticed that I couldn\u0026rsquo;t find anything that really did a compare and contrast of each of the products so I started to put one together.  Anything I couldn\u0026rsquo;t understand, I asked a great guy named Calvin Zito (@hpstorageguy) to give me a hand with.  He was more than gracious so follow him on Twitter.\u003c/p\u003e","title":"HP Storage Comparisons (Sept. 2012)"},{"content":"After attending VMworld this year, I decided I needed to try to understand VXLANs a little better. Based off of the basic concept that it stretches a layer two broadcast domain over layer three networks, I was worried that I knew how this was accomplished.\nWhat is VXLAN? VXLAN stands for Virtual Extensible LAN and is a fairly new method of making the datacenter network elastic. Suppose for example that you want to be able to move your virtual machines from your own server room to a co-location and then to a public cloud depending on what the load was on your environment. 
In order to do this without causing downtime, you\u0026rsquo;d need a way for your layer two ethernet frames to continue getting from your clients to your servers even, if a router is in that path.\nHow VXLANs work Lets remember that computers use Ethernet frames to communicate with one another on the same broadcast domain. If a two machines are separated by a router, they will need to use IP communications to exchange information. Switches will hold MAC addresses in a table in order to cut down on broadcasts. If machine A is trying to reach machine B and the MAC address is known, the switch will forward the frame out the corresponding switch port. If the MAC address is unknown it will flood the frame or broadcast it out all switch ports in order to find the correct MAC address. Routers will drop these broadcast frames.\nVXLANs work by encapsulating the Ethernet frames inside of an IP Packet. Encapsulation is certainly nothing new, VPNs use encapsulation to carry secured traffic between destinations, but VXLANs may change how LANs are thought of and built. Now if the MAC Address is not known by the switch (or the vSwitch) a multicast packet is sent to the router which can then be sent to other machines in the same VXLAN Segment.\nIn this first example, we see what happens when machine A tries to communicate with machine B and they are on the same VLAN but separated by a router. We see that once the ethernet broadcast gets to the router, the packet is dropped.\nIn the second example we assume that machine A and B are on the same VXLAN segment. We can assume that machine C is not on the same VXLAN Segment.\nVXLAN Benefits The benefits of VXLANs could be utilized in a variety of ways, including making public and private clouds into Hybrid Clouds that have endless resources. Say for instance that your private cloud has 100 GB of available memory to use. If this is being used up by production and more resources are needed, simply vMotion your virtual machines to a public cloud that you\u0026rsquo;re doing pay-per-use on. When you provision more memory or cut down on resources, then they can be moved back. If you\u0026rsquo;re using VXLANs then the vm you moved could be on the same layer two address which would be likely not even noticeable by users.\nVXLANS can also be very useful for multi-tenancy. Now the same IP addresses can exist on the same layer 2 netowork because they\u0026rsquo;re wrapped in an extra layer of encapsulation. Not only this, but in a multi-tenant environment you may find out that you\u0026rsquo;re quickly runnin out of VLANs. VXLANS can help solve this problem.\nVXLANs essentially allow you to modify your infrastructure quickly to meet operational needs, without having to wait to modify physical network resources.\nMy Thoughts The software designed networking that VMware has in mind with the VXLAN idea is certainly an interesting one and perhaps has merit in specific use cases. However, my thoughts are that it should only be used for the specific use cases. In my opinion multicast traffic for a datacenter shouldn\u0026rsquo;t be passed ove a WAN unless absolutely necessary. It will increase collisions over a likely smaller WAN connection. It also may cause some security problems since many packet inspection utilities aren\u0026rsquo;t ready for this type of encapsulation.\nPlease share your thoughts on the subject. 
I\u0026rsquo;d love to hear them.\n","permalink":"https://theithollow.com/2012/09/03/vxlans-a-good-idea/","summary":"\u003cp\u003eAfter attending VMworld this year, I decided I needed to try to understand VXLANs a little better.  Based off of the basic concept that it stretches a layer two broadcast domain over layer three networks, I was worried that I knew how this was accomplished.\u003c/p\u003e\n\u003ch2 id=\"what-is-vxlan\"\u003eWhat is VXLAN?\u003c/h2\u003e\n\u003cp\u003eVXLAN stands for Virtual Extensible LAN and is a fairly new method of making the datacenter network elastic.  Suppose for example that you want to be able to move your virtual machines from your own server room to a co-location and then to a public cloud depending on what the load was on your environment.  In order to do this without causing downtime, you\u0026rsquo;d need a way for your layer two ethernet frames to continue getting from your clients to your servers even, if a router is in that path.\u003c/p\u003e","title":"A Quick Thought on VXLANs"},{"content":" VMworld 2012 was in San Francisco this year and the weather was beautiful. San Francisco was a lovely host and the Moscone Center proved to be very capable of handling the large crowds that were around for the event.\nThe Solutions Exchange was massive. It included companies like HP, EMC, Netapp as well as some startup companies like Tintri, PHD Virtual and a very new Cloud Physics which was the talk of VMworld this year. Check them out at http://www.cloudphysics.com.\nWhen you\u0026rsquo;re tired of the Solutions Exchange, and that takes a while with the number of vendors there, there were some really great sessions presented by some of the best bloggers, VMware employees and vendors. The picture below is a session I attended with presenters Frank Denneman @frankdenneman and Rawlinson Rivera @PunchingClouds speaking about Resource Pools, a fairly misunderstood subject. This session made the top 10 of VMworld sessions this year. http://download3.vmware.com/vmworld/2012/top10/vsp1683.pdf\nSessions aren\u0026rsquo;t the only way to learn at VMworld. There were Hands On Labs to allow you to try out the technologies on your own. This year there were some technical glitches so the theme \u0026ldquo;Right Here Right Now\u0026rdquo; probably didn\u0026rsquo;t apply to the Hands On Labs part of the conference. There were some very long wait times periodically, but the labs were very good when they were working and I believe they may be available online after the event. The HOL included a zero client and two large monitors for every station. A great way to learn in my opinion. Cheers to the HOL team for all of their hard work no matter what the outcomes were.\nMany folks were unable to present sessions due to a limited number of slots open so the folks at the #vBrownbag crew got the #unconference going. Here at the \u0026ldquo;hang space\u0026rdquo; there were several presentations on different technologies as well as interviews and all kinds of shenanigans. Check out http://professionalvmware.com to see the updated content as well as the weekly brownbag that provides great content. Also the autolab is a great tool to grab from this site. Below you\u0026rsquo;ll see the stage setup at the hang space and a couple of knuckleheads from the vbrownbag crew sporting some theITHollow.com stickers. Also, I\u0026rsquo;m not sure those are their real name tags.\nIf you are thinking of going to VMworld next year, or even this year in Barcelona, I definitely recommend it. 
It\u0026rsquo;s a great way to meet other people in your industry, learn about some great products that you don\u0026rsquo;t get to see much and learn lots.\nP.S. If you\u0026rsquo;re going to head down to the Solutions Exchange, make sure you have plenty of space in your suitcase. You might end up with a lot of swag.\n","permalink":"https://theithollow.com/2012/09/01/vmworld-2012-right-here-right-now/","summary":"\u003cp\u003e\u003ca href=\"http://shanksnet.files.wordpress.com/2012/09/vmworld2012-1.jpg\"\u003e\u003cimg loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2012/09/vmworld2012-1.jpg?w=300\"\u003e\u003c/a\u003e VMworld 2012 was in San Francisco this year and the weather was beautiful.  San Francisco was a lovely host and the Moscone Center proved to be very capable of handling the large crowds that were around for the event.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://shanksnet.files.wordpress.com/2012/09/vmworld2012-2.jpg\"\u003e\u003cimg loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2012/09/vmworld2012-2.jpg?w=300\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eThe Solutions Exchange was massive.  It included companies like HP, EMC, Netapp as well as some startup companies like Tintri, PHD Virtual and a very new Cloud Physics which was the talk of VMworld this year.  Check them out at \u003ca href=\"http://www.cloudphysics.com\"\u003ehttp://www.cloudphysics.com\u003c/a\u003e.\u003c/p\u003e","title":"VMworld 2012 Right Here Right Now"},{"content":"I gave an overview of how HP blades are mapped to Virtual Connect Interconnect Modules in my last post. /2012/08/09/hp-virtual-connect-basics This post focus more on understanding the networks created through HP Virtual Connect Manager.\nIn the last post I described out blade NICs map to the Interconnect Bays in the back of an HP C7000 Chassis using the downlinks. Now let\u0026rsquo;s talk about how those NICs can get added to a specific Network. HP calls these networks inside of a c7000 chassis \u0026ldquo;vNets\u0026rdquo;.\nvNets allow the chassis to pass traffic between blades and to the external uplinks in the Virtual Connect Interconnet Modules. These vNets can be created by clicking on the \u0026ldquo;Ethernet Networks\u0026rdquo; link from VC. Here, you can setup your uplinks, give it a name, and a description. Think of this much like a switched network, if two machines are on the same network, then they can communicate with each other (of course with correct settings, and without firewalls, etc).\nLet\u0026rsquo;s look at an example of a set of networks we\u0026rsquo;d like to build in our c7000 Chassis. In this example we have four blades. All of them should connect to VLAN2 which is a vMotion, Live Migration or some sort of cluster heartbeat network. Two of our blades will need to connect to a network called VLAN1 (yes I know this is a security concern, it\u0026rsquo;s just an example) which is a network used by servers other than in our blade chassis. And lastly, we have two blades that need access to three other networks, such as VLAN3, VLAN4, VLAN5 which are also used outside of the blade chassis.\nThe picture below should give a good idea about we\u0026rsquo;re trying to build. As you can see, we have two vNets that have redundant uplinks, and a single vNet with zero uplinks.\nVLAN2 vNET There is no reason for that traffic to ever leave the chassis. Only the blades inside of our chassis would need to use this network. 
So we can create our network, call it VLAN2, don\u0026rsquo;t assign any uplinks and then in our server profiles, assign each blade to use this network. Obviously, this is only relevant if you have a single chassis. If there are multiple enclosures, then your vMotion or Live Migration Network (vNET2) would need to have uplinks to vmotion between the enclosures.\nVLAN1 vNET Next, we want to setup the VLAN1 vNET which does require access outside of the chassis. Let\u0026rsquo;s create another network, give it a name, and add some uplinks. Assign this server profile to blade1 and blade 2.\nVLAN3,4,5 vNET This network require a bit more configuration. In this case, we have one blade NIC, that is connected to a network with three different types of tagged packets that could be on it. In this case, we need to create a Shared Uplink Set (SUS). You can create a SUS by clicking the link on the left hand side of Virtual Connect Manager, right under the \u0026ldquo;Ethernet Networks\u0026rdquo; link we\u0026rsquo;ve been using in the previous examples.\nTo create a SUS, you give it a name and add uplinks like you would a regular vNet, but in this situation you also need to add the networks that will share this set of uplinks. In the case below I\u0026rsquo;ve added three networks and entered the associated VLAN ID.\nOnce you\u0026rsquo;ve created your SUS, you can go into your server profiles and choose \u0026ldquo;Multiple Networks\u0026rdquo; under the network name.\nWhen you select Multiple Networks, you\u0026rsquo;ll be brought to a screen to allow you to select which networks will be mapped to the NIC.\nWhen it\u0026rsquo;s all said and done, I have networks that look like this, and match the original design.\n","permalink":"https://theithollow.com/2012/08/14/hp-virtual-connect-networks/","summary":"\u003cp\u003eI gave an overview of how HP blades are mapped to Virtual Connect Interconnect Modules in my last post.  \u003ca href=\"/2012/08/09/hp-virtual-connect-basics\"\u003e/2012/08/09/hp-virtual-connect-basics\u003c/a\u003e  This post focus more on understanding the networks created through HP Virtual Connect Manager.\u003c/p\u003e\n\u003cp\u003eIn the last post I described out blade NICs map to the Interconnect Bays in the back of an HP C7000 Chassis using the downlinks.  Now let\u0026rsquo;s talk about how those NICs can get added to a specific Network.  HP calls these networks inside of a c7000 chassis \u0026ldquo;vNets\u0026rdquo;.\u003c/p\u003e","title":"HP Virtual Connect Networks"},{"content":"HP Virtual Connect is a great way to handle network setup for an HP Blade Chassis. When I first started with Virtual Connect it was very confusing for me to understand where everything was, and how the blades connected to the interconnect bays. This really is fairly simple, but might be confusing to anyone that\u0026rsquo;s new to this technology. Hopefully this post will give newcomers the tools they need to get started.\nDownlinks The HP interconnect modules have downlink and uplink ports. The uplink ports are pretty obvious, as they have a port on them that can be connected to a switch or another device. The downlink ports are less obvious. The downlinks exist between the interconnects and the blade bays. 
For example, in a c7000 chassis there are 16 server bays so an HP Flex-10 interconnect would have 16 downlink ports, one for each blade.\nIn the picture below of an HP VC Flex-10 Enet Module, there are 8 uplink ports, which are visible, as well as 16 downlink ports which are not visible, for a total of 24 ports.\nBlade Mapping Now that we\u0026rsquo;ve seen that each blade has connections to the interconnect via the downlink ports, lets take a closer look at how we see what NICs are mapped to which interconnect bay. HP Blades have two Lan On Motherboard (LOM) ports as well as room for two mezzanine cards. The mezzanine cards can contain a variety of different types of PCI devices, but in many cases they are populated with either NICS or HBAs.\nThe LOMs and Mezz Cards map in a specific order to the interconnect bays.\nLOM1 - Interconnect Bay 1\nLom2 - Interconnect Bay 2\nMezz1 - Interconnect Bay 3 (and 4 if it\u0026rsquo;s a dual port card)\nMezz2 - Interconnect Bay 5 (and 6 if it\u0026rsquo;s a dual port card, 7 and 8 if it\u0026rsquo;s a quad port card)\nThe picture below should help to understand how the HP Blades map to the interconnect bays. This example uses dual port mezzanine cards.\nLOM Ports with Flex-10 An additional thing can happen if you\u0026rsquo;ve got LOM FlexNICs as well as a Flex-10 Ethernet Module or Flex Fabric interconnect module. You can subdivide the LOM NICs into 4 Logical NICs. From here, your hypervisor or operating system will see 8 NICs instead of the original 2 NICs that would normally be there. This is an especially nice feature if you\u0026rsquo;re running virtualization, as you should now have plenty of network cards for vMotion, Fault Tolerance, Production Networks, and management networks.\nAs you can see from the following screenshot, the LOM NICs will be seperated into 4 Logical NICs labled 1-a, 1-b \u0026hellip; 2-d.\nI should also mention that if the interconnect modules are Flex Fabric, the LOM-1b and LOM-2b could be either an HBA or a NIC, your choice.\nI know that these concepts seem fairly straight forward now, but to a beginner this is some very useful information to get started with HP Virtual Connect. I hope to have some more blog posts in the future about configuring networking with Virtual Connect.\n","permalink":"https://theithollow.com/2012/08/09/hp-virtual-connect-basics/","summary":"\u003cp\u003eHP Virtual Connect is a great way to handle network setup for an HP Blade Chassis.  When I first started with Virtual Connect it was very confusing for me to understand where everything was, and how the blades connected to the interconnect bays.  This really is fairly simple, but might be confusing to anyone that\u0026rsquo;s new to this technology.  Hopefully this post will give newcomers the tools they need to get started.\u003c/p\u003e","title":"HP Virtual Connect Basics"},{"content":"I recently took a closer look at Veeam to do some replication work. I\u0026rsquo;ve used Veeam to do VMware backups, but never really considered it to do any replication work. Most of the time VMware Site Recovery Manager is my tool of choice to do replication if my storage array can\u0026rsquo;t do it. But Veeam makes a great alternative for doing replication.\nThe current version of Veeam can re-ip, run on a schedule, do bandwidth throttling, as well as remapping networks.\nThis brief post just shows how you can setup replication between two sites and seed the destination site via sneaker net. 
This should make things go much faster during your first time replicating the virtual machine to the secondary site.\nFirst thing\u0026rsquo;s first, create a backup using Veeam. When you\u0026rsquo;re done, you need to copy the backup files and transport them to the remote site. Put them in your datastore that will house your replicas from the primary site. Once this is done, lets create a new replication job.\nCreate a Replication Job Click on Replication Job and give it a name. The important thing, on this first screen is to select the Low Bandwidth setting. It will later allow you to choose a replica repository which is housing your Veeam Backup you\u0026rsquo;ve taken to the secondary site.\nOn the next screen you will select your virtual machine that should be replicated.\nThe next window will let you add more virtual machines as well as exclude VMs. Exclusions can be very handy if you select an entire cluster to replicate.\nThe following screen is where you\u0026rsquo;ll select your destinations.\nSetup the Replication Job Settings. Here you\u0026rsquo;re allowed to select source and destination proxy servers. These servers handle the transfers of data between sites. This is processor intensive so it\u0026rsquo;s a good idea to have several of these and balance your transfers. You\u0026rsquo;ll also be selecting the destination for where the replicas will be stored.\nFind the backup files that were moved to the secondary site. Choose the backup repository that holds the backup files. After this click detect to make sure that the VM can be matched up with the backup replica.\nAs one would expect, you are able to set a replication schedule for your job as well as setting time frames where the backup is allowed to run.\nOnce you\u0026rsquo;ve reviewed all of your settings and completed setting up the job, run it. You\u0026rsquo;ll see the VM show up in the inventory at the destination.\nConclusions Veeam Replication doesn\u0026rsquo;t do some things that VMware SRM does very nicely, like allowing you to push a button and spin up your DR Site, but this is a very nice tool if you don\u0026rsquo;t see the need to get the SRM licenses. With a little PowerCLI knowledge you could very easily power on you DR Site with a [double] click of a script.\nVeeam does have an advantage though, that you could replicate a single VM to multiple datacenters which is something that SRM has issues with.\nI\u0026rsquo;ll be using Veeam much more, now that I\u0026rsquo;ve seen what it can really do. There are alot of circumstances when I could see the Veeam Backup and Replication software be very useful.\n","permalink":"https://theithollow.com/2012/08/06/veeam-replication-for-vsphere/","summary":"\u003cp\u003eI recently took a closer look at Veeam to do some replication work.  I\u0026rsquo;ve used Veeam to do VMware backups, but never really considered it to do any replication work.  Most of the time VMware Site Recovery Manager is my tool of choice to do replication if my storage array can\u0026rsquo;t do it.  But Veeam makes a great alternative for doing replication.\u003c/p\u003e\n\u003cp\u003eThe current version of Veeam can re-ip, run on a schedule, do bandwidth throttling, as well as remapping networks.\u003c/p\u003e","title":"Veeam Replication for vSphere"},{"content":"Now that we\u0026rsquo;ve entered the virtualization age, we\u0026rsquo;ve become accustomed to moving workloads between hosts in order to get better performance. 
We\u0026rsquo;re so used to it, that VMware DRS will move workloads around automatically and many administrators don\u0026rsquo;t even care what host is running their virtual machines. Hosts are now more like a resource container, where we move our servers to the resource that is most available.\nVMware lets us take DRS one step further, where if we have extra resources available that aren\u0026rsquo;t being used, we can power off the hosts in order to save on power consumption. If we have 50 hosts running, but only using the resources of 30 of them, let\u0026rsquo;s power off the remaining 20 hosts to save on power and cooling. Over a year, these types of savings can really add up.\nSet up DPM The first step to setting up DPM should likely be to configure the power management settings on the host. Let\u0026rsquo;s think about it, if we\u0026rsquo;re going to be powering off servers, we\u0026rsquo;re also going to need a way to automatically start up the server so that it\u0026rsquo;s resources can be utilized again when needed.\nThere are three options to do this: Wake-on-LAN, IPMI, or iLO. If you are using either IPMI, or iLO you\u0026rsquo;ll need to configure the login informatin and IP Addresses of the Baseboard Management Controller. In the example below, I\u0026rsquo;m using iLO to connect to my host.\nGo to the Hosts \u0026ndash;\u0026gt; Configuration tab and select Power Management. Here you can set an IP Address and login info for each of your hosts.\nOnce your hosts have been configured then the cluster settings can be modified to utilize DPM.\nDPM Cluster Settings The DPM Cluster settings are much like DRS Settings. In fact they are under the vSphere DRS group in the cluster settings window. You can see that there is an off, Manual, or Automatic option, much like DRS has. \u0026ldquo;Off\u0026rdquo; does nothing with power management, \u0026ldquo;Manual\u0026rdquo; will make recommendations about what hosts can be powered off or should be powered on, and \u0026ldquo;Automatic\u0026rdquo; will power onoff machines by itself.\nYou can also set the threshold so that you can be more or less aggressive with your power saving options. If you are very conservative in your DPM Threshold, vCenter only powers on servers when needed, it will not power off hosts that aren\u0026rsquo;t being used. On the contrary, the most aggressive setting will power off servers if the resource utilization becomes lower than the target range, but will power on servers when needed.\nI would be careful using either of the extremes. Too low and you might not be powering off servers and getting any benefits of DPM, but too high and you might be powering servers off and on very often which might cause more power usage from spinning up servers over and over.\nDPM Host Options As you would expect, you can customize your DPM Settings per host. Go to the host options and set how you want your DPM Hosts to act with regards to power management. This is also a good place to see if you\u0026rsquo;ve tested putting a host into Standby mode already.\nEntering Standby Mode If you\u0026rsquo;ve set your DPM Cluster to Automatic, you\u0026rsquo;re basically done but if you want to test out your hosts, or you\u0026rsquo;ve put your hosts into manual mode you can always shut down your hosts from vCenter. 
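If you\u0026rsquo;d rather drive the same test from a prompt, here is a minimal PowerCLI sketch. The host name is made up, and I\u0026rsquo;m assuming the Suspend-VMHost and Start-VMHost cmdlets that ship with PowerCLI 5.x; standby still depends on working iLO/IPMI or Wake-on-LAN.
[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;]
# Connect to vCenter first, e.g. Connect-VIServer prodvcenter
# Put a host into standby mode (evacuate or power off its VMs beforehand)
Get-VMHost esx02.lab.local | Suspend-VMHost -Confirm:$false
# ...and later bring it back when the capacity is needed again
Start-VMHost -VMHost esx02.lab.local -Confirm:$false
[/sourcecode]
From the vSphere Client it\u0026rsquo;s just a couple of clicks.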
Right click the host and choose \u0026ldquo;Enter Standby Mode.\u0026rdquo; This will effectively migrate any virtual machines on the host and power off the server.\nWhen your server is powered off from using Standby Mode, it doesn\u0026rsquo;t appear disconnected in vCenter like it might if it was rebooted. Instead it shows up as a host that is in standby. This allows you to right click the host again and choose \u0026ldquo;Power On\u0026rdquo; to boot the host and get it ready for more workloads.\n","permalink":"https://theithollow.com/2012/07/31/vmware-dpm-green-datacenters/","summary":"\u003cp\u003eNow that we\u0026rsquo;ve entered the virtualization age, we\u0026rsquo;ve become accustomed to moving workloads between hosts in order to get better performance.  We\u0026rsquo;re so used to it, that VMware DRS will move workloads around automatically and many administrators don\u0026rsquo;t even care what host is running their virtual machines.  Hosts are now more like a resource container, where we move our servers to the resource that is most available.\u003c/p\u003e\n\u003cp\u003eVMware lets us take DRS one step further, where if we have extra resources available that aren\u0026rsquo;t being used, we can power off the hosts in order to save on power consumption.  If we have 50 hosts running, but only using the resources of 30 of them, let\u0026rsquo;s power off the remaining 20 hosts to save on power and cooling.  Over a year, these types of savings can really add up.\u003c/p\u003e","title":"VMware DPM Green Datacenters"},{"content":"Sometimes we need a quick set of statistics to see what is going on inside a vSphere host. Sort of like using Microsoft\u0026rsquo;s task manager on a Windows server, we can quickly take a look at what some performance stats on the VMware hosts. A couple of the tools to do this are the esxtop and resxtop commands.\nEsxtop and resxtop are basically the same with the exception that esxtop must be run directly on the vSphere host by connecting via SSH. Resxtop can be run remotely from the vMA perhaps. Below is a screenshot of the two tools running side by side. Aside from the refresh rates not being matched up, you can see that they are both showing the same information.\nOn the left is the esxtop and on the right the resxtop.\nOnce you\u0026rsquo;ve picked your method of logon, just run the esxtop or resxtop command to bring up the statistics. By default, it will show the CPU information in Interactive mode. To change what metrics you are looking at, you can press keys on your keyboard. If you hit the \u0026ldquo;h\u0026rdquo; key it will display the help which will allow you to see your other options.\nAs you can see you can also select:\nc: CPU\ni:interrupt\nm: memory\nn:network\nd: disk adapter\nu: disk device\nv:disk VM\np: power mgmt\nFor a description of each of the metrics I would suggest taking a look at Duncan Epping\u0026rsquo;s blog. He has put a lot of great information about esxtop on his site which comes in very handy. http://www.yellow-bricks.com/esxtop/\nYou can also look at the vSphere documentation ESXTOP Docs.\nIn addition to running esxtop in interactive mode, it can also be run in both batch mode, and replay mode.\nBatch Mode Batch mode can be called by running (r)esxtop -B -d 5 -n 5 \u0026gt; test.csv. The -d switch is used for the number of seconds between refreshes, the -n switch is the number of interations. So in the example I used, the esxtop command would run for about 25 seconds. 
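That 25 seconds is simply the 5 second delay multiplied by the 5 iterations. Because the output is an ordinary CSV file, you can pull it apart afterwards with plain PowerShell. Here is a minimal sketch, assuming the capture was copied to a hypothetical c:\captures\test.csv:
[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;]
# Load the esxtop batch capture
$stats = Import-Csv c:\captures\test.csv
# The esxtop column names are long, so list a few of them first to find the counters you care about
$stats | Get-Member -MemberType NoteProperty | Select-Object -ExpandProperty Name | Select-Object -First 20
# Then peek at the first few samples
$stats | Select-Object -First 5 | Format-List
[/sourcecode]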
Batch mode is great if you want to see some long term statistics or might want to run the esxtop command during the middle of the night when you\u0026rsquo;re trying to catch up on some sleep.\nReplay Mode Replay mode can be called by running (r)esxtop -R [vm-support directory].\nIn this situation you\u0026rsquo;re using the vm-support command to gather information about your ESX host. Once the vm-support file is created, you can run the esxtop command up against the vm-support file to see the performance metrics of the host, as if it were running. This might be very useful if you\u0026rsquo;re helping someone troubleshoot some performance issues but don\u0026rsquo;t have access to their system. They can run the vm-support command to gather the information necessary, and you can then run esxtop against that support file.\nI must say that I had several issues with getting this to work in my lab. I had several issues related to disk space and ended up running the vm-support command so that it would save the output in one of my datastores. The 5Gb of local storage I had for the ESXi host wasn\u0026rsquo;t enough for this support file. Beware, these support files can be quite large.\nRun the vm-support command first:\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;]vm-support -s -i 10 -d 60 \u0026gt;/vmfs/volumes/[datastore]/supportfile.tgz[/sourcecode]\nOnce the vm-support command is done (this may take some time, depending on how long the duration setting is) you need to extract the information from the tgz file to a directory. Try running:\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;]tar -zxvf supportfile.tgz[/sourcecode]\nfrom the /vmfs/volumes/[datastore]/ directory. This will create a new directory with all of the files from the tgz file. Now, finally, we can run esxtop -R against that new directory to replay the captured performance data.\n","permalink":"https://theithollow.com/2012/07/24/using-esxtop-and-resxtop-to-obtain-performance-metrics/","summary":"\u003cp\u003eSometimes we need a quick set of statistics to see what is going on inside a vSphere host. Sort of like using Microsoft\u0026rsquo;s task manager on a Windows server, we can quickly take a look at some performance stats on the VMware hosts. A couple of the tools to do this are the esxtop and resxtop commands.\u003c/p\u003e\n\u003cp\u003eEsxtop and resxtop are basically the same with the exception that esxtop must be run directly on the vSphere host by connecting via SSH. Resxtop can be run remotely from the vMA perhaps. Below is a screenshot of the two tools running side by side. Aside from the refresh rates not being matched up, you can see that they are both showing the same information.\u003c/p\u003e","title":"Using ESXTOP and RESXTOP to Obtain Performance Metrics"},{"content":"It\u0026rsquo;s a pretty common best practice to not install the Infrastructure Master (FSMO) Role on a Global Catalog Server. This post should help to explain why that is, and the circumstances where you can get away with it.\nGlobal Catalog Review A Global Catalog contains a full set of attributes for the domain that it\u0026rsquo;s a member of and a subset of information for all domains in its forest. So basically, what this means is that all of the little attributes that are stored on objects in Active Directory, in the GC\u0026rsquo;s domain, will be housed on Global Catalog servers. 
The global Catalog will also have a replica of the objects from other domains in the forest, but only a smaller set of their attributes.\nInfrastructure Master FSMO Role Review \u0026ldquo;Updates cross-domain references and phantoms from the global catalog.\u0026rdquo; To describe this a little better, lets have an example. If User1 was deleted from the domain, the Global Catalog Servers would almost immediately remove this object but if there were additional references in a different domain to this object a \u0026ldquo;phantom object\u0026rdquo; would be created as a sort of temporary placeholder. For instance if Group1 in domain2 has User1 in domain1 as a member, a phantom object would be created.\nThe Infrastructure Master periodically checks against the Global Catalogs to see if any \u0026ldquo;phantoms\u0026rdquo; exist and if so, cleans up the references and once done can remove the phantom.\nhttp://support.microsoft.com/kb/223346/en-us\nInfrastructure Master on Global Catalog Now that we have some background it\u0026rsquo;s much easier to see why the Infrastructure Manager shouldn\u0026rsquo;t be on a Global Catalog. Since Global Catalogs will have a reference to each object in the forest, the infrastructure master won\u0026rsquo;t see any phantoms and therefore will not update the rest of the Domain Controllers.\nThere are a few scenarios in which you can get away with having the Infrastructure Master on the GC:\nIn a single domain phantoms are not needed. In this instance the Infrastructure Master doesn\u0026rsquo;t have anything to do so it doesn\u0026rsquo;t matter where you put it. If every domain controller in a domain is also a Global Catalog Server there is also no work for the infrastructure Master to do. ","permalink":"https://theithollow.com/2012/07/16/infrastructure-master-with-global-catalogs-rundown/","summary":"\u003cp\u003eIt\u0026rsquo;s a pretty common best practice to not install the Infrastructure Master (FSMO) Role on a Global Catalog Server.  This post should help to explain why that is, and the circumstances where you can get away with it.\u003c/p\u003e\n\u003ch3 id=\"global-catalog-review\"\u003eGlobal Catalog Review\u003c/h3\u003e\n\u003cp\u003eA Global Catalog contains a full set of attributes for the domain that it\u0026rsquo;s a member of and a subset of information for all domains in its forest.  So basically, what this means is that all of the little attributes that are stored on objects in Active Directory, in the GC\u0026rsquo;s domain, will be housed on Global Catalog servers.  The global Catalog will also have a replica of the objects from other domains in the forest, but only a smaller set of their attributes.\u003c/p\u003e","title":"Infrastructure Master with Global Catalogs Rundown"},{"content":"While I was studying for the VCAP-DCA I realized that many people might not have access to a lab that includes the capability to do VMDirectPath I/O. My own lab is using nested ESXi hosts inside of VMware Workstation so I don\u0026rsquo;t have access to DirectPath either, but I was able to borrow some equipment in order to test my skills.\nIf you don\u0026rsquo;t have access to this type of equipment but want to study for the VCAP5-DCA, the below setup should suffice for you to learn it, as the setup is not very difficult.\nPrerequisites According to the vSphere 5 documentation there are several prerequisites that must be taken care of, before any VMDirectPath can be setup.\nThe virtual machine must be on hardware version 7 or later. 
Intel® Virtualization Technology for Directed I/O (VT-d) or AMD I/O Virtualization Technology (IOMMU) enabled in the BIOS of the host.\nPCI devices are connected to the host and marked for passthrough (we\u0026rsquo;ll do this in the example below).\nhttp://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc_50%2FGUID-7C9A1E23-7FCD-4295-9CB1-C932F2423C63.html\nLimitations VMDirectPath has a few limitations and some of them are fairly painful for someone who has gotten used to the benefits of vSphere. It\u0026rsquo;s my recommendation to not use VMDirectPath unless you absolutely need to do so for performance purposes.\nvMotion doesn\u0026rsquo;t work*\nNo DRS (obviously, since vMotion doesn\u0026rsquo;t work)\nHA won\u0026rsquo;t work\nVMware Snapshots are not supported\nYou can only use 6 PCI devices with VMDirectPath I/O\n* vMotion is supported in vSphere 5 if you are using the Cisco UCS Fabric Extender http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.networking.doc_50%2FGUID-F9E3ECA9-B00A-4C29-BDE3-D5418E922043.html\nSetup Open your vSphere Client, go to the host you want to configure DirectPath on, and go to the configuration tab. Click the \u0026ldquo;Advanced Settings\u0026rdquo; link under Hardware.\nIf this is the first time you\u0026rsquo;ve setup DirectPath I/O, you won\u0026rsquo;t see any devices listed. Click on the \u0026ldquo;Configure Passthrough\u0026hellip;\u0026rdquo; hyperlink to take you to a list of devices that you can pass through to a virtual machine.\nHere, you can pick from the PCI devices installed on your host machine.\nSelect the device that you\u0026rsquo;re configuring passthrough on and click OK. Note: if you pick a device that ESXi is already using, you\u0026rsquo;ll be presented with a warning message. This might be very useful in the event that you accidentally pick the wrong device.\nNow that you\u0026rsquo;ve picked your device, it should show up in the configuration tab again. However, in the case below, it requires a reboot of the host before it will be available to virtual machines.\nFast forward a bit after a host reboot and we\u0026rsquo;ll now see that the PCI device is ready to be used.\nThe last step to setup the passthrough is to add the device to a virtual machine. To do this we\u0026rsquo;ll add a device much like we add any other device to a VM. Right click the VM and choose edit settings. Then click Add\u0026hellip; and choose the PCI device. NOTE: your VM might need to be powered off to add a PCI device.\nAdd the PCI device that you configured earlier for passthrough and it should then be visible in device manager of the virtual machine.\n","permalink":"https://theithollow.com/2012/07/09/vmdirectpath-io-basic-setup/","summary":"\u003cp\u003eWhile I was studying for the VCAP-DCA I realized that many people might not have access to a lab that includes the capability to do VMDirectPath I/O. My own lab is using nested ESXi hosts inside of VMware Workstation so I don\u0026rsquo;t have access to DirectPath either, but I was able to borrow some equipment in order to test my skills.\u003c/p\u003e\n\u003cp\u003eIf you don\u0026rsquo;t have access to this type of equipment but want to study for the VCAP5-DCA, the below setup should suffice for you to learn it, as the setup is not very difficult.\u003c/p\u003e","title":"VMDirectPath I/O Basic Setup"},{"content":"I was recently helping a company attempt to upgrade their Netapp Filer from OnTap 7.3 over to Data OnTap 8. 
We ran the Netapp Upgrade advisor and got to a section that wanted us to run the AggrSpaceCheck tool to make sure that the aggregates had sufficient space available. Normally, I skip this step because I usually have plenty of space available, but in this particular case, some of the aggregates were already 99% full. Since we didn\u0026rsquo;t want to have a serious failure during our upgrade we decided to error on the side of caution (and best practices) and run the AggrSpaceCheck tool.\nWhen we ran the AggrSpaceCheck tool we started receiving this message.\nCould not retrieve Aggregate free space. Could not get any aggregates in this filer.\nThe message doesn\u0026rsquo;t really suggest that an error occurred, just that the tool couldn\u0026rsquo;t retrieve the free space. Upon further research I found that RSH is being used to connect to the filer, and RSH is not part of Windows 7 by default. RSH can be used if you have the Subsystem for Unix Applications feature installed, but my machine was Windows 7 Pro and it\u0026rsquo;s not available in that version. http://answers.microsoft.com/en-us/windows/forum/windows_7-windows_install/subsystem-for-unix-based-applications-sua-missing/43e67de2-4d65-e011-8dfc-68b599b31bf5?msgId=43b0ea44-5d65-e011-8dfc-68b599b31bf5\nNow what? Running a simple tool to determine if there is enough free space to perform an upgrade has become a large pain point. This is why I\u0026rsquo;d love to see Netapp come out with an AggrSpaceCheck 2.0. Perhaps it can use SSH by default, a powershell script, or even better, just be able to determine the space from the upgrade advisor.\nIf anyone more familiar with Netapp has any information about this tool, please comment about any known plans for this in the future, or perhaps any better ways to run the tool that might be out there. I\u0026rsquo;d love to see some comments.\nUsable Workaround with AggrSpaceCheck_1.0 on Windows 7 Pro Full Disclosure: This workaround was provided from a Netapp Forum https://forums.netapp.com/thread/30994 (NOW Account Required) A big thank you to yaoguang for his work on this issue. I\u0026rsquo;ve added his work below.\nTo use AggrSpaceCheck_1.0 on Windows 7 Pro\n1. Install Strawberry perl http://strawberryperl.com/\n2. Make the following modifications to the aggrspacecheck.pl file\nYou should only need to change the RSH to plink. And add –ssh with a pw sub setConfig {\nmy ($remCmd, $options);\n$PuserName = \u0026lsquo;root\u0026rsquo; unless $PuserName;\nunless ($^O =~ /MSWin32|linux/i) {\n$errorMsg = \u0026ldquo;Unsupported OS $^O. Only Windows and Linux are supportedn\u0026rdquo;;\nreturn -1;\n}\n$remCmd = \u0026quot; plink\u0026quot;;\nif ($^O =~ /MSWin32/i) {\n$options = \u0026ldquo;$PfilerName -ssh -l $PuserName -pw netapp123 \u0026ldquo;;\n=================================================================================================\n3. Place the plink.exe, aggrspacecheck.pl, and aggrspacecheck.exe in the strawberry directory.\n4. Run the following command: perl aggrspacecheck.pl –filer “FILERNAME”\n","permalink":"https://theithollow.com/2012/07/01/netapp-aggrspacecheck-2-0-needed/","summary":"\u003cp\u003eI was recently helping out a company attempt to upgrade their Netapp Filer from OnTap 7.3 over to Data OnTap 8.  We ran the Netapp Upgrade advisor and got to a section that wanted us to run the AggrSpaceCheck tool to make sure that the aggregates had sufficient space available.  
Normally, I skip this step because I usually have plenty of space available, but in this particular case, some of the aggregates were already 99% full.  Since we didn\u0026rsquo;t want to have a serious failure during our upgrade we decided to error on the side of caution (and best practices) and run the AggrSpaceCheck tool.\u003c/p\u003e","title":"Netapp AggrSpaceCheck 2.0 needed"},{"content":"Setting up a disaster recovery site can be a costly endeavor. VMware Site Recovery Manager has made disaster recovery much simpler, but it\u0026rsquo;s still expensive to get a DR site up and going. Rack space, power, cooling, bandwidth, storage and compute can all add up pretty quickly, not to mention that hopefully you\u0026rsquo;ll never have to use this equipment.\nReplication Bandwidth Bandwidth could be very expensive depending on how much data needs to be replicated. Consider some of these techniques to make the best use of your bandwidth.\nDo you need to replicate all of your production data? If not, select the most important servers to replicate and leave the labdev servers behind. It might not be ideal if a disaster happens, but at least the business can continue to run while you rebuild some lab servers. Consider using cheaper asynchronous bandwidth. For the most part, the DR site is going to be doing heavy downloads but not heavy uploads. This might allow you to use some cheaper DSL type services, but be weary; if you do have a disaster you\u0026rsquo;ll likely need to replicate that data back, which will be very difficult with this type of bandwidth. Try using a point to point type connection instead of a VPN Tunnel. VPN tunnels require encryption because they are using the public Internet to route traffic. If you have a private point to point connection you can save bandwidth by removing this encryption. Of course, consult your security managers first on this. Test a WAN optimizer. Buying extra equipment doesn\u0026rsquo;t seem like a way to save money, but it\u0026rsquo;s possible to get away with much less bandwidth if you have some optimized replication going. It may be cheaper to buy a couple of WAN optimizers as opposed to paying for higher bandwidth. Cut out unnecessary file replication. SQL TempDBs, page files and temp files don\u0026rsquo;t need to be replicated. If you can, try to setup your servers so that these files don\u0026rsquo;t need to be replicated. Hardware Costs It\u0026rsquo;s frustrating to buy expensive equipment for the disaster site, knowing that it might never be used. Here are a few options to get you started if you can\u0026rsquo;t bite the bullet and buy the hardware.\nInstead of using Array Based replication, try using the built in VMware Replication that is in SRM 5.0. A SAN may be one of the most expensive pieces of equipment in your datacenter. Especially if you\u0026rsquo;re paying for the maintenance contracts. Luckily SRM 5 has a way to do replication without have expensive storage. Use lower cost servers in the DR Site. It might be possible to use fewer servers, or less powerful servers as long as your users know that the performance might not be as good as it is in production. Again, this might not be ideal, but as long as you can still run the business the hard part is over. You can always buy some equipment to beef up the DR site once a disaster happens. Site Costs Just having a DR site can be the most difficult pill to swallow when it comes to paying the bills. Here are a few ideas that might help.\nDo you already have a second location? 
Use that for your DR Site instead of creating a third site for the DR location. There is no point in paying someone to host your data if you already have two sites. In fact, if you don\u0026rsquo;t have a second site already perhaps consider opening a second office as opposed to renting space from someone. If you\u0026rsquo;re going to need a second site anyway, maybe you can grow your business at the same time to help offset the costs. How do your users connect to the DR site if it needs to be used? If you have a business that needs people onsite it could be difficult to find a cheap enough place to host your data and your users, but if your users can just VPN into your network, all you\u0026rsquo;ll need is space for your server equipment. ","permalink":"https://theithollow.com/2012/06/22/lowering-disaster-recovery-costs-with-site-recovery-manager/","summary":"\u003cp\u003eSetting up a disaster recovery site can be a costly endeavor.  VMware Site Recovery Manager has made disaster recovery much simpler, but it\u0026rsquo;s still expensive to get a DR site up and going.  Rack space, power, cooling, bandwidth, storage and compute can all add up pretty quickly, not to mention that hopefully you\u0026rsquo;ll never have to use this equipment.\u003c/p\u003e\n\u003ch2 id=\"replication-bandwidth\"\u003eReplication Bandwidth\u003c/h2\u003e\n\u003cp\u003eBandwidth could be very expensive depending on how much data needs to be replicated.  Consider some of these techniques to make the best use of your bandwidth.\u003c/p\u003e","title":"Lowering Disaster Recovery Costs with Site Recovery Manager"},{"content":"I recently presented my current employers DR Strategy at the Chicago Vmug and had several comments about the gotchas section so I thought I\u0026rsquo;d get them on the blog for future reference.\nDuring our DR Test we found several items that need to be carefully considered when doing a failover to a secondary site. It is my hope that this post provides a good starting point for considering your own DR Strategy using VMware Site Recovery Manager.\nDefrags - If you defrag a virtual disk that is being replicated to your DR site, you may increase some performance on your production VM, but it also means that you\u0026rsquo;ll be replicating almost all of your blocks to the DR site over again. This might eat up your bandwidth quickly, especially if you\u0026rsquo;re doing defrags on multiple protected VMs.\nDatabase Maintenance - Along with defrags, you may have a very similar situation with databases. Re-indexing a database might cause a tremendous number of blocks to be changed, causing a lot of additional replication to take place. Perhaps instead of replicating your database with block level replication strategies, maybe try some sort of log shipping or database mirrioring techniques.\nSnapshots - If you\u0026rsquo;re keeping snapshots on your replicated datastore, those snapshots are being replicated to the DR site as well. Maybe this is intended, but be careful to keep track of the size and number of snapshots you\u0026rsquo;re keeping.\nDatabase File Replication - Make sure to replicate your database files at the same time. You don\u0026rsquo;t want to get into a situation where you\u0026rsquo;re not replicating log files and the database at different times. You can get away with replicating these volumes at different times if you know what you\u0026rsquo;re doing, but be careful. 
It\u0026rsquo;s much simpler to make sure the LUNs are synching at same time.\nRun Books - SRM makes it very easy to do a failover to a secondary site by simply pushing a button, but a run book is still a good idea. Imagine a situation where the VMware guy isn\u0026rsquo;t able to make it to the office due to a disaster, but yet the company needs to fail over the servers. If there is a run book with a few simple instructions, almost anyone could perform a failover.\nDR Site Templates - If your production site is a \u0026ldquo;smoking hole\u0026rdquo; and you\u0026rsquo;ve failed over your production site successfully, you might be put in a situation where you need to build a VM, but if you didn\u0026rsquo;t replicate your templates over to the DR Site, you might be building a server from scratch.\nAsynchronous Bandwidth - Take care if you are using asynchronous bandwidth like a DSL connection. This might be a fairly cheap and useful alternative to replicate data to your DR site, but if a disaster occurs you might not have enough bandwidth in the opposite direction to get your data back to your new production site.\nStorage DRS - VMware has a nice feature to automatically distribute I/O over different datastores to improve performance, but don\u0026rsquo;t forget that if a VM is moved from a replicated volume to a non-replicated volume, it might break your recovery plan.\nPhone Tree - How do your users know that a disaster was declared? Maybe the disaster happened after hours and users need to be notified about a new method of logging in to work the next day. Consider a phone tree and an offsite website that can quickly be updated to give users instructions for logging in. This will save you time on your helpdesk that could be spent putting out fires.\n","permalink":"https://theithollow.com/2012/06/18/vmware-srm-gotchas/","summary":"\u003cp\u003eI recently presented my current employers DR Strategy at the \u003ca href=\"http://www.chicagovmug.com\"\u003eChicago Vmug\u003c/a\u003e and had several comments about the gotchas section so I thought I\u0026rsquo;d get them on the blog for future reference.\u003c/p\u003e\n\u003cp\u003eDuring our DR Test we found several items that need to be carefully considered when doing a failover to a secondary site.  It is my hope that this post provides a good starting point for considering your own DR Strategy using VMware Site Recovery Manager.\u003c/p\u003e","title":"VMware SRM Gotchas"},{"content":"vSphere AutoDeploy always seemed like a lot of work to setup just to deploy a few VMware hosts, but in my current job I don\u0026rsquo;t setup hosts very often. If you are constantly deploying new hosts to get out in front of performance issues, or are building a new datacenter and deploying many hosts at once, AutoDeploy can be a great way to get up and running quickly.\nPrerequisites In order to use AutoDeploy, you\u0026rsquo;ll first need vSphere5, the AutoDeploy Install (which is on the vCenter Media), the vSphere5 Offline Bundle, PowerCLI, a DHCP Server and a TFTP server for starters.\nInstall the AutoDeploy plugin and configure it so that it points to the correct vCenter instance. I installed my AutoDeploy instance on my vCenter Server.\nSetup your TFTP server and the root folder. I used a program called TFTPd32 and installed it on the vCenter Server. Be sure to open up the Windows Firewall for tftp if it\u0026rsquo;s running.\nOpen up the vCenter client and go to the AutoDeploy module. 
Download the tftp boot zip files to your root directory that was created in the previous step.\nSetup your DHCP server so that the scope your ESXi hosts will be on has options 66 and 67 set. For option 67 enter \u0026ldquo;undionly.kpxe.vmw-hardwired\u0026rdquo; and for option 66 enter the server name for your tftp server.\nLastly, place the ESXi vSphere offline bundle in your tftp root directory. Leave it in the zip file.\nDeploy Images To setup the images for the ESXi hosts to boot from, we need to do a little PowerCLI. Open up your PowerCLI console and connect to your vCenter server. Don\u0026rsquo;t forget to set your execution policy if you haven\u0026rsquo;t already.\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] set-executionpolicy remotesigned\nconnect-viserver prodvcenter [/sourcecode]\nFirst, we\u0026rsquo;ll setup the software depot. This is the actual hypervisor software that will be installed on the server.\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] add-esxsoftwaredepot c:\tftp\tftp-deploy\VMware-ESXi-5.0.0-469512-depot.zip [/sourcecode]\nOnce the software depot has been setup, we can run\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] Get-EsxImageProfile [/sourcecode]\nwhich will show us our installation options.\nI\u0026rsquo;ll be using the ESXi-5.0.0-469512-standard image profile because I want the option to install VMware Tools later.\nNow we need to setup a rule so that when the ESXi hosts boot up, they get the image that is designed for them. In my case I\u0026rsquo;ve setup a DHCP reservation for one of my hosts that is set to the IP address 192.168.160.200. We create a rule designed for this IP address and the software depot that I setup earlier. I\u0026rsquo;m also adding two items named AutoDeployProfile01 and AutoDeployCluster which adds a Host Profile and puts the host into the AutoDeploy Cluster I\u0026rsquo;ve created in vSphere.\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] New-DeployRule -Name \u0026quot;ProductionAutoDeploy\u0026quot; -Item \u0026quot;ESXi-5.0.0-469512-standard\u0026quot;, AutoDeployProfile01, AutoDeployCluster -Pattern \u0026quot;ipv4=192.168.160.200\u0026quot; [/sourcecode]\nNotice that the -Pattern option can be any of the following, to meet your needs:\nasset\ndomain\nhostname\nipv4\nmac\nmodel\noemstring\nserial\nuuid\nvendor\nThe command you have just entered has setup a deploy rule for what VMware calls \u0026ldquo;The Working set.\u0026rdquo; If you try to boot up your host at this point, you\u0026rsquo;ll see that the host can\u0026rsquo;t finish the boot process. A message like the one below will be displayed, showing the patterns that can be matched for the specific hardware that\u0026rsquo;s being used.\nTo finish the autodeploy setup, you must take your \u0026ldquo;working set\u0026rdquo; and make it the \u0026ldquo;Active Set.\u0026rdquo; To do this we run\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] Add-DeployRule -deployrule ProductionAutoDeploy [/sourcecode]\nWhen we boot up the host now we\u0026rsquo;ll see the ESXi hypervisor start to install and when it\u0026rsquo;s done the host will be added to our AutoDeploy Cluster.\nHost Profiles I won\u0026rsquo;t go into much detail in this post about Host Profiles, but for AutoDeploy to be truly effective a host profile is a must have. Just getting your hypervisor installed is nice, but in the end, you want to have your host added to a cluster, have the DNS settings configured, NTP configured, datastores setup, and virtual switches all setup. 
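If you want to script that piece as well, a rough PowerCLI sketch might look like the following. The reference host, profile, and cluster names here are hypothetical, and I\u0026rsquo;m assuming the New-VMHostProfile and Apply-VMHostProfile cmdlets from PowerCLI 5.x; the profile name should match the item referenced in your deploy rule.
[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;]
# Capture the settings of an already-configured reference host as a profile
New-VMHostProfile -Name AutoDeployProfile01 -ReferenceHost (Get-VMHost esx-ref01.lab.local) -Description \u0026quot;Profile attached by the AutoDeploy rule\u0026quot;
# Associate (without applying) the profile to the cluster that the deploy rule drops new hosts into
Apply-VMHostProfile -Entity (Get-Cluster AutoDeployCluster) -Profile (Get-VMHostProfile -Name AutoDeployProfile01) -AssociateOnly
[/sourcecode]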
In short, build a reference host, create a host profile with all of your desired settings, and attach that profile during your boot process.\nAdditional information\nhttp://www.vmware.com/files/pdf/products/vsphere/VMware-vSphere-Evaluation-Guide-4-Auto-Deploy.pdf\nhttp://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc_50%2FGUID-A69CCCEE-58DD-4D13-8596-4A03BD08C3A4.html\n","permalink":"https://theithollow.com/2012/06/05/vsphere-5-autodeploy-basics/","summary":"\u003cp\u003evSphere AutoDeploy always seemed like a lot of work to setup just to deploy a few VMware hosts, but in my current job I don\u0026rsquo;t setup hosts very often. If you are constantly deploying new hosts to get out in front of performance issues, or are building a new datacenter and deploying many hosts at once, AutoDeploy can be a great way to get up and running quickly.\u003c/p\u003e\n\u003ch2 id=\"prerequisites\"\u003ePrerequisites\u003c/h2\u003e\n\u003cp\u003eIn order to use AutoDeploy, you\u0026rsquo;ll first need vSphere5, the AutoDeploy Install (which is on the vCenter Media), the vSphere5 Offline Bundle, PowerCLI, a DHCP Server and a TFTP server for starters.\u003c/p\u003e","title":"vSphere 5 AutoDeploy Basics"},{"content":"You never know when you\u0026rsquo;ll need to script something and PowerCLI gives you the tools to do it. I decided to see if I could script some of the VMware Update Manager (VUM) tasks while I was reviewing section 5.2 of the VCAP5-DCA Beta Blueprint and found that the procedures were quite easy. My next thought was, \u0026ldquo;Why would I want to script this when I can use the GUI, and on top of that I can schedule scans and remediation already?\u0026rdquo; My answer was, \u0026ldquo;You never know.\u0026rdquo; Who knows when you\u0026rsquo;ll need to use the PowerCLI to accomplish a task. Maybe you\u0026rsquo;re scripting something so someone else can run it without really knowing how to perform the task, or you\u0026rsquo;re trying to get a report, or who knows.\nHere is a basic procedure to scan and remediate some ESX hosts.\nYou must have PowerCLI and the VUM cmdlets installed before you can run any of these commands.\nFirst, we\u0026rsquo;ll create a patch baseline. In this case the patch baseline will get dynamic updates that are for ESXi hosts, only critical VMware Updates, and released after May 1st 2012. We\u0026rsquo;ve named the baseline \u0026ldquo;PowerCLIBaseline.\u0026rdquo;\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] new-patchbaseline -dynamic -name PowerCLIBaseline -description \u0026quot;PowerCLI Baseline Example\u0026quot; -TargetType Host -SearchPatchVendor *VMware* -SearchPatchSeverity critical -SearchPatchStartDate 5.1.2012 [/sourcecode]\nJust to make sure everything works the same, we take a look in the GUI to make sure our baseline has been created.\nNext, we want to attach our newly created baseline to an object. In my example we\u0026rsquo;re attaching it to the Prod Cluster, but it could be attached to a virtual machine, a host, or a folder just like it could be from the GUI.\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] get-baseline PowerCLIBaseline | attach-baseline -Entity \u0026quot;Prod Cluster\u0026quot; [/sourcecode]\nOnce the attach-baseline has been run, we check the GUI to be sure that it\u0026rsquo;s attached.\nNow it\u0026rsquo;s time for a scan. 
We run the Scan-inventory cmdlet on the Prod Cluster.\n[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] Scan-inventory -entity \u0026ldquo;Prod Cluster\u0026rdquo; [/sourcecode]\nLuckily my Lab is not up to date and needs to be patched so that we can continue with our powerCLI scripting.\nTo show that you can run this on objects other than a Cluster, we\u0026rsquo;ll update only one of the two hosts. We remediate hosts 192.168.160.30. [sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;] get-baseline -name PowerCLIBaseline | Remediate-inventory -entity 192.168.160.30 [/sourcecode]\nNow we see that my host is up to date and my PowerCLIBaseline is now in compliance.\nIf you want more information about using the PowerCLI for VUM here are some additional links that might help.\nVMware PowerCLI VUM Documentation http://www.vexperienced.co.uk/2012/04/12/vcap5-dca-whats-new/\n","permalink":"https://theithollow.com/2012/05/31/using-powercli-for-vmware-update-manager/","summary":"\u003cp\u003eYou never know when you\u0026rsquo;ll need to script something and PowerCLI gives you the tools to do it.  I decided to see if I could script some of the VMware Update Manager (VUM) tasks while I was reviewing section 5.2 of the VCAP5-DCA Beta Blueprint and found that the procedures were quite easy.  My next thought was, \u0026ldquo;Why would I want to script this when I can use the GUI, and on top of that I can schedule scans and remediation already?\u0026rdquo;  My answer was, \u0026ldquo;You never know.\u0026rdquo;  Who knows when you\u0026rsquo;ll need to use the PowerCLI to accomplish a task.  Maybe, you\u0026rsquo;re scripting something so someone else can run it without really knowing how to perform the task, or you\u0026rsquo;re trying to get a report, or who knows.\u003c/p\u003e","title":"Using PowerCLI for VMware Update Manager"},{"content":"Unlike many operating systems, VMware ESXi gives you a nice tool to upgrade their hypervisor to the latest version. VMware Update Manager gives you the ability to grab the latest build and apply it to your existing ESXi hosts.\nI should mention that VMware Update Manager is not the only solution to upgrade your ESXi hosts. Hosts can also be upgraded manually by booting the host to the latest build and performing an upgrade, or by utilizing the new autodeploy features in vSphere 5. VMware Update Manager is a simple tool that can automate the installs on several hosts in sequence and is available with all editions of vSphere 5.\nDeploy a new Image To begin we access the Update Manager Utility by going to the Solutions and Applications section of the vSphere client. From there choose the ESXi Images tab to see what builds are available. In the example below there are no builds available, so we want to import the latest one. Choose the \u0026ldquo;Import ESXi Image\u0026hellip;\u0026rdquo; link to start the import process.\nFrom here, a wizard will start up asking for the location of your .iso file containing the latest build.\nWe will then check the box to Create a baseline using this image. This will be useful when we scan and deploy the image to hosts.\nNow we can see the latest image has been imported into VUM for deployment.\nNext go to the Hosts and Clusters view of the vSphere client and choose either a host or a cluster. Go to the VUM tab and the option to attach a baseline will be shown. 
Here, you can select the baseline that we created earlier and attach it to either a single host, or a group of hosts in the cluster.\nNext we will want to scan either our Cluster or a specific host for updates. This can be done by right clicking either the Cluster or host from the \u0026ldquo;Hosts and Clusters\u0026rdquo; view of the vSphere client. Scanning the cluster might be the most efficient way of scanning all of the hosts, but if you only plan to deploy the upgrade to one host, that option is available.\nOnce the scans have completed, the Upgrade can be deployed by again, right clicking on the Host or Cluster and choosing the \u0026ldquo;Remediate\u0026hellip;\u0026rdquo; option.\nA wizard will appear asking for which baseline to upgrade to.\nRead and approve the EULA.\nA message will then show up about rolling back to the previous version should the upgrade fail. There is an option to remove third-party software modules. Pay close attention to this screen if you are using any third-party plugins in ESXi.\nThe nice thing about VUM is that you can schedule your updates for after hours if you want to reduce the impact to production activities. Give your task a descriptive name for self documentation purposes and schedule when the update should be deployed.\nNext, there will be some options about the state of your virtual machines during the patching process. You can choose to change power state as well as retrying the update in case of a failure.\nLastly, the Cluster Remediation options will show up allowing you to temporarily modify Distributed Power Management (DPM) and High Availability admission control options as well as Fault Tolerance options during the patching process.\n","permalink":"https://theithollow.com/2012/05/29/upgrading-esxi-hosts-using-vmware-update-manager/","summary":"\u003cp\u003eUnlike many operating systems, VMware ESXi gives you a nice tool to upgrade their hypervisor to the latest version.  VMware Update Manager gives you the ability to grab the latest build and apply it to your existing ESXi hosts.\u003c/p\u003e\n\u003cp\u003eI should mention that VMware Update Manager is not the only solution to upgrade your ESXi hosts.  Hosts can also be upgraded manually by booting the host to the latest build and performing an upgrade, or by utilizing the new autodeploy features in vSphere 5.  VMware Update Manager is a simple tool that can automate the installs on several hosts in sequence and is available with all editions of vSphere 5.\u003c/p\u003e","title":"Upgrading ESXi hosts using VMware Update Manager"},{"content":"If you need to backup some of your virtual machines, maybe it\u0026rsquo;s time to consider VMware Data Recovery 2.0. This VMware appliance provides an easy way to backup some virtual machines for free, but if you\u0026rsquo;re looking for a large scale backup solution it might be necessary to use more traditional backup solutions from Symantec or Veeam.\nTo get started, download the VMware Data Recovery iso from vmware.com. The iso includes a plugin for vCenter as well as an OVF for deploying the appliance. Once you\u0026rsquo;ve deployed the OVF template and installed the vCenter plugin, you can open the vDR from the solutions and applications section of the vCenter console.\nThe first time you open the vDR console it will be necessary to configure the appliance that you\u0026rsquo;ve deployed via the OVF template. 
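If you\u0026rsquo;d rather not click through the Deploy OVF Template wizard, the appliance can also be pushed out from PowerCLI. This is only a sketch; the OVF path and the host, appliance, and datastore names are all made up, and I\u0026rsquo;m assuming the Import-VApp cmdlet available in PowerCLI 5.x.
[sourcecode language=\u0026ldquo;PowerShell\u0026rdquo;]
# Deploy the vDR appliance OVF to a host and datastore of your choosing
$esx = Get-VMHost esx01.lab.local
Import-VApp -Source c:\iso\vdr\VMwareDataRecovery.ovf -Name vDR-Appliance -VMHost $esx -Datastore (Get-Datastore datastore1) -DiskStorageFormat Thin
[/sourcecode]
Either way, once the appliance is powered on, the connection from the vDR plugin still has to be configured by hand.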
This requires an IP address as well as a login.\nWhen you click the connect button a wizard will start up allowing you to setup your login and a file share for backup files.\nSetup the backup destinations.\nWhen you click the add network share you\u0026rsquo;ll get a warning message about the size of the file share, if the share is larger than 500Gb.\nEnter in the share credentials and location for storing the backups. You can add multiple shares later if necessary.\nThe traditional summary screen will then be presented, and the option to configure a backup set will be checked.\nConfiguring the backup job Setting up a backup job is pretty straight forward. The common backup settings will be required such as: what to backup, where to backup to, when to back it up, etc.\nFirst give the backup a name. Backup Job 1 is probably not a great description for a backup, but it\u0026rsquo;s the name of my backup in this example.\nChoose the virtual machines that need to be backed up.\nSelect the backup location. This is the file share we setup earlier. You can always add more shares at this point if necessary.\nWhen should the backup job be allowed to run? Perhaps you don\u0026rsquo;t want backups to be run during production hours, or a specific time of day because other processing is taking place?\nHow many backups should be retained? This screen allows you to customize how many backups should be stored in the event that one needs to be restored.\nSome additional links that might be useful when setting up and deploying vDR:\nDownload VMware Data Recovery 2.0 VMware Data Recovery Documentation vDR Logons\n","permalink":"https://theithollow.com/2012/05/20/simple-free-vmware-backups/","summary":"\u003cp\u003eIf you need to backup some of your virtual machines, maybe it\u0026rsquo;s time to consider VMware Data Recovery 2.0.  This VMware appliance provides an easy way to backup some virtual machines for free, but if you\u0026rsquo;re looking for a large scale backup solution it might be necessary to use more traditional backup solutions from Symantec or Veeam.\u003c/p\u003e\n\u003cp\u003eTo get started, download the VMware Data Recovery iso from vmware.com.  The iso includes a plugin for vCenter as well as an OVF for deploying the appliance.  Once you\u0026rsquo;ve deployed the OVF template and installed the vCenter plugin, you can open the vDR from the solutions and applications section of the vCenter console.\u003c/p\u003e","title":"Simple Free VMware Backups"},{"content":"Suppose you have multiple virtual machines that you would like to distribute load across that are housed inside of your virtual environment. How do we go about setting up Network Load Balancing so that it will still work with things like DRS and VMotion?\nSwitch Refresher In most networks we have switches that listen for MAC addresses and store them in their MAC Address Table for future use. If a switch receives a request and it knows which port the destination MAC address is associated with, it will forward that request out the single port. If a switch doesn\u0026rsquo;t know which port a MAC Address is associated with, it will basically send that frame out all of it\u0026rsquo;s ports (known as flooding) so that the destination can hopefully still receive it. This is why we\u0026rsquo;ve moved away from hubs and moved towards switches. Hubs will flood everything because they don\u0026rsquo;t keep track of the MAC Addresses. 
You can see how this extra traffic on the network is unwanted.\nSo in the below example, if ClientA sends a request to the MAC Address 0000.0000.0001 the frame will get to the switch and then go out port Fa0/1 and that\u0026rsquo;s all.\nUnicast Mode Lets assume that the two VMs are now in a Microsoft NLB Cluster using Unicast Mode. The two VMs now have a shared Cluster MAC Address. Essentially they can now handle any requests sent to their Clustered MAC Address. Microsoft by default uses a feature called MaskSourceMAC so that each of the VMs will send unique MAC Addresses to the switch. This is because many switches require that a unique source address is used. One of the consequences of this is that it keeps the switch from ever learning the Clustered MAC Address. Lets look at another example where Client A is attempting to connect to the NLB Cluster with a MAC Address of 1111.1111.1111.\nAs you can see, ClientA sends the request to the Clusterd MAC Address, but that MAC Address is not in the switches MAC Address Table. The switch acts as it should and floods the frame and the NLB Cluster will receive the request, but so did ClientB. ClientB will drop the frame because it\u0026rsquo;s not the intended destination, but this is unnecessary network traffic. You can see how this could be an issue on larger networks.\nThere is a bigger issue though when we\u0026rsquo;re looking at VMware. According to the VMware Knowledgebase article http://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=1556 ESX hosts will send RARP packets during certain operations such as VMotion. So in this case if one of our VMs was VMotioned to a different ESXi host, the ESXi server would actually notify the physical switch and give it the MAC Address of the NLB Clustered IP Address. Lets look at what happens once the switch enters the MAC Address for the Cluster.\nNow, ClientA sends the same request to 1111.1111.1111 and gets a response from only 1 of the VMs. This might make the network traffic more efficient, but also means that the second VM in the NLB Cluster will never receive any more requests and NLB isn\u0026rsquo;t doing what we want anymore. VMware\u0026rsquo;s solution for this is to change the vSwitch or PortGroup properties so that \u0026ldquo;Notify Switches\u0026rdquo; is off.\nMulticast Mode Multicast mode works a bit differently and is the recommended solution from VMware for using NLB. The main difference is that in Multicast mode the NICs can still communicate using their original MAC Addresses. They will still have their original MAC Address as well as the cluster MAC Address. Since they can communicate over both the physical and the cluster addresses their is no need for a secondary network card for communication as there is in Unicast mode.\nWhen a multicast mode VM sends requests to the network now, the switch will see the physical address of the VM and not the clustered address. The switch will then add that address to it\u0026rsquo;s MAC address table as usual. The switch will likely drop the arp replies because the server has a Unicast IP address with a Multicast MAC address. In order for NLB to work correctly, the MAC address of the Cluster needs to be manually entered into the switch so that the switch will forward the frames properly. 
It is detailed in the following VMware KB 1006525.\nHere we can see that if ClientA makes a request for the address 1111.1111.1111 that it will be properly sent to the correct hosts.\nThe important thing to remember is to add the ARP and MAC addresses statically to the physical switch in order to forward the frames correctly. Assuming my above example uses a Cisco switch I would have run something like the following:\nconf t arp 10.10.10.10 1111.1111.1111 ARPA mac-address-table static 1111.1111.1111 vlan 100 int fa0/1 fa0/2 Once the manual arp resolution and static MAC resolution is complete on the switch, the vSwitch Teaming policy should be set to Notify Switches \u0026ndash;\u0026gt;Yes.\n","permalink":"https://theithollow.com/2012/05/08/nlb-in-vsphere-unicast-or-multicast/","summary":"\u003cp\u003eSuppose you have multiple virtual machines that you would like to distribute load across that are housed inside of your virtual environment.  How do we go about setting up Network Load Balancing so that it will still work with things like DRS and VMotion?\u003c/p\u003e\n\u003ch2 id=\"switch-refresher\"\u003eSwitch Refresher\u003c/h2\u003e\n\u003cp\u003eIn most networks we have switches that listen for MAC addresses and store them in their MAC Address Table for future use.  If a switch receives a request and it knows which port the destination MAC address is associated with, it will forward that request out the single port.  If a switch doesn\u0026rsquo;t know which port a MAC Address is associated with, it will basically send that frame out all of it\u0026rsquo;s ports (known as flooding) so that the destination can hopefully still receive it.    This is why we\u0026rsquo;ve moved away from hubs and moved towards switches.  Hubs will flood everything because they don\u0026rsquo;t keep track of the MAC Addresses.  You can see how this extra traffic on the network is unwanted.\u003c/p\u003e","title":"NLB in vSphere (Unicast or Multicast)?"},{"content":"I really wanted to test out some VMware Site Recovery Manager scenarios and realized that buying SANs, servers and networking equipment was quite expensive. I also didn\u0026rsquo;t have a lot of space in my house that was available for running all of this equipment. After completing my VCP5 I was given a copy of VMware Workstation 8 and thought that I might be able to build a nested virtual environment, where the ESXi hosts themselves were virtualized inside of workstation. (Don\u0026rsquo;t worry, virtualizing a virtual host doesn\u0026rsquo;t warp time or space, it\u0026rsquo;s safe.)\nPhysical Hardware For my whitebox, I knew that the thing I would run shortest on was memory. Nesting hosts, SAN simulators, and VMs was going to take a toll on my resources, so I got a machine that would run 32Gb of RAM and I also purchased an SSD drive for anything I wanted better performance on. I also purchased two 2Tb drives (mirrored) in the server and a six core AMD processor. 
My hardware is listed below:\n1 x Patriot Gamer 2 Series, Division 2 Edition 32GB (4 x 8GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory Model PGD332G1333ELQK 1 x ASUS M5A97 AM3+ AMD 970 SATA 6Gb/s USB 3.0 ATX AMD Motherboard with UEFI BIOS 1 x OCZ Petrol PTL1-25SAT3-64G 2.5\u0026quot; 64GB SATA III MLC Internal Solid State Drive (SSD) 1 x AMD FX-6200 Zambezi 3.8GHz (4.1GHz Turbo) Socket AM3+ 125W Six-Core Desktop Processor FD6200FRGUBOX 2x Seagate Barracuda Green ST2000DL003 2TB 5900 RPM Sata 6.0Gb/s b3.5\u0026quot; Internal Hard Drive - Bare Drive Logical Design To give you a better picture of the nesting that is happening, I\u0026rsquo;ve drawn up a picture. Here you can see that I only have 1 physical machine running a Microsoft Server OS with Workstation installed. Inside of that we have ESXi hosts setup, and inside of those we have our \u0026ldquo;virtual machines.\u0026rdquo;\nI know that the picture seems pretty convoluted, and I\u0026rsquo;ll admit that having a lab setup this way is slightly more difficult to wrap your head around, but it sure beats spending the dough on extra equipment.\nIn an effort to give you a better look at the actual lab environment, the below diagram shows the SRM part that I would be using to test SRM. Here you can see that I\u0026rsquo;ve stripped out all of the VMware Workstation layers, and the physical hardware in order to simplify what I want to test.\nSetup In order to get all of this setup, you should give yourself plenty of time. It took me somewhere around 2 weeks of working in my spare time to get it up and running fully. If you\u0026rsquo;re considering building a lab of your own, hopefully these tips will help you get going faster.\nFirst and foremost I need to thank Vargi for his great documentation on setting this up. Following his SRM in a box guide was great and is likely everything you need to get your lab up and running. I would just like to point out a few places where I got stuck so that this doesn\u0026rsquo;t happen to future users. His guide can be found here: https://docs.google.com/file/d/0B8RhOQcmJhZuMThjNWU5OWEtMGE3Ni00ODdkLTlkYzEtMTBjYTllMjE0OWZh/edit?pli=1#\nand I recommend checking out his blog at: http://virtual-vargi.blogspot.com/\nNetapp Simulators Setup I had plenty of problems getting the Netapp Simulators setup correctly. Everything that you need to know is in the SRM in a Box setup guide, but be sure to follow these commands exactly, the first time you boot up the Simulator. If you boot the simulator to OnTap without changing this info you\u0026rsquo;re done. You\u0026rsquo;ll need to get another copy of the simulator from the Now.netapp.com site and start over. Also, be sure that when you\u0026rsquo;re setting up the sysid and serial numbers of your two simulators, that they are different so that OnCommand System Manager can then manage both systems. I\u0026rsquo;ve copied Vargi\u0026rsquo;s text below for quick reference.\nBoot Simulator for initial and do its initial configuration. We need to change the serial number as we are going to have two simulators running and managed by OnCommand System Manager. 
When it boots press a key other than enter to break the boot, and then run the following commands.\nset bootarg.nvram.sysid=1111111101\nset SYS_SERIAL_NUM=1111111101\nboot\nI would also like to mention that this all works perfectly fine with the 8.1 version of the Netapp Simulator.\nESXi FIX for BSOD I was excited to see that ESXi was now supported in Workstation 8, after all that\u0026rsquo;s why I decided to build this in the first place. But I quickly started getting BSODs on my system and I assumed that I had purchased some faulty hardware in my whitebox. After some digging, I found that the fix was to make a setting change on the ESXi guest. (you have to love the VMware communities) http://communities.vmware.com/message/2017526\nIt appears as though this will be resolved in a future version of workstation and the \u0026ldquo;fix\u0026rdquo; won\u0026rsquo;t be necessary any longer.\nMultiple Subnets I wasn\u0026rsquo;t sure that it was necessary, but for a more realistic feel I wanted to have more than one subnet available. In my case I have a storage subnet and a virtual machineESXi host subnet. Unfortunately, I didn\u0026rsquo;t want to have my SRM Lab interfere with my home network. Luckily Vyatta has a free virtual router that you can use to route traffic between multiple subnets. This allowed me to build additional networks as needed and can communicate with them as needed. I have another post related to setting up the Vyatta Router. Vyatta Router Setup\nEnd Results ","permalink":"https://theithollow.com/2012/05/03/poor-mans-srm-lab-whitebox/","summary":"\u003cp\u003eI really wanted to test out some VMware Site Recovery Manager scenarios and realized that buying SANs, servers and networking equipment was quite expensive.  I also didn\u0026rsquo;t have a lot of space in my house that was available for running all of this equipment.  After completing my VCP5 I was given a copy of VMware Workstation 8 and thought that I might be able to build a nested virtual environment, where the ESXi hosts themselves were virtualized inside of workstation.  (Don\u0026rsquo;t worry, virtualizing a virtual host doesn\u0026rsquo;t warp time or space, it\u0026rsquo;s safe.)\u003c/p\u003e","title":"Poor Man's SRM Lab (Whitebox)"},{"content":"Netapp has released their vStorage APIs for Storage Awareness (VASA) provider 1.0 to their support site. http://support.netapp.com If you\u0026rsquo;re not that familiar with the VASA concept, this article should explain what it is and how it\u0026rsquo;s used in regards to VMware vSphere 5.\nWhat is VASA? VASA Providers collect information about your storage systems and present that information to vSphere. In previous versions of vSphere, an administrator might need to keep track of hisher datastores in a spreadsheet or have a naming convention that showed the properties of an individual datastore. For example, if your storage system had both SSD and Sata disks, the Datastore might be named VMFS01_SSD or something similar.\nWith vSphere 5 we now have storage providers which can either be set manually or now with VASA Provider 1.0 from Netapp we can have the system tell us the capabilities. Now when we\u0026rsquo;re creating a new guest machine and we\u0026rsquo;re deciding which datastore to put the VM in, we can look at the storage profile to determine the best fit.\nThe below example shows what a datastore might look like both before and after the VASA provider from Netapp is installed.\nThis screenshot shows what VMFS03 looks like before VASA is installed. 
Notice that the storage capabilities are \u0026ldquo;N/A\u0026rdquo;.\nAfter the VASA provider was installed and configured, I can look at the datastore again and see that the System Storage Capabilities have changed. Now, we can easily see that this is a VMFS store and that this datastore is being replicated.\nHow is this useful? Now we can setup our storage profiles and whenever we need to create a new VM or do a storage vMotion we can now select the appropriate profile and the datastores will be filtered accordingly.\nInstalling the Netapp VASA Provider The installation of the VASA provider is very simple. Download the installer from the Netapp support site.\nNetapp VASA 1.0 Download\nThe standard \u0026ldquo;click next\u0026rdquo; type of installation is given to you. WARNING: Do not try to install this on your vCenter server or you\u0026rsquo;ll end up with a port conflict. (Yes, I know this from experience)\nOnce it is installed, a fairly simple configuration is needed to contact the vCenter Server and the storage devices.\n","permalink":"https://theithollow.com/2012/05/01/netapp-vasa-provider-1-0/","summary":"\u003cp\u003eNetapp has released their vStorage APIs for Storage Awareness (VASA) provider 1.0 to their support site.  \u003ca href=\"http://support.netapp.com\"\u003ehttp://support.netapp.com\u003c/a\u003e If you\u0026rsquo;re not that familiar with the VASA concept, this article should explain what it is and how it\u0026rsquo;s used in regards to VMware vSphere 5.\u003c/p\u003e\n\u003ch2 id=\"what-is-vasa\"\u003eWhat is VASA?\u003c/h2\u003e\n\u003cp\u003eVASA Providers collect information about your storage systems and present that information to vSphere.  In previous versions of vSphere, an administrator might need to keep track of hisher datastores in a spreadsheet or have a naming convention that showed the properties of an individual datastore.  For example, if your storage system had both SSD and Sata disks, the Datastore might be named VMFS01_SSD or something similar.\u003c/p\u003e","title":"Netapp VASA Provider 1.0"},{"content":"Recently, I wrote a blog post about how to setup and configure VMware Site Recovery Manager for vSphere 5.0. This setup included using array based storage replication to transfer data and it ignored the new VMware replication engine that is included with Site Recovery Manager 5.0. This post is intended to cover the setup and configuration of the vSphere replication.\nIf you\u0026rsquo;re not familiar with it, the vSphere Replication Management Server handles individual replication of powered on virtual machines, to a secondary site. This is a free vSphere appliance with the purchase of VMware Site Recovery Manager 5.0. Traditionally, vSphere required that the storage providers were replicating the virtual machine data for SRM to work, but that has all changed with 5.0. Now VMware can do the replication for you.\nThe vSphere replication can provide big savings to small and medium businesses looking to setup a disaster recovery site. Now there is no need to purchase two storage devices (one for each site), you can now replicate single VMs without replicating additional items in a datastore, and you can save on storage licensing for replication software. 
Unfortunately, there are also several shortcomings of the vSphere replication, such as: Not being able to replicate powered off virtual machines, no VMware FT Support, Microsoft Clusters can\u0026rsquo;t be replicated, both sites must be on vSphere 5, and it can only replicate virtual machines i.e. no files.\nTo get started, this article assumes that VMware Site Recovery Manager is installed and configured at both sites and that the vSphere Replication Option was added during the install.\nIn the vSphere Client go to Home \u0026ndash;\u0026gt; Solutions and Applications \u0026ndash;\u0026gt; Site Recovery Manager and then click on the vSphere Replication tab.\nYou will notice that there are No VRM Servers found and the only command you can choose is to deploy a VRM Server. When you click on the \u0026ldquo;Deploy VRM Server\u0026rdquo; command, a wizard will open to allow you to import the OVF template for the VMware Replication Manager Server. This server is used to manage the replication between sites. It can have one or more replication servers registered to it that do the transferring.\nWhen you\u0026rsquo;re ready click ok. The standard OVF Template wizard starts up to deploy the VRM Server. Notice that the path is greyed out and filled in for you.\nA summary of the Template setup will appear.\nName the Virtual Appliance.\nChoose the Datastore and Host to deploy the appliance under.\nSelect the Datastore to house the appliance.\nChose a disk format.\nPut the appliance on the network appropriate for your environment. It must be reachable by vCenter.\nSetup the root password for the appliance and the network setup appropriate for your environment.\nMake sure you get a green check next to the vCenter Extension vService Dependency. If the managed IP address of the vCenter server is not set under \u0026ldquo;Runtime Settings\u0026rdquo; you will receive an error here and may not continue.\nNow that the VRMS has been deployed to your production site, you will need to deploy a second VRMS Server in the DR site. The same steps we\u0026rsquo;ve just covered will need to be repeated with the correct vCenter, storage, and networking settings.\nOnce the VRMS Servers have been deployed, they must be configured. In the vSphere Replication tab of the vSphere Client, the \u0026ldquo;Configure VRM Server\u0026rdquo; command should now be available. If this command is selected it opens a web browser with the address of the VRMS Server. Log in with the \u0026ldquo;root\u0026rdquo; username and password that was configured during the install.\nIn the VRM \u0026ndash;\u0026gt; Configuration section, the Database Server connection and vCenter Server connections need to be filled out. During my testing, I was unable to get Windows Authentication working with the SQL Database and needed to configure the database to use Mixed Mode. When I entered a SQL user account it worked fine.\nWhen you have finished configuring the VRMS Server, be sure to configure the secondary site\u0026rsquo;s server as well. Once both sites have a configured VRMS Server, choose the \u0026ldquo;Configure VMRS Connection\u0026rdquo; Command in SRM.\nSRM will then prompt for the username and password of the remote vCenter Server. Enter the required information and accept any certificate warnings that might show up. When you are finished you should see a friendly message with a green check mark indicating that the connection was setup correctly.\nNow that the VRMS Servers are done, we can deploy the vSphere Replication Servers. 
These are the servers that actually handle the replication of the data. In the vSphere Replication tab of SRM you can now choose the \u0026ldquo;Deploy VR Server\u0026rdquo; command. At this point you will be prompted to deploy another OVF template much like when the VMRS Servers were deployed earlier. Don\u0026rsquo;t forget, that this must be done in both sites and that these servers must be able to communicate with each other across the WAN to replicate the data.\nOnce the vSphere Replication Servers have been deployed, they must be registered to the VMRS servers that were deployed earlier in this post. In SRM there will now be a command available named \u0026ldquo;Register VR Server\u0026rdquo;. When this command is chosen, a wizard screen comes up and asks which server should be registered as a vSphere Replication Server. Choose the VR Server from your datacenter and then don\u0026rsquo;t forget to repeat this on the secondary site\u0026rsquo;s VR Server.\nThe configuration is now done for the replication setup. The rest will depend on how the DR plan needs to be executed.\nReplicating a virtual machine\nRight click on any powered on virtual machine and choose vSphere Replication.\nYet another wizard will be available asking where to replicate the data and how often to perform the replication.\nEach disk of the virtual machine can have different settings. Some disks might not need to be replicated depending on the situation. These options will be given in the wizard.\nIf there are more than one VR Servers configured, a specific VR Server can be specified or auto assigned. This should help to load balance the VR Servers and possibly replicate over different network paths depending on your environment.\nWhen the replication wizard has finished, there will be a replication console showing the status of the virtual machines and the progress of the replication. This will also be a good place to find the Recovery Point Objectives once all of the machines have been configured.\nThere are a lot of pieces to setting up the vSphere Replication for Site Recovery Manager, but as you can see they are very intuitive and wizard driven. Once the replication is working smoothly, the process of setting up recovery plans, and recovery groups can be accomplished with Site Recovery Manager and the use of expensive storage devices was not necessary.\n","permalink":"https://theithollow.com/2012/04/24/vmware-replication-setup-for-site-recovery-manager/","summary":"\u003cp\u003eRecently, I wrote a blog post about how to \u003ca href=\"/2012/04/20/vmware-site-recovery-manager-basic-setup/\" title=\"VMware Site Recovery Manager Basic Setup\"\u003esetup and configure VMware Site Recovery Manager for vSphere 5.0\u003c/a\u003e.  This setup included using array based storage replication to transfer data and it ignored the new VMware replication engine that is included with Site Recovery Manager 5.0.  This post is intended to cover the setup and configuration of the vSphere replication.\u003c/p\u003e\n\u003cp\u003eIf you\u0026rsquo;re not familiar with it, the vSphere Replication Management Server handles individual replication of powered on virtual machines, to a secondary site.  This is a free vSphere appliance with the purchase of VMware Site Recovery Manager 5.0.  Traditionally, vSphere required that the storage providers were replicating the virtual machine data for SRM to work, but that has all changed with 5.0.  
Now VMware can do the replication for you.\u003c/p\u003e","title":"VMware Replication Setup for Site Recovery Manager"},{"content":"Finally, the idea of running a Disaster Recovery test is manageable. VMware Site Recovery Manager combined with vSphere has made it possible to test a failover to a warm site without worrying that the DR test itself will cause an outage.\nSetting up Site Recovery Manager and performing a site failover sounds like a daunting task, but VMware has made this very simple, assuming you are familiar with vSphere already. If you already have a virtual environment setup at both your production site and a secondary site, SRM is pretty simple to get started with but allows for almost any DR Plan you can think of to be run.\nSRM Installation SRM will need to be installed on a server in both the production site as well as the recovery site. This can be installed on your vCenter Server and does not require a different box if you have the capacity. The installation is pretty straight forward with the typical, \u0026ldquo;where to install?\u0026rdquo; question, and the prompt for the location of the vCenter Server. The one questions that you\u0026rsquo;ll have to decide is whether or not you want the vSphere Replication to be installed. If you are not sure just install it in case you want to use it later, of course you can always run a repair installation and choose the option at that time.\nOnce you are finished installing Site Recovery Manager you may need to install your Storage Replication Adapter (SRA). The SRA integrates vSphere with your storage solution so that SRM knows what data has been replicated and is on disk. If you have decided to use the vSphere Replication instead, then installing SRAs is not necessary.\nConfigure Sites Now that you have your software installed in both sites, we need to configure the production and recovery sites. Once SRM is installed, you can access the module in the vSphere Client under Home \u0026ndash;\u0026gt; Solutions and Applications \u0026ndash;\u0026gt; Site Recovery. Under the Sites tab, you\u0026rsquo;ll see the currently connected site. We need to add the secondary site. If you select the Configure Connection button you\u0026rsquo;ll be presented with your first wizard.\nEnter in the secondary site.\nEnter the administrator credentials used to connect to the secondary site\u0026rsquo;s vCenter server.\nNotice that during the connection setup, reciprocity is established so there is no need to go through this process at the secondary site. This setup is done automatically for you.\nNow you\u0026rsquo;ll see both sites are setup in SRM.\nThe last step in site setup is to set a placeholder datastore. This datastore is used to store small virtual machine files even when a failover isn\u0026rsquo;t occurring. Just select each site and click \u0026ldquo;Configure Placeholder Datastore\u0026rdquo; to select where to store the files.\nSetting up Array Managers VMware Site Recovery Manager can\u0026rsquo;t fail over virtual machines unless all of the data is replicated to the secondary site. In order for SRM to know what data is available, you either need to use vSphere replication available in SRM 5.0 or use the SRA, which will present the replicated volumes to Site Recovery Manager. 
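If you\u0026rsquo;re going the SRA route, it\u0026rsquo;s worth confirming on the storage side that replication is actually healthy before wiring up the array manager. On a 7-Mode Netapp controller or simulator, for example, a quick check looks something like this (a sketch; the system name is illustrative, run it on the destination system and the relationship should show a Snapmirrored state):\ndestfiler\u0026gt; snapmirror status\nAnything that isn\u0026rsquo;t replicating won\u0026rsquo;t be presented to SRM as a replicated device later on. 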
In the examples for this post, I\u0026rsquo;ve used the Netapp SRA and i\u0026rsquo;m connecting to Netapp Data Ontap 8.0 Simulators which are free for download.\nYou\u0026rsquo;ll see the Array Managers on the left hand side and in the work pane you should see that one or more of the SRAs are installed.\nClick on the Add Array Manager link and another wizard will pop up. Give the Array Manager a name and choose the SRA that you\u0026rsquo;ve installed.\nEnter all of the storage system information for the site.\nClick Finish, and the array manager is configured. This must be done on the secondary site as well!\nOnce the Array Manager is added, an array pair must be configured. This array pair shows the devices that are replicating between the Production Site and the Disaster Site. Click on the SRA that was just configured and go to the Array Pairs tab of the work panel. The local array and a remote array will be listed. If you select the \u0026ldquo;Enable\u0026rdquo; action it should setup the array pair.\nProtection Groups\nNow that the array managers are configured, a Protection Group must be setup. The Protection Group will include the virtual machines that need to be protected for a disaster. On the Protection Groups tab, choose \u0026ldquo;Create New Protection Group\u0026rdquo;. Another wizard will start up. Select whether the group is being configured from the Primary Site or the Secondary Site, and which type of replication that is being utilized.\nIf the Array Managers are setup properly, the datastore that is replicated to the secondary site will show up.\nGive the Protection Group a name.\nOnce the Protection Group is setup, the VMs will need to have their protection configured. The VM protection allows you to specify additional settings for the VMs ,should a failover occur. Choose the Protection Group and click on \u0026ldquo;Configure Protection.\u0026rdquo;\nWhen the protection is configured, there are options to set what folder to put the VM in, what resource pool, and if any of the devices should be modified during a failover. For instance if the VM has three hard drives, and one of them is not necessary in a disaster situation, that disk doesn\u0026rsquo;t need to be connected at the disaster site and can be detached.\nRecovery Plan The Recovery Plan setup will likely be the most time consuming part of setting up the SRM. This isn\u0026rsquo;t because it\u0026rsquo;s difficult to use, but rather due to the amount of detail that the organization might want during a disaster or a test. The Recovery Plan will probably replace a good section of the old run books that have traditionally been used in the case of a disaster. The Recovery Plan will be the steps that happen during a failover.\nWhen the Recovery Plan is created, the first thing that needs to be decided is what site should host the VMs in a disaster scenario.\nNext will be the list of Protection Groups that should be included. In the example, I\u0026rsquo;ve chosen the recovery group that was setup earlier but this could include multiple Protection Groups.\nOnce the Protection Groups have been chosen, network mapping will occur. There will be options to select what network VMs should be placed in during a real disaster, and also during a test failover. This is very useful, as you can leave the Test Network setting at \u0026ldquo;Auto\u0026rdquo; and Site Recovery Manager will create a new \u0026ldquo;bubble network\u0026rdquo; that is isolated from the rest of your machines. 
You don\u0026rsquo;t have to use the bubble network, but it might be nice to keep the test failover VMs from interacting with the production VM. If you need to do some additional routing, check out the article on Bubble Routing in virtual networks. http://wp.me/p2d48c-7H/ Lastly, name the Recovery Plan. There may be more than one type of recovery plan needed for the organization and this will allow for multiple recovery plans to be run. They can even be run simultaneously if necessary.\nHopefully this post can get you started. I expect to right a few more SRM related posts in the future explaining the details of doing a failover, adding powercli scripts to the Recovery Plans and using the VMware Replication as opposed to SRA.\nHelpful Links http://www.vmware.com/products/site-recovery-manager/\nMike Laverick has some great information about SRM on his blog. I invite you to check it out sometime. http://rtfm-ed.co.uk/\n","permalink":"https://theithollow.com/2012/04/20/vmware-site-recovery-manager-basic-setup/","summary":"\u003cp\u003eFinally, the idea of running a Disaster Recovery test is manageable.  VMware Site Recovery Manager combined with vSphere has made it possible to test a failover to a warm site without worrying that the DR test itself will cause an outage.\u003c/p\u003e\n\u003cp\u003eSetting up Site Recovery Manager and performing a site failover sounds like a daunting task, but VMware has made this very simple, assuming you are familiar with vSphere already.  If you already have a virtual environment setup at both your production site and a secondary site, SRM is pretty simple to get started with but allows for almost any DR Plan you can think of to be run.\u003c/p\u003e","title":"VMware Site Recovery Manager Basic Setup"},{"content":"A question often comes up about what to do when you have a segmented virtual network that needs to be able to traverse subnets. This might happen if you\u0026rsquo;re doing some testing and don\u0026rsquo;t want the machines to contact the production network, or perhaps doing a test SRM failover and having the virtual machines in their own test network. Virtual machines in subnet (A) might need to contact other virtual machines in subnet (B) but don\u0026rsquo;t have access to the physical router any longer, so they can\u0026rsquo;t communicate. To solve this issue, how about we try a virtual router?\nSo here is our basic problem. Virtual Machines in the 192.168.1.0/24 subnet can\u0026rsquo;t contact the 192.168.2.0/24 subnet because there is no route between the two networks.\nWe are assuming that putting a physical router into the picture might not be possible because it\u0026rsquo;s a home lab where the equipment isn\u0026rsquo;t available, or the network team isn\u0026rsquo;t available to setup the networking, or maybe you just don\u0026rsquo;t know how to set that up. Our quick solution is to use a virtual router like in the below example.\nOnce the above solution is put in place, the virtual machines can then communicate.\nVyatta has some software that will create a virtual router that will allow you to do this. They also have an enterprise grade version if you need it for production use. The free version can be downloaded from here. http://www.vyatta.org/downloads\nOnce you\u0026rsquo;ve downloaded the software, you can create a new virtual machine and mount the Vyatta iso. This will take you right into the installation and configuration of the Vyatta router. 
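Once the install finishes, the routing setup itself is only a handful of commands from Vyatta\u0026rsquo;s configuration mode. As a rough sketch for the two example subnets above (interface names and addresses are illustrative; match them to whichever vSwitches the router\u0026rsquo;s NICs are attached to):\nconfigure\nset interfaces ethernet eth0 address 192.168.1.1/24\nset interfaces ethernet eth1 address 192.168.2.1/24\ncommit\nsave\nBecause both subnets are directly connected to the router, no extra static routes are needed for them to reach each other. 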
To configure the router for the networks above, all we need to do is connect a network card of the virtual router to each of the vSwitches. Then assign an IP address for each of the NICs. If you set the default gateway of your virtual machines to the IP address of the virtual router, you\u0026rsquo;ll then be able to route between the two networks and the traffic should never have to leave the ESXi host.\nI should add, that if your bubble network spans multiple ESXi hosts with say a vDSwitch, this setup will still work, but you will need an uplink to a physical switch. This might also not be the most efficient way to route, but should work in a pinch.\nHere is my setup on the Vyatta Router. There is much more you can do with this, but the basic routing between two networks is shown below. If you would like to learn more about the product, or how to use it there are some great resources on http://www.vyatta.com/ including some free training.\n","permalink":"https://theithollow.com/2012/04/18/virtual-routing-for-bubble-networks/","summary":"\u003cp\u003eA question often comes up about what to do when you have a segmented virtual network that needs to be able to traverse subnets.  This might happen if you\u0026rsquo;re doing some testing and don\u0026rsquo;t want the machines to contact the production network, or perhaps doing a test SRM failover and having the virtual machines in their own test network.  Virtual machines in subnet (A) might need to contact other virtual machines in subnet (B) but don\u0026rsquo;t have access to the physical router any longer, so they can\u0026rsquo;t communicate.  To solve this issue, how about we try a virtual router?\u003c/p\u003e","title":"Virtual Routing for Bubble Networks"},{"content":"We\u0026rsquo;re probably all aware of the benefits of clustering things like SQL Server in order to provide highly available data. But shared storage clustering has some drawbacks on VMware ESXi clusters such as not being able to vMotion.\n• Database Mirroring – SQL Server database mirrors utilize a non-shared storage availability solution, using built-in SQL Server replication technology to create and maintain one or more copies of each database on other SQL Servers in the environment. SQL Server database mirrors provide application-aware availability, and the lack of a quorum disk makes this a VMware-friendly solution, allowing the full use of vMotion, DRS, and HA.\n• Log Shipping – Geared towards disaster recovery scenarios, log shipping delays the commitment of log files into the database copy based on an administrator defined lag time. Replay lag provides protection against logical database corruption by providing the ability to recover up to the last copied and inspected log file, or to a specific point-in-time (PIT) within the lag window by manipulating the log files.\n• Microsoft Failover Clustering (MSCS) – Microsoft failover clusters utilize a shared storage availability solution using the Microsoft Clustering Service to handle failover of application services in the event of active node failure. Microsoft failover clusters provide application-aware availability, but offer a single point of failure in the storage subsystem and because of the need for a quorum disk, Microsoft failover clusters cannot utilize VMware features like vMotion, DRS, and HA. http://www.vmware.com/files/pdf/sql_server_use_cases.pdf\nVMware vCenter may be a great candidate for Microsoft HA Mirroring. 
I\u0026rsquo;m assuming that Microsoft Clustering Services is out because we definitely want to be able to vMotion our vCenter server to other hosts. Especially, when we use VMware Update Manager to deploy patches. If we\u0026rsquo;re using Clustering services then we would need to power off the VM and patch the host. Then once we\u0026rsquo;re done, power on the VM and power off the second VM in the cluster and patch that host. What a pain!\nLog Shipping is certainly a possibility and gives you the opportunity to replay log files to a point in time. This would be beneficial if you had a corrupt database or experienced user error. The downfall is that it doesn\u0026rsquo;t support automatic failover.\nDatabase Mirroring will provide us automatic failover and will still let us use the HA, DRS, and vMotion capabilities if vCenter is on a virtual machine. If your vCenter is on a physical server(s) MS HA Mirroring will allow you to have automatic failover without having to setup shared storage.\nThere is a great article on how to setup SQL HA Mirroring at http://www.packtpub.com/article/microsoft-sql-server-2008-high-availability-installing-database-mirroring.\nOnce you\u0026rsquo;ve setup your database mirror, all you need to do is modify your ODBC Connection to specify both the Primary and Secondary Servers. Then if there is a failure, the mirrored database will take over while VMware HA restarts the original.\nSQL 2012 is now promoting the \u0026ldquo;Always On\u0026rdquo; functionality which will likely be supported on vSphere eventually, since it is similar to the Exchange DAGs that are officially supported.\n","permalink":"https://theithollow.com/2012/04/15/sql-ha-mirroring-with-vcenter/","summary":"\u003cp\u003eWe\u0026rsquo;re probably all aware of the benefits of clustering things like SQL Server in order to provide highly available data.  But shared storage clustering has some drawbacks on VMware ESXi clusters such as not being able to vMotion.\u003c/p\u003e\n\u003cblockquote\u003e\n\u003cp\u003e• Database Mirroring – SQL Server database mirrors utilize a non-shared storage availability solution,\nusing built-in SQL Server replication technology to create and maintain one or more copies of each\ndatabase on other SQL Servers in the environment. SQL Server database mirrors provide\napplication-aware availability, and the lack of a quorum disk makes this a VMware-friendly solution,\nallowing the full use of vMotion, DRS, and HA.\u003c/p\u003e","title":"SQL HA Mirroring with vCenter"},{"content":"One of my most frequently read articles is on how to use MBRAlign to align your virtual machine disks on Netapp storage. Well, after Netapp has released their new Virtual Storage Console (VSC4) the tedious task of using MBRAlign might be eased for some admins.\nOptimization and Migration The new VSC4 console for vSphere has a new tab called Optimization and Migration. Here you are able to scan all or some of your datastores to check the alignment of your virtual machines. The scan manager can even be set on a schedule so that changes to the datastore will be recognized.\nOnce you have scanned your datastores you can go the the Virtual Machine Alignment section and see if your virtual machines are aligned.\nWhat if your virtual machines are not aligned already? Netapp has a new way to align your virtual machines without having to take them offline. Disclaimer: I\u0026rsquo;ve looked for documentation on exactly how this process works, but couldn\u0026rsquo;t find any. 
The information below is how I perceive it to work after testing in my lab. If there are any Netapp folks that have definitive answers on this process, or have documentation explaining this process, please post them in the comments and I will modify this post.\nInstead of realigning the virtual machine, Netapp is creating a new datastore specifically built to align itself with the unaligned virtual machine. These new datastores are considered \u0026ldquo;Functionally aligned\u0026rdquo; because they make the vm appear to be aligned. If you were to put an \u0026ldquo;Actually Aligned\u0026rdquo; virtual machine in a functionally aligned datastore that vm would appear to be misaligned. It seems that Netapp is creating a new volume and has the starting offset match the virtual machine that is unaligned.\nAligning Virtual Machines Lets go through the process of aligning a misaligned virtual machine using VSC4. First, we select the virtual machine that is misaligned and choose the migrate task. This opens the alignment wizard.\nChoose your filer. Next we choose a datastore. If we already have a functionally aligned datastore with an offset that\u0026rsquo;s the same as your unaligned virtual machine\u0026rsquo;s offset, you can select an existing datastore. If you don\u0026rsquo;t have an existing datastore that will align with your vm, you\u0026rsquo;ll receive an error message like the one below. If that\u0026rsquo;s the case, create a new datastore from the wizard.\nChoose the datastore type. In our case we\u0026rsquo;ll create a new datastore.\nOnce the migration is complete you\u0026rsquo;ll see your virtual machine in a new datastore and it will be aligned. Notice how the virtual machine offset matches the name of the new datastore that was created. Offset 7 was put into the AlignedDatastore1_optimized_7 datastore.\nNow you can rest easy, knowing that your virtual machines are not suffering performance issues due to unaligned disks, and no downtime was required to do so.\n","permalink":"https://theithollow.com/2012/04/10/netapp-vsc4-optimization-and-migration/","summary":"\u003cp\u003eOne of my most frequently read articles is on how to use MBRAlign to align your virtual machine disks on Netapp storage. Well, after Netapp has released their new Virtual Storage Console (VSC4) the tedious task of using MBRAlign might be eased for some admins.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eOptimization and Migration\u003c/strong\u003e\nThe new VSC4 console for vSphere has a new tab called Optimization and Migration. Here you are able to scan all or some of your datastores to check the alignment of your virtual machines. The scan manager can even be set on a schedule so that changes to the datastore will be recognized.\u003c/p\u003e","title":"Netapp VSC4 Optimization and Migration"},{"content":"Many services such as DHCP or TFTP use broadcast packets to find a particular server. In the case of DHCP, a device when connecting to a network will send out a broadcast to find a DHCP server to get an IP address to use. But what if you have multiple subnets on your network? You could have a DHCP server on each of your subnets, but this seems a bit overkill.\nReview on Broadcast Domains\nJust as a quick review, devices in most cases, use unicast addresses, meaning that they send an Ethernet frame to a specific device. 
A switch then should know which port the device is on and forward that frame to the destination device.\nIn the below example we see the basic unicast frame being forwarded to the destination.\nNow if the same PC tries to contact a service like DHCP, it will send a frame to the broadcast address of the network. The switch will then flood the frame to all devices on the same vlan. Don\u0026rsquo;t forget that it will also forward the frame to any trunk ports on the switch.\nThe below example shows the same computer sending a broadcast frame. Notice that it can\u0026rsquo;t reach the DHCP server since it\u0026rsquo;s on a different vlan.\nHelper Addresses\nNow that we can visualize the problem at hand, we can learn how to forward the broadcast frame to the DHCP Server as well. If we log into the router we would need to configure a UDP helper address.\nCisco and HP accomplish this by using the ip helper-address command.\nIf our router is a Cisco router we would do something like:\nrouter(config)# int Eth0\nrouter(config-if)# ip helper-address 10.200.1.10\nHP would be similar to the below:\nswitch(Config)#Vlan 100 ip helper-address 10.200.1.10\nOnce the helper address is added we can then broadcast to the DHCP Server as expected.\nI should mention that not all broadcasts are forwarded. By default Cisco forwards 8 services:\nTime, TACACS, DNS, BOOTP/DHCP Server, BOOTP/DHCP Client, TFTP, NetBIOS name service, and NetBIOS datagram service\nIf you need to forward other services you can use \u0026ldquo;ip forward-protocol\u0026rdquo; to add additional protocols to the default list.\n","permalink":"https://theithollow.com/2012/04/06/how-to-broadcast-across-subnets/","summary":"\u003cp\u003eMany services such as DHCP or TFTP use broadcast packets to find a particular server. In the case of DHCP, a device when connecting to a network will send out a broadcast to find a DHCP server to get an IP address to use. But what if you have multiple subnets on your network? You could have a DHCP server on each of your subnets, but this seems a bit overkill.\u003c/p\u003e","title":"How to Broadcast Across Subnets"},{"content":"Hewlett-Packard has released the details of their new product line the Gen8 (don\u0026rsquo;t call me G8) servers. The new line, as you would expect, has all of the performance increases that seem necessary when coming out with a new product. The new Sandy Bridge XEON processors are onboard, they\u0026rsquo;ve increased the number of DIMM slots, increased the total amount of memory allowed per system while also increasing the memory speed supported. HP has also switched over to PCIe 3.0, which provides much faster speeds for PCIe devices.\nIt is quite apparent that HP\u0026rsquo;s main focus was on life cycle management and reducing administration. They have made many changes that should reduce both the amount of downtime and the amount of time required by System Administrators to provision and manage their systems.\nIntegrated Lights-Out (iLO)\nHP is doing away with the LO cards that came with their 100 series line of servers. Now all servers will come equipped with the iLO cards. It was decided that better management of the servers required an upgrade to how iLO functioned. The new iLO boards now come with a 4Gb NAND chip so that they can store their own data. This is a major advance in how the servers can be managed. This means that there will no longer be SmartStart disks to use when building a new server. 
The SmartStart data will be stored onboard and you can enter the setup screen by pressing a function key at boot time. This means that the System Administrator doesn\u0026rsquo;t need to go into the datacenter and stand in front of the server to update the firmware and install the operating system. This may be a small thing, but that time adds up.\nThe next thing changed was there is now video on the iLO card. The video doesn\u0026rsquo;t do much, but if you\u0026rsquo;ve ever booted a server and gotten a black screen, this change will excite you. Now instead of getting a black screen, iLO can display the information about what is happening and you can report that info to support and get the resolution much faster.\nHP is now logging system data to the 4Gb NAND chip so that if there is an issue with the server, this data can be retrieved and a resolution can be determined, hopefully with less downtime. In addition to just the logging, they\u0026rsquo;ve gone from logging around 400 objects to over 1600 which gives much more granular information for troubleshooting.\nHP Insight Online\nNow that the iLO chip is storing system events, HP setup their new HP Insight Online service which will allow companies to upload their system events to HP support either automatically, manually or not at all. If your company decides to upload system events automatically, HP can proactively monitor your systems for potential issues such as a pre-fail drive. Tickets can then be auto created and they can ship you a drive (covered under warranty, of course) before the drive actually fails. It will also allow them to track their servers so if they had a batch of servers off their production line that have a higher failure rate, they can proactively look for issues before they happen and contact users about a possible issue and how to fix it. This type of automatic uploading of logs should dramatically lower administrator\u0026rsquo;s time to pull change logs, and system logs to send to support when their is an issue.\nFirmware Updates\nHP realized that their patching was confusing and hard to manage so they\u0026rsquo;ve modified their own process. HP will begin releasing their firmware and drivers bundled into a \u0026ldquo;Service Pack\u0026rdquo; and these service packs will be release on a regular schedule. These service packs will be firmware and drivers that have been tested together and validated and can be installed online while the server is running. If a reboot is required, you can either reboot immediately or wait. The new service packs will also give the ability to revert back to the previous firmware if a failure occurs.\nPower Discover Services\nIf you are using HP Gen8 servers as well as the HP PDUs in your racks, you can utilize HP\u0026rsquo;s Power Discovery Services which allows you to not only monitor how much power is being used, but also get a graphical map of which servers are plugged into which PDU and which port. This can help eliminate plugging dual power supplies into the same PDU by mistake.\nLocation Discovery Services\nSimilar to the Power Discovery Services, if you are using the new HP racks with your HP Gen8 servers, you can get a graphical representation of which Rack Units your servers are sitting in and which rack. This can eliminate the time spent updating Visio or spreadsheets with all of your servers in the racks. They will be automatically tracked.\nHP Drive Cages\nThe new HP Drive cages have new lights. 
The lights will still show disk activity, but will turn yellow if they are in pre-fail, amber for failed, and a new light will show up if the disk should not be removed because data loss would occur. For instance if you have a mirror with two drives and one drive fails, the working drive will get a new light that shows the disk should not be removed.\nChassis\nLastly, the new HP Chassis requires no tools to replace any components. If a replacement to a part needs to be done, there should be no screwdriver required. All components are color coded and snap into place. The baffles no longer sit in the case, but rather snap into place to make sure they are in correctly.\nThe CPUs now have fail proof way to install the processor. You will now slide a processor into the guide and snap it into place. This eliminates the issues with bent pins which comprised 30% of all motherboard issues.\nAnd best of all, look at these sweet new bezels (sold separately).\nFor more information check out HP\u0026rsquo;s website\nhttp://welcome.hp.com/country/us/en/prodserv/servers.html\n","permalink":"https://theithollow.com/2012/03/30/hp-gen8-server-class-review/","summary":"\u003cp\u003eHewlett-Packard has released the details of their new product line the Gen8 (don\u0026rsquo;t call me G8) servers.  The new line as, you would expect, has all of the performance increases that seem necessary when coming out with a new product.  The new Sandy Bridge XEON processors are onboard, they\u0026rsquo;ve increased the number of DIMM slots, increased the total amount of memory allowed per system while also increasing the memory speed supported.  HP has also switched over to PCI 3.0 which is providing much faster speeds for PCI devices.\u003c/p\u003e","title":"HP Gen8 Server Class Review"},{"content":"If you\u0026rsquo;ve built a virtual infrastructure you\u0026rsquo;ve probably had to decide whether or not to use Beacon Probing when setting up your vSwitch uplink ports. But what is it, and why do we need it?\nLet me propose a scenario. Assume that we have a virtual switch with three uplinks, and one of those uplinks fails.\nIf the uplinks are setup correctly, they will see the failed uplink and start sending their frames over the other active uplinks. This is standard network fault tolerance from vSphere.\nBut what happens if the link failure happens further upstream like between an access switch and the core switch?\nUnder normal circumstances the ESXi uplink will still show a connection and route frames to physical switch (A), even though this physical switch\u0026rsquo;s upstream switch link is down. This situation could be just as bad as having an uplink down on the ESXi host, but the ESXi server is unaware of the link failure.\nDepending on how your network is setup, you could be using Link-State Tracking on your access switches which means that if the link between switch A and your core switch is down, it will put all of the downstream ports into an error state. If this is the case, the ESXi server will be able to determine that there is a failure and to use the other uplink ports. However, if your network isn\u0026rsquo;t using Link-State Tracking and you are in a situation where an upstream switch could cause a failure, it might be a good idea to use beacon probing.\nBeacon probing is a software alternative provided by VMware vSphere, to detect uplink failures. 
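Enabling it is just a change to the teaming policy: in the vSphere Client it\u0026rsquo;s the \u0026ldquo;Network failover detection\u0026rdquo; setting on the vSwitch or port group, and from the ESXi 5.x command line it looks roughly like this (a sketch, assuming a standard vSwitch named vSwitch0):\nesxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --failure-detection=beacon\n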
During the Beacon Probing process the ESXi host periodically sends out a broadcast packet which vSphere expects will be forwarded by all of the physical switches. The ESXi server then listens on all of the other NICs in the team to hear that broadcast frame. If the other NICs don\u0026rsquo;t hear three consecutive broadcasts then ESXi will mark the Nic with the problem path as failed.\nIt should be noted that you should have three uplinks in the team to do this properly. If you only have 2 uplinks in the team and a beacon isn\u0026rsquo;t received, the ESXi host isn\u0026rsquo;t able to correctly determine if the problem was with the link the beacon was sent on, or the link the beacon was to be received on.\nLinks: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=1005577\nThis document was created using the official VMware icon and diagram library. Copyright © 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware does not endorse or make any representations about third party information included in this document, nor does the inclusion of any VMware icon or diagram in this document imply such an endorsement.\n","permalink":"https://theithollow.com/2012/03/27/understanding-beacon-probing/","summary":"\u003cp\u003eIf you\u0026rsquo;ve built a virtual infrastructure you\u0026rsquo;ve probably had to decide whether or not to use Beacon Probing when setting up your vSwitch uplink ports. But what is it, and why do we need it?\u003c/p\u003e\n\u003cp\u003eLet me propose a scenario. Assume that we have a virtual switch with three uplinks, and one of those uplinks fails.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"https://assets.theithollow.com/wp-content/uploads/2012/03/beaconprobing1.jpg\"\u003e\u003cimg alt=\"BeaconProbing1\" loading=\"lazy\" src=\"https://assets.theithollow.com/wp-content/uploads/2012/03/beaconprobing1.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eIf the uplinks are setup correctly, they will see the failed uplink and start sending their frames over the other active uplinks. This is standard network fault tolerance from vSphere.\u003c/p\u003e","title":"Understanding Beacon Probing"},{"content":"Determining which type of RAID to use when building a storage solution will largely depend on two things; capacity and performance. Performance is the topic of this post.\nWe measure disk performance in IOPS or Input/Output per second. One read request or one write request = 1 IO. Each disk in you storage system can provide a certain amount of IO based off of the rotational speed, average latency and average seek time. I\u0026rsquo;ve listed some averages for each type of disk below.\nsources: http://www.techrepublic.com/blog/datacenter/calculate-iops-in-a-storage-array/2182 http://www.yellow-bricks.com/2009/12/23/iops/ http://en.wikipedia.org/wiki/IOPS\nSo for some basic IOPS calculations we\u0026rsquo;ll assume we have three JBOD disks at 5400 RPM, we can assume that we have a maximum of 150 IOPS. This is calculated by taking the number of disks times the amount of IOPS each disk can provide.\nBut now we assume that these disk are in a RAID setup. We can\u0026rsquo;t get this maximum amount of IOPS because some sort of calculation needs to be done to write data to the disk so that we can recover from a drive failure. 
To illustrate, let\u0026rsquo;s look at an example of how parity is calculated.\nLet\u0026rsquo;s assume that we have a RAID 4 system with four disks. Three of these disks will have data, and the last disk will have parity info. We use an XOR calculation to determine the parity info. As seen below we have our three disks that have had data written to them, and then we have to calculate the parity info for the fourth disk. We can\u0026rsquo;t complete the write until both the data and the parity info have been completely written to disk, in case one of the operations fails. Waiting the extra time for the parity info to be written is the RAID Penalty.\nNotice that since we don\u0026rsquo;t have to calculate parity for a read operation, there is no penalty associated with this type of IO. Only when you have a write to disk will you see the RAID penalty come into play. Also, a RAID 0 stripe has no write penalty associated with it since there is no parity to be calculated. A write penalty of 1 means there is no penalty at all.\nRAID 1\nIt is fairly simple to calculate the penalty for RAID 1 since it is a mirror. The write penalty is 2 because there will be 2 writes to take place, one write to each of the disks.\nRAID 5\nRAID 5 takes quite a hit on the write penalty because of how the data is laid out on disk. RAID 5 is used over RAID 4 in most cases because it distributes the parity data over all the disks. In a RAID 4 setup, one of the disks is responsible for all of the parity info, so every write requires that single parity disk to be written to, while the data is spread out over 3 disks. RAID 5 changed this by striping the data and parity over different disks.\nThe write penalty still ends up being 4 in a RAID 5 scenario because for each change to the disk, we are reading the data, reading the parity and then writing the data and writing the parity before the operation is complete.\nRAID 6\nRAID 6 is almost identical to RAID 5 except that instead of calculating parity once, it has to do it twice; therefore we have three reads and then three writes, giving us a penalty of 6.\nRAID DP\nRAID DP is the tricky one. Since RAID DP also has two sets of parity, just like RAID 6, you would think that the penalty would be the same. The penalty for RAID DP is actually very low, probably because of how the Write Anywhere File Layout (WAFL) writes data to disk. WAFL will basically write the new data to a new location on the disk and then move pointers to the new data, eliminating the reads that have to take place. Also, these writes are written to NVRAM first and then flushed to disk, which speeds up the process. I welcome any Netapp experts to post comments explaining in more detail how this process cuts down the write penalties.\nCalculating the IOPS\nNow that we know the penalties, we can figure out how many IOPS our storage solution will be able to handle. Please keep in mind that other factors could limit the IOPS, such as network congestion for things like iSCSI or FCoE, or hitting your maximum throughput on your fibre channel card, etc.\nRaw IOPS = Disk Speed IOPS * Number of disks\nFunctional IOPS = (Raw IOPS * Write % / RAID Penalty) + (Raw IOPS * Read %)\nTo put this in a real world example, let\u0026rsquo;s say we have five 5400 RPM disks. That gives us a total Raw IOPS of 250 IOPS. (50 IOPS * 5 disks = 250 IOPS).\nIf we were to put these disks in a RAID 5 setup, we would have no penalty for reads, but the writes would have a penalty of four. 
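Both of the ideas above, the XOR parity write and the functional IOPS formula, can be sketched in a few lines of PowerShell. This is purely illustrative: the byte values are arbitrary, and the 50 IOPS per 5400 RPM disk and the 50/50 read/write split come from the example in this post. The hand-worked numbers that follow should match the output of the function.
# XOR parity, the same calculation the RAID 4 example uses (data values are made up)
$d1 = 0x0F; $d2 = 0x33; $d3 = 0x55
$parity  = $d1 -bxor $d2 -bxor $d3      # what gets written to the parity disk
$rebuilt = $d1 -bxor $d3 -bxor $parity  # if disk 2 dies, the survivors plus parity rebuild it

# Functional IOPS = (Raw IOPS * Write % / RAID Penalty) + (Raw IOPS * Read %)
function Get-FunctionalIops {
    param([int]$DiskIops, [int]$DiskCount, [double]$WritePercent, [int]$RaidPenalty)
    $raw = $DiskIops * $DiskCount
    ($raw * $WritePercent / $RaidPenalty) + ($raw * (1 - $WritePercent))
}

# Five 5400 RPM disks (roughly 50 IOPS each) in RAID 5 with a 50/50 workload
Get-FunctionalIops -DiskIops 50 -DiskCount 5 -WritePercent 0.5 -RaidPenalty 4   # returns 156.25
Swap the penalty for 2 or 6 to see the same workload on RAID 1 or RAID 6.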
Lets assume 50% reads and writes.\n(250 Raw IOPS * .5 / 4) + (250 * .5) = 156.25 IOPS\n","permalink":"https://theithollow.com/2012/03/21/understanding-raid-penalty/","summary":"\u003cp\u003eDetermining which type of RAID to use when building a storage solution will largely depend on two things; capacity and performance. Performance is the topic of this post.\u003c/p\u003e\n\u003cp\u003eWe measure disk performance in IOPS or Input/Output per second. One read request or one write request = 1 IO.  Each disk in you storage system can provide a certain amount of IO based off of the rotational speed, average latency and average seek time.  I\u0026rsquo;ve listed some averages for each type of disk below.\u003c/p\u003e","title":"Understanding RAID Penalty"},{"content":"VMware has lots of ways to setup networking on their ESXi hosts. In order to set this up in the best way for your needs, it\u0026rsquo;s important to understand how the traffic will be routed between VMs, virtual switches, physical switches and physical network adapters.\nBefore looking at an example, we should review some networking 101. Machines on the same vlan on the same switch can communicate with one another (assuming there is no firewall type devices in the way). Machines on different vlans on the same switch cannot communicate unless the traffic passes through a router.\nNow that we\u0026rsquo;ve had a short refresher course on networking, lets look at how VMware uses virtual switches to pass traffic between VMs and the physical network.\nVMs on the same vlan on the same host.\nThe diagram below shows two virtual machines on vlan 10, which are connected to the same virtual switch. If the top VM wants to send data to the bottom vlan it simply sends a frame to the connected virtual switch and that switch forwards the frame to the bottom VM. Nothing else needs to occur, in fact you don\u0026rsquo;t even need to have a physical NIC attached as an uplink port and no physical equipment is necessary to do this forwarding.\nVMs on different Vlans on the same host\nNow lets look at what happens when the top VM is on a seperate vlan from the bottom vlan. This process starts out the same, with the top VM sending a frame to the virtual switch. Since the virtual switch doesn\u0026rsquo;t see the destination VM on the same subnet it will forward the frame to it\u0026rsquo;s uplink (physical NIC) and out to the physical network. Once at the physical network we will hit our router which will then be able to route it back to the virtual switch on the new vlan. This type of routing is sometimes referred to as a \u0026ldquo;router on a stick\u0026rdquo;. Once the virtual switch gets this new frame, it can find the bottom VM and forward the frame. A reply would then travel this entire distance in the opposite direction.\nAs you can see this isn\u0026rsquo;t necessarily the best use of bandwidth because now you\u0026rsquo;re limited by the physical NIC\u0026rsquo;s adapter speed.\nVMs on different Switches on the same host.\nIf we look at the next example, we have two separate virtual switches on the same host. This example really is the same as our previous example because a vlan basically separates the traffic just like having two completely separate switches. It doesn\u0026rsquo;t matter if the VMs are on the same VLAN or not, they can\u0026rsquo;t communicate without getting to the physical network. 
The difference would be that once the frame is in the physical network, it doesn\u0026rsquo;t need a router to forward on a packet, it can just have a switch forward the frame.\nVMs on different hosts on same vlan.\nClearly since the two VMs below are on different hosts, they are going to require the frames to be sent to the physical network to get between hosts. This is true even if you\u0026rsquo;re using distributed switches. The diagrams that you see depicting distributed switches make it look like the switches are somehow attached to each other, but they are just trying to show you that you\u0026rsquo;ve created one distributed switch instead of individual vSwitches.\nThis document was created using the official VMware icon and diagram library. Copyright © 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware does not endorse or make any representations about third party information included in this document, nor does the inclusion of any VMware icon or diagram in this document imply such an endorsement.\n","permalink":"https://theithollow.com/2012/03/16/vmware-network-traffic-routing/","summary":"\u003cp\u003eVMware has lots of ways to setup networking on their ESXi hosts.  In order to set this up in the best way for your needs, it\u0026rsquo;s important to understand how the traffic will be routed between VMs, virtual switches, physical switches and physical network adapters.\u003c/p\u003e\n\u003cp\u003eBefore looking at an example, we should review some networking 101.  Machines on the same vlan on the same switch can communicate with one another (assuming there is no firewall type devices in the way).  Machines on different vlans on the same switch cannot communicate unless the traffic passes through a router.\u003c/p\u003e","title":"VMware Network Traffic Routing"},{"content":"Zoning and Lun Masking are often confused for each other, probably because both of them are used to restrict access to storage. They should both be used to secure the storage network and reduce unnecessary traffic.\nZoning\nIf you want to specify only certain hosts from accessing a storage device then you would want to setup zoning. For instance, in the example below, you can see that the two servers on the right can access three of the four storage devices, whereas the two on the left can only access two of the SANs. This configuration is done on the Fibre Channel switch. iSCSI, NFS, and FCoE can also be segmented, but they would use typical TCPIP segmentation methods like setting up a VLAN.\nThere are two type of zoning techniques: Hard Zoning and Soft Zoning.\nSoft zoning filters one device from seeing another device. However, if the ports are manually setup, the switch will not stop the devices from communicating. Hard zoning by comparison prevents one port from sending traffic to the other port and is more secure.\nZoning can also be setup based off the port or the World Wide Name (WWN). Port zoning grants access from one port on a switch to another port on a switch. This would require physical security to be setup around the Fibre Switch, because the zones could be changed around simply by moving the cables in the switch. This also makes it more of a struggle for management if switches need to be moved or re-cabled. 
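The alternative covered next is zoning by WWN, and for that you need the port WWNs of the HBAs in each host. Here is a hedged PowerCLI sketch for pulling them; the host name is a placeholder and the hex formatting is only there to make the value readable:
# List the fibre channel HBAs on a host along with their port WWNs
Get-VMHostHba -VMHost "esxi01.lab.local" -Type FibreChannel |
    Select-Object Device, Model,
        @{Name="PortWWN"; Expression={ "{0:x16}" -f $_.PortWorldWideName }}
Those WWNs are what go into the zone on the fibre channel switch instead of a physical port number.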
WWN zoning is setup by allowing access between two WWNs which makes management a little easier, but also is susceptible to WWN spoofing which could allow access to the storage device.\nLUN Masking\nOnce the zoning is done, we can further lock down access to the storage by setting up LUN (Logical Unit Number) Masking on the storage device. The SAN would prevent certain devices from seeing a specific LUN that it is hosting. This may be used more to keep a misbehaving server from accessing a LUN that it doesn\u0026rsquo;t need access to more than it is a security concern.\nIn the Example below we have taken a small subset of servers that are accessing one storage device. The SAN is presenting four LUNs to the server on the right side (with the red arrows) but it is only presenting two LUNs to the server on the left (with the green arrows).\n","permalink":"https://theithollow.com/2012/03/12/lun-masking-vs-zoning/","summary":"\u003cp\u003eZoning and Lun Masking are often confused for each other, probably because both of them are used to restrict access to storage.  They should both be used to secure the storage network and reduce unnecessary traffic.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eZoning\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eIf you want to specify only certain hosts from accessing a storage device then you would want to setup zoning.  For instance, in the example below, you can see that the two servers on the right can access three of the four storage devices, whereas the two on the left can only access two of the SANs.  This configuration is done on the Fibre Channel switch.  iSCSI, NFS, and FCoE can also be segmented, but they would use typical TCPIP segmentation methods like setting up a VLAN.\u003c/p\u003e","title":"Lun Masking vs Zoning"},{"content":"It\u0026rsquo;s important to understand how VMware ESXi servers handle connections to their associated storage arrays.\nIf we look specifically with fibre channel fabrics, we have several multipathing options to be considered. There are three path selection policy (PSP) plugins that VMware uses natively to determine the I/O channel that data will travel over to the storage device.\nFixed Path Most Recently Used (MRU) Round Robin (RR) Let\u0026rsquo;s look at some examples of the three PSPs we\u0026rsquo;ve mentioned and how they behave. The definitions come from the vSphere 5 storage guide found below.\nhttp://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-storage-guide.pdf\nFixed Path\nThe host uses the designated preferred path, if it has been configured. Otherwise, it selects the first working path discovered at system boot time. If you want the host to use a particular preferred path, specify it manually. Fixed is the default policy for most active-active storage devices. NOTE If the host uses a default preferred path and the path\u0026rsquo;s status turns to Dead, a new path is selected as preferred. However, if you explicitly designate the preferred path, it will remain preferred even when it becomes inaccessible\nSo we\u0026rsquo;ll show 3 examples. On the left we\u0026rsquo;ve shown the default path (manually set) to the storage system. In the middle picture, we\u0026rsquo;ve simulated a path failure, and the fixed path has changed. Then on the right, we\u0026rsquo;ve shown that when the failure has been resolved, the path goes back to the original path that was set.\nMost Recently Used\nThe host selects the path that it used most recently. 
When the path becomes unavailable, the host selects an alternative path. The host does not revert back to the original path when that path becomes available again. There is no preferred path setting with the MRU policy. MRU is the default policy for most active-passive storage devices.\nBelow we\u0026rsquo; show 3 more examples. This time we show the MRU policy. On the left we see the default path that the MRU has chosen. The middle picture shows a failure and what happens to the path selection. So far, the MRU Policy looks the same as the Fixed Path Policy. In the last picture (on the right) you\u0026rsquo;ll see the difference from Fixed Path. When the failure has been resolved, the MRU policy does not go back to the original path.\nRound Robin\nThe host uses an automatic path selection algorithm rotating through all active paths when connecting to active-passive arrays, or through all available paths when connecting to active-active arrays. RR is the default for a number of arrays and can be used with both active-active and active-passive arrays to implement load balancing across paths for different LUNs.\nBelow we again have 3 examples. This time we\u0026rsquo;re not showing any failures, but we are showing how the round robin policy selects one path, then another, then another and eventually repeats when all the available paths have been used.\nALUA\nAlong with PSPs we also have Storage Array Type Plugins (SATP) which are specific to the storage vendors. These SATPs are responsible for determining which I/O paths are available to be used. The SATP is responsible for monitoring changes and handling failovers. SATPs are used to determine which paths are available and the PSPs then choose the available path to use. SATPs aren\u0026rsquo;t used exclusively in failover events though, some storage arrays have two active storage processors but only one of the storage processors owns the LUN that is being accessed. The specific SATP we\u0026rsquo;ll be looking at is the Asymmetric Logical Unit Access (ALUA).\nIn the example on the left we can see that the storage path is not optimized. We found a perfectly acceptable path to the storage system but when we got there we found out that the other storage processor currently owned that LUN. At that point the storage processor can still access the LUN but has to go through the second storage process and thus is not optimized. ALUA makes sure that the most optimized path is always used.\nNow lets look at an example of the Round Robin PSP with the ALUA SATP.\nBelow we see that there is one storage processor that is the optimized path, but there are still multiple paths to get to that storage processor from the host. There are two optimized paths available for the Round Robin PSP to go back and forth between to access the LUN.\nIf we look in the ESXi Configurations we can see that ALUA is the SATP and Round Robin is the selected PSP. If you look in the storage paths you\u0026rsquo;ll see Active(I/O) listed for the optimized storage path and Active as the non-optimized storage path. 
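You can look at, and change, the same thing from PowerCLI instead of clicking through every LUN in the client. A hedged sketch follows; Get-ScsiLun and Set-ScsiLun are standard PowerCLI cmdlets, but check the MultipathPolicy values against your version, and the host name is a placeholder.
# Show the current path selection policy for every disk LUN on a host
$vmhost = Get-VMHost -Name "esxi01.lab.local"
Get-ScsiLun -VmHost $vmhost -LunType disk |
    Select-Object CanonicalName, MultipathPolicy

# Flip any LUN that is not already Round Robin over to it in one pass
Get-ScsiLun -VmHost $vmhost -LunType disk |
    Where-Object { $_.MultipathPolicy -ne "RoundRobin" } |
    Set-ScsiLun -MultipathPolicy RoundRobin
That bulk change is what the end of this post alludes to.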
Remember that just because it\u0026rsquo;s not optimized doesn\u0026rsquo;t mean that the path couldn\u0026rsquo;t be used if it was necessary.\nIf you want to set your PSP for each path without selecting every path on the host, you can do so with the PowerCLI.\n","permalink":"https://theithollow.com/2012/03/08/path-selection-policy-with-alua/","summary":"\u003cp\u003eIt\u0026rsquo;s important to understand how VMware ESXi servers handle connections to their associated storage arrays.\u003c/p\u003e\n\u003cp\u003eIf we look specifically with fibre channel fabrics, we have several multipathing options to be considered.\nThere are three path selection policy (PSP) plugins that VMware uses natively to determine the I/O channel that data will travel over to the storage device.\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eFixed Path\u003c/li\u003e\n\u003cli\u003eMost Recently Used (MRU)\u003c/li\u003e\n\u003cli\u003eRound Robin (RR)\u003c/li\u003e\n\u003c/ol\u003e\n\u003cp\u003eLet\u0026rsquo;s look at some examples of the three PSPs we\u0026rsquo;ve mentioned and how they behave.  The definitions come from the vSphere 5 storage guide found below.\u003c/p\u003e","title":"Path Selection Policy with ALUA"},{"content":"Emulation and Virtualization are not the same thing. In many cases you\u0026rsquo;ll hear them used interchangeably but they are different concepts.\nEmulation\nEmulation consists of taking the properties of one system and trying to reproduce it with a different type of system. When it comes to computers, you may have seen some software emulators that you can install and run on a PC or MAC, that will reproduce the characteristics of an older system such as a Nintendo or other gaming console. As an example you could then perhaps run Super Mario Bros. on your work desktop (I am not advocating the playing of video games at work). In this case the software emulator is mimicking the gaming console so that the game could be run inside the emulator, even though the underlying hardware is an x86 architecture.\nVirtualization\nVirtualization isn\u0026rsquo;t taking one system so that it can run on a different type of machine. Virtualization puts a layer between physical hardware and controls access to that hardware. This is useful because it can then control access to the physical resources of the host and can then hand these resources out to guest machines. Each guest machine that is built on top of the abstraction layer (hypervisor) is then provided access to the physical host\u0026rsquo;s resources without them being modified as in emulation. The hypervisor can act as a traffic cop by allowing only a certain amount of the physical resources to be used by the guests, as well as handling what happens when two guests access a physical resource at the same time.\n","permalink":"https://theithollow.com/2012/03/07/virtualization-vs-emulation/","summary":"\u003cp\u003eEmulation and Virtualization are not the same thing.  In many cases you\u0026rsquo;ll hear them used interchangeably but they are different concepts.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eEmulation\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eEmulation consists of taking the properties of one system and trying to reproduce it with a different type of system.  When it comes to computers, you may have seen some software emulators that you can install and run on a PC or MAC, that will reproduce the characteristics of an older system such as a Nintendo or other gaming console.  
As an example you could then perhaps run Super Mario Bros. on your work desktop (I am not advocating the playing of video games at work).  In this case the software emulator is mimicking the gaming console so that the game could be run inside the emulator, even though the underlying hardware is an x86 architecture.\u003c/p\u003e","title":"Virtualization vs Emulation"},{"content":"Many storage providers have been working with VMware to improve performance of disks by giving VMware access to invoke capabilities of the storage system. There are basically three main primitives that VMware can invoke to do this.\nFull Copy Hardware Assisted Locking Block Zeroing Full Copy\nLets look at what happens when you clone a VM without VAAI. The ESXi server will start to copy the blocks of the original VM and start to paste them in the new location. Below is an animation to describe this process.\nVAAI has allowed us to get rid of this copy process and speed it up by having the ESXi server request that the storage device perform a copy operation. This eliminates almost all of the traffic normally associated with a clone request. The animation below shows this process in better detail.\nThe Full Copy primitive can also be used with other types of activities such as storage vMotion and deploying a VM from template. You can see how useful this might be with a product like VMware View and creating multiple workstations for your VDI Infrastructure.\nHardware Assisted Locking\nIn order for an ESXi host to control a specific virtual machine disk, it has to temporarily lock the entire LUN that the VM is sitting on. During this time, the LUN obviously can\u0026rsquo;t handle requests from other ESXi hosts and will temporarily halt disk activity which might be noticeable.\nHardware Assisted Locking changes this by allowing the storage system to handle the locks. Since the storage system is controlling the LUN locks, it can allow the ESXi servers to access other VMDKs while locking the specific VMDK that needs the reservation.\nBlock Zeroing\nWhen creating a new VM, you have the option of creating a thick provisioned \u0026ldquo;zeroed\u0026rdquo; disk. During the creation of this disk, a \u0026ldquo;zero\u0026rdquo; is written to all the blocks that will comprise that vmdk on the physical disk.\nBefore VAAI and the Block Zeroing primitive, the ESXi server would have to request for all of those blocks to be \u0026ldquo;zeroed out.\u0026rdquo; If the storage system is VAAI capable, this process can modified so that the ESXi host requests that the storage system \u0026ldquo;zeros\u0026rdquo; out the blocks and reduces the overhead.\nThis document was created using the official VMware icon and diagram library.\nCopyright © 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and\ninternational copyright and intellectual property laws. VMware products are\ncovered by one or more patents listed at http://www.vmware.com/go/patents.\nVMware does not endorse or make any representations about third party information\nincluded in this document, nor does the inclusion of any VMware icon or diagram\nin this document imply such an endorsement.\n","permalink":"https://theithollow.com/2012/03/05/overview-of-vstorage-api-array-integration-vaai/","summary":"\u003cp\u003eMany storage providers have been working with VMware to improve performance of disks by giving VMware access to invoke capabilities of the storage system.  
There are basically three main primitives that VMware can invoke to do this.\u003c/p\u003e\n\u003cul\u003e\n\u003cli\u003eFull Copy\u003c/li\u003e\n\u003cli\u003eHardware Assisted Locking\u003c/li\u003e\n\u003cli\u003eBlock Zeroing\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003e\u003cstrong\u003eFull Copy\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eLets look at what happens when you clone a VM without VAAI.  The ESXi server will start to copy the blocks of the original VM and start to paste them in the new location.  Below is an animation to describe this process.\u003c/p\u003e","title":"Overview of vStorage API Array Integration (VAAI)"},{"content":"vSphere has made it very simple to resize disks. They old days of finding larger disks to put in your severs and cloning or migrating data aren\u0026rsquo;t necessary now that virtualization has become widely used.\nIf you\u0026rsquo;re using vSphere you can easily extend non system drives by changing the size of the Hard Disk, and then going into the virtual machine and using diskpart or Disk Manager and extending the drive.\nThe trick comes to older servers (pre Windows Server 2008) where you can\u0026rsquo;t extend the system drive even with diskpart.\nThis becomes a pretty easy task to complete because we can power off the server, detach the hard disk and attach it to another server that is running.\nGo into the vm properties and remove the virtual disk from the machine. MAKE SURE YOU DO NOT DELETE THE FILES FROM DISK\nAttach that virtual disk as a second disk to another VM that is running.\nNow if we go into disk manager on the new VM we can see the secondary drive still has 2 Gb of unallocated space that we want to use.\nLets try disk part again.\nNow that we have successfully added the extra 2 Gb of space we can then remove the drive from the second VM and attach it to the original VM again.\nPower on the server and you\u0026rsquo;ll have the extra 2Gb needed.\n","permalink":"https://theithollow.com/2012/03/02/extending-windows-system-drives-with-vsphere/","summary":"\u003cp\u003evSphere has made it very simple to resize disks.  They old days of finding larger disks to put in your severs and cloning or migrating data aren\u0026rsquo;t necessary now that virtualization has become widely used.\u003c/p\u003e\n\u003cp\u003eIf you\u0026rsquo;re using vSphere you can easily extend non system drives by changing the size of the Hard Disk, and then going into the virtual machine and using diskpart or Disk Manager and extending the drive.\u003c/p\u003e","title":"Extending Windows System Drives with vSphere"},{"content":"For more VMware related PowerCLI scripts visit a couple of these great blogs.\nhttp://www.virtu-al.net/ http://www.lucd.info/\n","permalink":"https://theithollow.com/vmware-powercli/","summary":"\u003cp\u003eFor more VMware related PowerCLI scripts visit a couple of these great blogs.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://www.virtu-al.net/\"\u003ehttp://www.virtu-al.net/\u003c/a\u003e \u003ca href=\"http://www.lucd.info/\"\u003ehttp://www.lucd.info/\u003c/a\u003e\u003c/p\u003e","title":"VMware PowerCLI"},{"content":"First, lets learn how to get more information. Powershell uses cmdlets that should be in the format of verb-noun. So in order to get help, we use the command Get-Help. 
If we want to get help on a specific command we can run get-help [command]\nIn the below example we\u0026rsquo;re trying to get-help on the cmdlet get-childitem.\nAs you can see, you\u0026rsquo;ll be given some syntax and more information about the command you\u0026rsquo;re running.\nLet\u0026rsquo;s try it out.\nLet\u0026rsquo;s run the get-childitem cmdlet with a path of c:\users\eshanks\desktop\testfolder\nThe cmdlet returned four files in that folder and you can see that there are a couple of other attributes listed as well.\nLet\u0026rsquo;s continue with this and modify our results to exclude executables. This can be done by adding the \u0026ldquo;exclude\u0026rdquo; switch.\nAs you would expect, we show the contents of the folder, except now without anything with a .exe extension.\nSimilar to the get-childitem cmdlet, there is a get-item cmdlet. This cmdlet is used to specify one specific file or folder. (Remember, if you don\u0026rsquo;t know how to use it, we can always run \u0026ldquo;get-help get-item\u0026rdquo; to give us some extra information.)\nIn this next test, we\u0026rsquo;ll run get-item on one of the documents in the folder we were testing the get-childitem on.\nGet-Item returns almost the same results as get-childitem, except it works with only one item at a time.\nWhat if there is more information about this item that we want to know about? Let\u0026rsquo;s run the command again but use the | Format-List cmdlet with it. (FL for short)\nThe Format-List cmdlet gives us more information about the file we\u0026rsquo;re working with. The FL cmdlet should show us the attributes that we can get by using get-item.\nNotice in the example above that one of the attributes is \u0026ldquo;LastAccessTime.\u0026rdquo; So we should be able to get that information directly.\nLet\u0026rsquo;s run get-item again, but try to only get the LastAccessTime.\nAs you can see, the returned results only show the LastAccessTime.\nLet\u0026rsquo;s look further at what can be done with the pipe { | }.\nThe pipe is used to pass results from one cmdlet to another cmdlet in the same script.\nLet\u0026rsquo;s see an example of this by using the childitem cmdlet we learned earlier and adding the \u0026ldquo;select-string\u0026rdquo; cmdlet, which is similar to the Linux \u0026ldquo;grep\u0026rdquo; command.\nHere, we\u0026rsquo;ll call the same \u0026ldquo;get-childitem\u0026rdquo; cmdlet but then pipe those results over to the select-string cmdlet. This means that we\u0026rsquo;re going to run the select-string cmdlet on the results of get-childitem.\nOur childitems included:\nWhen we combine that with the select-string cmdlet, it returns one document as well as the string. I\u0026rsquo;ve added a screenshot of the document as well just to prove that it works.\nNow that we know how to use pipes, let\u0026rsquo;s move on and add a \u0026ldquo;foreach\u0026rdquo; loop. We\u0026rsquo;ll also use a variable which is written as $_. This is a special variable that will contain only the item it\u0026rsquo;s working with at the time. 
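Since the original screenshots are not reproduced here, here is a quick recap of the commands covered so far. The folder path and file names come from the example above; the search string is just an illustration.
# List everything in the test folder
Get-ChildItem -Path "C:\Users\eshanks\Desktop\TestFolder"

# The same listing, but leave out the executables
Get-ChildItem -Path "C:\Users\eshanks\Desktop\TestFolder\*" -Exclude *.exe

# Look at a single file, then at all of its properties, then at just one of them
Get-Item "C:\Users\eshanks\Desktop\TestFolder\Doc1.txt"
Get-Item "C:\Users\eshanks\Desktop\TestFolder\Doc1.txt" | Format-List *
(Get-Item "C:\Users\eshanks\Desktop\TestFolder\Doc1.txt").LastAccessTime

# Pipe the directory listing into select-string to search inside the files
Get-ChildItem -Path "C:\Users\eshanks\Desktop\TestFolder\*.txt" | Select-String -Pattern "hollow"
With those building blocks in place, back to the foreach loop and the $_ variable.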
An example might be useful to explain this.\nGet-childitem is returning four items and then calls the foreach loop to perform an operation on them.\nEssentially, the foreach loop is doing the following since it’s used in conjunction with a pipe.\nWrite-host Doc1.txt\nWrite-host Doc2.txt\nWrite-host Executable1.exe\nWrite-host Shortcut.lnk\nYou can see that the $_ takes the place of each filename returned by get-childitem.\nNow that we can do a loop, and can pipe cmdlets together, and know how to do variables correctly, we should be able to get the lastaccesstimes of all of the files in the \u0026ldquo;TestFolder\u0026rdquo;\nWe\u0026rsquo;ve now run the get-childitem cmdlet again, and passed the results to a foreach loop to return the name, and the name.lastaccesstime results as seen in the picture above.\nThere are countless ways that powershell can be used to retrieve data. This is just a short tutorial on how some of the concepts work together and hopefully they are useful to you when building your new scripts.\n","permalink":"https://theithollow.com/tutorials/","summary":"\u003cp\u003eFirst, lets learn how to get more information.  Powershell uses cmdlets that should be in the format of verb-noun.  So in order to get help, we use the command Get-Help.  If we want to get help on a specific command we can run get-help [command]\u003c/p\u003e\n\u003cp\u003eIn the below example we\u0026rsquo;re trying to get-help on the cmdlet get-childitem.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://shanksnet.files.wordpress.com/2012/02/ps1.png\"\u003e\u003cimg loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2012/02/ps1.png\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eAs you can see, you\u0026rsquo;ll be given some syntax and more information about the command you\u0026rsquo;re running.\u003c/p\u003e","title":"Tutorials"},{"content":"I found people have a hard time understanding that a SAN Snapshot and a VMware snapshot are fundamentally different. I think because unless you\u0026rsquo;re a storage administrator, you\u0026rsquo;re probably not dealing a whole lot with snaps to begin with. VMware has made it more commonplace for System Administrators to deal with snapshot technology.\nSAN Snapshots\nLets first look at how traditional SANs take snapshots.\nTo start we have 6 blocks being used. The file system has marked blocks which blocks are being used.\nNow we create a SAN Snapshot and modify part of the data. Notice how the data changes.\nAs you can see, block 4 was not overwritten, but the new data was written to block 11 and the file system made note of the change. Notice the data shows 1,2,3,11,5,6 where 11 has replaced 4.\nThe snapshot however still has pointers to the original data which can be mounted, backed up, modified etc. depending upon your storage system.\nVMware Snapshots\nVMware snapshots are much different. When you have a virtual disk there is an associated .vmdk file. When you take a snapshot, a new .vmdk will be created to write any of the changes. This DISKNAME-00001.vmdk will then store all of the deltas. VMware is using Copy on Write to then take any changes to the original DISKNAME.vmdk and write them to the DISKNAME-00001.vmdk.\nVMware snapshots should be used carefully because they can quickly fill up the datastore they belong too. For instance a 40 Gb disk could potentially have a 40Gb snapshot file. 
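Keeping an eye on that growth is easy with PowerCLI. Here is a hedged sketch for finding snapshots that have been left lying around; the seven day threshold is arbitrary, and older PowerCLI builds expose SizeMB rather than SizeGB.
# Find snapshots older than a week, largest first, so they can be cleaned up
Get-VM | Get-Snapshot |
    Where-Object { $_.Created -lt (Get-Date).AddDays(-7) } |
    Select-Object VM, Name, Created, SizeGB |
    Sort-Object SizeGB -Descending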
There could also be another 40 Gb snapshot of that file so with 2 snapshots we could have tripled the amount of space used.\nAlso, when you have finished using a snapshot and \u0026ldquo;delete\u0026rdquo; the snapshot, you actually are committing those deltas to the original vmdk. This will take resources and could affect your VM\u0026rsquo;s performance. The larger the snapshot, the longer it will take to commit and the higher the likelihood that you\u0026rsquo;ll notice a performance issue.\nVMware snapshots work best for short term operations such as applying a new patch. If you finish installing your patch and are satisfied with the results, get rid of your snapshot. Don\u0026rsquo;t leave them sitting around taking up space and causing you a performance issue later on.\nSNAPSHOTS ARE NOT BACKUPS!\nMy final point is this: SNAPSHOTS ARE NOT BACKUPS. If you lose the disk array that the snapshots are on, you\u0026rsquo;ve lost your data too. Make sure your backups are stored in another location.\nMore information on snapshots http://blogs.vmware.com/kb/2010/06/vmware-snapshots.html http://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=1015180 http://kb.vmware.com/selfservice/microsites/search.do?language=en_US\u0026amp;cmd=displayKC\u0026amp;externalId=1009402\n","permalink":"https://theithollow.com/2012/02/27/san-snapshots-vs-vmware-snapshots/","summary":"\u003cp\u003eI found people have a hard time understanding that a SAN Snapshot and a VMware snapshot are fundamentally different.  I think because unless you\u0026rsquo;re a storage administrator, you\u0026rsquo;re probably not dealing a whole lot with snaps to begin with.  VMware has made it more commonplace for System Administrators to deal with snapshot technology.\u003c/p\u003e\n\u003cp\u003e\u003cstrong\u003eSAN Snapshots\u003c/strong\u003e\u003c/p\u003e\n\u003cp\u003eLets first look at how traditional SANs take snapshots.\u003c/p\u003e\n\u003cp\u003eTo start we have 6 blocks being used.  The file system has marked blocks which blocks are being used.\u003c/p\u003e","title":"SAN Snapshots vs VMware Snapshots"},{"content":"I was recently tasked with performing a company wide disaster recovery test. The test had the normal goals with a standard recovery time objective, and recover point objectives. Unfortunately, the test needed to be performed during the middle of a production day, and not affect production. Under normal circumstances we could assume that our production servers were disabled or destroyed in some manner and we could power up our DR servers and continue the business. During this test however we needed to make sure that both networks could run at the same time.\nNetwork design\nBoth the Production network and DR network were using Netapp filers and VMware ESXi servers. VMware Site Recovery Manager was used in conjunction with Netapp Snapmirror to keep the data in sync between the two sites. During a DR test, SRM could be used to fail the production servers over to the secondary site and put the servers in a isolated \u0026ldquo;bubble\u0026rdquo; network, so that a server couldn\u0026rsquo;t communicate with it\u0026rsquo;s clone in the production site.\nThe trick came to the Exchange servers which were not failed over with SRM. The production site had two Exchange 2010 mailbox servers and the DR site had one mailbox server. All of these servers were in a Database Availability Group configuration. 
The DAG was used so that mailboxes could be activated in the DR site if there was an Exchange outage in the primary site, but not a total disaster happened.\nSplit Brain In order to achieve the objectives I was given, I severed the WAN connection to stop communication between the two sites and then did a normal SRM \u0026ldquo;test\u0026rdquo; failover. This failed our servers over to the DR Site. It also shutdown the Domain Controller in the DR site, cloned it and then started the clone. (This was done so that any changes we wanted to make on the network could be made and the clone could be destroyed when the test was over. If this wasn\u0026rsquo;t done, any changes in AD would then replicate back to the Production site when the WAN link was restored). Lastly, the Exchange server in DR was also snapshotted so that when the DR test was over, we could undo all of the Exchange failover changes that we made. All of these changes were done automatically from PowerCLI scripts called from SRM.\nOnce the other servers were up and running and the DC was cloned, we needed to get the Exchange databases mounted. Depending on your Exchange setup, you would need to use one of two methods Listed Below.\nExchange Failover Procedures\nCheck your DatacenterActivationMode from the Exchange Powershell console. In our case DAC was off. Similar procedures could be followed for Mailbox servers in DAC = On mode.\nDatacenterActivationMode = off \u0026ldquo;Get-DatabaseAvailabilityGroup [DAGNAME] | FL” and make sure that the server is set to “off” for the DatacenterActivationMode\n-Stop the Cluster Service on the DR Mail server. You cannot failover the DAG and force quorum if the cluster service is running.\nOnce the service has stopped, you should run: Net start clussvc /forcequorum\nNext open the Microsoft Failover Cluster Console.\nYou will want to expand the DAG and go to nodes:\nRight click on one of the DAG members from the failed site and choose evict.\nThen repeat with all the remaining DAG Members from the failed site.\nOpen an Exchange Management Shell under elevated permissions.\nRun: Cluster [DAGNAME] /quorum /nodemajority\nThis command will make the failover server the only member of the DAG and force it to have quorum for the DAG.\nThen run:\nMove-ActiveMailboxDatabase –identity [MailboxDatabasename] –ActivateOnServer [DRMailServerRoom] –MountDialOverride BestEffort –SkipActiveCopyChecks –SkipClientExperienceChecks –SkipLagChecks\nThis command should mount all of the databases hosted on this DAG Member.\nOnce the switchover is complete, run “get-mailboxdatabasecopystatus” to make sure that the databases are mounted properly.\nAt this point you should be able to see the databases mounted in the Exchange Management Console as well.\nTesting\nOnce Exchange Mailboxes were mounted we could then test AD, applications failed over using SRM and Email to an extent. We could then open outlook, connect to the DR Site mail server and see our mail. We could send out mail, but we couldn\u0026rsquo;t receive email from outside the DR Network unless we used a different domain suffix. (remember that our primary MX record was still forwarding mail to the production site which was still up and available).\nCleanup\nOnce all of the testing was done, we could finish the SRM Test, revert the Exchange server back to the snapshot, delete the AD Clone and power on the original AD Server. Once all the servers were back to normal, we could re-establish the WAN link and replication etc would start back up as normal. 
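None of the actual SRM or PowerCLI scripts from this test are reproduced here, but a cleanup pass like the one just described might look roughly like the sketch below. Every name is a placeholder, and Set-VM with the -Snapshot parameter should revert a VM to a snapshot, though it is worth verifying against your PowerCLI version.
# Revert the DR Exchange server to the pre-test snapshot (placeholder VM and snapshot names)
$exch = Get-VM -Name "EXCH-DR01"
Set-VM -VM $exch -Snapshot (Get-Snapshot -VM $exch -Name "Pre-DR-Test") -Confirm:$false
# Optionally discard the snapshot once the revert is confirmed
Get-Snapshot -VM $exch -Name "Pre-DR-Test" | Remove-Snapshot -Confirm:$false

# Throw away the cloned domain controller and bring the original back online
Stop-VM -VM "DC-DR-CLONE" -Confirm:$false
Remove-VM -VM "DC-DR-CLONE" -DeletePermanently -Confirm:$false
Start-VM -VM "DC-DR01"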
No one would know the difference.\nTroubleshooting Sites with the best resources for failing over Exchange.\nhttp://technet.microsoft.com/en-us/library/dd351049.aspx http://blogs.msexchange.org/walther/2011/10/09/real-life-exchange-2010-site-switchover-lesson-learned/\n","permalink":"https://theithollow.com/2012/02/26/exchange-split-brain-on-purpose/","summary":"\u003cp\u003eI was recently tasked with performing a company wide disaster recovery test.  The test had the normal goals with a standard recovery time objective, and recover point objectives.  Unfortunately, the test needed to be performed during the middle of a production day, and not affect production.  Under normal circumstances we could assume that our production servers were disabled or destroyed in some manner and we could power up our DR servers and continue the business.  During this test however we needed to make sure that both networks could run at the same time.\u003c/p\u003e","title":"Exchange Split Brain ... On Purpose?"},{"content":"VMworld 2011 was held at the Venetian Hotel in Las Vegas. Over 25,000 attendees this year.\nIt was held in Las Vegas, but the sites and attractions didn\u0026rsquo;t take away from the event. Despite all the distractions that Las Vegas can provide, there was too much going on at VMworld to get caught up in the city.\nMy favorite part of VMworld was the Hands on Labs.\nAfter signing up for the specific lab you wanted, you were ushered to your assigned desk. There were dual screen workstations setup at every desk and very straight forward instructions on how to complete the labs. These labs would get very in depth and would show you why and what was happening behind the scenes when you would perform your operations. I especially enjoyed the Netapp lab.\nMy second favorite part was the solutions exchange. This was a group of companies that provide various solutions for IT shops.\nThere were storage companies like EMC and Netapp. There were compute companies like Intel, AMD, HP, Cisco and Dell. OS Vendors like Redhat and Microsoft and tons of others.\nThe Solutions Exchange proved to be very useful to me personally. Since the majority of my experience comes from working with equipment specific to my company, it\u0026rsquo;s hard to get out and find new solutions. Most of the time, new solutions come from consultants who have seen countless different scenarios. The solutions exchange gave me the opportunity to see products like Fusion-io\u0026rsquo;s storage cards, a flexpod, a vblock, several different types of thin clients and backup products like Veeam. The solutions Exchange would be enough to get me to go back to VMworld every year.\nThe Sessions:\nVMware did sessions almost all day every day that you could go to and learn just about anything you wanted to. There were sessions on SRM, vCloud Directors, View, HA, DRS, compliance, PowerCLI and the list goes on and on. These sessions ended up being useful, but sitting in them all day got to be a little monotonous. My suggestion would be to pick out the specific sessions you wanted to see and spend the rest of your time in the awesome hands on labs.\n","permalink":"https://theithollow.com/2012/02/25/vmworld-2011/","summary":"\u003cp\u003eVMworld 2011 was held at the Venetian Hotel in Las Vegas.  
Over 25,000 attendees this year.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://shanksnet.files.wordpress.com/2012/02/venetian.jpg\"\u003e\u003cimg loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2012/02/venetian.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eIt was held in Las Vegas, but the sites and attractions didn\u0026rsquo;t take away from the event.  Despite all the distractions that Las Vegas can provide, there was too much going on at VMworld to get caught up in the city.\u003c/p\u003e\n\u003cp\u003eMy favorite part of VMworld was the Hands on Labs.\u003c/p\u003e\n\u003cp\u003e\u003ca href=\"http://shanksnet.files.wordpress.com/2012/02/vmworld-hol1.jpg\"\u003e\u003cimg loading=\"lazy\" src=\"http://shanksnet.files.wordpress.com/2012/02/vmworld-hol1.jpg\"\u003e\u003c/a\u003e\u003c/p\u003e\n\u003cp\u003eAfter signing up for the specific lab you wanted, you were ushered to your assigned desk.  There were dual screen workstations setup at every desk and very straight forward instructions on how to complete the labs.  These labs would get very in depth and would show you why and what was happening behind the scenes when you would perform your operations.  I especially enjoyed the Netapp lab.\u003c/p\u003e","title":"VMWorld 2011"}]