<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>V7wk8s on The IT Hollow</title>
    <link>https://theithollow.com/tags/v7wk8s/</link>
    <description>Recent content in V7wk8s on The IT Hollow</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 04 Jan 2021 21:50:57 +0000</lastBuildDate>
    <atom:link href="https://theithollow.com/tags/v7wk8s/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Enable the Harbor Registry on vSphere 7 with Tanzu</title>
      <link>https://theithollow.com/2021/01/04/enable-the-harbor-registry-on-vsphere-7-with-tanzu/</link>
      <pubDate>Mon, 04 Jan 2021 21:50:57 +0000</pubDate>
      <guid>https://theithollow.com/2021/01/04/enable-the-harbor-registry-on-vsphere-7-with-tanzu/</guid>
      <description>&lt;p&gt;Your Kubernetes clusters are up and running on vSphere 7 with Tanzu and you can&amp;rsquo;t wait to get started on your first project. But before you get to that, you might want to enable the Harbor registry so that you can privately store your own container images and use them with your clusters. Luckily, in vSphere 7 with Tanzu, the Harbor project has been integrated into the solution. You just have to turn it on and set it up.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Ingress Routing - TKG Clusters</title>
      <link>https://theithollow.com/2020/09/15/ingress-routing-tkg-clusters/</link>
      <pubDate>Tue, 15 Sep 2020 14:05:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/09/15/ingress-routing-tkg-clusters/</guid>
      <description>&lt;p&gt;If you have been following &lt;a href=&#34;https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-getting-started-guide/&#34;&gt;the series&lt;/a&gt; so far, you should have a TKG guest cluster in your lab now. The next step is to show how to deploy a simple application and access it through a web browser. This is a pretty trivial task for most Kubernetes operators, but it&amp;rsquo;s a good idea to know what&amp;rsquo;s happening in NSX to make these applications available. We&amp;rsquo;ll walk through that in this post.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploying Tanzu Kubernetes Clusters on vSphere 7</title>
      <link>https://theithollow.com/2020/09/09/deploying-tanzu-kubernetes-clusters-on-vsphere-7/</link>
      <pubDate>Wed, 09 Sep 2020 14:15:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/09/09/deploying-tanzu-kubernetes-clusters-on-vsphere-7/</guid>
      <description>&lt;p&gt;This post will focus on deploying Tanzu Kubernetes Grid (TKG) clusters in your vSphere 7 with Tanzu environment. These TKG clusters are the individual Kubernetes clusters that can be shared with teams for their development purposes.&lt;/p&gt;
&lt;p&gt;I know what you&amp;rsquo;re thinking. Didn&amp;rsquo;t we already create a Kubernetes cluster when we set up our Supervisor cluster? The short answer is yes. However, the Supervisor cluster is a unique Kubernetes cluster that probably shouldn&amp;rsquo;t be used for normal workloads. We&amp;rsquo;ll discuss this in more detail in a follow-up post. For now, let&amp;rsquo;s focus on how to create them, and later we&amp;rsquo;ll discuss when to use them vs the Supervisor cluster.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Create a Content Library for vSphere 7 with Tanzu</title>
      <link>https://theithollow.com/2020/09/08/create-a-content-library-for-vsphere-7-with-tanzu/</link>
      <pubDate>Tue, 08 Sep 2020 14:15:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/09/08/create-a-content-library-for-vsphere-7-with-tanzu/</guid>
      <description>&lt;p&gt;In this post we&amp;rsquo;ll set up a vSphere Content Library so that we can use it with our Tanzu Kubernetes Grid guest clusters. If you&amp;rsquo;re not familiar with content libraries, you can think of them as a container registry, only for virtual machines.&lt;/p&gt;
&lt;p&gt;Why do we need a content library? Well, the content library will be used to store the virtual machine templates that will become Kubernetes nodes when you deploy a TKG guest cluster.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Replace vSphere 7 with Tanzu Certificates</title>
      <link>https://theithollow.com/2020/08/31/replace-vsphere-7-with-tanzu-certificates/</link>
      <pubDate>Mon, 31 Aug 2020 14:45:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/08/31/replace-vsphere-7-with-tanzu-certificates/</guid>
      <description>&lt;p&gt;When setting up your vSphere 7 with Tanzu environment, it&amp;rsquo;s a good idea to replace the default certificate shipped from VMware with your own certificate. This is a good security practice to ensure that your credentials are protected during logins, and nobody likes to see those pesky certificate warnings in their browsers anyway, am I right?&lt;/p&gt;
&lt;h2 id=&#34;create-and-trust-certificate-authority&#34;&gt;Create and Trust Certificate Authority&lt;/h2&gt;
&lt;p&gt;This section of the blog post covers creating a root certificate. In many situations, you won&amp;rsquo;t need to do this since your organization probably already has a certificate authority that can be used to sign certificates as needed. Since I&amp;rsquo;m doing this in a lab, I&amp;rsquo;m going to create a root certificate and make sure my workstation trusts this cert first. After that, we can use the root certificate to sign our vSphere 7 certificates.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Connecting to a Supervisor Namespace</title>
      <link>https://theithollow.com/2020/08/24/connecting-to-a-supervisor-namespace/</link>
      <pubDate>Mon, 24 Aug 2020 14:15:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/08/24/connecting-to-a-supervisor-namespace/</guid>
      <description>&lt;p&gt;In this post we&amp;rsquo;ll finally connect to our Supervisor Cluster Namespace through the Kubernetes CLI and run some commands for the first time.&lt;/p&gt;
&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/2020/08/17/creating-supervisor-namespaces/&#34;&gt;last post&lt;/a&gt; we created a namespace within the Supervisor Cluster and assigned some resource allocations and permissions for our example development user. Now it&amp;rsquo;s time to access that namespace so that real work can be done using the platform.&lt;/p&gt;
&lt;p&gt;First, log in to vCenter again with the &lt;a href=&#34;mailto:administrator@vsphere.local&#34;&gt;administrator@vsphere.local&lt;/a&gt; account and navigate to the namespace that was previously created. You should see a screen similar to the one where we configured our permissions. In the &lt;code&gt;Status&lt;/code&gt; tile, click one of the links to either open it in a browser or copy the URL and open it in a browser yourself.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Creating Supervisor Namespaces</title>
      <link>https://theithollow.com/2020/08/17/creating-supervisor-namespaces/</link>
      <pubDate>Mon, 17 Aug 2020 14:15:00 +0000</pubDate>
      <guid>https://theithollow.com/2020/08/17/creating-supervisor-namespaces/</guid>
      <description>&lt;p&gt;Congratulations, you&amp;rsquo;ve deployed the Workload Management components for your vSphere 7 cluster. If you&amp;rsquo;ve been following along with the series so far, you&amp;rsquo;ll have left off with a workload management cluster created and be ready to begin configuring your cluster for use with Kubernetes.&lt;/p&gt;
&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2020/08/namespaces0-3.png&#34;/&gt; 
&lt;/figure&gt;

&lt;p&gt;The next step in the process is to create a namespace. Before we do that, it&amp;rsquo;s probably useful to recap what a namespace is used for.&lt;/p&gt;
&lt;h2 id=&#34;namespaces-the-theory&#34;&gt;Namespaces the Theory&lt;/h2&gt;
&lt;p&gt;Depending on your past experiences, a namespace will likely seem familiar to you in some fashion. If you have a Kubernetes background, you&amp;rsquo;ll be familiar with namespaces as a way to set permissions for a group of users (or a project, etc.) and for assigning resources. Alternatively, if you have a vSphere background, you&amp;rsquo;re used to using things like Resource Pools to set resource allocation.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vSphere 7 with Tanzu - Getting Started Guide</title>
      <link>https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-getting-started-guide/</link>
      <pubDate>Tue, 14 Jul 2020 14:16:18 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-getting-started-guide/</guid>
      <description>&lt;p&gt;VMware released the new version of vSphere with functionality to build and manage Kubernetes clusters. This series details how to deploy, configure, and use a lab running vSphere 7 with Kubernetes enabled.&lt;/p&gt;
&lt;p&gt;The instructions within this post are broken out into sections. vSphere 7 requires prerequisites at the vSphere level as well as a full NSX-T deployment. Follow these steps in order to build your own vSphere 7 with Kubernetes lab and start using Kubernetes built right into vSphere.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Enable Workload Management</title>
      <link>https://theithollow.com/2020/07/14/enable-workload-management/</link>
      <pubDate>Tue, 14 Jul 2020 13:44:36 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/enable-workload-management/</guid>
      <description>&lt;p&gt;This post focuses on enabling the workload management components for vSphere 7 with Kubernetes. It is assumed that the vSphere environment is already in place and the NSX-T configuration has been deployed.&lt;/p&gt;
&lt;p&gt;To enable workload management, log in to your vCenter as the &lt;a href=&#34;mailto:administrator@vsphere.local&#34;&gt;administrator@vsphere.local&lt;/a&gt; account. Then in the Menu, select Workload Management.&lt;/p&gt;
&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2020/07/image-40.png&#34;/&gt; 
&lt;/figure&gt;

&lt;p&gt;Within the Workload Management screen, click the &lt;code&gt;ENABLE&lt;/code&gt; button.&lt;/p&gt;
&lt;figure&gt;
    &lt;img loading=&#34;lazy&#34; src=&#34;https://assets.theithollow.com/wp-content/uploads/2020/07/image-30-1024x409.png&#34;/&gt; 
&lt;/figure&gt;

&lt;p&gt;The first screen in the wizard will list your compatible vSphere clusters. These clusters must have HA and DRS enabled in fully automated mode. If you are missing clusters, make sure you have ESXi hosts on version 7 with HA and DRS enabled. You&amp;rsquo;ll also need a Distributed switch on version 7 for these clusters.&lt;/p&gt;</description>
    </item>
    <item>
      <title>vSphere 7 with Kubernetes Environment and Prerequisites</title>
      <link>https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-environment-and-prerequisites/</link>
      <pubDate>Tue, 14 Jul 2020 13:42:33 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/vsphere-7-with-kubernetes-environment-and-prerequisites/</guid>
      <description>&lt;p&gt;This post describes the lab environment we&amp;rsquo;ll be working with to build our vSphere 7 with Kubernetes lab and additional prerequisites that you&amp;rsquo;ll need to be aware of before starting. This is not the only topology that would work for vSphere 7 with Kubernetes, but it is a robust homelab that would mimic many production deployments except for the HA features. For example, we&amp;rsquo;ll only install one (singular) NSX Manager for the lab, whereas a production environment would have three.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Tier-0 Gateway</title>
      <link>https://theithollow.com/2020/07/14/tier-0-gateway/</link>
      <pubDate>Tue, 14 Jul 2020 13:39:41 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/tier-0-gateway/</guid>
      <description>&lt;p&gt;This post will review the deployment and configuration of a Tier-0 gateway to provide north/south routing into the NSX-T overlay networks.&lt;/p&gt;
&lt;p&gt;The Tier-0 (T0) gateway is where we&amp;rsquo;ll finally connect our new NSX-T backed overlay segments to the physical network through an NSX-T Edge which was previously deployed.&lt;/p&gt;
&lt;p&gt;The Tier-0 gateway will connect directly to a physical VLAN and on the other side to our T1 router deployed in the previous post. From there, we should have all the plumbing we need to route to our hosts and begin using NSX-T to do some cooler stuff. In the end, the network topology will look something like this:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Tier-1 Gateway and NSX Segments</title>
      <link>https://theithollow.com/2020/07/14/tier-1-gateway-and-nsx-segments/</link>
      <pubDate>Tue, 14 Jul 2020 13:36:56 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/tier-1-gateway-and-nsx-segments/</guid>
      <description>&lt;p&gt;This post will focus on deploying our first NSX Gateway/Router and setting up our overlay segments. Before you can start these steps, the Edge nodes should be up and running so that they can support the Tier-1 gateways.&lt;/p&gt;
&lt;p&gt;NSX uses two types of routers/gateways. We&amp;rsquo;ll start by using a Tier-1 (T1) router. These routers are usually used to pass traffic between NSX overlay segments. We could create NSX segments without any routers, but a router would be required to pass traffic between these segments, so we will create a T1 router first.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Deploy NSX-T Edge Nodes</title>
      <link>https://theithollow.com/2020/07/14/deploy-nsx-t-edge-nodes/</link>
      <pubDate>Tue, 14 Jul 2020 13:26:22 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/deploy-nsx-t-edge-nodes/</guid>
      <description>&lt;p&gt;NSX-T Edge nodes are used for security and gateway services that can&amp;rsquo;t be run on the distributed routers in use by NSX-T. These edge nodes do things like North/South routing, load balancing, DHCP, VPN, NAT, etc. If you want to use &lt;code&gt;Tier0&lt;/code&gt; or &lt;code&gt;Tier1&lt;/code&gt; routers, you will need to have at least one edge node deployed. These edge nodes provide a place to run services like the Tier0 routers. When you first deploy an edge, it&amp;rsquo;s like an empty shell of a VM until these services are needed.&lt;/p&gt;</description>
    </item>
    <item>
      <title>NSX Pools, Zones, and Nodes Setup</title>
      <link>https://theithollow.com/2020/07/14/nsx-pools-zones-and-nodes-setup/</link>
      <pubDate>Tue, 14 Jul 2020 13:23:46 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/nsx-pools-zones-and-nodes-setup/</guid>
      <description>&lt;p&gt;In the &lt;a href=&#34;https://theithollow.com/2020/07/14/nsx-installation/&#34;&gt;previous post&lt;/a&gt; we deployed an NSX Manager. Now it&amp;rsquo;s time to start configuring NSX so that we can build cool routes, firewall zones, segments, and all the other NSX goodies. And even if we don&amp;rsquo;t want to build some of these things, we&amp;rsquo;ll need this setup for vSphere 7 with Kubernetes.&lt;/p&gt;
&lt;h2 id=&#34;add-an-ip-pool&#34;&gt;Add an IP Pool&lt;/h2&gt;
&lt;p&gt;The first thing we&amp;rsquo;ll set up is an IP Pool. As you might guess, an IP Pool is just a group of IP addresses that we can use for things. Specifically, we&amp;rsquo;ll use these IP addresses to assign Tunnel Endpoints (called TEPs, previously known as VTEPs in NSX-V parlance) to each of our ESXi hosts that are participating in the NSX Overlay networks. The TEP becomes the point at which encapsulation and decapsulation take place on each of the ESXi hosts. Think of it this way: when encapsulated traffic needs to be routed to a VM on a host, what IP address do we need to send the traffic to so that it can reach that VM? That address is the TEP. We need to set up a TEP on each host, and the IP addresses for these TEPs come from an IP Pool. Since I have three hosts and expect to deploy one edge node, I&amp;rsquo;ll need a TEP Pool with at least four IP addresses. Size your environment appropriately.&lt;/p&gt;</description>
    </item>
    <item>
      <title>NSX Installation</title>
      <link>https://theithollow.com/2020/07/14/nsx-installation/</link>
      <pubDate>Tue, 14 Jul 2020 13:18:52 +0000</pubDate>
      <guid>https://theithollow.com/2020/07/14/nsx-installation/</guid>
      <description>&lt;p&gt;This post will focus on getting the NSX-T Manager deployed and minimally configured in the lab. NSX-T is a prerequisite for configuring vSphere 7 with Kubernetes as of the time of this writing.&lt;/p&gt;
&lt;h2 id=&#34;deploy-the-nsx-manager&#34;&gt;Deploy the NSX Manager&lt;/h2&gt;
&lt;p&gt;The first step in our build is to deploy the NSX Manager from an OVA template into our lab. The NSX Manager is the brains of the solution and what you&amp;rsquo;ll be interacting with as a user. Each time you configure a route, segment, firewall rule, etc., you&amp;rsquo;ll be communicating with the NSX Manager. Download and deploy the OVA into your vSphere lab.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
