vSphere 7 with Tanzu - Getting Started Guide

VMware's new version of vSphere includes functionality to build and manage Kubernetes clusters. This series details how to deploy, configure, and use a lab running vSphere 7 with Kubernetes enabled. The instructions within this post are broken out into sections. vSphere 7 has prerequisites at the vSphere level as well as a full NSX-T deployment. Follow these steps in order to build your own vSphere 7 with Kubernetes lab and start using Kubernetes built right into vSphere. ...

July 14, 2020 · 1 min · eshanks

Enable Workload Management

This post focuses on enabling the workload management components for vSphere 7 with Kubernetes. It is assumed that the vSphere environment is already in place and the NSX-T configuration has been deployed. To enable workload management, log in to your vCenter as the [email protected] account. Then, in the Menu, select Workload Management. Within the Workload Management screen, click the ENABLE button. The first screen in the wizard will list your compatible vSphere clusters. These clusters must have HA enabled and DRS enabled in fully automated mode. If clusters are missing, make sure your ESXi hosts are on version 7 with HA and DRS enabled. You'll also need a version 7 Distributed Switch on these clusters. ...
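The wizard's compatibility rules above can be sketched as a simple check. This is a hypothetical helper, not the vSphere API: the field names (`ha_enabled`, `drs_mode`, and so on) are illustrative placeholders for what the wizard validates.

```python
# Hypothetical sketch of the Workload Management compatibility checks.
# Field names are illustrative, not the actual vSphere API schema.

def cluster_compatible(cluster: dict) -> list:
    """Return a list of reasons a cluster is incompatible (empty if OK)."""
    problems = []
    if not cluster.get("ha_enabled"):
        problems.append("HA must be enabled")
    if cluster.get("drs_mode") != "fullyAutomated":
        problems.append("DRS must be enabled in fully automated mode")
    if any(v < 7 for v in cluster.get("esxi_versions", [])):
        problems.append("all ESXi hosts must be on version 7")
    if cluster.get("vds_version", 0) < 7:
        problems.append("a version 7 Distributed Switch is required")
    return problems

# A cluster meeting every requirement produces no problems:
ok = cluster_compatible({"ha_enabled": True, "drs_mode": "fullyAutomated",
                         "esxi_versions": [7, 7, 7], "vds_version": 7})
print(ok)  # []
```

If a cluster doesn't show up in the wizard, each item in the returned list corresponds to something to fix before retrying.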

July 14, 2020 · 4 min · eshanks

vSphere 7 with Kubernetes Environment and Prerequisites

This post describes the lab environment we'll be working with to build our vSphere 7 with Kubernetes lab and additional prerequisites that you'll need to be aware of before starting. This is not the only topology that would work for vSphere 7 with Kubernetes, but it is a robust homelab that mimics many production deployments except for the HA features. For example, we'll install only one NSX Manager for the lab, whereas a production environment would have three. ...

July 14, 2020 · 4 min · eshanks

Tier-0 Gateway

This post will review the deployment and configuration of a Tier-0 gateway to provide north/south routing into the NSX-T overlay networks. The Tier-0 (T0) gateway is where we'll finally connect our new NSX-T-backed overlay segments to the physical network through the NSX-T Edge that was previously deployed. The Tier-0 gateway connects to a physical VLAN on one side and to our T1 router, deployed in the previous post, on the other. From there, we should have all the plumbing we need to route to our hosts and begin using NSX-T to do some cooler stuff. In the end, the network topology will look something like this: ...
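As a rough sketch, the Tier-0 gateway and its VLAN uplink can be expressed as NSX-T Policy API objects (sent with `PATCH /policy/api/v1/infra/tier-0s/<id>`). The gateway name, segment path, and uplink addressing below are placeholders for a lab like this one, not values from the post.

```python
import json

# Minimal sketch of a Tier-0 gateway as an NSX-T Policy API body.
# All names and addresses are lab placeholders.
NSX_MANAGER = "https://nsx.lab.local"  # placeholder manager address
tier0_id = "lab-t0"

tier0_body = {
    "display_name": "lab-t0",
    "ha_mode": "ACTIVE_STANDBY",  # a single edge node implies active/standby
}

# The uplink interface attaches the T0 to a VLAN-backed segment that
# reaches the physical network:
uplink_body = {
    "display_name": "t0-uplink",
    "type": "EXTERNAL",
    "subnets": [{"ip_addresses": ["192.168.50.2"], "prefix_len": 24}],
    "segment_path": "/infra/segments/uplink-vlan-segment",  # placeholder
}

# With the requests library, each body would be sent roughly as:
# requests.patch(f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/{tier0_id}",
#                json=tier0_body, auth=("admin", "..."), verify=False)
print(json.dumps(tier0_body))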

July 14, 2020 · 3 min · eshanks

Tier-1 Gateway and NSX Segments

This post will focus on deploying our first NSX Gateway/Router and setting up our overlay segments. Before you can start these steps, the Edge nodes should be up and running so that they can support the Tier-1 gateways. NSX uses two types of routers/gateways. We’ll start by using a Tier-1 (T1) router. These routers are usually used to pass traffic between NSX overlay segments. We could create NSX segments without any routers, but it would require a router to pass traffic between these segments so we will create a T1 router first. ...

July 14, 2020 · 3 min · eshanks

Deploy NSX-T Edge Nodes

NSX-T Edge nodes are used for security and gateway services that can’t be run on the distributed routers in use by NSX-T. These edge nodes do things like North/South routing, load balancing, DHCP, VPN, NAT, etc. If you want to use Tier0 or Tier1 routers, you will need to have at least 1 edge node deployed. These edge nodes provide a place to run services like the Tier0 routes. When you first deploy an edge, its like an empty shell of a VM until these services are needed. ...

July 14, 2020 · 5 min · eshanks

NSX Pools, Zones, and Nodes Setup

In the previous post we deployed an NSX Manager. Now it’s time to start configuring NSX so that we can build cool routes, firewall zones, segments, and all the other NSX goodies. And even if we don’t want to build some of these things, we’ll need this setup for vSphere 7 with Kubernetes. Add an IP Pool The first thing we’ll setup is an IP Pool. As you might guess, an IP Pool is just a group of IP Addresses that we can use for things. Specifically, we’ll use these IP Addresses to assign Tunnel Endpoints (Called TEPs previously called VTEPs in NSX-V parlance) to each of our ESXi hosts that are participating in the NSX Overlay networks. The TEP becomes the point in which encapsulation and decapsulation takes place on each of the ESXi hosts. Think of it this way, when encapsulated traffic needs to be routed to a VM on a host, what IP Address do we need to send the traffic to, so that it can reach that VM. This is the TEP. We need to setup a TEP on each host, and the IP Addresses for these TEPs come from an IP Pool. Since I have three hosts, and expect to deploy 1 edge nodes, I’ll need a TEP Pool with at least 4 IP Addresses. Size your environment appropriately. ...

July 14, 2020 · 6 min · eshanks

NSX Installation

This post will focus on getting the NSX-T Manager deployed and minimally configured in the lab. NSX-T is a pre-requisite for configuring vSphere 7 with Kubernetes as of the time of this writing. Deploy the NSX Manager The first step in our build is to deploy the NSX Manager from an OVA template into our lab. The NSX Manager is the brains of the solution and what you’ll be interacting with as a user. Each time you configure a route, segment, firewall rule, etc., you’ll be communicating with the NSX Manager. Download and deploy the OVA into your vSphere lab. ...

July 14, 2020 · 4 min · eshanks

NSX Issues After Replacing VMware Self-Signed Certs

Recently, I’ve been going through and updating my lab so that I’m all up to date with the latest technology. As part of this process, I’ve updated my certificates so that all of my URLs have the nice trusted green logo on them. Oh yeah, and because it’s more secure. I updated my vSphere lab to version 6.5 and moved to the vCenter Server Appliance (VCSA) as part of my updates. However, after I replaced the default self-signed certificates I had a few new problems. Specifically, after the update, NSX wouldn’t connect to the lookup service. This is particularly annoying because as I found out later, if I’d have just left my self-signed certificates in tact, I would never have had to deal with this. I thought that I was doing the right thing for security, but VMware made it more painful for me to do the right thing. I’m hoping this gets more focus soon from VMware. ...

March 13, 2017 · 3 min · eshanks

vRealize Automation 7 - Deploy NSX Blueprints

In the previous post we went over how to get the basics configured for NSX and vRealize Automation integration. In this post we’ll build a blueprint and deploy it! Let’s jump right in and get started. Blueprint Designer Login to your vRA tenant and click on the Design Tab. Create a new blueprint just like we have done in the past posts. This time when you are creating your blueprint, click the NSX Settings tab and select the Transport zone. I’ve also added a reservation policy that can help define with reservations are available for this blueprint. ...

March 9, 2016 · 2 min · eshanks