Deploy NSX-T Edge Nodes

July 14, 2020 | By Eric Shanks

NSX-T Edge nodes are used for security and gateway services that can't run on the distributed routers in use by NSX-T. These edge nodes handle things like North/South routing, load balancing, DHCP, VPN, and NAT. If you want to use Tier0 or Tier1 routers, you will need at least one edge node deployed, since the edge node provides a place to run services like the Tier0 router. When you first deploy an edge, it's like an empty shell of a VM until these services are needed.

In my lab, I’m deploying the edge nodes to their own cluster. This is not a requirement for a lab, but a good recommendation for a production setup since traffic is usually funneled through these instances and they can become a network hot spot.

Before we deploy, let's revisit this logical diagram for the edge node we'll be deploying. I'll be honest, this edge networking caused me fits until I realized that we need to extend the overlay network from the ESXi hosts in our workload cluster to the Edge virtual machine. The Edge VM must be added to the TEP network to participate in the overlay. Then a second VM interface connects to a VLAN transport zone, which maps to the portgroup created on my edge ESXi host's virtual switch.

To wrap your head around the Edge VM networking, take a look at this page, and specifically the diagram found below. The Edge VM has a virtual switch inside it, and we'll connect the Edge VM's uplinks to portgroups on the distributed virtual switch. The Edge VM will have three or more interfaces: one for management, one for overlay traffic, and one for VLAN traffic to the physical network.

In the end, our overlay network will extend from the ESXi hosts to the Edge virtual machine. The edge virtual machine will have a path to the physical network.
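
If you'd like to confirm the prerequisites from the earlier posts are in place before deploying the edge, the NSX-T Manager REST API can list them. Below is a minimal Python sketch; the manager address, credentials, and the expectation of which zones and pools appear are lab placeholders and assumptions, so adjust for your environment.

```python
# Minimal sketch: list transport zones and IP pools from the NSX-T Manager API
# to confirm the overlay/VLAN zones and TEP pool exist before deploying the edge.
# Manager address and credentials are lab placeholders.
import requests

NSX_MANAGER = "https://nsx.lab.local"    # placeholder
AUTH = ("admin", "VMware1!VMware1!")     # placeholder credentials
VERIFY_TLS = False                       # lab only; use a trusted cert in production

session = requests.Session()
session.auth = AUTH
session.verify = VERIFY_TLS

# Transport zones (expecting an overlay zone and a VLAN zone)
zones = session.get(f"{NSX_MANAGER}/api/v1/transport-zones").json()
for tz in zones.get("results", []):
    print(f"Transport zone: {tz['display_name']} ({tz['transport_type']})")

# IP pools (expecting the TEP pool used by the workload hosts)
pools = session.get(f"{NSX_MANAGER}/api/v1/pools/ip-pools").json()
for pool in pools.get("results", []):
    print(f"IP pool: {pool['display_name']}")
```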

To deploy the first edge node, go to the NSX Manager under System –> Fabric –> Nodes –> Edge Transport Nodes. Click the + ADD EDGE VM button.

Give the Edge VM a name, FQDN, and description before selecting a size. Sizing is critically important for a production environment; among other things, it affects the number of load balancers that can be provisioned.

NOTE: The documentation states that you need the Large form factor for the Tanzu components. I was able to get it to come up with a Medium form factor, but could not deploy TKG clusters until I upgraded to Large.

Next, select the compute manager, cluster, resource pool, and datastore for the edge node to be deployed on.

In the Configure Node Settings section, give the VM an IP address on a management network. This interface is NOT in the data plane; it's simply how you communicate with the edge node. I've placed this VM on one of my existing management portgroups in my edge cluster (VLAN 50 – Management).

The last screen is where the configuration really happens. This is where we create virtual switches in the Edge VM and connect them (through uplinks) to a physical NIC. We need to create two switches on this screen; they will not be visible in the ESXi hosts view because they exist within the Edge virtual machine.

We need two switches: one for the overlay network, which sits on the TEP network along with the ESXi hosts in our workload cluster, and another for the VLAN-backed network, which is how the VM communicates with the physical network.

On the Configure NSX screen, click the + ADD SWITCH link twice so we can set up each switch. The first I've named nsx-vlan to represent the northbound physical network. I selected the VLAN-Zone transport zone (VLAN 201 – Edge Network) and the single-NIC uplink profile, which is an out-of-the-box profile. Under the teaming policy uplink mapping, I've selected my Edge Uplink portgroup that was already created on my ESXi DVS. This link IS in the data path.

On the second switch, I've added the Overlay-Zone transport zone, again with the single-NIC uplink profile. Under the address assignment, select Use IP Pool and choose the TEP pool that we also used to add our workload ESXi hosts to the overlay. Then select the uplink for the NSX-TEP network (VLAN 200 – NSX TEP).

I know this piece was confusing for me, so if you're stuck, take a look at the diagram below. The edge VM will be created with a pair of switches, and the uplinks on those switches will be portgroups on the DVS. My lab layout for the edge VM is shown below.
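
If it also helps to see the same two-switch setup expressed as data, here is a rough sketch of how the edge transport node's host switch spec might look in the NSX-T Manager API. The field names follow my reading of the Manager API's StandardHostSwitchSpec and may differ between versions; the IDs, the second switch name, and the fp-eth uplink mapping are placeholders, not values from this lab.

```python
# Rough sketch of the two switches on the edge node as they might appear in the
# Manager API's transport node object. IDs and names are placeholders; field
# names and the fp-eth mapping may vary by NSX-T version and deployment.
edge_host_switch_spec = {
    "resource_type": "StandardHostSwitchSpec",
    "host_switches": [
        {
            # Switch 1: VLAN-backed, northbound to the physical network
            "host_switch_name": "nsx-vlan",
            "transport_zone_endpoints": [
                {"transport_zone_id": "<VLAN-Zone-id>"}   # VLAN 201 - Edge Network
            ],
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile",
                 "value": "<single-nic-uplink-profile-id>"}
            ],
            "pnics": [
                # placeholder: the Edge VM data NIC attached to the Edge Uplink portgroup
                {"device_name": "fp-eth1", "uplink_name": "uplink-1"}
            ],
        },
        {
            # Switch 2: overlay, joins the same TEP network as the workload hosts
            "host_switch_name": "nsx-overlay",            # placeholder name
            "transport_zone_endpoints": [
                {"transport_zone_id": "<Overlay-Zone-id>"}
            ],
            "host_switch_profile_ids": [
                {"key": "UplinkHostSwitchProfile",
                 "value": "<single-nic-uplink-profile-id>"}
            ],
            "ip_assignment_spec": {
                "resource_type": "StaticIpPoolSpec",
                "ip_pool_id": "<TEP-pool-id>",            # VLAN 200 - NSX TEP
            },
            "pnics": [
                # placeholder: the Edge VM data NIC attached to the TEP portgroup
                {"device_name": "fp-eth0", "uplink_name": "uplink-1"}
            ],
        },
    ],
}
```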

After finishing the configuration, an edge VM should be listed under your Edge Transport Nodes.
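
You can also confirm the new edge from the API. The sketch below (same placeholder manager address and credentials as earlier) lists transport nodes and prints each one's configuration state; the exact fields returned can vary by version.

```python
# Sketch: list transport nodes and print the configuration state of each one,
# which should now include the newly deployed edge. Placeholders as before.
import requests

NSX_MANAGER = "https://nsx.lab.local"   # placeholder
AUTH = ("admin", "VMware1!VMware1!")    # placeholder credentials

session = requests.Session()
session.auth = AUTH
session.verify = False                  # lab only

nodes = session.get(f"{NSX_MANAGER}/api/v1/transport-nodes").json()
for node in nodes.get("results", []):
    state = session.get(
        f"{NSX_MANAGER}/api/v1/transport-nodes/{node['id']}/state"
    ).json()
    print(f"{node['display_name']}: {state.get('state')}")
```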

Create Edge Cluster

Edge nodes can (and in production environments should) be deployed in pairs. Groups of edge nodes can then be pooled together into an edge cluster. Thus far we haven't focused much on high availability because this is a lab short on resources, but in production these steps should be modified slightly to provide that HA capability. This might include adding a second VLAN transport zone for a second physical switch, for example. An edge cluster is still required, even though we won't be using more than one edge node in our example.

Add your new edge node to an edge cluster by navigating to System –> Fabric –> Nodes –> Edge Clusters. Click the + ADD button to create a new cluster.

Give the cluster a name and description. Then make sure your edge VM has been selected and moved to the right column.
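
For completeness, here is what the same edge cluster creation looks like as an API call. It's a sketch with placeholder manager address, credentials, cluster name, and node ID; the member list just needs the transport node ID of the edge we deployed.

```python
# Sketch: create an edge cluster containing the edge transport node via the
# Manager API. Manager address, credentials, name, and node ID are placeholders.
import requests

NSX_MANAGER = "https://nsx.lab.local"      # placeholder
AUTH = ("admin", "VMware1!VMware1!")       # placeholder credentials
EDGE_NODE_ID = "<edge-transport-node-id>"  # from /api/v1/transport-nodes

payload = {
    "display_name": "edge-cluster-01",     # placeholder name
    "members": [
        {"transport_node_id": EDGE_NODE_ID}
    ],
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/edge-clusters",
    json=payload,
    auth=AUTH,
    verify=False,                          # lab only
)
resp.raise_for_status()
print("Created edge cluster:", resp.json()["id"])
```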

Summary

I found that understanding the edge node routing was the most difficult piece of setting up NSX in the lab. Remember that we're extending the overlay transport zone from the workload ESXi host cluster to the edge node VM. The edge VM then has a second VLAN transport zone where traffic can be routed to the physical network. Stay tuned for the next post, where we create some actual overlay networks that our VMs can use.