Tier-1 Gateway and NSX Segments

This post will focus on deploying our first NSX gateway/router and setting up our overlay segments. Before you start these steps, the Edge nodes should be up and running so that they can support the Tier-1 gateways. NSX uses two types of routers/gateways, and we’ll start with a Tier-1 (T1) gateway. These are typically used to pass traffic between NSX overlay segments. We could create NSX segments without any gateways, but a router would still be needed to pass traffic between those segments, so we’ll create the T1 gateway first. ...
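
For readers who prefer automation over the UI, here’s a rough sketch of the same two objects created through the NSX-T Policy REST API with Python. This is illustrative only: the manager address, credentials, gateway and segment IDs, and the transport zone path are placeholders you’d replace with values from your own lab.

```python
import requests
import urllib3

urllib3.disable_warnings()  # lab only: self-signed NSX Manager cert

NSX = "https://nsx-manager.lab.local"  # placeholder manager address

session = requests.Session()
session.auth = ("admin", "VMware1!VMware1!")  # placeholder credentials
session.verify = False  # lab only; use a trusted cert in production

# Tier-1 gateway, linked to an existing Tier-0 and advertising its
# connected segments so the T0 can route to them
t1 = {
    "display_name": "t1-lab",
    "tier0_path": "/infra/tier-0s/t0-lab",
    "route_advertisement_types": ["TIER1_CONNECTED"],
}
r = session.patch(f"{NSX}/policy/api/v1/infra/tier-1s/t1-lab", json=t1)
r.raise_for_status()

# Overlay segment attached to the new Tier-1; the transport zone path
# is a placeholder for the overlay zone created in the earlier posts
segment = {
    "display_name": "seg-app",
    "connectivity_path": "/infra/tier-1s/t1-lab",
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default"
        "/transport-zones/<overlay-tz-id>"
    ),
    "subnets": [{"gateway_address": "10.10.10.1/24"}],
}
r = session.patch(f"{NSX}/policy/api/v1/infra/segments/seg-app", json=segment)
r.raise_for_status()
```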

July 14, 2020 · 3 min · eshanks

Deploy NSX-T Edge Nodes

NSX-T Edge nodes are used for security and gateway services that can’t run on the distributed routers used by NSX-T. These edge nodes handle things like North/South routing, load balancing, DHCP, VPN, NAT, etc. If you want to use Tier-0 or Tier-1 gateways, you’ll need at least one edge node deployed, since the edges provide a place to run services like the Tier-0 routers. When you first deploy an edge, it’s like an empty shell of a VM until those services are needed. ...

July 14, 2020 · 5 min · eshanks

NSX Pools, Zones, and Nodes Setup

In the previous post we deployed an NSX Manager. Now it’s time to start configuring NSX so that we can build cool routes, firewall zones, segments, and all the other NSX goodies. And even if we don’t want to build some of these things, we’ll need this setup for vSphere 7 with Kubernetes. Add an IP Pool: The first thing we’ll set up is an IP Pool. As you might guess, an IP Pool is just a group of IP addresses that we can use for things. Specifically, we’ll use these addresses to assign Tunnel Endpoints (called TEPs, previously known as VTEPs in NSX-V parlance) to each of the ESXi hosts participating in the NSX overlay networks. The TEP is the point at which encapsulation and decapsulation take place on each ESXi host. Think of it this way: when encapsulated traffic needs to be routed to a VM on a host, what IP address do we send the traffic to so that it can reach that VM? That’s the TEP. We need a TEP on each host, and the IP addresses for these TEPs come from an IP Pool. Since I have three hosts and expect to deploy one edge node, I’ll need a TEP pool with at least four IP addresses. Size your environment appropriately. ...
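
As a companion to the UI steps, here’s a hedged sketch of creating that TEP pool through the NSX-T Policy API with Python. The pool name, CIDR, and the four-address range are placeholders sized for the three-hosts-plus-one-edge example above.

```python
import requests
import urllib3

urllib3.disable_warnings()  # lab only: self-signed NSX Manager cert

NSX = "https://nsx-manager.lab.local"  # placeholder manager address

session = requests.Session()
session.auth = ("admin", "VMware1!VMware1!")  # placeholder credentials
session.verify = False  # lab only

# The pool object itself
r = session.patch(f"{NSX}/policy/api/v1/infra/ip-pools/tep-pool",
                  json={"display_name": "tep-pool"})
r.raise_for_status()

# A static subnet with enough addresses for three hosts plus one edge node
subnet = {
    "resource_type": "IpAddressPoolStaticSubnet",
    "cidr": "172.16.50.0/24",  # placeholder TEP network
    "allocation_ranges": [{"start": "172.16.50.10", "end": "172.16.50.13"}],
    "gateway_ip": "172.16.50.1",
}
r = session.patch(
    f"{NSX}/policy/api/v1/infra/ip-pools/tep-pool/ip-subnets/tep-subnet",
    json=subnet)
r.raise_for_status()
```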

July 14, 2020 · 6 min · eshanks

NSX Installation

This post will focus on getting the NSX-T Manager deployed and minimally configured in the lab. NSX-T is a prerequisite for configuring vSphere 7 with Kubernetes as of this writing. Deploy the NSX Manager: The first step in our build is to deploy the NSX Manager from an OVA template into our lab. The NSX Manager is the brains of the solution and what you’ll interact with as a user; each time you configure a route, segment, firewall rule, etc., you’ll be communicating with the NSX Manager. Download and deploy the OVA into your vSphere lab. ...

July 14, 2020 · 4 min · eshanks

Kubernetes Validating Admission Controllers

Hey! Who deployed this container in our shared Kubernetes cluster without putting resource limits on it? Why don’t we have any labels on these containers so we can report for chargeback purposes? Who allowed this image to be used in our production cluster? If any of the questions above sound familiar, it’s probably time to learn about Validating Admission Controllers. Validating Admission Controllers - The Theory: Admission controllers act as roadblocks before objects are admitted to a Kubernetes cluster. The examples above are common rules that companies might want to enforce before objects get pushed into a production cluster. These admission controllers can be custom code you’ve written yourself or a third-party solution; a common open-source project for managing admission control rules is Open Policy Agent (OPA). ...
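
To make the theory concrete, here’s a minimal sketch of a validating webhook server written in Python with Flask, rejecting Pods whose containers lack resource limits (the first complaint in the intro). The endpoint path and TLS file names are placeholders; in a real cluster you’d also register it via a ValidatingWebhookConfiguration and serve it behind a Service.

```python
# pip install flask
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()
    pod = review["request"]["object"]

    # Reject the Pod if any container is missing resource limits
    missing = [
        c["name"]
        for c in pod["spec"]["containers"]
        if not c.get("resources", {}).get("limits")
    ]

    response = {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": not missing,
        },
    }
    if missing:
        response["response"]["status"] = {
            "message": "containers missing resource limits: " + ", ".join(missing)
        }
    return jsonify(response)

if __name__ == "__main__":
    # The API server only calls webhooks over TLS; cert paths are placeholders
    app.run(host="0.0.0.0", port=8443, ssl_context=("tls.crt", "tls.key"))
```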

May 26, 2020 · 10 min · eshanks

Kubernetes Liveness and Readiness Probes

Just because a container is in a running state does not mean that the process inside it is functional. We can use Kubernetes readiness and liveness probes to determine whether an application is ready to receive traffic. Liveness and Readiness Probes - The Theory: On each node of a Kubernetes cluster, a kubelet runs that manages the pods on that particular node. It’s responsible for pulling images down to the node, reporting the node’s health, and restarting failed containers. But how does the kubelet know if a container has failed? ...
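
As a quick illustration, here’s a sketch using the official Kubernetes Python client to create a Pod with both probe types. The image and port are placeholders; the probe fields mirror what you’d write in a YAML manifest.

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "probe-demo"},
    "spec": {
        "containers": [{
            "name": "web",
            "image": "nginx:1.17",  # placeholder image
            # Liveness: the kubelet restarts the container if this fails
            "livenessProbe": {
                "httpGet": {"path": "/", "port": 80},
                "initialDelaySeconds": 5,
                "periodSeconds": 10,
            },
            # Readiness: the pod is removed from Service endpoints
            # until this check passes
            "readinessProbe": {
                "httpGet": {"path": "/", "port": 80},
                "initialDelaySeconds": 5,
                "periodSeconds": 5,
            },
        }]
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```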

May 18, 2020 · 5 min · eshanks

Kubernetes Pod Auto-scaling

You’ve built your Kubernetes cluster(s). You’ve built your apps in containers. You’ve architected your services so that losing a single instance doesn’t cause an outage. And you’re ready for cloud scale. You deploy your application and wait to sit back and “profit.” When your application spins up and starts taking on load, you can manually change the number of replicas to handle the extra demand, but what about the promises of cloud scaling? Wouldn’t it be better to deploy the application and let the platform scale it automatically? ...
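
Here’s a small sketch of what that looks like in practice: a HorizontalPodAutoscaler created with the Kubernetes Python client, targeting a hypothetical Deployment named `web` and scaling on CPU utilization.

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()

hpa = {
    "apiVersion": "autoscaling/v1",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "web-hpa"},
    "spec": {
        # The Deployment to scale (hypothetical name)
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "web",
        },
        "minReplicas": 2,
        "maxReplicas": 10,
        # Add replicas when average CPU usage exceeds 70% of the
        # containers' CPU *requests*
        "targetCPUUtilizationPercentage": 70,
    },
}

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```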

May 4, 2020 · 5 min · eshanks

Kubernetes Resource Requests and Limits

Containerizing applications and running them on Kubernetes doesn’t mean we can forget all about resource utilization. Our thought process may have changed because we can much more easily scale out our application as demand increases, but we often still need to consider how our containers might fight each other for resources. Resource requests and limits can be used to help stop the “noisy neighbor” problem in a Kubernetes cluster. Resource Requests and Limits - The Theory: Kubernetes uses the concepts of a “resource request” and a “resource limit” when defining how many resources a container within a pod should receive. Let’s look at each of these on its own, starting with resource requests. ...
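
Here’s an illustrative sketch of requests and limits on a container, created via the Kubernetes Python client. The numbers are arbitrary lab values, not recommendations.

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "requests-demo"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "nginx:1.17",  # placeholder image
            "resources": {
                # Request: what the scheduler reserves for the container
                "requests": {"cpu": "250m", "memory": "64Mi"},
                # Limit: the ceiling enforced at runtime
                "limits": {"cpu": "500m", "memory": "128Mi"},
            },
        }]
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```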

April 20, 2020 · 6 min · eshanks

In-tree vs Out-of-tree Kubernetes Cloud Providers

VMware offers a Kubernetes cloud provider that allows Kubernetes (k8s) administrators to manage parts of the vSphere infrastructure by interacting with the Kubernetes control plane. Why is this needed? Well, being able to spin up new virtual disks and attach them to your k8s cluster is especially useful when your pods need access to persistent storage, for example. The cloud providers (AWS, vSphere, Azure, GCE) obviously differ between vendors; each exposes different functionality to the Kubernetes control plane. For example, Amazon Web Services provides a load balancer that can be configured from k8s on demand if you’re using the AWS provider, but vSphere does not (unless you’re using NSX). ...
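
To show the in-tree/out-of-tree split concretely, here’s a hedged sketch of two StorageClasses created with the Kubernetes Python client: one using the legacy in-tree vSphere provisioner and one using the out-of-tree vSphere CSI driver. The class names and parameters are illustrative.

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()

# In-tree provider: the driver code ships inside Kubernetes itself
in_tree = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "vsphere-in-tree"},
    "provisioner": "kubernetes.io/vsphere-volume",
    "parameters": {"diskformat": "thin"},
}

# Out-of-tree provider: the vSphere CSI driver runs as pods in the
# cluster and is versioned independently of Kubernetes
out_of_tree = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "vsphere-csi"},
    "provisioner": "csi.vsphere.vmware.com",
}

for sc in (in_tree, out_of_tree):
    storage.create_storage_class(body=sc)
```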

April 14, 2020 · 4 min · eshanks

Deploying Tanzu Kubernetes Grid Management Clusters - vSphere

VMware recently released the 1.0 version of Tanzu Kubernetes Grid (TKG), which aims to make it easier to deploy conformant Kubernetes clusters across infrastructure. This post demonstrates how to use TKG to deploy a management cluster to vSphere. If you’re not familiar with TKG yet, you might be wondering what a management cluster is: it manages one-to-many workload clusters by spinning up VMs on different cloud providers and laying down the Kubernetes bits on those VMs, thus creating new clusters for applications to be built on top of. TKG is built upon the Cluster API project, so this post pretty accurately describes the architecture that TKG uses. ...
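
Since TKG sits on top of Cluster API, a workload cluster ultimately boils down to custom resources living in the management cluster. The sketch below, assuming the v1alpha3 Cluster API types current around TKG 1.0, shows what submitting a bare-bones Cluster object looks like with the Python client; the names, CIDR, and the referenced VSphereCluster are hypothetical, and TKG’s CLI generates far more complete manifests than this.

```python
# pip install kubernetes
from kubernetes import client, config

# Run against the TKG *management* cluster's kubeconfig
config.load_kube_config()

cluster = {
    "apiVersion": "cluster.x-k8s.io/v1alpha3",
    "kind": "Cluster",
    "metadata": {"name": "workload-1", "namespace": "default"},
    "spec": {
        "clusterNetwork": {"pods": {"cidrBlocks": ["100.96.0.0/11"]}},
        # Points at a CAPV object describing the vSphere-specific pieces
        "infrastructureRef": {
            "apiVersion": "infrastructure.cluster.x-k8s.io/v1alpha3",
            "kind": "VSphereCluster",
            "name": "workload-1",
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="cluster.x-k8s.io",
    version="v1alpha3",
    namespace="default",
    plural="clusters",
    body=cluster,
)
```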

April 6, 2020 · 6 min · eshanks