NSX Installation

This post focuses on getting the NSX-T Manager deployed and minimally configured in the lab. As of this writing, NSX-T is a prerequisite for configuring vSphere 7 with Kubernetes. The first step in our build is to deploy the NSX Manager from an OVA template into the lab. The NSX Manager is the brains of the solution and what you’ll be interacting with as a user: each time you configure a route, segment, firewall rule, etc., you’ll be communicating with the NSX Manager. Download and deploy the OVA into your vSphere lab. ...

July 14, 2020 · 4 min · eshanks

Kubernetes Validating Admission Controllers

Hey! Who deployed this container in our shared Kubernetes cluster without putting resource limits on it? Why don’t we have any labels on these containers so we can report on them for chargeback purposes? Who allowed this image to be used in our production cluster? If any of the questions above sound familiar, it’s probably time to learn about Validating Admission Controllers. Admission controllers act as roadblocks before objects are deployed to a Kubernetes cluster. The questions above are common rules that companies might want to enforce before objects get pushed into a production Kubernetes cluster. These admission controllers can run custom code that you’ve written yourself, or come from a third party. A common open-source project that manages admission control rules is Open Policy Agent (OPA). ...
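To make the roadblock idea concrete, here is a minimal sketch of how a validating webhook is registered with the API server. All names here (`require-limits-example`, the `opa` service and `/v1/admit` path) are hypothetical placeholders, not from the post:

```yaml
# Sketch: the API server calls this webhook before persisting new pods,
# and the webhook (e.g., an OPA deployment) can reject non-compliant ones.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: require-limits-example   # hypothetical name
webhooks:
  - name: validate.example.com   # hypothetical webhook identifier
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail          # block requests if the webhook is unreachable
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: opa           # assumed namespace for an OPA service
        name: opa
        path: /v1/admit
```

With `failurePolicy: Fail`, pod creation is denied whenever the policy service cannot be reached, which is the safer default for production policy enforcement.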

May 26, 2020 · 10 min · eshanks

Kubernetes Liveness and Readiness Probes

Just because a container is in a running state does not mean that the process running within it is functional. We can use Kubernetes readiness and liveness probes to determine whether an application is ready to receive traffic or not. On each node of a Kubernetes cluster there is a kubelet running which manages the pods on that particular node. It’s responsible for getting images pulled down to the node, reporting the node’s health, and restarting failed containers. But how does the kubelet know if there is a failed container? ...
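A minimal sketch of both probe types on a single container (the pod name, image, and probe timings are illustrative assumptions, not taken from the post):

```yaml
# Sketch: readiness gates traffic; liveness triggers container restarts.
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo               # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.19          # hypothetical image
      readinessProbe:            # pod only receives traffic once this passes
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:             # kubelet restarts the container after repeated failures
        httpGet:
          path: /
          port: 80
        periodSeconds: 15
        failureThreshold: 3
```

The distinction matters: a failing readiness probe quietly removes the pod from service endpoints, while a failing liveness probe causes the kubelet to restart the container.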

May 18, 2020 · 5 min · eshanks

Kubernetes Pod Auto-scaling

You’ve built your Kubernetes cluster(s). You’ve built your apps in containers. You’ve architected your services so that losing a single instance doesn’t cause an outage. And you’re ready for cloud scale. You deploy your application and wait to sit back and “profit.” When your application spins up and starts taking on load, you can change the number of replicas to handle the additional load, but what about the promises of cloud scaling? Wouldn’t it be better to deploy the application and let the platform scale it automatically? ...
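The usual way to get that automatic scaling is a Horizontal Pod Autoscaler. A minimal sketch, assuming a Deployment named `web` and a CPU target picked purely for illustration:

```yaml
# Sketch: scale the "web" Deployment between 2 and 10 replicas,
# targeting ~60% average CPU utilization across its pods.
apiVersion: autoscaling/v2beta2   # the HPA v2 API current around this post's date
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                   # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                     # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60  # add replicas when average CPU exceeds ~60%
```

Note that CPU-based autoscaling only works if the containers declare CPU resource requests, since utilization is computed as a percentage of the request.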

May 4, 2020 · 5 min · eshanks

Kubernetes Resource Requests and Limits

Containerizing applications and running them on Kubernetes doesn’t mean we can forget all about resource utilization. Our thought process may have changed because we can much more easily scale out our applications as demand increases, but we still need to consider how our containers might fight with each other for resources. Resource requests and limits can be used to help stop the “noisy neighbor” problem in a Kubernetes cluster. Kubernetes uses the concepts of a “resource request” and a “resource limit” when defining how many resources a container within a pod should receive. Let’s look at each of these topics on its own, starting with resource requests. ...
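Requests and limits are set per container in the pod spec. A minimal sketch, with the pod name, image, and resource values chosen only for illustration:

```yaml
# Sketch: requests drive scheduling guarantees; limits cap actual usage.
apiVersion: v1
kind: Pod
metadata:
  name: limits-demo              # hypothetical name
spec:
  containers:
    - name: app
      image: busybox:1.31        # hypothetical image
      command: ["sleep", "3600"]
      resources:
        requests:                # scheduler reserves at least this much on a node
          cpu: "250m"            # 0.25 of a CPU core
          memory: "64Mi"
        limits:                  # container is CPU-throttled / OOM-killed above this
          cpu: "500m"
          memory: "128Mi"
```

The asymmetry is worth remembering: exceeding the CPU limit throttles the container, while exceeding the memory limit terminates it.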

April 20, 2020 · 6 min · eshanks

In-tree vs Out-of-tree Kubernetes Cloud Providers

VMware offers a Kubernetes cloud provider that allows Kubernetes (k8s) administrators to manage parts of the vSphere infrastructure by interacting with the Kubernetes control plane. Why is this needed? Well, being able to spin up new virtual disks and attach them to your k8s cluster is especially useful when your pods need access to persistent storage, for example. The cloud providers (AWS, vSphere, Azure, GCE) obviously differ between vendors, and each exposes different functionality to the Kubernetes control plane. For example, Amazon Web Services provides a load balancer that can be configured from k8s on demand if you are using the AWS provider, but vSphere does not (unless you’re using NSX). ...

April 14, 2020 · 4 min · eshanks

Deploying Tanzu Kubernetes Grid Management Clusters - vSphere

VMware recently released the 1.0 release of Tanzu Kubernetes Grid (TKG), which aims to decrease the difficulty of deploying conformant Kubernetes clusters across infrastructures. This post demonstrates how to use TKG to deploy a management cluster to vSphere. If you’re not familiar with TKG yet, you might be curious about what a management cluster is. The management cluster is used to manage one to many workload clusters: it spins up VMs on different cloud providers and lays down the Kubernetes bits on those VMs, creating new clusters for applications to be built on top of. TKG is built upon the ClusterAPI project, so this post pretty accurately describes the architecture that TKG uses. ...

April 6, 2020 · 6 min · eshanks

Hello World - COVID-19 and Golang

There is a worldwide pandemic going on right now and it has disrupted practically everything. Many people are worried not only about their own and their families’ health, but also about their job situations. I feel incredibly fortunate that my employer seems intent on continuing to work through this situation and that I am already a remote worker most of the time. My team was asked, of course, to take care of our families, but also to take this opportunity to learn something new. I took this respite from normal activities to try to learn some basic Golang (Go) programming. I sometimes have a hard time focusing on a project when there are no specific goals in mind, so my “Hello World” attempt at programming in Golang was to grab the latest COVID-19 statistics and post them to Slack once per day. ...

March 22, 2020 · 5 min · eshanks

Tanzu Mission Control Getting Started Guide

VMware Tanzu is a family of products and services for modernizing your applications and infrastructure with a common goal: deliver better software to production, continuously. The portfolio simplifies multi-cloud operations while freeing developers to move faster and access the right resources for building the best applications. VMware Tanzu enables development and operations teams to work together in new ways that deliver transformative business results. One of these new solutions within the Tanzu brand is Mission Control. If you’re looking to get started with Tanzu Mission Control for management of and visibility into your Kubernetes clusters, start with the articles below. You’ll learn the basics of Tanzu Mission Control: how to deploy and manage Kubernetes clusters, assign policies, and manage the lifecycles of those clusters. ...

March 10, 2020 · 5 min · eshanks

Tanzu Mission Control - Access Policies

Controlling access to a Kubernetes cluster is an ongoing activity that must be done in conjunction with developer needs and is often maintained by operations or security teams. Tanzu Mission Control (TMC) can help you set up and manage these access policies across fleets of Kubernetes clusters, making everyone’s life a little bit easier. Before we can assign permissions to a user or group, we need a user or group to assign them to. By logging into the VMware Cloud Services portal (cloud.vmware.com) and going to the Identity and Access Management tab, we can create and invite new users. You can see I’ve created a user. ...

March 10, 2020 · 3 min · eshanks