Enable Workload Management
July 14, 2020

This post focuses on enabling the workload management components for vSphere 7 with Kubernetes. It is assumed that the vSphere environment is already in place and that the NSX-T configuration has been deployed.
To enable workload management, log in to your vCenter as the administrator@vsphere.local account, then from the Menu select Workload Management.

Within the Workload Management screen, click the ENABLE button.
The first screen in the wizard lists your compatible vSphere clusters. These clusters must have HA enabled and DRS enabled in fully automated mode. If a cluster is missing, make sure its ESXi hosts are on version 7 with HA and DRS enabled. You'll also need a vSphere Distributed Switch at version 7 for these clusters.
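If you'd like to verify those settings from the command line, the vCenter appliance shell includes dcli. A minimal check, assuming the cluster summary output includes the ha_enabled and drs_enabled flags, looks like this:

# Run from the vCenter Server Appliance shell (SSH as root).
# Each cluster is listed along with its ha_enabled and drs_enabled flags.
dcli com vmware vcenter cluster list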
If you're having trouble figuring out why a cluster isn't listed as compatible, you can run the command below to see the incompatibility reasons.
dcli com vmware vcenter namespacemanagement clustercompatibility list
You can see why two of my vSphere clusters are incompatible from running the command above. If you have more trouble enabling workload management, I recommend reading this post from William Lam.
Next, you must select a control plane size. This defines the VM size of the control plane nodes for your Kubernetes clusters. Since I have limited resources in my lab, I've chosen the Tiny size.
The next screen requires you to fill out networking information. First, we'll discuss the management network. Each control plane node that gets deployed will have a network connection on the management network (VLAN 150 if you've been following the series). Select the management portgroup for your network, then the starting IP address to be used for new nodes. Addresses increment from this starting IP, so be sure to have at least five IP addresses available. Next, set the subnet mask, gateway, DNS information, and NTP configuration.
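Before you commit to a starting address, a quick sketch like the one below (assuming the example starting address of 10.10.50.120 used later in this post) can confirm the five consecutive addresses aren't already in use:

# Each address 10.10.50.120-124 would be consumed by the supervisor control plane.
for i in $(seq 120 124); do
  ping -c 1 -W 1 10.10.50.$i > /dev/null && echo "10.10.50.$i is already in use" || echo "10.10.50.$i appears free"
done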
Once you're through with the management network, it's time to configure the workload network. Select the Distributed Switch that will be used and the Edge cluster. Next, enter an API Server endpoint DNS name. This will be associated with the first "Starting IP Address" created in the management network (10.10.50.120 in this example), so you'll want to add a DNS entry for that FQDN. The default Pod CIDRs and Service CIDRs should be fine, but you can change them if you'd like. Lastly, you need to enter Ingress and Egress CIDRs. These address ranges should come from your external network; in my case this is VLAN 201, where I've set aside two /26s for ingress/egress access.
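After adding that DNS record, a quick lookup confirms the name resolves to the first starting IP. The FQDN below is just a placeholder for whatever endpoint name you entered:

# Should return 10.10.50.120 in this example; supervisor.lab.local is a hypothetical name.
nslookup supervisor.lab.local
dig +short supervisor.lab.local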
Next up, it's time to set up storage. You'll need to store three different types of objects on a datastore:
- Control Plane Node – virtual disks for control plane nodes
- Ephemeral Disks – vSphere pod disks
- Image Cache – container image cache
For each of these objects, you'll need to select a storage policy that defines which datastores are compatible. I created a Hollow-Storage-Profile policy as a prerequisite that selects my vsanDatastore. Select the storage policy you configured for each of these components.
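If you want to confirm the policy is registered before running the wizard, the appliance shell can list vCenter's storage policies (this assumes the vcenter storage policies namespace is available in your dcli version):

# Hollow-Storage-Profile should appear in this list if it was created correctly.
dcli com vmware vcenter storage policies list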
Once you're done, click Finish and go get some coffee. No, I mean it, go drive to Starbucks or start a fresh pot of coffee and wait for it to be ready. Then drink it, and then come back. This process took about an hour to complete in my cluster.
As the configuration is running, you can view some minimal status information in the clusters screen. You can see here it’s configuring.
As I set this up in my lab, I hit a couple of challenges and needed details about what was happening. If you need log details, log in to the vCenter appliance shell and cat or tail the following two log files to see what's going on.
tail -f /var/log/vmware/wcp/wcpsvc.log
tail -f /var/log/vmware/wcp/nsxd.log
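If you'd rather not watch the full log stream, a simple filter like the one below (just a rough sketch; adjust the pattern as needed) pulls out the most recent errors:

# Show the 20 most recent lines mentioning errors or failures in the WCP service log.
grep -iE "error|fail" /var/log/vmware/wcp/wcpsvc.log | tail -n 20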
NOTE: some operations may fail or return a 404 error. These appear to be normal operations that are retried via a control loop, so an occasional error isn't necessarily anything to worry about.
When complete, you should see that your cluster has a "Config Status" of "Running". You'll also see the control plane node IP address, which comes from the Ingress CIDR configured previously.
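As a final sanity check, that control plane IP should respond over HTTPS (it serves the Kubernetes API and the CLI tools landing page). The address below is hypothetical; use the one shown in the Workload Management screen:

# -k skips certificate validation, since the supervisor initially uses a self-signed certificate.
curl -k https://10.10.201.65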
Summary
Enabling the Workload Management components isn't too labor intensive once you have the prerequisites done, but it does take a while to complete. At this point you should have a supervisor cluster created and ready to be used. Stay tuned, and we'll cover what to do with that cluster now that you've set up vSphere 7 with Kubernetes!
THIS SITE IS AWESOME FOR MY VC 7 NESTED HOME LAB!!! Thank you!!
I wish some eval licenses came with NSX though 🙂
Nick, thank you for your comment.
Take a peek at William Lam’s post here: https://www.virtuallyghetto.com/2020/07/is-vsphere-with-kubernetes-available-for-evaluation.html
Maybe that can get you going.
Eric
When I enabled workload management, it hung with error messages…
Cluster domain-c8 is unhealthy: the server has asked for the client to provide credentials.
My lab is on vSphere 7, vCenter 7, NSX-T 3.0.2.
You might need to re-enable the trust between NSX Manager and vCenter.
This error with 404 is driving me crazy:
Configure operation for the Master node VM with identifier vm-3643 failed.
What should I do?
ESXi 7.0.2