2021 Home Lab

March 8, 2021 By Eric Shanks

Time for an update on the home lab. 2020 meant spending a lot of time at home, and there were plenty of opportunities to tinker with the lab. I purchased some new hardware and did plenty of reconfiguring, so here's the 2021 version of my home lab in case anyone is interested.

Rack

The rack is custom made and has been in use for a while now. My lab sits in the basement on a concrete floor, so I built a wooden set of shelves on casters so I could roll it around if it was in the way. The UPS sits on the shelf so that I can unplug the wall power and move the lab, and as long as I have a long enough Internet cable, I can wheel the lab around for as long as the UPS holds on. On one side I put a whiteboard so I could draw something out if I was stuck. I don't use it that often, but I like that it covers the side of the rack.

On the back of the shelves, I added some cable management panels.

Power

As mentioned, I have a UPS powering my lab. It's a CyberPower 1500 AVR. I'm currently running around 550 watts for the lab under normal load. I've mounted a large power strip along the side of the rack and a few small strips on each shelf. I also bought some 6-inch IEC cables, which really cut down the cable clutter behind the lab.

Compute

Last year I bought new servers because I needed more capacity and my old processors couldn't run the AES instructions. I wanted compute that could run the full vSphere 7 stack, complete with NSX-T, Tanzu Kubernetes Grid, and anything else you can think of, so I bought three Supermicro E200-8D servers, each with a six-core Intel processor and 128 GB of memory. In 2021 I added two more of these and decommissioned one of my old home-built servers.

For local storage, each host boots ESXi from a 64 GB USB drive, and I added a 1 TB SSD and a 500 GB NVMe drive to serve as the capacity and caching tiers for VMware vSAN. There isn't a lot of room for disk drives in this model, but the servers are certainly compact enough to fit on a shelf.

These servers have two 10 GbE NICs, two 1 GbE NICs, and an IPMI port for out-of-band management. I wanted to be sure I had a way to power the servers on and off, load images into a virtual CD-ROM, and so on.

I have one other server built out of spare parts I had lying around. It includes another 6 cores and 128 GB of memory.

My hosts are organized into three vSphere clusters. The main “HollowCluster” is where most of my machines are built and tested. I have an edge cluster with a single node that runs my NSX-T Edge nodes. And finally, I have an auxiliary cluster (the spare-parts machine) where I run workloads that aren't critical to my infrastructure.
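Purely as an illustration of that layout, here's a minimal pyVmomi sketch that lists each cluster and the hosts in it. The vCenter address and credentials are placeholders, so treat this as a rough example rather than anything pulled from my actual environment.

```python
# Minimal pyVmomi sketch: list each vSphere cluster and its hosts.
# The vCenter hostname and credentials below are hypothetical placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(
    host="vcenter.lab.local",                 # placeholder vCenter address
    user="administrator@vsphere.local",
    pwd="changeme",
    sslContext=ssl._create_unverified_context(),  # lab-only: skip cert validation
)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True
)

# Print each cluster followed by the ESXi hosts it contains.
for cluster in view.view:
    hosts = ", ".join(host.name for host in cluster.host)
    print(f"{cluster.name}: {hosts}")

Disconnect(si)
```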

Storage

For storage, I have a tiered system. I have an eight-bay Synology array used for virtual machines and file stores, and a secondary Synology used as a backup device. Important information on the large Synology is backed up to the smaller one, and then pushed to Amazon S3 once a month for an offsite copy.

  • vSphere Storage Array: Synology DS1815+
    • 8 TB of spinning disk available, with dual 256 GB SSDs for caching
  • File Storage and Backup Array: Synology DS1513+
    • 3.6 TB of spinning disk available
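The monthly push to S3 can be handled a few different ways; just as an illustration, a minimal boto3 sketch of that offsite step could look something like this (the bucket name, key, and file path are all hypothetical placeholders).

```python
# Illustration only: push a backup archive to S3 for offsite storage.
# Bucket name, key, and local path are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    Filename="/volume1/backups/lab-backup-2021-03.tar.gz",  # placeholder local archive
    Bucket="hollow-lab-offsite",                            # placeholder bucket name
    Key="synology/lab-backup-2021-03.tar.gz",
    ExtraArgs={"StorageClass": "STANDARD_IA"},              # cheaper tier for rarely-read archives
)
print("offsite copy uploaded")
```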

vSAN has become my place for ephemeral data. I found that when running Kubernetes clusters, my Synology array with its SSD cache wasn't fast enough, and I was getting etcd timeouts. (Full disclosure: I had also started using a log aggregation tool that was chewing up IOPS, which may have been part of the problem.)

So I started using vSAN for my Kubernetes clusters. My vSAN disk groups use a 1 TB Kingston SSD per host for capacity and a 500 GB Western Digital NVMe SSD for caching.
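For the Kubernetes clusters to actually consume vSAN, the usual approach is a StorageClass backed by a vSAN storage policy through the vSphere CSI driver. Here's a small sketch using the Kubernetes Python client; it assumes the CSI driver is already installed, and the policy name is just a placeholder for whatever policy exists in vCenter.

```python
# Sketch: create a StorageClass backed by a vSAN storage policy via the
# vSphere CSI driver. Assumes the CSI driver is already installed; the
# policy name below is a placeholder.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context

sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="vsan-default"),
    provisioner="csi.vsphere.vmware.com",                       # vSphere CSI driver
    parameters={"storagepolicyname": "vSAN Default Storage Policy"},  # placeholder policy
    reclaim_policy="Delete",
    volume_binding_mode="Immediate",
)

client.StorageV1Api().create_storage_class(body=sc)
print("StorageClass vsan-default created")
```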

Network

No real network updates this year as far as hardware goes, but I did make a fair number of configuration changes. The biggest change was creating a second wireless LAN and putting all of my IoT devices on it, then creating a firewall rule to prevent them from accessing my home wireless or lab equipment. It took a fair amount of time to move my smart home controller, Amazon Echos, Wemo plugs, TVs, appliances, smoke detectors, cameras, and the rest over to the new network, but it helps me sleep at night. Now I don't worry as much about zero-days, since those devices are segmented away from any actual data.

If you want to see the network design, you can take a look at the diagram below.

I mounted my basement access point, USG, and PoE switch on a piece of plywood along with a patch panel.

The cables are colored according to purpose.

  • Yellow – Management Networks and Out of Band access.
  • Green – Storage and vMotion Networks (10GbE)
  • Blue – Trunk ports for virtual machines
  • Red – Uplinks

Cloud

I’ve decided to use Amazon as my preferred cloud vendor, mainly because I’ve done much more work there than on Azure. My AWS accounts are configured in a hub-and-spoke model that mimics a production-like environment for customers.

I use the cloud for backup archival and just about anything you can think of that my home lab either can't do or doesn't have capacity for. I like to use services like Route 53 for DNS, so a lot of my test workloads still end up in the cloud. Most of the accounts below are empty or contain resources that don't cost money, such as VPCs.
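As an example of the kind of small thing that ends up in the cloud, here's a boto3 sketch that upserts an A record in Route 53. The hosted zone ID, record name, and IP address are hypothetical placeholders.

```python
# Illustration: upsert an A record in Route 53 pointing a lab hostname at an IP.
# The hosted zone ID, record name, and address are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEF",  # placeholder hosted zone
    ChangeBatch={
        "Comment": "lab test record",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "test.example.com.",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }
        ],
    },
)
print("record upserted")
```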

My overall monthly spend on AWS is around $35, most of which is spent on the VPN tunnel and some DNS records.