Poor Man’s SRM Lab (Whitebox)
May 3, 2012

I really wanted to test out some VMware Site Recovery Manager (SRM) scenarios and realized that buying SANs, servers, and networking equipment would be quite expensive. I also didn't have much space in my house for running all of that equipment. After completing my VCP5 I was given a copy of VMware Workstation 8 and thought I might be able to build a nested virtual environment, where the ESXi hosts themselves are virtualized inside Workstation. (Don't worry, virtualizing a virtual host doesn't warp time or space; it's safe.)
Physical Hardware
For my whitebox, I knew the resource I would run shortest on was memory. Nesting hosts, SAN simulators, and VMs was going to take a toll on my resources, so I built a machine that would run 32GB of RAM, and I purchased an SSD for anything I wanted better performance on. I also added two 2TB drives (mirrored) and a six-core AMD processor. My hardware is listed below, and a rough memory budget follows the list:
- 1 x Patriot Gamer 2 Series, Division 2 Edition 32GB (4 x 8GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory Model PGD332G1333ELQK
- 1 x ASUS M5A97 AM3+ AMD 970 SATA 6Gb/s USB 3.0 ATX AMD Motherboard with UEFI BIOS
- 1 x OCZ Petrol PTL1-25SAT3-64G 2.5″ 64GB SATA III MLC Internal Solid State Drive (SSD)
- 1 x AMD FX-6200 Zambezi 3.8GHz (4.1GHz Turbo) Socket AM3+ 125W Six-Core Desktop Processor FD6200FRGUBOX
- 2 x Seagate Barracuda Green ST2000DL003 2TB 5900 RPM SATA 6.0Gb/s 3.5″ Internal Hard Drive – Bare Drive
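To put rough numbers on that memory constraint, here is an illustrative budget for carving up the 32GB; the VM counts and sizes are examples, not a prescription:

- 2 x nested ESXi hosts (protected and recovery sites) at 6GB each = 12GB
- 2 x NetApp simulators at 2GB each = 4GB
- 2 x Windows VMs for vCenter Server and SRM at 4GB each = 8GB
- Nested lab VMs and the virtual router = roughly 4GB
- Windows host OS and Workstation overhead = roughly 4GB

That lands right around the 32GB ceiling, which is why memory, not CPU, is the first thing to size for.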
Logical Design
To give you a better picture of the nesting that's happening, I've drawn up a diagram. Here you can see that I have only one physical machine, running a Microsoft Server OS with Workstation installed. Inside of that we have the ESXi hosts set up, and inside of those we have our "virtual machines."
I know that the picture seems pretty convoluted, and I’ll admit that having a lab setup this way is slightly more difficult to wrap your head around, but it sure beats spending the dough on extra equipment.
In an effort to give you a better look at the actual lab environment, the diagram below shows just the portion of the lab used to test SRM. Here I've stripped out the VMware Workstation layers and the physical hardware in order to simplify what I want to test.
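One practical note if you're reproducing this: the nested ESXi guests need hardware virtualization (AMD-V in my case) passed through to them before they can run 64-bit VMs of their own, and in Workstation 8 that's a .vmx setting rather than a GUI checkbox. A minimal sketch of the relevant guest settings, assuming an ESXi 5.x guest: guestOS identifies the guest as ESXi 5.x, vhv.enable exposes AMD-V/VT-x to the nested host, memsize is example sizing only, and e1000 is a NIC type ESXi has a driver for.

guestOS = "vmkernel5"
vhv.enable = "TRUE"
memsize = "6144"
ethernet0.virtualDev = "e1000"

With vhv.enable in place, each virtual ESXi host can power on its own 64-bit guests.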
Setup
In order to get all of this set up, give yourself plenty of time. It took me around two weeks of working in my spare time to get it fully up and running. If you're considering building a lab of your own, hopefully these tips will help you get going faster.
First and foremost, I need to thank Vargi for his great documentation on setting this up. Following his "SRM in a Box" guide was great and is likely everything you need to get your lab up and running. I would just like to point out a few places where I got stuck so that the same doesn't happen to future users. His guide can be found here: https://docs.google.com/file/d/0B8RhOQcmJhZuMThjNWU5OWEtMGE3Ni00ODdkLTlkYzEtMTBjYTllMjE0OWZh/edit?pli=1#

I also recommend checking out his blog at: http://virtual-vargi.blogspot.com/
NetApp Simulator Setup
I had plenty of problems getting the NetApp simulators set up correctly. Everything you need to know is in the SRM in a Box guide, but be sure to follow these commands exactly the first time you boot each simulator. If you boot the simulator into Data ONTAP without changing this info, you're out of luck: you'll need to download another copy of the simulator from the now.netapp.com site and start over. Also, when you're setting the sysid and serial numbers of your two simulators, make sure they are different so that OnCommand System Manager can manage both systems. I've copied Vargi's text below for quick reference.
Boot the simulator and do its initial configuration. We need to change the serial number, as we are going to have two simulators running and managed by OnCommand System Manager. When it boots, press a key other than Enter to break the boot, and then run the following commands:

set bootarg.nvram.sysid=1111111101
set SYS_SERIAL_NUM=1111111101
boot
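Since the sysids of the two simulators must differ, the second simulator gets the same treatment at its own first boot, just with a different value. For example (the digits here are arbitrary; they only need to be unique between the two filers):

set bootarg.nvram.sysid=1111111102
set SYS_SERIAL_NUM=1111111102
boot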
I would also like to mention that this all works perfectly fine with the 8.1 version of the NetApp simulator.
ESXi BSOD Fix
I was excited to see that ESXi was now supported as a guest in Workstation 8; after all, that's why I decided to build this in the first place. But I quickly started getting BSODs on my system and assumed I had purchased some faulty hardware for my whitebox. After some digging, I found that the fix was a settings change on the ESXi guest (you have to love the VMware Communities): http://communities.vmware.com/message/2017526

It appears this will be resolved in a future version of Workstation, and the "fix" won't be necessary any longer.
Multiple Subnets
I wasn't sure it was necessary, but for a more realistic feel I wanted more than one subnet available. In my case I have a storage subnet and a virtual machine/ESXi host subnet. At the same time, I didn't want my SRM lab to interfere with my home network. Luckily, Vyatta has a free virtual router that you can use to route traffic between multiple subnets. This allowed me to build additional networks as needed and route between them. I have another post on setting up the Vyatta router: Vyatta Router Setup
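As a sketch of what that router configuration might look like, here is a minimal Vyatta setup that puts one interface on a hypothetical host/VM subnet and a second on a storage subnet (the interface names and addresses are examples, not the ones from my lab):

configure
set interfaces ethernet eth0 address 192.168.10.1/24
set interfaces ethernet eth1 address 192.168.20.1/24
commit
save
exit

Because both subnets are directly connected to the router, no static routes are needed; each VM or host just points its default gateway at the Vyatta address on its own subnet.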
End Results

Comments
Hi,
I am glad I came across this! Just wondering how you are getting on with it?
I was considering purchasing 2 x ML110s with 16GB each and a Layer 3 SG300 switch, but that is coming to quite a lot.
Would be excellent to get some feedback on performance and how many VMs you are running simultaneously 🙂
Any feedback, kindly appreciated.
Many thanks,
G.
I'm running about 10 VMs simultaneously, but they aren't really doing a whole lot. Performance is OK if you're just testing out how things work or doing some proof-of-concept work, but if you need performance I would suggest the ML110s or maybe even the HP MicroServer. There is a great blog post by Chris Wahl (@chriswahl): http://wahlnetwork.com/2012/07/09/the-hp-proliant-microserver-n40l-vmware-home-lab-review-video/
I'm adding a second SSD soon to be used for host cache and some VM IOPS. My biggest hangup right now is the slow SATA disks. Adding some SSD should help.
Hi,
Thank you for the reply.
So you're saying the ML110 might perform better than your desktop build? You have a six-core beast.
The MicroServer CPUs are somewhat limited, but I think you are right; the best bang for the money is either the ML110 or the N40L.
Your only performance issue is the SATA speeds, then? CPU and all of that running OK?
Many thanks,
Gabi.
I'm having no issues with CPU or memory; 32GB of memory is really the saving grace. SATA performance is really my bottleneck. It's not a perfect scenario, but it does allow me to create labs for studying, and it's fairly cheap.
It also takes up a lot less space than a pair of ML110s and is pretty quiet. I suggest large, slower-speed fans to keep noise down.
I’m pretty happy with it.
Hi Eric,
I was wondering if you could help me with a problem, as I see you have used the ASUS M5A97 motherboard.
I have the following home lab whitebox:
- ASUS M5A97 + 32GB RAM (4 x 8GB modules)
- Radeon HD 5450 PCI Express (inserted in the PCIe 2.0 black x4-speed slot)
- Seagate SATA 2TB hard disk
- Kingston Traveller USB 2GB
- UEFI BIOS set at Optimised Defaults
When I run the ESXi 5.1 installer from my UNetbootin Kingston USB drive, it always hangs at "Relocating modules and starting up the kernel".
I have googled other KBs and forums and tried many options:
1. Tried the 16x-speed blue PCI Express slot for my graphics card.
2. Tried Shift + O at install and added ignoreHeadless=TRUE.
Still no joy Eric 🙁
Kind Regards
Bob
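(For anyone else who hits the "Relocating modules and starting up the kernel" hang, the headless workaround mentioned above is normally applied in two steps, per VMware KB 2041473; double-check the KB against your ESXi version:)

At the installer boot prompt, press Shift+O and append:
ignoreHeadless=TRUE

After the installation completes, make the setting persistent from the ESXi shell:
esxcfg-advcfg --set-kernel "TRUE" ignoreHeadless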
Sorry, no real great information from me.
I would guess it's an issue with the hardware not being on the HCL: http://www.vmware.com/resources/compatibility/search.php
In my case I'm running Windows with VMware Workstation, then ESXi on Workstation, so essentially ESXi is running on completely different (virtual) hardware.
I hope you figure it out, or if not, throw it on Workstation. Probably not the answer you were looking for, though. Sorry again.
vmcreator, if video is a problem, get one of these: http://www.amazon.com/ATI-Rage-8MB-Video-Card/dp/B003FP95Q2 For $17 and Prime shipping, in two days you'll have one of the greatest video cards for a home-built server that doesn't have onboard video.
Sorry for getting back to you guys so late. I've been so busy I even forgot I posted this question here (oh dear), I need a holiday. The problem was with the ASUS motherboard, a bad batch from the supplier.
I purchased the same motherboard, but a different CPU to match, from Amazon, along with a new ATI Rage card and quality Corsair RAM. It runs fine, and I have been building nested ESXi 5.5 and all sorts of configurations without problems.
Cheers guys
VMCreator
Can you show us step by step how you set up vCenter Linked Mode for vSphere 5.5? I have issues doing it.
My lab is no longer set up to do this because I'm using the vCenter Server Appliance, which doesn't support Linked Mode.
Maybe this list of prerequisites will help: http://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc%2FGUID-7634B78B-07E5-44EC-B5A0-CBEE842A20FD.html
Christian, I had issues with vCenter Linked Mode on vSphere 5.5 too. I got around them by setting up vCenter 1 (with embedded SQL) using all defaults. For vCenter 2, avoid the Simple Install and install each item manually, in order:

1. Inventory Service
2. Single Sign-On
3. vCenter Server
4. Web Client
5. Fat (desktop) Client

Step 2 is the most critical for Linked Mode to work; you have to choose the third radio button, which you won't even see if you choose Simple Install. Once you do this and enter the FQDN and password for vCenter 1, you will be able to complete the install in Linked Mode during step 3. Here's the article I used: http://www.mikelaverick.com/2013/11/back-to-basics-vcenter-5-5-with-multisite-sso-and-linked-mode-configuration/
Thanks for the great reference article, Eric. How did you get past the "filer01% sudo makedisk.main -n 14 -t 23 -a 2 / sudo: makedisk.main: command not found" error during the NetApp simulator setup?
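(A hedged note for anyone hitting the same error: in the 8.x simulators the disk-creation tool lives in /sim/bin, which is not on the systemshell PATH by default, and it goes by vsim_makedisks rather than makedisk.main; something along these lines has worked for others, but verify against NetApp's current simulator documentation:)

From the simulator's systemshell:
setenv PATH "${PATH}:/sim/bin"
cd /sim/dev
sudo vsim_makedisks -n 14 -t 23 -a 2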
An old post, but I'm still looking to do this. 🙂
Great stuff!
Looking to rebuild my lab from scratch: rebuild my whitebox like yours, and maybe repurpose the MicroServers?
Below is my current (poor) setup:
Whitebox:
- ASRock 970 Extreme3
- 32GB Kingston RAM
- 1 x Samsung 256GB SSD
- 1 x WD 1TB Green

MicroServer N40L (ESXi 6.0):
- 1 x 8GB USB (boot)
- 1 x 3TB WD Red
- 2 x 500GB HDD
- 8GB RAM

MicroServer N40L (Hyper-V):
- 2 x 250GB HDD
- 16GB RAM

Netgear Stora (stock OS):
- 2 x 1.5TB WD Green

Cisco SLM2008 switch
Is it possible to configure the HP N40Ls as an ESXi cluster?
Can you suggest a poor man's processor, as it's 2016 now, and a compatible motherboard as well?
Haha, good point! My lab has grown pretty considerably over the past few years, so I don't have any current recommendations. I think your best bet is to look for something that gives you lots of memory, an SSD, and a processor that fits your price range with as many cores as possible. If you come up with something, please post it in the comments! Thanks for reading.