Are you thin or thick? Where at?

March 26, 2013 By Eric Shanks

I’m often asked how to provision virtual machine disks.  It almost always comes down to two questions: “Should I use thick or thin disks?” and “Should I do thin provisioning on the array or on the hypervisor?”

So here we go: Thin vs Thick




Thin provisioning:

Thin provisioned disks don’t allocate all of their space when the disk is created.  Instead, they allocate the space on demand.  This is a great way to get more bang for your buck out of your storage.  Let’s take a closer look with an example.


The above picture shows a 100GB virtual disk with 20GB of it actually in use by the virtual machine.  With a thinly provisioned disk, the hypervisor will show only 20GB of disk space used.  The virtual machine, on the other hand, will still see a full 100GB disk available to it.


Obviously the main reason to use thinly provisioned disks is to cut down on your storage costs.  In the example above, we could create four more virtual machines, each of which sees a 100GB disk, while together they consume only 20GB x 4 = 80GB of actual storage.
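To put numbers on it, here’s a quick back-of-the-envelope sketch in plain Python.  The sizes are just the ones from the example above, nothing more:

```python
# Back-of-the-envelope thin-provisioning math using the example above.
PROVISIONED_GB = 100   # the size each VM sees
USED_GB = 20           # the space each VM actually consumes
vms = 4                # the four additional virtual machines

thick_footprint = vms * PROVISIONED_GB   # thick: everything allocated up front
thin_footprint = vms * USED_GB           # thin: only what is actually written

print(f"Thick would consume {thick_footprint} GB")        # 400 GB
print(f"Thin consumes only {thin_footprint} GB")          # 80 GB
print(f"Savings: {thick_footprint - thin_footprint} GB")  # 320 GB
```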

Also, think about what happens when you start doing full clones.  Each clone consumes space based only on what’s actually in use, not on the full provisioned size.


Now say that we did create four more virtual machines and we’re sitting at 80GB of used disk space.  Each of those machines could grow to 100GB.  If they all did grow unexpectedly, you could fill up the datastore and cause an outage.
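That risk is easy to quantify.  Here’s a hedged sketch of the check you’d want to do (the 250GB datastore capacity is a made-up number for illustration; the disk sizes are from the example above):

```python
# Hypothetical datastore; the 250 GB capacity is an assumption for illustration.
datastore_capacity_gb = 250
thin_disks_provisioned_gb = [100, 100, 100, 100]  # four 100 GB thin disks

worst_case = sum(thin_disks_provisioned_gb)       # every disk grows to full size
overcommit = worst_case / datastore_capacity_gb

print(f"Worst case demand: {worst_case} GB")      # 400 GB
print(f"Overcommit ratio: {overcommit:.1f}x")     # 1.6x
if worst_case > datastore_capacity_gb:
    print("Datastore can be filled -> monitor usage and set alarms")
```

Any time that ratio is above 1.0, an unexpected growth spurt can fill the datastore, so usage alarms become mandatory rather than optional.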

Thick Provisioning:

Thick provisioning comes in two flavors: Eager Zeroed and Lazy Zeroed.

Eager Zeroed allocates all of the disk space when you provision it and consumes its assigned blocks right away.  It takes extra time during creation to write zeroes to all of the assigned blocks, though this time has been dramatically reduced by the VAAI primitives.

To give a very simplified example, the below diagram shows twelve blocks.  Four of them contain data, while the rest are allocated and have zeroes written to them.


Lazy Zeroed allocates all of the disk space immediately in the VMFS file system, but doesn’t actually touch the disk blocks on the storage system until the virtual machine requests them.  There is a small performance hit the first time each block is written, since it must be zeroed before the guest’s data can land on it.
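To illustrate the first-write behavior just described, here’s a toy model.  This is not real hypervisor code; the “disk” is just a Python list, and it exists only to show why lazy-zeroed disks pay a small cost during normal I/O while eager-zeroed disks pay it all at creation:

```python
# Toy model of lazy vs. eager zeroing -- purely illustrative, not VMFS internals.
class VirtualDisk:
    def __init__(self, blocks, eager):
        self.zeroed = [eager] * blocks   # eager: every block pre-zeroed at creation
        self.zero_ops_at_create = blocks if eager else 0
        self.zero_ops_at_write = 0

    def write(self, block):
        if not self.zeroed[block]:       # lazy: zero the block on first write
            self.zeroed[block] = True
            self.zero_ops_at_write += 1

lazy = VirtualDisk(12, eager=False)
eager = VirtualDisk(12, eager=True)
for blk in (0, 1, 2, 3):                 # the guest writes four blocks
    lazy.write(blk)
    eager.write(blk)

print(lazy.zero_ops_at_write)    # 4  -> zeroing happens during normal I/O
print(eager.zero_ops_at_write)   # 0  -> all zeroing was done up front
```

The total number of zeroing operations is the same either way; the two flavors just pay for them at different times.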



Thick provisioning will keep you from over provisioning your datastores and helps assure you don’t cause downtime by running out of space.  Thick Provisioned Eager Zeroed disks will also have the best performance, since all of the blocks are pre-zeroed up front and don’t have to be zeroed during normal operations.


The downside is that this type of disk will eat up your storage much faster and will likely waste disk space on empty blocks.


What about Array Thin Provisioning?

It gets a little more complex when you’re considering thin provisioning your storage array as well as your VMware datastores.

The best thing to do is to realize that the array doesn’t know what VMFS is doing on top of it.  The array can only tell whether blocks are empty or not.

Let’s look at Thick Provisioned Eager Zeroed disks on a thin provisioned LUN.   We look at the same blocks from our previous diagram, only this time we put them on top of a LUN.  This LUN is larger than the virtual disk size that was provisioned.  If the array thin provisions, the LUN must still be at least as large as all of our blocks, because every one of them has been zeroed and therefore written.  Here, the arrow shows the disk savings a LUN could gain by moving from a full sized LUN to a thin provisioned LUN.



Thin provisioned virtual disks on a thinly provisioned LUN can reduce the consumed space by much more.  This example shows the same four blocks that contain data, but remember that thinly provisioned virtual disks don’t pre-allocate the rest of the space.  So if we took the same full sized LUN and thin provisioned it, we’d gain a lot more space.




Thick Provisioned Lazy Zeroed disks actually behave much like thinly provisioned disks in this instance.  Remember that we pre-allocate all of the disk space, but we don’t zero it out.  This means that our VMFS datastores won’t be over provisioned, but the storage array could be.  Below we see the four data blocks and the additional allocated blocks, but since they’re not zeroed, the array doesn’t know anything is allocated.  Remember, the array doesn’t know what VMware disks are doing.
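Putting the three combinations together, this little sketch shows what VMFS reports versus what a thin provisioned array actually has to back.  The sizes are from the running example, and the logic is a deliberate simplification that ignores array-side zero detection and unmap:

```python
# Simplified view: what VMFS allocates vs. what a thin array must back.
# Ignores array-side zero detection/unmap -- an assumption for illustration.
DISK_GB, DATA_GB = 100, 20

disk_types = {
    # name:                (vmfs_allocated_gb, blocks_written_on_array_gb)
    "thin":                (DATA_GB, DATA_GB),   # allocate and write on demand
    "thick lazy zeroed":   (DISK_GB, DATA_GB),   # allocated in VMFS, not written
    "thick eager zeroed":  (DISK_GB, DISK_GB),   # every block zeroed = written
}

for name, (vmfs_gb, array_gb) in disk_types.items():
    print(f"{name:20s} VMFS sees {vmfs_gb:3d} GB, array backs {array_gb:3d} GB")
```

The lazy zeroed row is the one that surprises people: VMFS thinks the space is spoken for, but the array is still free to hand those blocks to someone else.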




So the answer to the question of “Are you thin or thick?” and “Where at?” is… It depends.  But at least now you hopefully understand the differences and can decide for yourself.