NetApp VSC4 Optimization and Migration

One of my most frequently read articles covers using MBRAlign to align your virtual machine disks on NetApp storage. Now that NetApp has released its new Virtual Storage Console (VSC4), the tedious task of running MBRAlign may be eased for some admins.

Optimization and Migration
The new VSC4 console for vSphere has a new tab called Optimization and Migration, where you can scan all or some of your datastores to check the alignment of your virtual machines. The scan manager can even be put on a schedule so that changes to a datastore are picked up.

Once you have scanned your datastores, you can go to the Virtual Machine Alignment section and see whether your virtual machines are aligned.


What if your virtual machines are not already aligned? NetApp has a new way to align them without having to take them offline.
Disclaimer: I’ve looked for documentation on exactly how this process works but couldn’t find any. The information below is how I understand it to work after testing in my lab. If any NetApp folks have definitive answers or documentation explaining this process, please post it in the comments and I will update this post.

Instead of realigning the virtual machine, NetApp creates a new datastore specifically built to align itself with the unaligned virtual machine. These new datastores are considered “functionally aligned” because they make the VM appear to be aligned. If you were to put an “actually aligned” virtual machine in a functionally aligned datastore, that VM would appear to be misaligned. It seems that NetApp creates a new volume whose starting offset matches that of the unaligned virtual machine.
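As an illustrative sketch (my interpretation, not NetApp’s actual implementation): assuming 512-byte sectors and WAFL’s 4 KB blocks, a guest partition’s “offset group” comes from where its starting sector falls within a 4 KB block. The classic Windows 2003/XP start sector of 63 lands in group 7, which is the “offset 7” we see later in the wizard.

```python
# Illustrative sketch (not NetApp's implementation): how a guest
# partition's starting sector determines its alignment "offset group".
# WAFL stores data in 4 KB blocks, so a partition is aligned when its
# byte offset is a multiple of 4096 (i.e. its 512-byte start sector is
# a multiple of 8).

SECTOR_SIZE = 512   # bytes per legacy disk sector
WAFL_BLOCK = 4096   # bytes per WAFL block

def offset_group(start_sector: int) -> int:
    """Return 0 for an aligned partition, 1-7 for the misalignment group."""
    return (start_sector * SECTOR_SIZE % WAFL_BLOCK) // SECTOR_SIZE

# Classic Windows 2003/XP default: partition starts at sector 63.
print(offset_group(63))    # -> 7, the "offset 7" group
# A modern OS starting at sector 2048 is already aligned.
print(offset_group(2048))  # -> 0
```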

Aligning Virtual Machines
Let’s go through the process of aligning a misaligned virtual machine using VSC4.
First, we select the virtual machine that is misaligned and choose the migrate task. This opens the alignment wizard.

Choose your filer.

Next, we choose a datastore. If a functionally aligned datastore already exists with the same offset as your unaligned virtual machine, you can select it. If no existing datastore will align with your VM, you’ll receive an error message like the one below; in that case, create a new datastore from the wizard.


Choose the datastore type.
In our case we’ll create a new datastore.

Once the migration is complete, you’ll see your virtual machine in a new datastore, and it will be aligned. Notice how the virtual machine’s offset matches the name of the new datastore that was created: the offset-7 VM was put into the AlignedDatastore1_optimized_7 datastore.


Now you can rest easy knowing that your virtual machines are not suffering performance issues due to unaligned disks, and that no downtime was required to fix them.

9 Responses to NetApp VSC4 Optimization and Migration

  1. Thanks for this. I received the “no suitable datastore” error and was wondering just what was going on there, as I have plenty of datastores with free space. I already had a case open with NetApp about it, so I pointed them at this post, asking if it was correct. They confirmed it was.

  2. DISCLAIMER: Nick from NetApp here. One of my core charters at NetApp is being the TME for the VSC, and as such, I will be creating the aforementioned documentation that you mentioned is lacking. We know it’s missing and definitely want to fill that void. It’s in process now.

    First of all, excellent post, and the things you were uncertain about, you are spot-on.

    Allow me to describe a bit further. This magic datastore, and the whole “Functionally Aligned versus Actually Aligned” went through several name changes as it developed.

    I’m glad to see you were able to discern what we were getting at. Essentially, what we’re doing is lying to vSphere and creating what we like to call a “shim” datastore that is intentionally offset. This way, you just svMotion any misaligned VMs into this datastore, and the misaligned I/O all of a sudden becomes an aligned I/O stream. Pretty slick if you think about it.

    This is to be considered a short-term solution within a much longer “actually aligned” strategy. Let me be perfectly clear about that: your VMs are still misaligned! We’re just aligning the I/O stream under the covers, in an online fashion, to ease the performance pains of misaligned VMs.
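A toy calculation can make the shim idea concrete (my own sketch of the concept, not NetApp internals): the shim datastore’s own start is deliberately offset so that the guest’s misaligned partition offset plus the shim offset lands on a 4 KB WAFL block boundary, which means the guest’s 4 KB I/O stream lines up even though the guest itself is unchanged.

```python
# Conceptual sketch of the "shim" datastore: pick a datastore offset
# that cancels out the guest partition's misalignment.

WAFL_BLOCK = 4096
SECTOR = 512

guest_partition_start = 63 * SECTOR                 # 32256 bytes, misaligned
shim_offset = (-guest_partition_start) % WAFL_BLOCK # 512 bytes here

# Any I/O the guest issues at a partition-relative 4 KB boundary now
# maps to a 4 KB boundary on the filer:
for guest_io in (0, 4096, 8192):
    physical = guest_partition_start + shim_offset + guest_io
    assert physical % WAFL_BLOCK == 0
print("guest 4 KB I/O boundaries line up with WAFL blocks")
```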

    Please still schedule some time to align your VMs offline with MBRalign, and then move them out of these Functionally Aligned datastores.

    Again, great job on the post! (and thanks for beating me to it!) 😉



  3. @Nick: Thanks for the info. That said, are the Functionally Aligned datastores considered misaligned in terms of deduplication?

  4. @Belu: We definitely account for the 7 (I think it’s 7…) major offset “groups” by migrating those into individual datastores. Meaning, we create a Functionally-Aligned datastore that is specific to the offset of the GuestOS/partition scheme. For most Windows implementations, this would be “Group 7.” I put up a video here that goes through some of this: –> Videos –> VSC. It’s a YouTube playlist of videos walking through each section of the VSC 4.1 in detail.
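As a rough sketch of the per-group idea described above (the VM names and start sectors are made up for illustration; the datastore naming follows the pattern the wizard produced in the post), misaligned VMs can be bucketed by offset group, with each group getting its own functionally aligned datastore:

```python
# Sketch: bucket VMs by offset group, one shim datastore per group.
# VM names and start sectors are hypothetical; the datastore name
# mimics the wizard's "<name>_optimized_<offset>" pattern.
from collections import defaultdict

vms = {"win2003-sql": 63, "win2003-file": 63, "rhel5-web": 62}  # start sectors

groups = defaultdict(list)
for name, start_sector in vms.items():
    groups[start_sector % 8].append(name)  # 8 sectors per 4 KB block

for group, members in sorted(groups.items()):
    print(f"AlignedDatastore1_optimized_{group}: {members}")
# The Windows default (sector 63) falls in group 7; sector 62 in group 6.
```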

    @Yassin: We consider a 1:1 LUN:volume strategy for datastores a best practice for VMware environments. Not so much for performance reasons, but more so because FlexVols, for us, are not much more than logical folders with entry points attached. So if you’re putting more than one LUN into a single volume and mounting each of those LUNs as a datastore, that might be something to reconsider, since a lot of our value-add is at the volume level. This also tends to solve a lot of “too many snapshots per volume” issues, as well as performance issues related to VMware throughput. At the end of the day, the LUNs live in volumes, and the volumes live in the same aggregate. I’d also encourage you to throw away the misconception that “FC is better and more performant than NFS.” Those days are behind us, and we’ve proven it. Plus, you get to throw away all the overhead, limitations, and SCSI handcuffs that come along with block-based storage. That’s not to say we don’t support it just as natively as we do NFS, but Ethernet is here to stay, folks. Drink the Kool-Aid and jump on the train. 🙂

    @Bartu: Datastores should never be exclusive to any one aggregate. You’re going to either undo all of the efficiency you gained by virtualizing in the first place, or make your aggr’s too small to be performant. The first, and most common, thing I like to check when performance concerns appear is raid group sizing. Under the covers, a large aggregate is a series of raid groups. We like these to follow the 14+2 rule, with a minimum of 12 disks per raid group. So if you’ve got several shelves, say four of the older DS14 shelves at 14 disks per shelf for a total of 56 spindles, I’d put them all in one 64-bit aggregate, but make sure each raid group has at least 12 spindles (14 is ideal), plus 2 spindles per aggr for double-parity, plus 2 spindles for hot spares. If we work backwards, it looks something like…

    56 – 2 spares – 2 parity = 52; 52 ÷ 14 = 3 rg’s of 14 disks, with 10 left over. At this point, we could add 2 more spindles to each rg, making them 16 disks (which is awesome for WAFL striping performance), leaving only 4 disks unused. If you were to simply add ALL unclaimed disks to a new aggregate and tell it you wanted 14-disk raid groups, those 10 leftovers would be put into their own raid group. I’ve seen three or four disks get dropped into a raid group of their own, and the performance is terrible. My personal recommendation: follow the 14+2 rule and build 16-disk raid groups; anything between 12 and 16 disks per raid group is the sweet spot. Thin provision everything, monitor capacity and utilization at the aggregate level, and leverage auto-grow on all your volumes and LUNs.
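The arithmetic above can be double-checked with a quick script (this follows the comment’s own accounting, which deducts spares and parity up front before dividing into raid groups):

```python
# Back-of-the-envelope raid group sizing for the 56-spindle example,
# using the same accounting as the comment: spares and parity are
# deducted first, then the remainder is split into raid groups.

total_disks = 56
spares = 2
parity = 2

usable = total_disks - spares - parity       # 52 spindles to distribute
groups14, left14 = divmod(usable, 14)        # 14-disk raid groups
groups16, left16 = divmod(usable, 16)        # 16-disk raid groups

print(f"14-disk rgs: {groups14} groups, {left14} leftover")  # 3 groups, 10 left
print(f"16-disk rgs: {groups16} groups, {left16} leftover")  # 3 groups, 4 left
```

Growing the three raid groups from 14 to 16 disks absorbs 6 of the 10 leftovers, which is why only 4 disks end up unused.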

    As far as performance is concerned, you can certainly leverage sysstat -x on the controller to monitor discrete stats on throughput to see if SIOC is telling the truth or not.
