vRealize Automation 6 with NSX – Load Balancing

November 9, 2015 | By Eric Shanks

If you’re building a multi-machine blueprint or a multi-tiered app, there’s a good chance that at least some of those machines will need to be load balanced. Many apps require multiple web servers to provide additional availability or to scale out. vRealize Automation 6, coupled with NSX, lets you build load balancing right into your server blueprints.

Just to set the stage: we’re going to deploy an NSX Edge appliance with our multi-machine blueprint, and it will load balance both HTTP and HTTPS traffic between a pair of servers.

Use Case: Load balancing can be used for many different reasons. Providing scale-out capacity and additional availability are the two primary use cases.

Build It

If you haven’t already gone through the process of setting up NSX and connecting it to vRealize Automation, you should do this first. In addition, you’ll need to already have a multi-machine blueprint ready to go. If you haven’t gone through this setup yet, please look at one of these posts before continuing:

I’ve chosen to copy the existing NAT profile that we built in a previous post. Feel free to use any type of multi-machine blueprint that you want.

Once you’ve set up the basics, open your blueprint and go to the Build Information tab. From here, make sure that a minimum of two servers is listed. We then select “Edit” under the network view.

 

vra-nsx-blueprints1

You’ll notice that we already have a network and network profile set up. If you haven’t done this already, please see one of the previous posts about setting up network adapters.

vra-nsx-lb2

 

Now go to the Load Balancer tab. Here you can select the services that you want to load balance. I’ve decided to do both HTTP (port 80) and HTTPS (port 443) to demonstrate what happens when multiple protocols are load balanced. Save your changes, and make sure that you publish your blueprint and assign it to your users.
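To make the mapping concrete: each service checked on the Load Balancer tab corresponds to one port that the NSX Edge will balance. Here’s a minimal sketch of that mapping in Python (the service names and ports come from the choices above; the dictionary and function are purely illustrative, not part of any vRA or NSX API):

```python
# Hypothetical model of the Load Balancer tab selections.
# Each checked service maps to one port the NSX Edge will balance.
SERVICES = {
    "HTTP": 80,
    "HTTPS": 443,
}

def selected_ports(services):
    """Return the ports NSX will load balance for the chosen services."""
    return sorted(SERVICES[name] for name in services)

print(selected_ports(["HTTP", "HTTPS"]))  # → [80, 443]
```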

vra-nsx-lb1

Request Item

Request your new load balancing blueprint from your catalog.

vRA-NSX-LBCatalog1

 

From here, your mileage may vary based on the number of VMs in your blueprint and the type of blueprint you’ve deployed. In my case, I used a NAT network profile. When my machines deployed, I got a new NSX Edge appliance with one vNIC set up as an uplink to my transit network and another used as my internal network.
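The two vNICs can be sketched like this (the network names are placeholders, and the dictionary layout is illustrative rather than a real NSX interface schema):

```python
def edge_interfaces(transit_net, internal_net):
    """Sketch of the two vNICs vRA configures on the deployed NSX Edge:
    one uplink into the transit network and one internal interface.
    Network names are placeholders, not real NSX identifiers."""
    return [
        {"index": 0, "type": "uplink", "network": transit_net},
        {"index": 1, "type": "internal", "network": internal_net},
    ]

nics = edge_interfaces("transit-network", "internal-network")
```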

vra-nsx-NAT1

 

If I look at the NAT tab, I’ll see two NAT entries, each with the same original IP address. The difference between the two entries is the port: remember that I load balanced two ports (HTTP/HTTPS), so the two NAT entries correspond to those two ports.
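That pattern is easy to model: one DNAT entry per load-balanced port, all sharing the same original IP. A sketch follows (the IP addresses and field names are illustrative, not the actual NSX NAT schema):

```python
def dnat_entries(original_ip, translated_ip, ports):
    """Sketch: vRA creates one DNAT rule per load-balanced port.
    Every rule shares the same original (external) IP address;
    only the port differs -- matching what the NAT tab shows."""
    return [
        {
            "action": "DNAT",
            "original_ip": original_ip,
            "original_port": port,
            "translated_ip": translated_ip,
            "translated_port": port,
        }
        for port in ports
    ]

# Illustrative addresses -- yours will come from your network profile.
entries = dnat_entries("192.168.1.50", "10.10.10.1", [443, 80])
```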

vra-nsx-NAT2

 

Now if we look at the Load Balancer tab of the new Edge appliance, we’ll see some cool stuff. Looking at the Pools, we’ll see that two pools were created. If we look at the first pool, we can see that there are two machines associated with it, on port 443 (HTTPS).
NSXPools1

 

If we look at the second pool, we’ll see the same two IP addresses listed, but now on port 80 (HTTP). So a pool is created for each port that is going to be load balanced.

NSXPools2
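The one-pool-per-port behavior can be sketched like this (the member IPs are placeholders, and the dictionary layout is illustrative rather than the real NSX pool schema):

```python
def build_pools(member_ips, ports):
    """Sketch: NSX creates one pool per load-balanced port, and every
    pool contains the same member servers on that pool's port."""
    return [
        {
            "name": f"pool-{port}",
            "port": port,
            "members": [{"ip": ip, "port": port} for ip in member_ips],
        }
        for port in ports
    ]

# Two web servers load balanced on HTTPS and HTTP -> two pools.
pools = build_pools(["10.10.10.2", "10.10.10.3"], [443, 80])
```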

 

Now we can look at the Virtual Servers. Here we see that there are two virtual servers. This is the IP address that users would browse to in order to retrieve our web page. Notice that two virtual servers are created, each with the same IP address; the difference, again, is the port that is used.
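The same pattern holds for virtual servers: one per load-balanced port, all sharing a single virtual IP. A sketch (the VIP is a placeholder; the field names are illustrative, not the NSX virtual server schema):

```python
def build_virtual_servers(vip, ports):
    """Sketch: one virtual server per load-balanced port, all sharing
    the same virtual IP -- only the service port differs."""
    return [
        {"name": f"vs-{port}", "ip": vip, "port": port}
        for port in ports
    ]

# Illustrative VIP -- in this walkthrough it matches the NAT original IP.
vservers = build_virtual_servers("192.168.1.50", [443, 80])
```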

vra-nsx-virtualservers1

 

Summary

Load balancing is pretty commonplace for multi-tiered apps, so it’s pretty crucial that your automation solution be able to provide it as well. Using vRealize Automation and NSX, you can give your developers and end users these capabilities so that they can build and test new services just as they would in a production environment.