Configuring the Azure Load Balancer

After the deployment of the Azure Load Balancer is complete, we can open the resource and find different options. SETTINGS is of most interest to us, as that is where we configure our Load Balancer. It includes the standard settings available for every Azure resource, such as Properties, Locks, and Automation script. Frontend IP configuration lets us manage the IP addresses associated with our Load Balancer, add new IP addresses, or remove existing ones. The remaining settings are used to configure how our Load Balancer distributes incoming traffic and where that traffic should be directed.

First, we need to configure the Backend pools for our Load Balancer. I'll associate my Load Balancer with an Availability set. Previously in this chapter, I created a VM named WebSrv1 in an Availability set named WebSet, and then added an identical VM named WebSrv2 to the same Availability set. So, I have two identical VMs in the same Availability set, and I have associated my Azure Load Balancer with that set. Finally, we have to define the network IP configurations that will be used, selecting both VMs in this Availability set. If we had more VMs in the Availability set, we could target additional IP configurations. An example of how to set up a backend pool is shown in this screenshot:

 

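The portal makes this a few clicks, but the same step can be scripted. The following is only a rough sketch using the azure-mgmt-network Python SDK, not the procedure this chapter follows; the subscription ID, resource group name (WebRG), Load Balancer name (WebLB), pool name (WebPool), and NIC names (websrv1-nic, websrv2-nic) are assumptions for illustration. Outside the portal, associating the Availability set's VMs with a pool comes down to attaching each VM's NIC IP configuration to that pool.

```python
# A sketch only: add a backend pool named WebPool to an existing Load Balancer
# and attach both web servers' NIC IP configurations to it.
# Assumed/hypothetical names: <subscription-id>, WebRG (resource group),
# WebLB (Load Balancer), websrv1-nic / websrv2-nic (NIC names).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import BackendAddressPool

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Add the pool to the Load Balancer definition and push the update.
lb = client.load_balancers.get("WebRG", "WebLB")
lb.backend_address_pools = (lb.backend_address_pools or []) + [
    BackendAddressPool(name="WebPool")
]
lb = client.load_balancers.begin_create_or_update("WebRG", "WebLB", lb).result()
pool = next(p for p in lb.backend_address_pools if p.name == "WebPool")

# The portal does this when you pick the Availability set and its VMs:
# each VM's NIC IP configuration is associated with the backend pool.
for nic_name in ("websrv1-nic", "websrv2-nic"):
    nic = client.network_interfaces.get("WebRG", nic_name)
    ip_config = nic.ip_configurations[0]
    ip_config.load_balancer_backend_address_pools = (
        ip_config.load_balancer_backend_address_pools or []
    ) + [BackendAddressPool(id=pool.id)]
    client.network_interfaces.begin_create_or_update("WebRG", nic_name, nic).result()
```
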
The second step is to set up health probes. We need to define the Protocol, Port, Interval, and Unhealthy threshold. Protocol and Port define what needs to be monitored; as I intend to use WebSrv1 and WebSrv2 as web servers, I'll set up monitoring on port 80. Interval defines how often a check is performed to make sure the server is responsive. Unhealthy threshold defines how many consecutive probes must fail to contact the server before it is declared unresponsive. An example of how to set up a health probe on port 80 is shown in this screenshot:

As these VMs run the web server role and will also serve HTTPS, I'll repeat the same steps for port 443. In the screenshot here, we can see that both probes are created but the USED BY information is empty:

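If you prefer to script the probes as well, a minimal sketch along the same lines is shown below. The probe names (ProbeHTTP, ProbeHTTPS), the 5-second interval, and the unhealthy threshold of 2 are assumptions; adjust them to match the values you enter in the portal.

```python
# A sketch only: add TCP health probes for ports 80 and 443.
# Assumed values: 5-second interval, unhealthy threshold of 2 consecutive
# failed probes, probe names ProbeHTTP / ProbeHTTPS.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Probe

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

lb = client.load_balancers.get("WebRG", "WebLB")
new_probes = [
    Probe(
        name=name,
        protocol="Tcp",          # plain TCP check; use "Http" plus request_path for an HTTP probe
        port=port,               # what to monitor: the web server ports
        interval_in_seconds=5,   # how often the check runs
        number_of_probes=2,      # consecutive failures before the VM is marked unhealthy
    )
    for name, port in (("ProbeHTTP", 80), ("ProbeHTTPS", 443))
]
lb.probes = (lb.probes or []) + new_probes
client.load_balancers.begin_create_or_update("WebRG", "WebLB", lb).result()
```
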
In the third step, we create a load-balancing rule. We need to provide a Name, IP Version, Frontend IP address, Protocol, Frontend port, Backend port, Backend pool, Health probe, Session persistence, and Idle timeout (minutes). Name and IP Version are self-explanatory, so let's move on to the rest of the options.

For the Frontend IP address, you can choose any of the Load Balancer's IP addresses, as a Load Balancer can be associated with multiple IP addresses. The choice is restricted by the IP Version selected: if IP Version is set to IPv4, you can only select an IPv4 address, and if IPv6 is selected, only IPv6 addresses associated with the Load Balancer can be chosen.

Protocol, Frontend port, and Backend port work together: they define which protocol, arriving on which frontend port, is forwarded to which backend port. For example, TCP traffic on port 80 should be forwarded to port 80.

With the Backend pool, we define where traffic is forwarded to. As a single Azure Load Balancer can have multiple backend pools, you can select any of them.

A Health probe needs to be selected so that the state of the VMs is checked. You should select the probe that checks the backend port used in the rule you are creating.

The Session persistence and Idle timeout (minutes) options determine how client connections are handled. As you have at least two VMs in your Backend pool, you need traffic from a given client to be handled by the same VM for the duration of a session. If you select Client IP and protocol, traffic coming from the same client IP over the same protocol is treated as one session, and the client is directed to the same VM for as long as that session is active.

Idle timeout (minutes) determines how long the session stays active if no traffic is sent. The default value is 4 minutes, and it can be increased to up to 30 minutes. With this setting, you determine how long the session remains active when the client isn't using the application and isn't sending any messages to keep the session alive.

The Floating IP (direct server return) option is Disabled by default and should only be enabled for scenarios such as the SQL Server AlwaysOn Availability Group listener.

In the screenshot here, you can see the options used to set up the load-balancing rule named HTTP:

I'll create another rule named HTTPS for port 443. Note in the screenshot here that the probes created earlier are now used by the load-balancing rules:

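For completeness, the sketch below scripts both rules with the settings just described: TCP forwarded from the frontend port to the same backend port, session persistence set to Client IP and protocol, the default 4-minute idle timeout, and Floating IP left disabled. It reuses the assumed WebRG, WebLB, WebPool, ProbeHTTP, and ProbeHTTPS names from the earlier sketches and simply picks the Load Balancer's first frontend IP configuration.

```python
# A sketch only: create the HTTP (port 80) and HTTPS (port 443) load-balancing
# rules, with session persistence by client IP and protocol and the default
# 4-minute idle timeout. Names reuse the assumptions from the earlier sketches.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import LoadBalancingRule, SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
lb = client.load_balancers.get("WebRG", "WebLB")

frontend_id = lb.frontend_ip_configurations[0].id   # the Load Balancer's public IP configuration
pool_id = next(p.id for p in lb.backend_address_pools if p.name == "WebPool")
probe_ids = {p.name: p.id for p in (lb.probes or [])}

new_rules = [
    LoadBalancingRule(
        name=rule_name,
        protocol="Tcp",
        frontend_ip_configuration=SubResource(id=frontend_id),
        frontend_port=port,
        backend_port=port,                          # forward 80 -> 80 and 443 -> 443
        backend_address_pool=SubResource(id=pool_id),
        probe=SubResource(id=probe_ids[probe_name]),
        load_distribution="SourceIPProtocol",       # Session persistence: Client IP and protocol
        idle_timeout_in_minutes=4,                  # default; can be raised up to 30
        enable_floating_ip=False,                   # Floating IP (direct server return) disabled
    )
    for rule_name, port, probe_name in (("HTTP", 80, "ProbeHTTP"), ("HTTPS", 443, "ProbeHTTPS"))
]
lb.load_balancing_rules = (lb.load_balancing_rules or []) + new_rules
client.load_balancers.begin_create_or_update("WebRG", "WebLB", lb).result()
```
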
The last option in the Load Balancer settings is the inbound NAT rule. It has similar options to the load-balancing rule, with one exception: traffic, in this case, isn't forwarded to the backend pool but to a single VM. In the screenshot here, you can see how to set up an inbound NAT rule that will forward traffic coming in over port 5589 (WinRM) to WebSrv1:

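A scripted version of such a NAT rule could look like the sketch below. The rule name (WinRM-WebSrv1), the NIC name (websrv1-nic), and the backend port of 5985 (the default WinRM HTTP port) are assumptions; the key difference from a load-balancing rule is that the rule is attached to a single VM's NIC IP configuration rather than to a backend pool.

```python
# A sketch only: forward frontend port 5589 on the Load Balancer to a single VM.
# Assumed names/values: WinRM-WebSrv1 (rule name), websrv1-nic (NIC of WebSrv1),
# backend port 5985 (the default WinRM HTTP port).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import InboundNatRule, SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# 1. Define the NAT rule on the Load Balancer.
lb = client.load_balancers.get("WebRG", "WebLB")
lb.inbound_nat_rules = (lb.inbound_nat_rules or []) + [
    InboundNatRule(
        name="WinRM-WebSrv1",
        protocol="Tcp",
        frontend_ip_configuration=SubResource(id=lb.frontend_ip_configurations[0].id),
        frontend_port=5589,   # the port exposed on the Load Balancer's frontend IP
        backend_port=5985,    # WinRM on the target VM
    )
]
lb = client.load_balancers.begin_create_or_update("WebRG", "WebLB", lb).result()
nat_rule = next(r for r in lb.inbound_nat_rules if r.name == "WinRM-WebSrv1")

# 2. Attach the rule to WebSrv1's NIC IP configuration; this is what sends the
#    traffic to that one VM instead of a backend pool.
nic = client.network_interfaces.get("WebRG", "websrv1-nic")
ip_config = nic.ip_configurations[0]
ip_config.load_balancer_inbound_nat_rules = (
    ip_config.load_balancer_inbound_nat_rules or []
) + [InboundNatRule(id=nat_rule.id)]
client.network_interfaces.begin_create_or_update("WebRG", "websrv1-nic", nic).result()
```
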
So, let's review what we achieved by setting up the Load Balancer and the availability set. We have two VMs acting as web servers in the backend pool. The VMs are placed in the same availability set, and therefore in different fault and update domains, to increase the chance that at least one of the VMs is always running. Health probes check whether the VMs respond on the defined ports. If a VM is unresponsive for two consecutive checks, it is declared failed. A load-balancing rule forwards traffic arriving on the Load Balancer's public IP address to the backend pool. If both VMs in the backend pool are healthy, traffic is distributed between them using the Load Balancer's hash-based distribution; if the health probe declares one of the VMs unresponsive, all traffic is forwarded to the VM that is still healthy. Sessions are kept alive based on the client IP, protocol, and idle timeout: sessions from the same IP address over the same protocol are forwarded to the same VM as long as the client sends traffic at least every 4 minutes.

This ensures our application is up and running even if a single VM fails. Failures can be caused by hardware or network errors in the Azure datacenter (the availability set ensures that both VMs are not affected by the same underlying failure). Placing more VMs in the availability set and backend pool further increases the chance that at least one VM is up and running.
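
As a quick sanity check, you can read the configuration back and confirm that the rules and probes are wired together as described. This small sketch only prints the relevant properties, again using the assumed WebRG and WebLB names.

```python
# A sketch only: print the rules and probes to verify the final configuration.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
lb = client.load_balancers.get("WebRG", "WebLB")

for rule in lb.load_balancing_rules or []:
    print(f"rule {rule.name}: {rule.frontend_port} -> {rule.backend_port}, "
          f"probe={rule.probe.id.split('/')[-1]}, "
          f"persistence={rule.load_distribution}, "
          f"idle timeout={rule.idle_timeout_in_minutes} min")

for probe in lb.probes or []:
    print(f"probe {probe.name}: {probe.protocol} port {probe.port}, "
          f"every {probe.interval_in_seconds}s, "
          f"threshold {probe.number_of_probes}")
```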
