Azure Load Balancers

Deploying Azure Virtual Machines is only the first step, but what about business-critical services and applications? For these, we'll probably want to design highly available solutions that achieve the best possible service level agreement (SLA) and uptime.

The first step must be taken during VM creation, by configuring an Availability zone or an Availability set. We then add another VM to our solution, placed in a different Availability zone or in the same Availability set as the first. Placing the VMs in different Availability zones ensures that they run in physically separate locations within the Azure region and don't depend on the same power source, networking, and cooling. If there is an issue within one Azure datacenter, there is less chance that both VMs will be impacted.
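To illustrate where the zone is set, here is a minimal sketch using the Azure SDK for Python (the azure-identity and azure-mgmt-compute packages). The subscription ID, resource group, network interface, names, and password are placeholders I've assumed for the example; only the zones setting is the point of interest.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Hypothetical subscription and resource names; replace with real values.
compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

vm_params = {
    "location": "westeurope",
    "zones": ["1"],  # place the first VM in Availability zone 1
    "hardware_profile": {"vm_size": "Standard_B2s"},
    "storage_profile": {
        "image_reference": {
            "publisher": "Canonical",
            "offer": "0001-com-ubuntu-server-jammy",
            "sku": "22_04-lts",
            "version": "latest",
        }
    },
    "os_profile": {
        "computer_name": "web-vm-1",
        "admin_username": "azureuser",
        "admin_password": "<password>",
    },
    "network_profile": {"network_interfaces": [{"id": "<nic-resource-id>"}]},
}

compute.virtual_machines.begin_create_or_update("demo-rg", "web-vm-1", vm_params).result()
# The second VM would use the same parameters but with "zones": ["2"],
# its own network interface, and a different name, for example web-vm-2.
```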

Placing both VMs in the same Availability set ensures that they are placed on different physical servers, compute racks, storage units, and network switches. If a hardware failure occurs, there is less chance that both VMs will be affected. An availability set also guarantees that Microsoft will never perform maintenance that impacts both VMs at the same time. To keep Azure datacenters secure and performing at their best, maintenance tasks must be carried out periodically, such as installing updates on hosts or upgrading firmware on hardware. Placing VMs in an availability set informs Microsoft that these VMs are set up to achieve high availability, and maintenance will be performed with this in mind, never affecting all of the VMs at the same time.
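As a rough sketch, the availability set itself could be created as shown below with azure-mgmt-compute. The names and domain counts are illustrative assumptions; each VM that should share the set then references its ID at creation time.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The "Aligned" SKU is required for VMs with managed disks. Fault domains map to
# separate racks/power/network switches; update domains to separate maintenance windows.
avset = compute.availability_sets.create_or_update(
    "demo-rg",
    "web-avset",
    {
        "location": "westeurope",
        "sku": {"name": "Aligned"},
        "platform_fault_domain_count": 2,
        "platform_update_domain_count": 5,
    },
)

# Each VM joins the set at creation time by adding this to its parameters:
#   "availability_set": {"id": avset.id}
print(avset.id)
```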

So, to achieve high availability, we need at least two VMs. But what about traffic? How do we decide whether a request goes to the first or the second VM? And if the first VM isn't available, how do we direct traffic to the second one?

This is where Azure Load Balancers come into play. This is one of the Azure network services that we skipped earlier, to be explained when the time was right; we will run into similar situations in later chapters as well. An Azure Load Balancer distributes incoming traffic from its frontend to instances in the backend pool, and it supports both inbound and outbound scenarios with low latency and high throughput. In this scenario, incoming traffic arrives at the Load Balancer's IP address, and the Load Balancer distributes it to the VMs configured in the backend pool. An Azure Load Balancer can be internal or public, depending on the kind of traffic we need to distribute. For web server roles, we probably want a public Load Balancer, but for a database server, we probably want an internal one, as we don't want databases exposed to the internet.
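To make the moving parts concrete (frontend IP configuration, backend pool, health probe, and load-balancing rule), here is a hedged sketch of creating a public Standard Load Balancer with azure-mgmt-network. The subscription, resource group, and public IP are placeholder assumptions; an internal Load Balancer would reference a subnet and a private IP in the frontend configuration instead of a public IP.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

sub, rg, lb_name = "<subscription-id>", "demo-rg", "web-lb"
# Full resource ID of the Load Balancer, used to reference its own sub-resources below.
lb_id = (f"/subscriptions/{sub}/resourceGroups/{rg}"
         f"/providers/Microsoft.Network/loadBalancers/{lb_name}")

network = NetworkManagementClient(DefaultAzureCredential(), sub)

lb = network.load_balancers.begin_create_or_update(
    rg,
    lb_name,
    {
        "location": "westeurope",
        "sku": {"name": "Standard"},
        "frontend_ip_configurations": [
            {
                "name": "frontend",
                # Public frontend; the referenced public IP must also be Standard SKU.
                "public_ip_address": {"id": "<public-ip-resource-id>"},
            }
        ],
        "backend_address_pools": [{"name": "web-backend"}],
        "probes": [
            {
                "name": "http-probe",
                "protocol": "Tcp",
                "port": 80,
                "interval_in_seconds": 15,
                "number_of_probes": 2,
            }
        ],
        "load_balancing_rules": [
            {
                "name": "http-rule",
                "protocol": "Tcp",
                "frontend_port": 80,
                "backend_port": 80,
                "frontend_ip_configuration": {
                    "id": f"{lb_id}/frontendIPConfigurations/frontend"
                },
                "backend_address_pool": {
                    "id": f"{lb_id}/backendAddressPools/web-backend"
                },
                "probe": {"id": f"{lb_id}/probes/http-probe"},
            }
        ],
    },
).result()
# The VMs' network interfaces are then added to the web-backend pool so that
# traffic arriving at the frontend is distributed across them.
```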
