Azure Load Balancer

Updated: September 11, 2014

All virtual machines that you create in Azure can automatically communicate using a private network channel with other virtual machines in the same cloud service or virtual network. All other inbound communication, such as traffic initiated from Internet hosts or virtual machines in other cloud services or virtual networks, requires an endpoint.

Endpoints can be used for different purposes. The default use and configuration of endpoints on a virtual machine that you create with the Azure Management Portal are for the Remote Desktop Protocol (RDP) and remote Windows PowerShell session traffic. These endpoints allow you to remotely administer the virtual machine over the Internet.

Another use of endpoints is the configuration of the Azure Load Balancer to distribute a specific type of traffic between multiple virtual machines or services. For example, you can spread the load of web request traffic across multiple web servers or web roles.

Each endpoint defined for a virtual machine is assigned a public port and a private port for a specific protocol, either TCP or UDP. Internet hosts send their incoming traffic to the public IP address of the cloud service and the public port. Virtual machines and services within the cloud service listen on their private IP address and private port. The Azure Load Balancer maps the public IP address and port number of incoming traffic to the private IP address and port number of the virtual machine, and vice versa for the response traffic from the virtual machine.
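
For example, the following Windows PowerShell sketch adds an endpoint that accepts Internet traffic on public TCP port 8080 and forwards it to private port 80 on the virtual machine. It assumes the Azure service management module is installed and a subscription is selected; the cloud service name "mycloudservice", the virtual machine name "webvm1", and the endpoint name "HttpIn" are placeholders for this example.

    # A minimal sketch: map public TCP port 8080 to private port 80 on one
    # virtual machine. The service, VM, and endpoint names are illustrative.
    Get-AzureVM -ServiceName "mycloudservice" -Name "webvm1" |
        Add-AzureEndpoint -Name "HttpIn" -Protocol tcp -PublicPort 8080 -LocalPort 80 |
        Update-AzureVM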

When you configure load balancing of traffic among multiple virtual machines or services, Azure provides random distribution of the incoming traffic.

For a cloud service that contains instances of web roles or worker roles, you can define a public endpoint in the service definition. For a cloud service that contains virtual machines, you can add an endpoint to a virtual machine when you create it or you can add the endpoint later.

The following figure shows a load-balanced endpoint for encrypted web traffic that is shared among three virtual machines, using TCP port 443 as both the public and the private port. These three virtual machines are in a load-balanced set.

Figure: Azure Load Balancer for protected web traffic

When Internet clients send web page requests to the public IP address of the cloud service on TCP port 443, the Azure Load Balancer randomly distributes those requests among the three virtual machines in the load-balanced set.

For the steps to create a load-balanced set, see Configure a load-balanced set.
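
As an illustration of such a set, the following Windows PowerShell sketch adds the same endpoint, with the same load-balanced set name, to three existing virtual machines, which places them behind one load-balanced endpoint on TCP port 443. The cloud service name, virtual machine names, load-balanced set name, and probe settings are assumptions for this example.

    # A minimal sketch, assuming three existing virtual machines (web1, web2, web3)
    # in the cloud service "mycloudservice"; the set and probe settings are illustrative.
    foreach ($vmName in "web1", "web2", "web3") {
        Get-AzureVM -ServiceName "mycloudservice" -Name $vmName |
            Add-AzureEndpoint -Name "HttpsIn" -Protocol tcp -PublicPort 443 -LocalPort 443 `
                -LBSetName "WebHttpsLB" -ProbeProtocol tcp -ProbePort 443 |
            Update-AzureVM
    }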

Azure also supports internal load balancing of traffic between virtual machines within a cloud service, between virtual machines in cloud services that are themselves contained within a virtual network, and between on-premises computers and virtual machines in a cross-premises virtual network. For more information, see Internal load balancing.

The initial set of endpoints for a virtual machine depends on how you create it.

When you create the virtual machine with a Windows PowerShell cmdlet, no endpoints are created by default. You must add endpoints after the virtual machine has been created. For more information, see Add-AzureEndpoint.

You can create a virtual machine with the Azure Management Portal using the following methods:

  1. From Gallery. This method allows you to configure endpoints when you create the virtual machine and to specify the name of an existing cloud service. For instructions, see Create a Virtual Machine Running Windows Server or Create a Virtual Machine Running Linux.

  2. Quick Create. With this method, Azure creates a new cloud service for the virtual machine, and you must configure endpoints after Azure creates the virtual machine. For more information, see How to quickly create a virtual machine.

The Azure Load Balancer works at Layer 4, the Transport layer of the OSI model. This means that it operates on individual streams of TCP or UDP traffic, as defined by their source and destination IP addresses and port numbers. Load balancing at Layer 4 ensures that all of the packets for a given TCP connection or UDP message exchange are routed to the same destination.

The Azure Load Balancer distributes load among a set of available servers (virtual machines) by computing a hash function on the traffic received on a given input endpoint. The Azure Load Balancer uses the following fields from an incoming packet to compute the hash value:

  • The source IP address

  • The destination IP address

  • The protocol type (TCP or UDP)

  • The source port

  • The destination port

The Azure Load Balancer then uses this hash value to map the traffic to an available server. All of the packets from the same TCP connection or UDP message exchange are mapped to the same server in the load-balanced set. When the client closes and reopens the connection or starts a new session from the same source IP address, the source port typically changes. The result is a different hash value and a new mapping of the traffic to an available server.

The overall effect is a random distribution of traffic across the members of the load-balanced set. Because the distribution is random, it is possible for different connections to be mapped to the same server.
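
To make the idea concrete, the following Windows PowerShell sketch is conceptual only and is not the actual Azure hashing algorithm: it hashes the five fields of a flow and reduces the result to a server index, so every packet of the same flow maps to the same server, while a change in the source port usually produces a different mapping. The function name and the addresses are illustrative.

    # Conceptual sketch only; this is not Azure's actual hash function.
    function Get-ServerIndex {
        param(
            [string]$SourceIP, [int]$SourcePort,
            [string]$DestinationIP, [int]$DestinationPort,
            [string]$Protocol, [int]$ServerCount
        )
        # Combine the 5-tuple into a single string and hash it.
        $tuple = "$SourceIP|$SourcePort|$DestinationIP|$DestinationPort|$Protocol"
        $md5   = [System.Security.Cryptography.MD5]::Create()
        $bytes = $md5.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($tuple))
        # Take the first four bytes as an unsigned integer and reduce it modulo
        # the number of servers in the load-balanced set.
        [System.BitConverter]::ToUInt32($bytes, 0) % $ServerCount
    }

    # The same 5-tuple always returns the same index; changing the source port
    # typically returns a different one.
    Get-ServerIndex -SourceIP "203.0.113.10" -SourcePort 50000 `
        -DestinationIP "203.0.113.200" -DestinationPort 443 -Protocol tcp -ServerCount 3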

For additional information, see Microsoft Azure Load Balancing Services in the Azure blog.

The Azure Load Balancer supports configuration of the TCP idle timeout, a value in minutes after which the Azure Load Balancer abandons the mapping for an open but inactive TCP connection. The default value of the TCP idle timeout is 4 minutes. You can set this value to between 4 and 30 minutes on a load-balanced set with the Set-AzureLoadBalancedEndpoint Windows PowerShell cmdlet. For more information, see New: Configurable Idle Timeout for Azure Load Balancer in the Azure blog.
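
For example, the following sketch sets the idle timeout on a load-balanced set to 15 minutes. The service name, load-balanced set name, and probe values are placeholders for this example.

    # A minimal sketch, assuming an existing load-balanced set named "WebHttpsLB"
    # on the cloud service "mycloudservice"; the 15-minute value is illustrative.
    Set-AzureLoadBalancedEndpoint -ServiceName "mycloudservice" -LBSetName "WebHttpsLB" `
        -Protocol tcp -LocalPort 443 -ProbeProtocolTCP -ProbePort 443 `
        -IdleTimeoutInMinutes 15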
