About Traffic Manager Load Balancing Methods

Updated: May 29, 2014

Windows Azure Traffic Manager

There are three load balancing methods available in Traffic Manager. Each Traffic Manager profile can use only one load balancing method at a time, although you can select a different load balancing method for your profile at any time.

It’s important to note that all load balancing methods rely on monitoring. After you configure your Traffic Manager profile with the load balancing method that best fits your requirements, configure your monitoring settings. When monitoring is configured correctly, Traffic Manager monitors the state of your endpoints (cloud services and web sites) and does not send traffic to endpoints it determines are unavailable. For information about Traffic Manager monitoring, see About Traffic Manager Monitoring. For information about configuring your monitoring settings, see Configure Traffic Manager Monitoring.

The three Traffic Manager load balancing methods are:

  • Failover: Select Failover when you have endpoints in the same or different Azure datacenters (known as regions in the Management Portal) and want to use a primary endpoint for all traffic, but want backups available in case the primary (or a backup) endpoint becomes unavailable. For more information, see Failover load balancing method.

  • Round Robin: Select Round Robin when you want to distribute load across a set of endpoints in the same datacenter or across different datacenters. For more information, see Round Robin load balancing method.

  • Performance: Select Performance when you have endpoints in different geographic locations and you want requesting clients to use the "closest" endpoint in terms of the lowest latency. For more information, see Performance load balancing method.

Note that Azure Web Sites already provides failover and round-robin load balancing functionality for web sites within a datacenter, regardless of the web site mode. Traffic Manager allows you to specify failover and round-robin load balancing for web sites in different datacenters.

Failover load balancing method

Often an organization wants to provide reliability for its services by provisioning backup services in case its primary service goes down. A common pattern for service failover is to provide a set of identical endpoints and send traffic to a primary service, with an ordered list of one or more backups. If the primary service is not available, requesting clients are referred to the next service in the list. If the first and second services in the list are both unavailable, traffic goes to the third, and so on.

When configuring the Failover load balancing method, the order of the selected endpoints is important. The primary endpoint is listed first. If you configure this setting in the Management Portal using Quick Create, you must separately configure the failover order on the Configuration page for the profile. You cannot set the endpoint failover order during Quick Create.

Figure 1 shows an example of the Failover load balancing method for a set of endpoints.

Figure 1: Example of Failover load balancing

The following numbered steps correspond to the numbers in Figure 1.

  1. Traffic Manager receives an incoming request from a client through a DNS server (not shown) and locates the profile.

  2. The profile contains an ordered list of endpoints. Traffic Manager checks which endpoint is first in the list. If the endpoint is online (based on the ongoing endpoint monitoring), it will specify that endpoint’s DNS name in the DNS response to the client. If the endpoint is not available, Traffic Manager determines the next online endpoint in the list. In this example HS-A is unavailable, but HS-B is available.

  3. Traffic Manager returns HS-B’s domain name in the DNS response to the client. The client then resolves HS-B’s domain name to its IP address.

  4. The client initiates traffic to HS-B.
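The first-healthy-endpoint selection described in the steps above can be sketched in a few lines of Python. This is an illustrative model only: the endpoint names and the `is_online` health map stand in for Traffic Manager's internal monitoring state, not a real Azure API.

```python
def pick_failover_endpoint(ordered_endpoints, is_online):
    """Return the first endpoint in the ordered list that monitoring reports as online."""
    for endpoint in ordered_endpoints:
        if is_online.get(endpoint, False):
            return endpoint
    return None  # no healthy endpoint found

# Matching Figure 1: HS-A (the primary) is down, so HS-B is returned.
status = {"HS-A": False, "HS-B": True, "HS-C": True}
print(pick_failover_endpoint(["HS-A", "HS-B", "HS-C"], status))  # HS-B
```

Because the list is scanned in order on every request, referrals automatically return to the primary endpoint once monitoring reports it online again.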

Points to note:

  • It is important that Traffic Manager endpoint monitoring is configured correctly; otherwise, all traffic will always be sent to the primary endpoint.

  • You must specify the endpoint failover order on the Configuration page for your profile after you create it.

  • The DNS Time-to-Live (TTL) informs DNS clients and resolvers on DNS servers how long to cache the resolved names. Clients will continue to use a given endpoint when resolving its domain name until the local DNS cache entry for the name expires.

Round Robin load balancing method

A common load balancing pattern is to provide a set of identical endpoints and send traffic to each in a round-robin fashion. The Round Robin method distributes traffic across the set of endpoints: it selects a healthy endpoint at random and does not send traffic to endpoints that monitoring detects as down. For more information, see About Traffic Manager Monitoring.

Round Robin load balancing also supports weighted distribution of network traffic. However, to configure weights at this time, you must use either REST (see Create Definition) or Windows PowerShell (see New-AzureTrafficManagerProfile).

Figure 2 shows an example of the Round Robin load balancing method for a set of endpoints.

Figure 2: Example of Round Robin load balancing

The following numbered steps correspond to the numbers in Figure 2.

  1. Traffic Manager receives an incoming request from a client and locates the profile.

  2. The profile contains a list of endpoints. Traffic Manager knows which endpoint was referred to in the last request. In this example, this is endpoint HS-B.

  3. Traffic Manager returns the domain name of the next endpoint in the list in the DNS response to the client. In this example, this is endpoint HS-C.

  4. Traffic Manager records that the last referral went to endpoint HS-C.

  5. The client resolves endpoint HS-C’s domain name to its IP address and initiates traffic.
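The rotation in steps 2 through 4 can be sketched as follows. This is a simplified model that assumes the profile tracks which endpoint it returned last; the endpoint names and health map are illustrative, not a real Azure API.

```python
def next_round_robin(endpoints, last_returned, is_online):
    """Return the next healthy endpoint after the one referred to last, wrapping around."""
    start = (endpoints.index(last_returned) + 1) % len(endpoints)
    for offset in range(len(endpoints)):
        candidate = endpoints[(start + offset) % len(endpoints)]
        if is_online.get(candidate, False):
            return candidate
    return None  # no healthy endpoint found

# Matching Figure 2: the last referral went to HS-B, so HS-C is returned next.
endpoints = ["HS-A", "HS-B", "HS-C"]
online = dict.fromkeys(endpoints, True)
print(next_round_robin(endpoints, "HS-B", online))  # HS-C
print(next_round_robin(endpoints, "HS-C", online))  # wraps around to HS-A
```

Unhealthy endpoints are simply skipped in the rotation, which matches the behavior described above of not sending traffic to services that are detected as being down.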

Points to note:

  • The DNS TTL informs DNS clients and resolvers on DNS servers how long to cache resolved names. Clients will continue to use a given endpoint when resolving its domain name until the local DNS cache entry for the name expires. To test the profile from a single client and observe the round robin behavior, verify that the DNS name is resolved to different IP addresses for each request after the DNS client entry is removed or expires.

Figure 3 shows an example of weighted Round Robin load balancing for a set of endpoints.

Figure 3: Example of weighted Round Robin load balancing

Round Robin weighted load balancing enables you to distribute load across endpoints based on a ‘weight’ value assigned to each endpoint. The higher the weight, the more frequently an endpoint is returned in DNS responses. Scenarios where this method can be useful include:

  • Gradual application upgrade: Allocate a percentage of traffic to route to a new endpoint, and gradually increase the traffic over time to 100%.

  • Application migration to Azure: Create a profile with both Azure and external endpoints, and specify the weight of traffic that is routed to each endpoint.

  • Cloud-bursting for additional capacity: Quickly expand an on-premises deployment into the cloud by putting it behind a Traffic Manager Profile. When you need extra capacity in the cloud, you can add or enable more endpoints and specify what portion of traffic goes to each endpoint.

At this time, you cannot use the Management Portal to configure weighted load balancing. Azure provides programmatic access to this method using the associated Service Management REST API and Azure PowerShell cmdlets. For information about using the REST operations, see Operations on Traffic Manager (REST API Reference). For information about using the PowerShell cmdlets, see Azure Traffic Manager Cmdlets.
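A minimal sketch of weighted selection, assuming weights behave as relative probabilities of an endpoint being returned. The deployment names and weight values here are hypothetical, chosen to match the gradual-upgrade scenario above.

```python
import random

def pick_weighted(weights):
    """Return an endpoint name chosen with probability proportional to its weight."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Gradual upgrade scenario: send roughly 10% of referrals to the new deployment.
weights = {"current-deployment": 90, "new-deployment": 10}
counts = dict.fromkeys(weights, 0)
for _ in range(10_000):
    counts[pick_weighted(weights)] += 1
# counts["current-deployment"] lands near 9,000; raising the weight of
# "new-deployment" over time shifts more of the traffic to it.
```

Because selection is probabilistic per DNS query, the split converges to the weight ratio over many requests rather than being exact for any individual client.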

Performance load balancing method

To load balance endpoints located in datacenters around the globe, you can direct incoming traffic to the closest endpoint in terms of the lowest latency between the requesting client and the endpoint. Usually, the “closest” endpoint corresponds to the shortest geographic distance. The Performance load balancing method distributes traffic based on location and latency, but it cannot take into account real-time changes in network configuration or load.

The Performance load balancing method locates the origin of the requesting client and refers it to the closest endpoint. “Closeness” is determined by a network performance table showing the round trip time between various IP addresses and each Azure datacenter. This table is updated at periodic intervals and is not meant to be a real time reflection of performance across the Internet. It does not take into account the load on a given service, although Traffic Manager monitors your endpoints based on the method you choose and will not include them in DNS query responses if they are unavailable. In other words, Performance load balancing also incorporates the Failover load balancing method.

Figure 4 shows an example of the Performance load balancing method for a set of endpoints.

Figure 4: Example of Performance load balancing

The following numbered steps correspond to the numbers in Figure 4.

  1. Traffic Manager builds the Performance Times Table periodically. The Traffic Manager infrastructure runs tests to determine the round trip times between different points in the world and the Azure datacenters that host endpoints. These tests are run at the discretion of the Azure system.

  2. Traffic Manager receives an incoming request from a client through a DNS server and locates the profile.

  3. Traffic Manager locates the row in the Performance Times Table for the IP address of the incoming request.

  4. Traffic Manager locates the datacenter (the column) with the smallest time for the datacenters that host the endpoints defined in the profile. In this example, that is HS-D.

  5. Traffic Manager returns HS-D’s domain name in the DNS response to the client. The client then resolves HS-D’s domain name to its IP address.

  6. The client initiates traffic to HS-D.
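Steps 3 and 4 amount to a row lookup in the performance table followed by taking the minimum over the profile's healthy datacenters. The sketch below models that; the table contents, network prefix, and datacenter latencies are made up for illustration. Note that when every endpoint in a datacenter is down, the real service spreads traffic across all remaining endpoints rather than simply picking the next-closest; this sketch shows only the healthy-path lookup.

```python
def pick_closest_datacenter(client_network, perf_table, profile_datacenters, is_online):
    """Return the healthy datacenter with the lowest round-trip time for the client's network."""
    row = perf_table[client_network]  # round-trip time in ms per datacenter
    healthy = [dc for dc in profile_datacenters if is_online.get(dc, False)]
    if not healthy:
        return None
    return min(healthy, key=lambda dc: row[dc])

# One row of a hypothetical Performance Times Table.
perf_table = {"203.0.113.0/24": {"West US": 180, "North Europe": 40, "East Asia": 250}}
status = {"West US": True, "North Europe": True, "East Asia": True}
print(pick_closest_datacenter("203.0.113.0/24", perf_table, list(status), status))  # North Europe
```

With North Europe marked unhealthy in `status`, the same call would return West US, the lowest latency among the remaining healthy datacenters in this simplified model.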

Points to note:

  • If your profile contains multiple endpoints in the same datacenter, then traffic directed to that datacenter is distributed evenly across the endpoints that are available and healthy according to endpoint monitoring.

  • If no endpoint in a given datacenter is available (according to endpoint monitoring), traffic for those endpoints is distributed across all of the other available endpoints specified in the profile, not just to the next-closest endpoints. This helps avoid a cascading failure that could occur if the next-closest endpoint became overloaded.

  • When the performance table is updated, you may notice a difference in traffic patterns and load on your endpoints. These changes should be minimal.

  • The DNS TTL informs DNS clients and resolvers on DNS servers how long to cache the resolved names. Clients will continue to use a given endpoint when resolving its domain name until the local DNS cache entry for the name expires.

  • When using the Performance load balancing method with external endpoints, you will need to specify the location of those endpoints. Choose the Azure region closest to your deployment.


© 2014 Microsoft