Asynchronous Messaging Primer


Messaging is a key strategy employed in many distributed environments such as the cloud. It enables applications and services to communicate and cooperate, and can help to build scalable and resilient solutions. Messaging supports asynchronous operations, enabling you to decouple a process that consumes a service from the process that implements the service.

Message Queuing Essentials

Asynchronous messaging in the cloud is usually implemented by using message queues. Regardless of the technology used to implement them, most message queues support three fundamental operations:

  • A sender can post a message to the queue.
  • A receiver can retrieve a message from the queue (the message is removed from the queue).
  • A receiver can examine (or peek) the next available message in the queue (the message is not removed from the queue).

Sending and Receiving Messages by Using a Message Queue

Conceptually, you can think of a message queue as a buffer that supports send and receive operations. A sender constructs a message in an agreed format and posts it to a queue. A receiver retrieves the message from the queue and processes it. If a receiver attempts to retrieve a message from an empty queue, it may be blocked until a new message arrives on that queue. Many message queues enable a receiver to query the current length of a queue, or to peek to see whether one or more messages are available, so that the receiver can avoid being blocked when the queue is empty.
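The three fundamental operations can be sketched as a simple in-process buffer. This is a minimal illustration only; the MessageQueue class and its method names are assumptions for this sketch, not the API of any particular queuing product.

```python
# A minimal sketch of the three fundamental queue operations: post, retrieve,
# and peek. All names here are illustrative.
from collections import deque

class MessageQueue:
    def __init__(self):
        self._messages = deque()

    def post(self, message):
        """A sender posts a message to the back of the queue."""
        self._messages.append(message)

    def retrieve(self):
        """A receiver removes and returns the next message (destructive)."""
        if not self._messages:
            return None          # return instead of blocking on an empty queue
        return self._messages.popleft()

    def peek(self):
        """A receiver examines the next message without removing it."""
        return self._messages[0] if self._messages else None

    def __len__(self):
        return len(self._messages)

q = MessageQueue()
q.post({"order_id": 1, "action": "create"})
print(len(q))        # 1
print(q.peek())      # the message is still on the queue
print(q.retrieve())  # the message is now removed
print(len(q))        # 0
```

A real cloud queue adds durability and concurrency control on top of these semantics, but the operations exposed to senders and receivers are essentially these three.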

The infrastructure that implements a queue is responsible for ensuring that, once a message has been successfully posted, it will not be lost.

Figure 1 - Sending and receiving messages by using a message queue

Note:
Some message queuing systems support transactions to ensure atomicity of queue operations, allow senders to define the lifespan of a message on the queue, attach proprietary properties to messages being enqueued and provide other advanced messaging functionality.

Message queuing is ideally suited to performing asynchronous operations. A sender can post a message to a queue, but it does not have to wait while the message is retrieved and processed. A sender and receiver do not even have to be running concurrently.

Message queues are often shared between many senders and receivers. Any number of senders can post messages to the same queue, and each message could be processed by any of the receivers that retrieve messages from this queue.

Figure 2 - Sharing a message queue between many senders and receivers
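This sharing behavior can be sketched with Python's thread-safe queue.Queue standing in for a cloud queue (all names are illustrative). Several receivers compete for messages, and each message is handled by exactly one of them.

```python
# A sketch of several receivers competing for messages on one shared queue.
# queue.Queue.get() is atomic, so each message is delivered to exactly one
# receiver.
import queue
import threading

shared = queue.Queue()
handled = {}                  # message -> name of the receiver that processed it
lock = threading.Lock()

def receiver(name):
    while True:
        try:
            msg = shared.get(timeout=0.2)   # compete for the next message
        except queue.Empty:
            return                          # queue drained; stop this receiver
        with lock:
            handled[msg] = name
        shared.task_done()

# Senders post ten messages before the receivers start.
for i in range(10):
    shared.put(f"message-{i}")

workers = [threading.Thread(target=receiver, args=(f"receiver-{n}",))
           for n in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(handled))   # 10 -- every message was processed exactly once
```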

Note:
By default, receivers compete for messages, and no two receivers should be able to retrieve the same message simultaneously.

Retrieving a message is normally a destructive operation. When a message is retrieved, it is removed from the queue. A message queue may also support message peeking. This is a nondestructive receive operation that retrieves a copy of a message from the queue but leaves the original message on the queue. This mechanism can be useful if several receivers are retrieving messages from the same queue, but each receiver only wishes to handle specific messages. The receiver can examine the message it has peeked, and decide whether to retrieve the message (which removes it from the queue) or leave it on the queue for another receiver to handle.
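The peek-then-decide workflow described above can be sketched as follows. The list-based queue and the message fields are purely illustrative assumptions for this example.

```python
# A sketch of selective retrieval: a receiver peeks at the head of the queue
# and removes the message only if it is one this receiver handles; otherwise
# it leaves the message for another receiver.
pending = [
    {"type": "invoice", "id": 1},
    {"type": "shipment", "id": 2},
]

def try_handle(queue_, wanted_type):
    if not queue_:
        return None
    head = queue_[0]                 # peek: nondestructive
    if head["type"] != wanted_type:
        return None                  # leave it on the queue for another receiver
    return queue_.pop(0)             # retrieve: removes it from the queue

msg = try_handle(pending, "shipment")
print(msg)        # None -- the head message is an invoice, left on the queue
msg = try_handle(pending, "invoice")
print(msg["id"])  # 1
```

Note that on a real shared queue the peek and the subsequent retrieve are separate operations, so another receiver may remove the message in between; the peek-lock mechanisms described later address this.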

Message Queuing in Microsoft Azure

Azure provides several technologies that enable you to build messaging solutions. These include Azure storage queues, Service Bus queues, and Service Bus topics and subscriptions. At the highest level of abstraction, these technologies all offer very similar features. However, they are generally used in different situations.

For example, an Azure storage queue is typically used to communicate between roles running as part of the same Azure cloud service. A Service Bus queue is more suited to use in large-scale integration solutions, enabling disparate applications and services to connect and communicate. Service Bus topics and subscriptions extend the capabilities of message queuing to enable a system to broadcast messages to multiple receivers.

Note:
The article Microsoft Azure Queues and Microsoft Azure Service Bus Queues - Compared and Contrasted on MSDN contains detailed information about the different types of queues that Azure provides.

Basic Message Queuing Patterns

Distributed applications typically use message queues to implement one or more of the following basic message exchange patterns:

  • One-way messaging. This is the most basic pattern for communicating between a sender and a receiver. In this pattern, the sender simply posts a message to the queue in the expectation that a receiver will retrieve it and process it at some point.
  • Request/response messaging. In this pattern a sender posts a message to a queue and expects a response from the receiver. You can use this pattern to implement a reliable system where you must confirm that a message has been received and processed. If the response is not delivered within a reasonable interval, the sender can either send the message again or handle the situation as a timeout or failure. This pattern usually requires a separate communications channel in the form of a dedicated message queue to which the receiver can post its response messages (the sender can provide the details of this queue as part of the message that it posts to the receiver). The sender listens for a response on this queue. This pattern typically requires some form of correlation to enable the sender to determine which response message corresponds to which request sent to the receiver.

    Figure 3 - Request/response messaging with dedicated response queues for each sender

    Note:
    Messages posted to Azure Service Bus Queues contain a ReplyTo property that can be populated by a sender to specify the queue to which any replies should be sent.

  • Broadcast messaging. In this pattern a sender posts a message to a queue, and multiple receivers can read a copy of the message (receivers do not compete for messages in this scenario). This mechanism can be used to notify receivers that an event has occurred of which they should all be aware, and may be used to implement a publisher/subscriber model. This pattern depends on the message queue being able to disseminate the same message to multiple receivers. Azure Service Bus topics and subscriptions provide a mechanism for broadcast messaging, as shown in Figure 4. A topic acts like a queue to which the senders can post messages that include metadata in the form of attributes. Each receiver can create a subscription for the topic, specifying a filter that examines the values of message attributes. Any messages posted to the topic with attribute values that match the filter are automatically forwarded to that subscription. A receiver retrieves messages from a subscription in a similar way to a queue.

    Figure 4 - Broadcast messaging by using a topic and subscriptions
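The request/response pattern above can be sketched in-process with two queues. The field names (correlation_id, reply_to) and the use of a queue object as the reply-to reference are assumptions for this sketch; in a real system the message would carry the name of the reply queue.

```python
# A sketch of request/response messaging: the sender owns a dedicated reply
# queue, tags each request with a correlation ID, and matches responses back
# to outstanding requests.
import queue
import uuid

request_queue = queue.Queue()
reply_queue = queue.Queue()          # dedicated to this sender

# Sender posts a request carrying a reply-to reference and a correlation ID.
correlation_id = str(uuid.uuid4())
request_queue.put({
    "correlation_id": correlation_id,
    "reply_to": reply_queue,
    "body": "get account balance",
})

# Receiver retrieves the request, performs the work, and posts the response
# to the queue indicated by reply_to, echoing the correlation ID.
req = request_queue.get()
req["reply_to"].put({
    "correlation_id": req["correlation_id"],
    "body": "balance = 42",
})

# Sender matches the response to its request by correlation ID.
resp = reply_queue.get(timeout=1)
assert resp["correlation_id"] == correlation_id
print(resp["body"])
```

If the sender has several requests outstanding at once, the correlation ID is what lets it pair each arriving response with the right request.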

Scenarios for Asynchronous Messaging

The basic message queuing patterns enable you to construct solutions that address most common asynchronous messaging scenarios. The following list contains some examples:

  • Decoupling workloads. Using a message queue enables you to decouple the logic that generates work from the logic that performs the work. For example, components in the user interface of a web application could generate messages in response to user input and post these messages to a queue. Receivers can retrieve these messages and process them, performing whatever work is required. In this way the user interface remains responsive; it is not blocked while the messages are handled asynchronously.
  • Temporal decoupling. A sender and a receiver do not have to be running at the same time. A sender can post a message to the queue when the receiver is not available to process it, and a receiver can read messages from the queue even when the sender is not available.
  • Load balancing. You can use message queues to distribute processing across servers and improve throughput. Senders may post a large number of requests to a queue that is serviced by many receivers. Receivers can run on different servers to spread the load. Receivers can be added dynamically to scale out the system if the queue length grows, and they can be removed when the queue has drained. You may be able to use autoscaling to scale the system automatically based on the queue length. This is described in more detail in the Autoscaling Guidance.
  • Load leveling. This scenario covers sudden bursts of activity by senders. A large number of senders might suddenly generate a large volume of messages. Starting a large number of receivers to handle this work could overwhelm the system. Instead, the message queue acts as a buffer, and receivers gradually drain the queue at their own pace without stressing the system. The Queue-based Load Leveling pattern provides more information. You can also use this approach to implement service throttling, and to prevent an application from exhausting the available resources.
  • Cross-platform integration. Message queues can be beneficial for implementing solutions that need to integrate components running on different platforms, and that are built by using different programming languages and technologies. The decoupled nature of senders and receivers can help to remove any implementation dependencies between them. All that is required is that senders and receivers agree on a common format for messages and their contents.
  • Asynchronous workflow. An application might implement a complex business process as a workflow. The individual steps in the workflow can be implemented by senders and receivers, coordinated by using messages posted to or read from a queue. If the work for each step is designed carefully, you may be able to eliminate any dependencies between steps. In this case, the messages can be processed in parallel by multiple receivers.
  • Deferred processing. You can use a message queue to delay processing until off-peak hours, or you can arrange for messages to be processed according to a specific schedule. Senders post messages to a queue. At the appointed time the receivers are started up and process the messages in the queue. When the queue has drained, or the timeslot for processing messages has elapsed, the receivers are shut down. Any unprocessed messages will be handled the next time the receivers are started.
  • Reliable messaging. Using a message queue can help to ensure that messages are not lost, even if communication between a sender and a receiver fails. The sender can post messages to a queue and the receiver can retrieve these messages from the queue when communications are reestablished. The sender is not blocked unless it loses connectivity with the queue.
    Note:
    How a sender or receiver handles loss of connectivity with the queue is an application design consideration. In many cases, such failures are transient and the application can simply repeat the operation that posts the message to the queue by following the Retry pattern. If the failure is likely to be more long lived, you can implement the Circuit Breaker pattern to prevent continual retries from blocking the sender.

  • Resilient message handling. You can use a message queue to add resiliency to the receivers in your system. In some message queue implementations, a receiver can peek and lock the next available message in a queue. This action retrieves a copy of the message, leaving the original on the queue, but also locks it to prevent the same message from being read by another receiver. If the receiver fails, the lock will time out and be released, and another receiver can then process the message. Note that if the message processing performed by the receiver updates the system state, this processing should be idempotent to prevent a repeated update from causing multiple changes to that state.
    Note:
    This scenario requires that the message queue can lock messages. Azure Service Bus provides a peek-lock mode that can be used to lock a message in a queue without removing it. This lock can also be renewed if it is likely to time out before the receiver has finished processing the message. Azure Storage queues also provide the ability to peek at messages without dequeuing them, but an application must modify the message to lock it. For more information, see the section “How to: Change the Contents of a Message” of the topic How to Use the Storage Queue Service on MSDN.

  • Non-blocking receivers. In many message queue implementations, a receiver blocks by default when it attempts to retrieve a message and none is available in the queue. If the message queue implementation supports message peeking, a receiver can instead poll the queue and attempt to retrieve a message only when one is available.
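The non-blocking receiver scenario can be sketched with Python's queue module, whose get_nowait call returns immediately instead of blocking (the poll_once helper is an illustrative name).

```python
# A sketch of a non-blocking receiver: instead of blocking on an empty queue,
# it checks for a message and carries on with other work if none is available.
import queue

q = queue.Queue()

def poll_once(q):
    try:
        return q.get_nowait()        # retrieve only if a message is available
    except queue.Empty:
        return None                  # no message: do not block

print(poll_once(q))   # None -- queue is empty, the receiver is not blocked
q.put("hello")
print(poll_once(q))   # hello
```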

Considerations for Implementing Asynchronous Messaging

Conceptually, implementing asynchronous messaging by using message queues is a simple idea, but a solution based on this model might need to address a number of concerns. The following list summarizes some of the items that you may need to consider:

  • Message ordering. The order of messages may not be guaranteed. Some message queuing technologies specify that messages are received in the order in which they are posted, but in other cases the message ordering could depend on a variety of other factors. Some solutions may require that messages are processed in a specific order. The Priority Queue pattern provides a mechanism for ensuring specific messages are delivered before others.
  • Message grouping. When multiple receivers retrieve messages from a queue, there is usually no guarantee over which receiver handles any specific message. Messages should ideally be independent. However, there may be occasions when it is difficult to eliminate dependencies, and it may be necessary to group messages together so that they are all handled by the same receiver.
    Note:
    Azure Service Bus queues and subscriptions support message grouping by enabling a sender to place related messages in a session, specified by the SessionID property of a message. A receiver can lock the messages that are part of the same session to prevent them from being handled by a different receiver. Session state information, stored in the queue or subscription with the messages that comprise the session, records information about the session and which messages have been processed. If the receiver handling a session fails, the lock is released and another receiver can pick up the session. The new receiver can use the information in the session state to determine how to continue processing.

  • Idempotency. Some message queuing systems guarantee at least once delivery of messages, but it is possible that the same message could be received and processed more than once. This can occur if a receiver fails after having completed much of its processing and the message is returned to the queue (as described in the Resilient Message Handling scenario in the previous section of this topic). Ideally the message processing logic in a receiver should be idempotent so that, if the work performed is repeated, this repetition does not change the state of the system. However, it can be very difficult to implement idempotency, and it requires very careful design of the message processing code. For more information about idempotency, see Idempotency Patterns on Jonathon Oliver’s blog.
  • Repeated messages. It is possible that the same message could be sent more than once if, for example, the sender fails after posting a message but before completing any other work it was performing. Another sender could be started and run in its place, and this new sender could repeat the message. Some message queuing systems implement duplicate message detection and removal (also known as de-duping) based on message IDs. Message queues with this capability provide at most once delivery of messages.
    Note:
    Azure Service Bus queues provide a built-in de-duping capability. Each message can be assigned a unique ID, and a message queue can record a list of the IDs for messages that have been posted (the period during which message IDs are retained is configurable). If a message posted to a queue has the same ID as a message found in this list, the new message is discarded by the queue. Detailed information about implementing de-duping with Azure queues is available in the article Configuring Duplicate Message Detection on the CloudCasts.net website.

  • Poison messages. A poison message is a message that cannot be handled, often because it is malformed or contains unexpected information. A receiver processing the message could throw an exception and fail, causing the message to be returned to the queue for another receiver to handle (see the Resilient Message Handling scenario above). The new receiver, performing the same logic as the first, could also throw an exception and cause the message to be returned to the queue again. This cycle could continue indefinitely. Poison messages can obstruct the processing of other valid messages in the queue. Therefore it is necessary to be able to detect and discard them.
    Note:
    Azure storage queues and Service Bus queues provide support for detecting poison messages. If the number of times the same message is received exceeds a specified threshold defined by the MaxDeliveryCount property of the queue, the message can be removed from the queue and placed in an application-defined dead-letter queue.

  • Message expiration. A message might have a limited lifetime, and if it is not processed within this period it might no longer be relevant and should be discarded. A sender can specify the date and time by which the message should be processed as part of the data in the message. A receiver can examine this information before deciding whether to perform the business logic associated with the message.
    Note:
    Azure storage queues and Service Bus queues enable you to post messages with a time-to-live attribute. If this period expires before the message is received, the message is silently removed from the queue and placed in a dead-letter queue. Note that, for an Azure storage queue, the maximum time-to-live value for a message is seven days, but there is no limit on the time-to-live value for messages posted to Azure Service Bus queues and topics.

  • Message scheduling. A message might be temporarily embargoed and should not be processed until a specific date and time. The message should not be available to a receiver until this time.
    Note:
    Azure storage queues and Service Bus queues enable a sender to specify a time when the message should become available. The message remains invisible to receivers until this time occurs, whereupon it becomes accessible and a receiver can retrieve it. If the message expires before this time it will not be delivered.
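Several of the considerations above, in particular duplicate detection and poison messages, can be sketched together. The mechanics here are illustrative assumptions, not how any specific queue service implements them: the queue records the IDs of messages it has accepted, and a receiver dead-letters a message after a maximum number of failed deliveries.

```python
# A sketch of de-duping (discard repeated message IDs) and poison-message
# handling (dead-letter a message after MAX_DELIVERY_COUNT failed attempts).
from collections import deque

MAX_DELIVERY_COUNT = 3
seen_ids = set()                 # IDs already accepted by the queue
main_queue = deque()
dead_letter_queue = []

def post(message):
    """Discard the message if its ID has been seen before (at-most-once)."""
    if message["id"] in seen_ids:
        return False
    seen_ids.add(message["id"])
    main_queue.append({**message, "delivery_count": 0})
    return True

def process(handler):
    """Retrieve one message; on failure, requeue it or dead-letter it."""
    msg = main_queue.popleft()
    msg["delivery_count"] += 1
    try:
        handler(msg)
    except Exception:
        if msg["delivery_count"] >= MAX_DELIVERY_COUNT:
            dead_letter_queue.append(msg)   # poison: stop retrying
        else:
            main_queue.append(msg)          # return it for another attempt

post({"id": "a1", "body": "ok"})
post({"id": "a1", "body": "duplicate"})     # discarded by de-duping
post({"id": "b2", "body": "malformed"})

def handler(msg):
    if msg["body"] == "malformed":
        raise ValueError("poison message")

while main_queue:
    process(handler)

print(len(dead_letter_queue))   # 1 -- the malformed message was dead-lettered
```

Capping the delivery count is what breaks the otherwise endless retrieve-fail-requeue cycle that a poison message would cause.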

Related Patterns and Guidance

The following patterns and guidance may also be relevant to your scenario when implementing asynchronous messaging:

  • Autoscaling Guidance. You may be able to start and stop instances of receivers if the length of the queue on which they are receiving messages exceeds predefined thresholds. This approach can help to maintain performance in a system that implements asynchronous messaging. The Autoscaling Guidance provides more information about the benefits and tradeoffs of this approach.
  • Circuit Breaker Pattern. If the reason that a sender or receiver cannot connect to a queue is more long lasting, it may be necessary to prevent them from repeatedly attempting to perform an operation that is likely to fail until the reason for the failure has been resolved. The Circuit Breaker pattern describes how to handle this scenario.
  • Competing Consumers Pattern. Multiple consumers may need to compete to read messages from a queue. The Competing Consumers pattern explains how to process multiple messages concurrently to optimize throughput, to improve scalability and availability, and to balance the workload.
  • Priority Queue Pattern. This pattern describes how messages posted by a sender that have a higher priority can be received and processed more quickly by a consumer than those of a lower priority.
  • Queue-based Load Leveling Pattern. This pattern uses a queue to act as a buffer between a sender and a receiver to help to minimize the impact on availability and responsiveness of intermittent heavy loads for both the sender and the receiver.
  • Retry Pattern. A sender or receiver might be unable to connect to a queue, but the reasons for this failure may be temporary and quickly pass. The Retry pattern describes how to handle this situation in order to add resiliency to an application.
  • Scheduler Agent Supervisor Pattern. Messaging is often used as part of a workflow implementation. The Scheduler Agent Supervisor pattern demonstrates how messaging can be used to coordinate a set of actions across a distributed set of services and other remote resources, and enable a system to recover and retry actions that fail.
