
Concurrency Management

This chapter is excerpted from Programming WCF Services, Second Edition: Building Service Oriented Applications with Windows Communication Foundation by Juval Lowy, published by O'Reilly Media

Incoming client calls are dispatched to the service on threads from the Windows I/O completion thread pool (the pool has 1,000 threads by default). Multiple clients can make multiple concurrent calls, and the service itself can sustain those calls on multiple threads. If the calls are dispatched to the same service instance, you must provide thread-safe access to the service's in-memory state or risk state corruption and errors. The same is true for the client's in-memory state during callbacks, since callbacks too are dispatched on threads from the I/O completion thread pool. In addition to synchronizing access to the instance state when applicable, all services also need to synchronize access to resources shared between instances, such as static variables. Another dimension altogether for concurrency management is ensuring that, if required, the service (or the resources it accesses) executes on particular threads.

WCF offers two modes for synchronization. Automatic synchronization instructs WCF to synchronize access to the service instance. Automatic synchronization is simple to use, but it is available only for service and callback classes. Manual synchronization, on the other hand, puts the full burden of synchronization on the developer and requires application-specific integration. The developer needs to employ .NET synchronization locks, which is by far an expert discipline. The advantages of manual synchronization are that it is available for service and non-service classes alike, and it allows developers to optimize throughput and scalability. This chapter starts by describing the basic concurrency modes available and then presents more advanced aspects of concurrency management, such as dealing with resource safety and synchronization, thread affinity and custom synchronization contexts, callbacks, and asynchronous calls. Throughout, the chapter shares best practices, concurrency management design guidelines, and custom techniques.

Instance Management and Concurrency

Service-instance thread safety is closely related to the service instancing mode. A per-call service instance is thread-safe by definition, because each call gets its own dedicated instance. That instance is accessible only by its assigned worker thread, and because no other threads will be accessing it, it has no need for synchronization. However, a per-call service is typically state-aware. The state store can be an in-memory resource such as a static dictionary, and it can be subject to multithreaded access because the service can sustain concurrent calls, whether from the same client or from multiple clients. Consequently, you must synchronize access to the state store.
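A sketch of such a synchronized state store follows; the class and member names are illustrative, not part of WCF, and the WCF plumbing is omitted so the synchronization pattern stands on its own:

```csharp
using System.Collections.Generic;

//Illustrative state store for a per-call service: the dictionary is static,
//so it is shared by all instances and must be accessed under a lock
class MyPerCallService
{
   static readonly Dictionary<string,int> m_Store = new Dictionary<string,int>();
   static readonly object m_StoreLock = new object();

   public void Increment(string key)
   {
      lock(m_StoreLock)   //Every instance contends for the same lock
      {
         int value;
         m_Store.TryGetValue(key,out value);
         m_Store[key] = value + 1;
      }
   }
   public int GetCount(string key)
   {
      lock(m_StoreLock)
      {
         int value;
         m_Store.TryGetValue(key,out value);
         return value;
      }
   }
}
```

Two distinct per-call instances incrementing the same key observe each other's updates, which is precisely why the store needs its own lock even though each instance is thread-safe.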

A per-session service always requires concurrency management and synchronization, because the client may use the same proxy and yet dispatch calls to the service on multiple client-side threads. A singleton service is even more susceptible to concurrent access, and must support synchronized access. The singleton has some in-memory state that all clients implicitly share. On top of the possibility of the client dispatching calls on multiple threads, as with a per-session service, a singleton may simply have multiple clients in different execution contexts, each using its own thread to call the service. All of these calls will enter the singleton on different threads from the I/O completion thread pool-hence the need for synchronization.

Concurrent access to the service instance is governed by the ConcurrencyMode property of the ServiceBehavior attribute:

public enum ConcurrencyMode
{
   Single,
   Reentrant,
   Multiple
}

[AttributeUsage(AttributeTargets.Class)]
public sealed class ServiceBehaviorAttribute : ...
{
   public ConcurrencyMode ConcurrencyMode
   {get;set;}
   //More members
}

The value of the ConcurrencyMode enum controls if and when concurrent calls are allowed. The name ConcurrencyMode is actually incorrect; the proper name for this property would have been ConcurrencyContextMode, since it synchronizes access not to the instance, but rather to the context containing the instance (much the same way InstanceContextMode controls the instantiation of the context, not the instance). The significance of this distinction-i.e., that the synchronization is related to the context and not to the instance-will become evident later.

ConcurrencyMode.Single

When the service is configured with ConcurrencyMode.Single, WCF will provide automatic synchronization to the service context and disallow concurrent calls by associating the context containing the service instance with a synchronization lock. Every call coming into the service must first try to acquire the lock. If the lock is unowned, the caller will be allowed in. Once the operation returns, WCF will unlock the lock, thus allowing in another caller.

The important thing is that only one caller at a time is ever allowed. If there are multiple concurrent callers while the lock is locked, the callers are all placed in a queue and are served out of the queue in order. If a call times out while blocked, WCF will remove the caller from the queue and the client will get a TimeoutException. ConcurrencyMode.Single is the WCF default setting, so these definitions are equivalent:

class MyService : IMyContract
{...}

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single)]
class MyService : IMyContract
{...}

Because the default concurrency mode is synchronized access, the susceptible instancing modes of per-session and singleton are also synchronized by default. Note that even calls to a per-call service instance are synchronized by default.

Synchronized access and transactions

As explained in Chapter 7, Transactions, WCF verifies at service load time that if at least one operation on the service has TransactionScopeRequired set to true and ReleaseServiceInstanceOnTransactionComplete is true, then the service concurrency mode is ConcurrencyMode.Single. This is done deliberately to ensure that the service instance can be recycled at the end of the transaction without any danger of another thread accessing the disposed instance.

ConcurrencyMode.Multiple

When the service is configured with ConcurrencyMode.Multiple, WCF will stay out of the way and will not synchronize access to the service instance in any way. ConcurrencyMode.Multiple simply means that the service instance is not associated with any synchronization lock, so concurrent calls are allowed on the service instance. Put differently, when a service instance is configured with ConcurrencyMode.Multiple, WCF will not queue up the client messages; it will dispatch them to the service instance as soon as they arrive.

Tip
A large number of concurrent client calls will not result in a matching number of concurrently executing calls on the service. The maximum number of concurrent calls dispatched to the service is determined by the configured maximum concurrent calls throttle value. As mentioned in Chapter 4, Instance Management, the default value is 16.

Obviously, this is of great concern to sessionful and singleton services, which must manually synchronize access to their instance state. The common way of doing that is to use .NET locks such as Monitor or a WaitHandle-derived class. Manual synchronization, which is covered in great depth in Chapter 8, Concurrency Management of my book Programming .NET Components, Second Edition (O'Reilly), is not for the faint of heart, but it does enable the service developer to optimize the throughput of client calls on the service instance: you can lock the service instance just when and where synchronization is required, thus allowing other client calls on the same service instance in between the synchronized sections. Example 8.1, "Manual synchronization using fragmented locking" shows a manually synchronized sessionful service whose client performs concurrent calls.

Example 8.1. Manual synchronization using fragmented locking

[ServiceContract(SessionMode = SessionMode.Required)]
interface IMyContract
{
   void MyMethod(  );
}

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
class MyService : IMyContract
{
   int[] m_Numbers;
   List<string> m_Names;

   public void MyMethod(  )
   {
      lock(m_Numbers)
      {
         ...
      }

      /* Don't access members here */

      lock(m_Names)
      {
         ...
      }
   }
}

The service in Example 8.1, "Manual synchronization using fragmented locking" is configured for concurrent access. Since the critical sections of the operations that require synchronization are any member variable accesses, the service uses a Monitor (encapsulated in the lock statement) to lock the member variable before accessing it. I call this synchronization technique fragmented locking, since it locks only when needed and only what is being accessed. Local variables require no synchronization, because they are visible only to the thread that created them on its own call stack.

There are two problems with fragmented locking. The first is that it is error- and deadlock-prone. Fragmented locking only provides for thread-safe access if every other operation on the service is as disciplined about always locking the members before accessing them. But even if all operations lock all members, you still risk deadlocks: if one operation on thread A locks member M1 while trying to access member M2 while another operation executing concurrently on thread B locks member M2 while trying to access member M1, you will end up with a deadlock.

Tip
WCF resolves service call deadlocks by eventually timing out the call and throwing a TimeoutException. Avoid using a long send timeout, as it decreases WCF's ability to resolve deadlocks in a timely manner.

It is better to reduce the fragmentation by locking the entire service instance instead:

public void MyMethod(  )
{
   lock(this)
   {
      ...
   }

   /* Don't access members here */

   lock(this)
   {
      ...
   }
}

This approach, however, is still fragmented and thus error-prone-if at some point in the future someone adds a method call in the unsynchronized code section that does access the members, it will not be a synchronized access. It is better still to lock the entire body of the method:

public void MyMethod(  )
{
   lock(this)
   {
      ...
   }
}

The problem with this approach is that in the future someone maintaining this code may err and place some code before or after the lock statement. Your best option therefore is to instruct the compiler to automate injecting the call to lock the instance using the MethodImpl attribute with the MethodImplOptions.Synchronized flag:

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
class MyService : IMyContract
{
   int[] m_Numbers;
   List<string> m_Names;

   [MethodImpl(MethodImplOptions.Synchronized)]
   public void MyMethod(  )
   {
      ...
   }
}

You will need to repeat the assignment of the MethodImpl attribute on all the service operation implementations.
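Outside WCF, the effect of MethodImplOptions.Synchronized is easy to demonstrate: the CLR acquires a lock on the instance for the duration of the method, so concurrent calls are serialized. This Counter class is my own illustration, not part of the chapter's service:

```csharp
using System.Runtime.CompilerServices;

class Counter
{
   int m_Count;
   public int Count
   {
      get
      {
         return m_Count;
      }
   }
   [MethodImpl(MethodImplOptions.Synchronized)]
   public void Increment()
   {
      m_Count++;   //Safe: the CLR locks the instance around the whole method
   }
}
```

With two threads each calling Increment() 100,000 times on the same instance, Count ends up exactly 200,000; remove the attribute and lost updates reappear.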

While this code is thread-safe, you actually gain little from the use of ConcurrencyMode.Multiple: the net effect in terms of synchronization is similar to using ConcurrencyMode.Single, yet you have increased the overall code complexity and reliance on developers' discipline. In general, you should avoid ConcurrencyMode.Multiple. However, there are cases where ConcurrencyMode.Multiple is useful, as you will see later in this chapter.

Unsynchronized access and transactions

When the service is configured for ConcurrencyMode.Multiple, if at least one operation has TransactionScopeRequired set to true, then ReleaseServiceInstanceOnTransactionComplete must be set to false. For example, this is a valid definition, even though ReleaseServiceInstanceOnTransactionComplete defaults to true, because no method has TransactionScopeRequired set to true:

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
class MyService : IMyContract
{
   public void MyMethod(  )
   {...}
   public void MyOtherMethod(  )
   {...}
}

The following, on the other hand, is an invalid definition because at least one method has TransactionScopeRequired set to true:

//Invalid configuration:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
class MyService : IMyContract
{
   [OperationBehavior(TransactionScopeRequired = true)]
   public void MyMethod(  )
   {...}
   public void MyOtherMethod(  )
   {...}
}

A transactional unsynchronized service must explicitly set ReleaseServiceInstanceOnTransactionComplete to false:

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple,
                 ReleaseServiceInstanceOnTransactionComplete = false)]
class MyService : IMyContract
{
   [OperationBehavior(TransactionScopeRequired = true)]
   public void MyMethod(  )
   {...}
   public void MyOtherMethod(  )
   {...}
}

The rationale behind this constraint is that only a sessionful or a singleton service could possibly benefit from unsynchronized access, so in the case of transactional access, WCF wants to enforce the semantic of the configured instancing mode. In addition, this will avoid having one caller access the instance, complete the transaction, and release the instance, all while another caller is using the instance.

ConcurrencyMode.Reentrant

The ConcurrencyMode.Reentrant value is a refinement of ConcurrencyMode.Single. Similar to ConcurrencyMode.Single, ConcurrencyMode.Reentrant associates the service context with a synchronization lock, so concurrent calls on the same instance are never allowed. However, if the reentrant service calls out to another service or a callback, and that call chain (or causality) somehow winds its way back to the service instance, as shown in Figure 8.1, "Call reentrancy", that call is allowed to reenter the service instance.

Figure 8.1. Call reentrancy


The implementation of ConcurrencyMode.Reentrant is very simple: when the reentrant service calls out over WCF, WCF silently releases the synchronization lock associated with the instance context. This is how ConcurrencyMode.Reentrant avoids the potential deadlock of reentrancy. If the service were to maintain the lock while calling out and the causality tried to enter the same context, a deadlock would occur.

Reentrancy support is instrumental in a number of cases:

  • A singleton service calling out risks a deadlock if any of the downstream services it calls tries to call back into the singleton.

  • In the same app domain, the client may store a proxy reference in some globally available variable, and some of the downstream objects called by the service may then use that proxy reference to call back to the original service.

  • Callbacks on non-one-way operations must be allowed to reenter the calling service.

  • If the callout the service performs is of long duration, even without reentrancy, you may want to optimize throughput by allowing other clients to use the same service instance while the callout is in progress.

Tip
A service configured with ConcurrencyMode.Multiple is by definition also reentrant, because no lock is held during the callout. However, unlike a reentrant service, which is inherently thread-safe, a service configured with ConcurrencyMode.Multiple must provide for its own synchronization (for example, by locking the instance during every call, as explained previously). It is up to the developer of such a service to decide if it should release the lock before calling out to avoid a reentrancy deadlock.

Designing for reentrancy

It is very important to recognize the liability associated with reentrancy. When a reentrant service calls out, it must leave the service in a workable, consistent state, because others could be allowed into the service instance while the service is calling out. A consistent state means that the reentrant service must have no more interactions with its own members or any other local object or static variable, and that when the callout returns, the reentrant service should simply be able to return control to its client. For example, suppose the reentrant service modifies the state of some linked list and leaves it in an inconsistent state-say, missing a head node-because it needs to get the value of the new head from another service. If the reentrant service then calls out to the other service, it leaves other clients vulnerable, because if they call into the reentrant service and access the linked list they will encounter an error.

Moreover, when the reentrant service returns from its callout, it must refresh all local method state. For example, if the service has a local variable that contains a copy of the state of a member variable, that local variable may now have the wrong value, because during the callout another party could have entered the reentrant service and modified the member variable.

Reentrancy and transactions

A reentrant service faces exactly the same design constraints regarding transactions as a service configured with ConcurrencyMode.Multiple; namely, if at least one operation has TransactionScopeRequired set to true, then ReleaseServiceInstanceOnTransactionComplete must be set to false. This is done to maintain the instance context mode semantics.

Callbacks and reentrancy

Consider now the case of a service designed for single-threaded access with ConcurrencyMode.Single and with duplex callbacks. When a call from the client enters the context, it acquires the synchronization lock. If that service obtains the callback reference and calls back to the calling client, the callout will block the thread that dispatched the client's call, while still holding the lock on the context. The callback will reach the client, execute there, and return with a reply message from the client. Unfortunately, when the reply message is sent to the same service instance context, it will first try to acquire the lock-the same lock already owned by the original call from the client, which is still blocked waiting for the callback to return-and a deadlock will ensue. To avoid this deadlock, during the operation execution WCF disallows callbacks from the service to its calling client as long as the service is configured for single-threaded access.

There are three ways of safely allowing the callback. The first is to configure the service for reentrancy. When the service invokes the proxy to the callback object, WCF will silently release the lock, thus allowing the reply message from the callback to acquire the lock when it returns, as shown in Example 8.2, "Configure for reentrancy to allow callbacks".

Example 8.2. Configure for reentrancy to allow callbacks

interface IMyContractCallback
{
   [OperationContract]
   void OnCallback(  );
}
[ServiceContract(CallbackContract = typeof(IMyContractCallback))]
interface IMyContract
{
   [OperationContract]
   void MyMethod(  );
}

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
class MyService : IMyContract
{
   public void MyMethod(  )
   {
      IMyContractCallback callback = OperationContext.Current.
                                         GetCallbackChannel<IMyContractCallback>(  );
      callback.OnCallback(  );
   }
}

Control will only return to the service once the callback returns, and the service's own thread will need to reacquire the lock. Configuring for reentrancy is required even of a per-call service, which otherwise has no need for anything but ConcurrencyMode.Single. Note that the service may still invoke callbacks to other clients or call other services; it is the callback to the calling client that is disallowed.

You can, of course, configure the service for concurrent access with ConcurrencyMode.Multiple to avoid having any lock.

The third option (as mentioned in Chapter 5, Operations), and the only case where a service configured with ConcurrencyMode.Single can call back to its clients, is when the callback contract operation is configured as one-way because there will not be any reply message to contend for the lock.
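The contracts for that third option would look something like the following sketch, which mirrors Example 8.2 with the callback operation marked one-way:

```csharp
using System.ServiceModel;

interface IMyContractCallback
{
   //No reply message, so even a ConcurrencyMode.Single service can invoke it
   [OperationContract(IsOneWay = true)]
   void OnCallback();
}

[ServiceContract(CallbackContract = typeof(IMyContractCallback))]
interface IMyContract
{
   [OperationContract]
   void MyMethod();
}
```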

Instances and Concurrent Access

Using the same proxy, a single client can issue multiple concurrent calls to a service. The client can use multiple threads to invoke calls on the service, or it can issue one-way calls in rapid succession on the same thread. In both of these cases, whether the calls from the same client are processed concurrently is the product of the service's configured instancing mode, the service's concurrency mode, and the configured delivery mode (that is, the transport session). The following discussion applies equally to request-reply and one-way calls.

Per-Call Services

In the case of a per-call service, if there is no transport-level session, concurrent processing of calls is allowed. Calls are dispatched as they arrive, each to a new instance, and execute concurrently. This is the case regardless of the service concurrency mode. I consider this to be the correct behavior.

If the per-call service has a transport-level session, whether concurrent processing of calls is allowed is a product of the service concurrency mode. If the service is configured with ConcurrencyMode.Single, concurrent processing of the pending calls is not allowed, and the calls are dispatched one at a time. The reason is that with ConcurrencyMode.Single WCF tries to maintain the guarantee of the transport session that messages are processed strictly in the order in which they were received in that session, by having exactly one outstanding instance per channel. You should avoid lengthy processing of calls, because it may risk call timeouts. While this is a direct result of the channel's architecture, I consider it a flawed design.

If the service is configured with ConcurrencyMode.Multiple, concurrent processing is allowed. Calls are dispatched as they arrive, each to a new instance, and execute concurrently. An interesting observation here is that in the interest of throughput, it is a good idea to configure a per-call service with ConcurrencyMode.Multiple: the instance itself will still be thread-safe (so you will not incur the synchronization liability), yet you will allow concurrent calls from the same client.

Tip
Two clients using two different proxies will have two distinct channels and will have no issue with concurrent calls. It is only concurrent calls on the same transport session that are serialized one at a time to the per-call service.

When the service is configured with ConcurrencyMode.Reentrant, if the service does not call out, it behaves similarly to a service configured with ConcurrencyMode.Single. If the service does call out, the next call is allowed in, and the returning call has to negotiate the lock like all other pending calls.

Sessionful and Singleton Services

In the case of a sessionful or a singleton service, the configured concurrency mode alone governs the concurrent execution of pending calls. If the service is configured with ConcurrencyMode.Single, calls will be dispatched to the service instance one at a time, and pending calls will be placed in a queue. You should avoid lengthy processing of calls, because it may risk call timeouts.

If the service instance is configured with ConcurrencyMode.Multiple, concurrent processing of calls from the same client is allowed. Calls will be executed by the service instance as fast as they come off the channel (up to the throttle limit). Of course, as is always the case with a stateful unsynchronized service instance, you must synchronize access to the service instance or risk state corruption.

If the service instance is configured with ConcurrencyMode.Reentrant, it behaves just as it would with ConcurrencyMode.Single. However, if the service calls out, the next call is allowed to execute. You must follow the guidelines discussed previously regarding programming in a reentrant environment.

Tip
For a per-session service configured with ConcurrencyMode.Multiple to experience concurrent calls, the client must use multiple worker threads to access the same proxy instance. However, if the client threads rely on the auto-open feature of the proxy (that is, just invoking a method and having that call open the proxy if the proxy is not yet open) and call the proxy concurrently, then the calls will actually be serialized until the proxy is opened, and will be concurrent after that. If you want to dispatch concurrent calls regardless of the state of the proxy, the client needs to explicitly open the proxy (by calling the Open( ) method) before issuing any calls on the worker threads.

Resources and Services

Synchronizing access to the service instance using ConcurrencyMode.Single or an explicit synchronization lock only manages concurrent access to the service instance state itself. It does not provide safe access to the underlying resources the service may be using. These resources must also be thread-safe. For example, consider the application shown in Figure 8.2, "Applications must synchronize access to resources".

Figure 8.2. Applications must synchronize access to resources


Even though the service instances are thread-safe, the two instances try to concurrently access the same resource (such as a static variable, a helper static class, or a file), and therefore the resource itself must have synchronized access. This is true regardless of the service instancing mode. Even a per-call service could run into the situation shown in Figure 8.2, "Applications must synchronize access to resources".

Deadlocked Access

The naive solution to providing thread-safe access to resources is to provide each resource with its own lock, potentially encapsulating that lock in the resource itself, and ask the resource to lock the lock when it's accessed and unlock the lock when the service is done with the resource. The problem with this approach is that it is deadlock-prone. Consider the situation depicted in Figure 8.3, "Deadlock over resources access".

Figure 8.3. Deadlock over resources access


In the figure, Instance A of the service accesses the thread-safe Resource A. Resource A has its own synchronization lock, and Instance A acquires that lock. Similarly, Instance B accesses Resource B and acquires its lock. A deadlock then occurs when Instance A tries to access Resource B while Instance B tries to access Resource A, since each instance will be waiting for the other to release its lock.

The concurrency and instancing modes of the service are almost irrelevant to avoiding this deadlock. The only case that avoids it is if the service is configured both with InstanceContextMode.Single and ConcurrencyMode.Single, because a synchronized singleton by definition can only have one client at a time and there will be no other instance to deadlock with over access to resources. All other combinations are still susceptible to this kind of deadlock. For example, a per-session synchronized service may have two separate thread-safe instances associated with two different clients, yet the two instances can deadlock when accessing the resources.

Deadlock Avoidance

There are a few possible ways to avoid the deadlock. If all instances of the service meticulously access all resources in the same order (e.g., always trying to acquire the lock of Resource A first, and then the lock of Resource B), there will be no deadlock. The problem with this approach is that it is difficult to enforce, and over time, during code maintenance, someone may deviate from this strict guideline (even inadvertently, by calling methods on helper classes) and trigger the deadlock.
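A sketch of that ordering discipline follows; all the type names are illustrative. Every operation takes Resource A's lock before Resource B's lock, even an operation that mostly works with Resource B:

```csharp
//Illustrative fixed lock ordering: every code path locks A's lock before
//B's lock, which makes the A-then-B / B-then-A deadlock impossible
static class Resources
{
   public static readonly object LockA = new object();
   public static readonly object LockB = new object();
}

class MyService
{
   static int s_Uses;
   public static int Uses
   {
      get
      {
         return s_Uses;
      }
   }
   public void Operation1()
   {
      lock(Resources.LockA)      //A first...
      {
         lock(Resources.LockB)   //...then B
         {
            s_Uses++;            //Access Resource A and Resource B
         }
      }
   }
   public void Operation2()
   {
      //Works mostly with Resource B, but still takes A's lock first
      lock(Resources.LockA)
      {
         lock(Resources.LockB)
         {
            s_Uses++;            //Access Resource B
         }
      }
   }
}
```

Because both operations honor the same order, two instances running Operation1 and Operation2 concurrently can never hold one lock each while waiting for the other.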

Another solution is to have all resources use the same shared lock. In order to minimize the chances of a deadlock, you'll also want to minimize the number of locks in the system and have the service itself use the same lock. To that end, you can configure the service with ConcurrencyMode.Multiple (even with a per-call service) to avoid using the WCF-provided lock. The first service instance to acquire the shared lock will lock out all other instances and own all underlying resources. A simple technique for using such a shared lock is locking on the service type, as shown in Example 8.3, "Using the service type as a shared lock".

Example 8.3. Using the service type as a shared lock

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
class MyService : IMyContract
{
   public void MyMethod(  )
   {
      lock(typeof(MyService))
      {
         ...
         MyResource.DoWork(  );
         ...
      }
   }
}
static class MyResource
{
   public static void DoWork(  )
   {
      lock(typeof(MyService))
      {
       ...
      }
   }
}

The resources themselves must also lock on the service type (or some other shared type agreed upon in advance). There are two problems with the approach of using a shared lock. First, it introduces coupling between the resources and the service, because the resource developer has to know about the type of the service or the type used for synchronization. While you could get around that by providing the type as a resource construction parameter, it will likely not be applicable with third-party-provided resources. The second problem is that while your service instance is executing, all other instances (and their respective clients) will be blocked. Therefore, in the interest of throughput and responsiveness, you should avoid lengthy operations when using a shared lock.
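Passing the lock as a construction parameter might look like this sketch (my own illustration), which removes the resource's hard-coded knowledge of the service type:

```csharp
//Illustrative resource that is handed its shared lock at construction time,
//so it never needs to name the service type it synchronizes with
class MyResource
{
   readonly object m_SharedLock;
   int m_Uses;

   public MyResource(object sharedLock)
   {
      m_SharedLock = sharedLock;
   }
   public int DoWork()
   {
      lock(m_SharedLock)
      {
         return ++m_Uses;
      }
   }
}
```

The service constructs the resource with its own synchronization type, for example new MyResource(typeof(MyService)), and all parties still contend for the one shared lock.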

If you think the situation in Example 8.3, "Using the service type as a shared lock", where the two instances are of the same service, is problematic, imagine what happens if the two instances are of different services. The observation to make here is that services should never share resources. Regardless of concurrency management, resources are local implementation details and therefore should not be shared across services. Most importantly, sharing resources across the service boundary is also deadlock-prone. Such shared resources have no easy way to share locks across technologies and organizations, and the services need to somehow coordinate the locking order. This necessitates a high degree of coupling between the services, violating the best practices and tenets of service-orientation.

Resource Synchronization Context

Incoming service calls execute on worker threads from the I/O completion thread pool and are unrelated to any service or resource threads. This means that by default the service cannot rely on any kind of thread affinity (that is, always being accessed by the same thread). Much the same way, the service cannot by default rely on executing on any host-side custom threads created by the host or service developers. The problem with this situation is that some resources may rely on thread affinity. For example, user interface resources updated by the service must be accessed only by the user interface (UI) thread. Other examples are a resource (or a service) that makes use of the thread local storage (TLS) to store out-of-band information shared globally by all parties on the same thread (using the TLS mandates use of the same thread), or accessing components developed using legacy Visual Basic or Visual FoxPro, which also require thread affinity (due to their own use of the TLS). In addition, for scalability and throughput purposes, some resources or frameworks may require access by their own pool of threads.

Whenever an affinity to a particular thread or threads is expected, the service cannot simply execute the call on the incoming WCF worker thread. Instead, the service must marshal the call to the correct thread(s) required by the resource it accesses.

.NET Synchronization Contexts

.NET 2.0 introduced the concept of a synchronization context. The idea is that any party can provide an execution context and have other parties marshal calls to that context. The synchronization context can be a single thread or any number of designated threads, although typically it will be a single, particular thread. All the synchronization context does is assure that the call executes on the correct thread or threads.

Note that the word context is overloaded. Synchronization contexts have absolutely nothing to do with the service instance context or the operation context described so far in this book. They simply provide the synchronization context for the call.

While conceptually synchronization contexts are a simple enough design pattern to use, implementing a synchronization context is a complex programming task that is not normally intended for developers to attempt.

The SynchronizationContext class

The SynchronizationContext class from the System.Threading namespace represents a synchronization context:

public delegate void SendOrPostCallback(object state);

public class SynchronizationContext
{
   public virtual void Post(SendOrPostCallback callback,object state);
   public virtual void Send(SendOrPostCallback callback,object state);
   public static void SetSynchronizationContext(SynchronizationContext context);
   public static SynchronizationContext Current
   {get;}
   //More members
}

Every thread in .NET may have a synchronization context associated with it. You can obtain a thread's synchronization context by accessing the static Current property of SynchronizationContext. If the thread does not have a synchronization context, Current will return null. You can also pass the reference to the synchronization context between threads, so that one thread can marshal a call to another thread.

To represent the call to invoke in the synchronization context, you wrap a method with a delegate of the type SendOrPostCallback. Note that the signature of the delegate uses an object. If you want to pass multiple parameters, pack those in a structure and pass the structure as an object.
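For example, here is one possible way of packing two parameters into a small class and unpacking them inside the callback. This is merely an illustrative sketch: the WorkItem type and its members are hypothetical, not part of the book's examples, and context is assumed to be an already-obtained synchronization context.

```csharp
//Illustrative sketch only: pack multiple parameters into a single object
class WorkItem
{
   public int Number;
   public string Text;
}
...
SendOrPostCallback doWork = (arg)=>
                            {
                               WorkItem item = (WorkItem)arg;
                               //Use item.Number and item.Text here
                            };
context.Send(doWork,new WorkItem{Number = 42,Text = "Some text"});
```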

Warning
Synchronization contexts use an amorphous object. Exercise caution when using synchronization contexts, due to the lack of compile-time type safety.

Working with the synchronization context

There are two ways of marshaling a call to the synchronization context: synchronously and asynchronously, by sending or posting a work item, respectively. The Send( ) method will block the caller until the call has completed in the other synchronization context, while Post( ) will merely dispatch it to the synchronization context and then return control to its caller.

For example, to synchronously marshal a call to a particular synchronization context, you first somehow obtain a reference to that synchronization context, and then use the Send( ) method:

//Obtain synchronization context
SynchronizationContext context = ...

SendOrPostCallback doWork = (arg)=>
                            {
                               //The code here is guaranteed to
                               //execute on the correct thread(s)
                            };
context.Send(doWork,"Some argument");
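To marshal the call asynchronously instead, use Post( ) in the same manner. The only difference is that control returns to the caller immediately, possibly before the callback has executed; a minimal sketch:

```csharp
//Obtain synchronization context
SynchronizationContext context = ...

SendOrPostCallback doWork = (arg)=>
                            {
                               //Executes later on the correct thread(s)
                            };
context.Post(doWork,"Some argument");
//Control returns here immediately; do not rely on doWork having completed
```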

Example 8.4, "Calling a resource on the correct synchronization context" shows a less abstract example.

Example 8.4. Calling a resource on the correct synchronization context

class MyResource
{
   public int DoWork(  )
   {...}
   public SynchronizationContext MySynchronizationContext
   {get;}
}
class MyService : IMyContract
{
   MyResource GetResource(  )
   {...}

   public void MyMethod(  )
   {
      MyResource resource = GetResource(  );
      SynchronizationContext context = resource.MySynchronizationContext;
      int result = 0;
      SendOrPostCallback doWork = delegate
                                  {
                                     result = resource.DoWork(  );
                                  };
      context.Send(doWork,null);
   }
}

In Example 8.4, "Calling a resource on the correct synchronization context", the service MyService needs to interact with the resource MyResource and have it perform some work by executing the DoWork( ) method and returning a result. However, MyResource requires that all calls to it execute on its particular synchronization context. MyResource makes that execution context available via the MySynchronizationContext property. The service operation MyMethod( ) executes on a WCF worker thread. MyMethod( ) first obtains the resource and its synchronization context, then defines an anonymous method that wraps the call to DoWork( ) and assigns that anonymous method to the doWork delegate of the type SendOrPostCallback. Finally, MyMethod( ) calls Send( ) and passes null for the argument, since the DoWork( ) method on the resource requires no parameters. Note the technique used in Example 8.4, "Calling a resource on the correct synchronization context" to retrieve a returned value from the invocation. Since Send( ) returns void, the anonymous method assigns the returned value of DoWork( ) into an outer variable. Without anonymous methods, this task would have required the complicated use of a synchronized member variable.

The problem with Example 8.4, "Calling a resource on the correct synchronization context" is the excessive degree of coupling between the service and the resource. The service needs to know that the resource is sensitive to its synchronization context, obtain the context, and manage the execution. You must also duplicate such code in any service using the resource. It is much better to encapsulate the need in the resource itself, as shown in Example 8.5, "Encapsulating the synchronization context".

Example 8.5. Encapsulating the synchronization context

class MyResource
{
   public int DoWork(  )
   {
      int result = 0;
      SendOrPostCallback doWork = delegate
                                  {
                                     result = DoWorkInternal(  );
                                  };
      MySynchronizationContext.Send(doWork,null);
      return result;
   }
   SynchronizationContext MySynchronizationContext
   {get;}
   int DoWorkInternal(  )
   {...}
}
class MyService :  IMyContract
{
   MyResource GetResource(  )
   {...}
   public void MyMethod(  )
   {
      MyResource resource = GetResource(  );
      int result = resource.DoWork(  );
   }
}

Compare Example 8.5, "Encapsulating the synchronization context" to Example 8.4, "Calling a resource on the correct synchronization context". All the service in Example 8.5, "Encapsulating the synchronization context" has to do is access the resource: it is up to the resource internally to marshal the call to its own synchronization context.

The UI Synchronization Context

The canonical case for utilizing synchronization contexts is with Windows user interface frameworks such as Windows Forms or the Windows Presentation Foundation (WPF). For simplicity's sake, the rest of the discussion in this chapter will refer only to Windows Forms, although it applies equally to WPF. A Windows UI application relies on the underlying Windows messages and a message-processing loop (the message pump) to process them. The message loop must have thread affinity, because messages to a window are delivered only to the thread that created it. In general, you must always marshal to the UI thread any attempt to access a Windows control or form, or risk errors and failures. This becomes an issue if your services need to update some user interface as a result of client calls or some other event. Fortunately, Windows Forms supports the synchronization context pattern. Every thread that pumps Windows messages has a synchronization context. That synchronization context is the WindowsFormsSynchronizationContext class:

public sealed class WindowsFormsSynchronizationContext : SynchronizationContext,...
{...}

Whenever you create any Windows Forms control or form, that control or form ultimately derives from the class Control. The constructor of Control checks whether the current thread that creates it already has a synchronization context, and if it does not, Control installs WindowsFormsSynchronizationContext as the current thread's synchronization context.

WindowsFormsSynchronizationContext converts the call to Send( ) or Post( ) to a custom Windows message and posts that Windows message to the UI thread's message queue. Every Windows Forms UI class that derives from Control has a dedicated method that handles this custom message by invoking the supplied SendOrPostCallback delegate. At some point, the UI thread processes the custom Windows message and the delegate is invoked.

Because the caller may already be executing on the correct synchronization context, to avoid a deadlock when calling Send( ), the implementation of the Windows Forms synchronization context verifies whether marshaling the call is actually required. If marshaling is not required, it invokes the callback directly on the calling thread.
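Conceptually, that verification resembles the following sketch. This is not the actual WindowsFormsSynchronizationContext implementation, merely an illustration of the pattern:

```csharp
//Conceptual sketch only, not the actual Windows Forms implementation
public override void Send(SendOrPostCallback callback,object state)
{
   if(SynchronizationContext.Current == this)
   {
      //Already on the UI thread: invoke directly to avoid a deadlock
      callback(state);
   }
   else
   {
      //Post a custom Windows message to the UI thread and block until handled
      ...
   }
}
```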

UI access and updates

When a service needs to update a user interface, it must have some proprietary mechanisms to find the window to update in the first place. And once the service has the correct window, it must somehow get hold of that window's synchronization context and marshal the call to it. Such a possible interaction is shown in Example 8.6, "Using the form synchronization context".

Example 8.6. Using the form synchronization context

partial class MyForm : Form
{
   Label m_CounterLabel;
   public SynchronizationContext MySynchronizationContext
   {get;set;}

   public MyForm(  )
   {
      InitializeComponent(  );
      MySynchronizationContext = SynchronizationContext.Current;
   }
   void InitializeComponent(  )
   {
      ...
      m_CounterLabel = new Label(  );
      ...
   }

   public int Counter
   {
      get
      {
         return Convert.ToInt32(m_CounterLabel.Text);
      }
      set
      {
         m_CounterLabel.Text = value.ToString(  );
      }
   }
}
[ServiceContract]
interface IFormManager
{
   [OperationContract]
   void IncrementLabel(  );
}
class MyService : IFormManager
{
   public void IncrementLabel(  )
   {
      MyForm form = Application.OpenForms[0] as MyForm;
      Debug.Assert(form != null);

      SendOrPostCallback callback = delegate
                                    {
                                       form.Counter++;
                                    };
      form.MySynchronizationContext.Send(callback,null);
   }
}
static class Program
{
   static void Main(  )
   {
      ServiceHost host = new ServiceHost(typeof(MyService));
      host.Open(  );

      Application.Run(new MyForm(  ));

      host.Close(  );
   }
}

Example 8.6, "Using the form synchronization context" shows the form MyForm, which provides the MySynchronizationContext property that allows its clients to obtain its synchronization context. MyForm initializes MySynchronizationContext in its constructor by obtaining the synchronization context of the current thread. The thread has a synchronization context because the constructor of MyForm is called after the constructor of its topmost base class, Control, was called, and Control has already attached the Windows Forms synchronization context to the thread in its constructor.

MyForm also offers a Counter property that updates the value of a counting Windows Forms label. Only the thread that owns the form can access that label. MyService implements the IncrementLabel( ) operation. In that operation, the service obtains a reference to the form via the static OpenForms collection of the Application class:

public class FormCollection : ReadOnlyCollectionBase
{
   public virtual Form this[int index]
   {get;}
   public virtual Form this[string name]
   {get;}
}

public sealed class Application
{
   public static FormCollection OpenForms
   {get;}
   //Rest of the members
}

Once IncrementLabel( ) has the form to update, it accesses the synchronization context via the MySynchronizationContext property and calls the Send( ) method. Send( ) is provided with an anonymous method that accesses the Counter property. Example 8.6, "Using the form synchronization context" is a concrete example of the programming model shown in Example 8.4, "Calling a resource on the correct synchronization context", and it suffers from the same deficiency: namely, tight coupling between all service operations and the form. If the service needs to update multiple controls, that also results in a cumbersome programming model. Any change to the user interface layout, the controls on the forms, and the required behavior is likely to cause major changes to the service code.

Safe controls

A better approach is to encapsulate the interaction with the Windows Forms synchronization context in safe controls or safe methods on the form, to decouple them from the service and to simplify the overall programming model. Example 8.7, "Encapsulating the synchronization context" lists the code for SafeLabel, a Label-derived class that provides thread-safe access to its Text property. Because SafeLabel derives from Label, you still have full design-time visual experience and integration with Visual Studio, yet you can surgically affect just the property that requires the safe access.

Example 8.7. Encapsulating the synchronization context

public class SafeLabel : Label
{
   SynchronizationContext m_SynchronizationContext =
                                                   SynchronizationContext.Current;
   override public string Text
   {
      set
      {
         SendOrPostCallback setText = (text)=>
                                      {
                                         base.Text = text as string;
                                      };
         m_SynchronizationContext.Send(setText,value);
      }
      get
      {
         string text = String.Empty;
         SendOrPostCallback getText = delegate
                                      {
                                         text = base.Text;
                                      };
         m_SynchronizationContext.Send(getText,null);
         return text;
      }
   }
}

Upon construction, SafeLabel caches its synchronization context. SafeLabel overrides its base class's Text property and uses an anonymous method in the get and set accessors to send the call to the correct UI thread. Note in the get accessor the use of an outer variable to return a value from Send( ), as discussed previously. Using SafeLabel, the code in Example 8.6, "Using the form synchronization context" is reduced to the code shown in Example 8.8, "Using a safe control".

Example 8.8. Using a safe control

class MyForm : Form
{
   Label m_CounterLabel;

   public MyForm(  )
   {
      InitializeComponent(  );
   }
   void InitializeComponent(  )
   {
      ...
      m_CounterLabel = new SafeLabel(  );
      ...
   }
   public int Counter
   {
      get
      {
         return Convert.ToInt32(m_CounterLabel.Text);
      }
      set
      {
         m_CounterLabel.Text = value.ToString(  );
      }
   }
}
class MyService : IFormManager
{
   public void IncrementLabel(  )
   {
      MyForm form = Application.OpenForms[0] as MyForm;
      Debug.Assert(form != null);

      form.Counter++;
   }
}

Note in Example 8.8, "Using a safe control" that the service simply accesses the form directly:

form.Counter++;

and that the form is written as a normal form. Example 8.8, "Using a safe control" is a concrete example of the programming model shown in Example 8.5, "Encapsulating the synchronization context".

Tip
ServiceModelEx contains not only SafeLabel but also other controls you are likely to update at runtime, such as SafeButton, SafeListBox, SafeProgressBar, SafeStatusBar, SafeTrackBar, and SafeTextBox.

The programming techniques shown so far put the onus of accessing the resource on the correct thread squarely on the service or resource developer. It would be preferable if the service had a way of associating itself with a particular synchronization context, and could have WCF detect that context and automatically marshal the call from the worker thread to the associated service synchronization context. In fact, WCF lets you do just that. You can instruct WCF to maintain an affinity between all service instances from a particular host and a specific synchronization context. The ServiceBehavior attribute offers the UseSynchronizationContext Boolean property, defined as:

[AttributeUsage(AttributeTargets.Class)]
public sealed class ServiceBehaviorAttribute : ...
{
   public bool UseSynchronizationContext
   {get;set;}
   //More members
}

The affinity between the service type, its host, and a synchronization context is locked in when the host is opened. If the thread opening the host has a synchronization context and UseSynchronizationContext is true, WCF will establish an affinity between that synchronization context and all instances of the service hosted by that host. WCF will automatically marshal all incoming calls to the service's synchronization context. All the thread-specific information stored in the TLS, such as the client's transaction or the security information (discussed in Chapter 10, Security), will be marshaled correctly to the synchronization context.

If UseSynchronizationContext is false, regardless of any synchronization context the opening thread might have, the service will have no affinity to any synchronization context. Likewise, even if UseSynchronizationContext is true, if the opening thread has no synchronization context the service will not have one either.

The default value of UseSynchronizationContext is true, so these definitions are equivalent:

[ServiceContract]
interface IMyContract
{...}

class MyService : IMyContract
{...}
[ServiceBehavior(UseSynchronizationContext = true)]
class MyService : IMyContract
{...}

Hosting on the UI Thread

The classic use for UseSynchronizationContext is to enable the service to update user interface controls and windows directly, without resorting to techniques such as those illustrated in Example 8.6, "Using the form synchronization context" and Example 8.7, "Encapsulating the synchronization context". WCF greatly simplifies UI updates by providing an affinity between all service instances from a particular host and a specific UI thread. To achieve that end, host the service on the UI thread that also creates the windows or controls with which the service needs to interact. Since the Windows Forms synchronization context is established during the instantiation of the base window, you need to open the host after that. For example, this sequence from Example 8.6, "Using the form synchronization context":

ServiceHost host = new ServiceHost(typeof(MyService));
host.Open(  );

Application.Run(new MyForm(  ));

will not have the host associate itself with the form synchronization context, since the host is opened before the form is created.

However, this minute change in the order of the lines of instantiation will achieve the desired effect:

Form form = new MyForm(  );

ServiceHost host = new ServiceHost(typeof(MyService));
host.Open(  );

Application.Run(form);

Although this change has no apparent effect in classic .NET, it is actually monumental for WCF, since now the thread that opened the host does have a synchronization context, and the host will use it for all calls to the service. The problem with this approach is that it is fragile: most developers maintaining your code will not be aware that simply rearranging these seemingly independent lines of code will have this effect. It is also wrong to design the form and the service that needs to update it so that they are both at the mercy of the Main( ) method and the hosting code to such a degree.

The simple solution is to have the window or form that the service needs to interact with be the one that opens the host before loading the form, as shown in Example 8.9, "The form hosting the service".

Example 8.9. The form hosting the service

class MyService : IMyContract
{...}

partial class HostForm : Form
{
   ServiceHost m_Host;
   Label m_CounterLabel;

   public HostForm(  )
   {
      InitializeComponent(  );

      m_Host = new ServiceHost(typeof(MyService));

      m_Host.Open(  );
   }
   void OnFormClosed(object sender,EventArgs e)
   {
      m_Host.Close(  );
   }

   public int Counter
   {
      get
      {
         return Convert.ToInt32(m_CounterLabel.Text);
      }
      set
      {
         m_CounterLabel.Text = value.ToString(  );
      }
   }
}
static class Program
{
   static void Main(  )
   {
      Application.Run(new HostForm(  ));
   }
}

The service in Example 8.9, "The form hosting the service" defaults to using whichever synchronization context its host encounters. The form HostForm stores the service host in a member variable so that the form can close the service when the form is closed. The constructor of HostForm already has a synchronization context, so when it opens the host, an affinity to that synchronization context is established.

Accessing the form

Even though the form hosts the service in Example 8.9, "The form hosting the service", the service instances must have some proprietary application-specific mechanism to reach into the form. If a service instance needs to update multiple forms, you can use the Application.OpenForms collection (as in Example 8.6, "Using the form synchronization context") to find the correct form. Once the service has the form, it can freely access it directly, as opposed to the code in Example 8.6, "Using the form synchronization context", which required marshaling:

class MyService : IFormManager
{
   public void IncrementLabel(  )
   {
      HostForm form = Application.OpenForms[0] as HostForm;
      Debug.Assert(form != null);
      form.Counter++;
   }
}

You could also store references to the forms to use in static variables, but the problem with such global variables is that if multiple UI threads are used to pump messages to different instances of the same form type, you cannot use a single static variable for each form type; you need a static variable for each thread used, which complicates things significantly.

Instead, the form (or forms) can store a reference to itself in the TLS, and have the service instance access that store and obtain the reference. However, using the TLS is a cumbersome and non-type-safe programming model. An improvement on this approach is to use thread-relative static variables. By default, static variables are visible to all threads in an app domain. With thread-relative static variables, each thread in the app domain gets its own copy of the static variable. You use the ThreadStaticAttribute to mark a static variable as thread-relative. Thread-relative static variables are always thread-safe because they can be accessed only by a single thread and because each thread gets its own copy of the static variable. Thread-relative static variables are stored in the TLS, yet they provide a type-safe, simplified programming model. Example 8.10, "Storing form reference in a thread-relative static variable" demonstrates this technique.

Example 8.10. Storing form reference in a thread-relative static variable

partial class HostForm : Form
{
   Label m_CounterLabel;
   ServiceHost m_Host;

   [ThreadStatic]
   static HostForm m_CurrentForm;

   public static HostForm CurrentForm
   {
      get
      {
         return m_CurrentForm;
      }
      set
      {
         m_CurrentForm = value;
      }
   }
   public int Counter
   {
      get
      {
         return Convert.ToInt32(m_CounterLabel.Text);
      }
      set
      {
         m_CounterLabel.Text = value.ToString(  );
      }
   }
   public HostForm(  )
   {
      InitializeComponent(  );

      CurrentForm = this;

      m_Host = new ServiceHost(typeof(MyService));
      m_Host.Open(  );
   }
   void OnFormClosed(object sender,EventArgs e)
   {
      m_Host.Close(  );
   }
}
[ServiceContract]
interface IFormManager
{
   [OperationContract]
   void IncrementLabel(  );
}
class MyService : IFormManager
{
   public void IncrementLabel(  )
   {
      HostForm form = HostForm.CurrentForm;
      form.Counter++;
   }
}
static class Program
{
   static void Main(  )
   {
      Application.Run(new HostForm(  ));
   }
}

The form HostForm stores a reference to itself in a thread-relative static variable called m_CurrentForm. The service accesses the static property CurrentForm and obtains a reference to the instance of HostForm on that UI thread.

Multiple UI threads

Your service host process can actually have multiple UI threads, each pumping messages to its own set of windows. Such a setup is usually required with UI-intensive applications that want to avoid having multiple windows sharing a single UI thread and hosting the services, because while the UI thread is processing a service call (or a complicated UI update), not all of the windows will be responsive. Since the service synchronization context is established per host, if you have multiple UI threads you will need to open a service host instance for the same service type on each UI thread. Each service host will therefore have a different synchronization context for its service instances. As mentioned in Chapter 1, WCF Essentials, in order to have multiple hosts for the same service type, you must provide each host with a different base address. The easiest way of doing that is to provide the form constructor with the base address to use as a construction parameter. I also recommend in such a case to use base address-relative addresses for the service endpoints. The clients will still invoke calls on the various service endpoints, yet each endpoint will now correspond to a different host, according to the base address schema and the binding used. Example 8.11, "Hosting on multiple UI threads" demonstrates this configuration.

Example 8.11. Hosting on multiple UI threads

partial class HostForm : Form
{
   public HostForm(string baseAddress)
   {
      InitializeComponent(  );

      CurrentForm = this;

      m_Host = new ServiceHost(typeof(MyService),new Uri(baseAddress));
      m_Host.Open(  );
   }
   //Rest same as Example 8-10
}
static class Program
{
   static void Main(  )
   {
      ParameterizedThreadStart threadMethod = (baseAddress)=>
                                              {
                                                 string address = baseAddress as string;
                                                 Application.Run(new HostForm(address));
                                              };
      Thread thread1 = new Thread(threadMethod);
      thread1.Start("http://localhost:8001/");

      Thread thread2 = new Thread(threadMethod);
      thread2.Start("http://localhost:8002/");
   }
}
/* MyService same as Example 8-10 */

////////////////////////////// Host Config File //////////////////////////////
<services>
   <service name = "MyService">
      <endpoint
         address  = "MyService"
         binding  = "basicHttpBinding"
         contract = "IFormManager"
      />
   </service>
</services>
////////////////////////////// Client Config File ////////////////////////////
<client>
   <endpoint name = "Form A"
      address  = "http://localhost:8001/MyService/"
      binding  = "basicHttpBinding"
      contract = "IFormManager"
   />
   <endpoint name = "Form B"
      address  = "http://localhost:8002/MyService/"
      binding  = "basicHttpBinding"
      contract = "IFormManager"
   />
</client>

In Example 8.11, "Hosting on multiple UI threads", the Main( ) method launches two UI threads, each with its own instance of HostForm. Each form instance accepts as a construction parameter a base address that it in turn provides for its own host instance. Once the host is opened, it establishes an affinity to that UI thread's synchronization context. Calls from the client to the corresponding base address are now routed to the respective UI thread.

A Form As a Service

The main motivation for hosting a WCF service on a UI thread is when the service needs to update the UI or the form. The problem is, how does the service reach out and obtain a reference to the form? While the techniques and ideas shown in the examples so far certainly work, the separation between the service and the form is artificial. It would be simpler if the form were the service and hosted itself. For this to work, the form (or any window) must be a singleton service. The reason is that singleton is the only instancing mode that enables you to provide WCF with a live instance to host. In addition, it wouldn't be desirable to use a per-call form that exists only during a client call (which is usually very brief), or a sessionful form that only a single client can establish a session with and update. When a form is also a service, having that form as a singleton is the best instancing mode all around. Example 8.12, "Form as a singleton service" lists just such a service.

Example 8.12. Form as a singleton service

[ServiceContract]
interface IFormManager
{
   [OperationContract]
   void IncrementLabel(  );
}
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
partial class MyForm : Form,IFormManager
{
   Label m_CounterLabel;
   ServiceHost m_Host;

   public MyForm(  )
   {
      InitializeComponent(  );
      m_Host = new ServiceHost(this);
      m_Host.Open(  );
   }
   void OnFormClosed(object sender,EventArgs args)
   {
      m_Host.Close(  );
   }
   public void IncrementLabel(  )
   {
      Counter++;
   }
   public int Counter
   {
      get
      {
         return Convert.ToInt32(m_CounterLabel.Text);
      }
      set
      {
         m_CounterLabel.Text = value.ToString(  );
      }
   }
}

MyForm implements the IFormManager contract and is configured as a WCF singleton service. MyForm has a ServiceHost as a member variable, as before. When MyForm constructs the host, it uses the host constructor that accepts an object reference, as shown in Chapter 4, Instance Management. MyForm passes itself as the object. MyForm opens the host when the form is created and closes the host when the form is closed. Updating the form's controls as a result of client calls is done by accessing them directly, because the form, of course, runs in its own synchronization context.

The FormHost<F> class

You can streamline and automate the code in Example 8.12, "Form as a singleton service" using my FormHost<F> class, defined as:

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public abstract class FormHost<F> : Form where F : Form
{
   public FormHost(params string[] baseAddresses);

   protected ServiceHost<F> Host
   {get;}
}

Using FormHost<F>, Example 8.12, "Form as a singleton service" is reduced to:

partial class MyForm : FormHost<MyForm>,IFormManager
{
   Label m_CounterLabel;

   public MyForm(  )
   {
      InitializeComponent(  );
   }
   public void IncrementLabel(  )
   {
      Counter++;
   }
   public int Counter
   {
      get
      {
         return Convert.ToInt32(m_CounterLabel.Text);
      }
      set
      {
         m_CounterLabel.Text = value.ToString(  );
      }
   }
}
Tip
The Windows Forms designer is incapable of rendering a form that has an abstract base class, let alone one that uses generics. You will have to change the base class to Form for visual editing, then revert to FormHost<F> for debugging. To compensate, copy the Debug configuration into a new solution configuration called Design, then add the DESIGN symbol to the Design configuration. Finally, define the form to render properly in design mode and to execute properly in debug and release modes:
#if DESIGN
public partial class MyForm : Form,IFormManager
#else
public partial class MyForm :
                        FormHost<MyForm>,IFormManager
#endif
{...}

Example 8.13, "Implementing FormHost<F>" shows the implementation of FormHost<F>.

Example 8.13. Implementing FormHost<F>

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public abstract class FormHost<F> : Form where F : Form
{
   protected ServiceHost<F> Host
   {get;private set;}

   public FormHost(params string[] baseAddresses)
   {
      Host = new ServiceHost<F>(this as F,baseAddresses);

      Load += delegate
              {
                 if(Host.State == CommunicationState.Created)
                 {
                    Host.Open(  );
                 }
              };
      FormClosed += delegate
                    {
                       if(Host.State == CommunicationState.Opened)
                       {
                          Host.Close(  );
                       }
                    };
   }
}

FormHost<F> is an abstract generic class configured as a singleton service. It takes a single type parameter, F, which is constrained to be a Windows Forms Form class. FormHost<F> uses my ServiceHost<T> as a member variable, specifying F as the type parameter for the host. FormHost<F> offers the derived forms access to the host, mostly for advanced configuration, which is why the Host property is marked as protected. The constructor of FormHost<F> creates the host, but does not open it. The reason is that the subform may want to perform some host initialization, such as configuring a throttle, and this initialization can only be done before opening the host. The subclass should place that initialization in its own constructor:

public MyForm(  )
{
   InitializeComponent(  );
   Host.SetThrottle(10,20,1);
}

To allow for this, the constructor uses an anonymous method to subscribe to the form's Load event, where it first verifies that the subform has not yet opened the host and then opens it. In a similar manner, the constructor subscribes to the form's FormClosed event, where it closes the host.

The UI Thread and Concurrency Management

Whenever you use hosting on the UI thread (or in any other case of a single-thread affinity synchronization context), deadlocks are possible. For example, the following setup is guaranteed to result in a deadlock: a Windows Forms application is hosting a service with UseSynchronizationContext set to true, and UI thread affinity is established; the Windows Forms application then calls the service in-proc over one of its endpoints. The call to the service blocks the UI thread, while WCF posts a message to the UI thread to invoke the service. That message is never processed because the UI thread is blocked: hence the deadlock.

Another possible case for a deadlock occurs when a Windows Forms application is hosting a service with UseSynchronizationContext set to true and UI thread affinity established. The service receives a call from a remote client, which is marshaled to the UI thread and eventually executed on that thread. If the service is allowed to call out to another service, that may result in a deadlock if the callout causality tries somehow to update the UI or call back to the service's endpoint, since all service instances associated with any endpoint (regardless of the service instancing mode) share the same UI thread. Similarly, you risk a deadlock if the service is configured for reentrancy and it calls back to its client: a deadlock will occur if the callback causality tries to update the UI or enter the service, since that reentrance must be marshaled to the blocked UI thread.

UI responsiveness

Every client call to a service hosted on the UI thread is converted to a Windows message and is eventually executed on the UI thread: the same thread that is responsible for updating the UI and for continuing to respond to user input, as well as updating the user about the state of the application. While the UI thread is processing the service call, it does not process UI messages. Consequently, you should avoid lengthy processing in the service operation, because that can severely degrade the UI's responsiveness. You can alleviate this somewhat by pumping Windows messages in the service operation, either by explicitly calling the static method Application.DoEvents( ) to process all the queued-up Windows messages or by using a method such as MessageBox.Show( ) that pumps some but not all of the queued messages. The downside of trying to refresh the UI this way is that it may dispatch queued client calls to the service instance and may cause unwanted reentrancy or a deadlock.

To make things even worse, what if clients dispatch a number of calls to the service all at once? Depending on the service concurrency mode (discussed next), even if those service calls are of short duration, the calls will all be queued back-to-back in the Windows message queue, and processing them in order might take time; all the while, the UI will not be updated.

Whenever you're hosting on a UI thread, carefully examine the calls' duration and frequency to see whether the resulting degradation in UI responsiveness is acceptable. What is acceptable may be application-specific, but as a rule of thumb, most users will not mind a UI latency of less than half a second, will notice a delay of more than three quarters of a second, and will be annoyed if the delay is more than a second. If that is the case, consider hosting parts of the UI (and the associated services) on multiple UI threads, as explained previously. By having multiple UI threads you maximize responsiveness, because while one thread is busy servicing a client call, the rest can still update their windows and controls. If using multiple UI threads is impossible in your application and processing service calls introduces unacceptable UI latency, examine what the service operations do and what is causing the latency. Typically, the latency is caused not by the UI updates but rather by lengthy operations, such as calling other services, or computationally intensive operations, such as image processing. Because the service is hosted on the UI thread, WCF performs all of that work on the UI thread, not just the critical part that interacts with the UI directly. If that is indeed your situation, disallow the affinity to the UI thread altogether by setting UseSynchronizationContext to false:

[ServiceBehavior(UseSynchronizationContext = false)]
class MyService : IMyContract
{
   public void MyMethod(  )
   {
      Debug.Assert(Application.MessageLoop == false);
      //Rest of the implementation
   }
}

(You can even assert that the thread executing the service call does not have a message loop.) Perform the lengthy operations on the incoming worker thread, and use safe controls (such as SafeLabel) to marshal the calls to the UI thread only when required, as opposed to all the time. The downside of this approach is that it is an expert programming model: the service cannot be the window or form itself (by relying on the simplicity of FormHost<F>), so you need a way of binding to the form, and the service developer has to work together with the UI developers to ensure they use the safe controls or provide access to the form's synchronization context.

The UI thread and concurrency modes

A service with a UI thread affinity is inherently thread-safe, because only that UI thread can ever call its instances. Since only a single thread (and the same thread, at that) can ever access an instance, that instance is by definition thread-safe. Since the service is single-threaded anyway, configuring the service with ConcurrencyMode.Single adds no safety. When you configure with ConcurrencyMode.Single, concurrent client calls are first queued up by the instance lock and then dispatched to the service's message loop one at a time, in order. These client calls are therefore given the opportunity to be interleaved with other UI Windows messages. ConcurrencyMode.Single thus yields the best responsiveness, because the UI thread will alternate between processing client calls and user interactions. When you configure the service with ConcurrencyMode.Multiple, client calls are dispatched to the service message loop as soon as they arrive off the channel and are invoked in order. The problem is that this mode allows the possibility of a batch of client calls arriving either back-to-back or in close proximity to each other in the Windows message queue, and while the UI thread processes that batch, the UI will be unresponsive. Consequently, ConcurrencyMode.Multiple is the worst option for UI responsiveness. When configured with ConcurrencyMode.Reentrant, the service is not reentrant at all, and deadlocks are still possible, as explained at the beginning of this section. Clearly, the best practice with UI thread affinity is to configure the service with ConcurrencyMode.Single. Avoid ConcurrencyMode.Multiple due to its detrimental effect on responsiveness, and avoid ConcurrencyMode.Reentrant due to its unfulfilled promise of reentrancy.

While a synchronization context is a general-purpose pattern, out of the box, .NET only implements a single useful one: the Windows Forms synchronization context (there is also the default implementation that uses the .NET thread pool). As it turns out, the ability to automatically marshal calls to a custom synchronization context is one of the most powerful extensibility mechanisms in WCF.

The Thread Pool Synchronizer

There are two aspects to developing a custom service synchronization context: the first is implementing a custom synchronization context, and the second is installing it or even applying it declaratively on the service. ServiceModelEx contains my ThreadPoolSynchronizer class, defined as:

public class ThreadPoolSynchronizer : SynchronizationContext,IDisposable
{
   public ThreadPoolSynchronizer(uint poolSize);
   public ThreadPoolSynchronizer(uint poolSize,string poolName);

   public void Dispose(  );
   public void Close(  );
   public void Abort(  );

   protected Semaphore CallQueued
   {get;}
}

Implementing a custom synchronization context has nothing to do with WCF and is therefore not discussed in this book, although the implementation code is available with ServiceModelEx.

ThreadPoolSynchronizer marshals all calls to a custom thread pool, where the calls are first queued up, then multiplexed on the available threads. The size of the pool is provided as a construction parameter. If the pool is maxed out, any calls that come in will remain pending in the queue until a thread is available.

You can also provide a pool name (which will be the prefix of the name of each of the threads in the pool). Disposing of or closing the ThreadPoolSynchronizer kills all threads in the pool gracefully; that is, the ThreadPoolSynchronizer waits for the engaged threads to complete their tasks. The Abort( ) method is an ungraceful shutdown, as it terminates all threads abruptly.
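To give a feel for what such an implementation involves, here is a minimal single-thread sketch of the same idea. It is my illustration only, not the ServiceModelEx code: posted callbacks are queued up and executed, in order, by one dedicated worker thread, and closing drains the queue gracefully.

```csharp
using System.Collections.Generic;
using System.Threading;

//Minimal sketch of a custom synchronization context (illustration only,
//not the ServiceModelEx implementation): one dedicated worker thread
//executes all posted callbacks in order of arrival.
public class SingleThreadSynchronizer : SynchronizationContext
{
   readonly Queue<KeyValuePair<SendOrPostCallback,object>> m_Queue =
                        new Queue<KeyValuePair<SendOrPostCallback,object>>();
   readonly Thread m_Worker;
   bool m_Closed;

   public SingleThreadSynchronizer()
   {
      m_Worker = new Thread(Run);
      m_Worker.Start();
   }
   void Run()
   {
      while(true)
      {
         KeyValuePair<SendOrPostCallback,object> item;
         lock(m_Queue)
         {
            while(m_Queue.Count == 0 && m_Closed == false)
            {
               Monitor.Wait(m_Queue);
            }
            if(m_Queue.Count == 0)   //Closed and fully drained
            {
               return;
            }
            item = m_Queue.Dequeue();
         }
         item.Key(item.Value);   //Invoke the callback outside the lock
      }
   }
   public override void Post(SendOrPostCallback d,object state)
   {
      lock(m_Queue)
      {
         m_Queue.Enqueue(new KeyValuePair<SendOrPostCallback,object>(d,state));
         Monitor.Pulse(m_Queue);
      }
   }
   public override void Send(SendOrPostCallback d,object state)
   {
      using(ManualResetEvent done = new ManualResetEvent(false))
      {
         Post(delegate { d(state); done.Set(); },null);
         done.WaitOne();   //Would deadlock if called from the worker itself
      }
   }
   public void Close()   //Graceful: waits for queued work to complete
   {
      lock(m_Queue)
      {
         m_Closed = true;
         Monitor.Pulse(m_Queue);
      }
      m_Worker.Join();
   }
}
```

Calling Post( ) from any thread hands the work to the worker thread, while Send( ) additionally blocks until the callback completes. A full implementation such as ThreadPoolSynchronizer generalizes this to a pool of threads and adds Abort( ) semantics.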

The classic use for a custom thread pool is with a server application (such as a web server or an email server) that needs to maximize its throughput by controlling the underlying worker threads and their assignment. However, such usage is rare, since most application developers do not write servers anymore. The real use of ThreadPoolSynchronizer is as a stepping-stone to implement other synchronization contexts, which are useful in their own right.

To associate your service with the custom thread pool, you can manually attach ThreadPoolSynchronizer to the thread opening the host using the static SetSynchronizationContext( ) method of SynchronizationContext, as shown in Example 8.14, "Using ThreadPoolSynchronizer".

Example 8.14. Using ThreadPoolSynchronizer

SynchronizationContext syncContext = new ThreadPoolSynchronizer(3);

SynchronizationContext.SetSynchronizationContext(syncContext);

using(syncContext as IDisposable)
{
   ServiceHost host = new ServiceHost(typeof(MyService));
   host.Open(  );
   /* Some blocking operations */

   host.Close(  );
}

In Example 8.14, "Using ThreadPoolSynchronizer", the thread pool will have three threads. The service MyService will have an affinity to those three threads, and all calls to the service will be channeled to them, regardless of the service concurrency mode or instancing mode, and across all endpoints and contracts supported by the service. After closing the host, the example disposes of ThreadPoolSynchronizer to shut down the threads in the pool.

Note that a service executing in a custom thread pool is not thread-safe (unless the pool size is 1), so the preceding discussion of concurrency management still applies. The only difference is that now you control the threads.
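That manual synchronization is not WCF-specific. A plain .NET sketch (the Counter class here is hypothetical, purely for illustration) shows the kind of locking an instance needs once multiple pool threads can enter it concurrently:

```csharp
using System.Threading;

//Hypothetical in-memory state shared by concurrent calls. With a pool
//size greater than 1, every access to the state must be synchronized.
class Counter
{
   readonly object m_Lock = new object();
   int m_Count;

   public void Increment()
   {
      lock(m_Lock)   //Without the lock, concurrent increments can be lost
      {
         m_Count++;
      }
   }
   public int Count
   {
      get
      {
         lock(m_Lock)
         {
            return m_Count;
         }
      }
   }
}
```

With the lock in place, three threads each performing 100,000 increments always yield a count of exactly 300,000; remove the lock and the unsynchronized read-increment-write cycles can interleave, so the total may come up short.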

Declaratively attaching a custom synchronization context

The problem with Example 8.14, "Using ThreadPoolSynchronizer" is that the service is at the mercy of the hosting code. If by design the service is required to execute in the pool, it would be better to apply the thread pool declaratively, as part of the service definition.

To that end, I wrote the ThreadPoolBehaviorAttribute:

[AttributeUsage(AttributeTargets.Class)]
public class ThreadPoolBehaviorAttribute : Attribute,
                                                 IContractBehavior,IServiceBehavior
{
   public ThreadPoolBehaviorAttribute(uint poolSize,Type serviceType);
   public ThreadPoolBehaviorAttribute(uint poolSize,Type serviceType,
                                      string poolName);
}

You apply this attribute directly on the service, while providing the service type as a constructor parameter:

[ThreadPoolBehavior(3,typeof(MyService))]
class MyService : IMyContract
{...}

The attribute provides an instance of ThreadPoolSynchronizer to the dispatchers of the service's endpoints. The key in implementing the ThreadPoolBehavior attribute is knowing how and when to hook up the dispatchers with the synchronization context. The ThreadPoolBehavior attribute supports the special WCF extensibility interface IContractBehavior, introduced in Chapter 5, Operations:

public interface IContractBehavior
{
   void ApplyDispatchBehavior(ContractDescription description,
                              ServiceEndpoint endpoint,
                              DispatchRuntime dispatchRuntime);
   //More members
}

When a service is decorated with an attribute that supports IContractBehavior, after opening the host (but before forwarding calls to the service), for each service endpoint WCF calls the ApplyDispatchBehavior( ) method and provides it with the DispatchRuntime parameter, allowing you to affect an individual endpoint dispatcher's runtime and set its synchronization context. Each endpoint has its own dispatcher, and each dispatcher has its own synchronization context, so the attribute is instantiated and ApplyDispatchBehavior( ) is called for each endpoint.

Example 8.15, "Implementing ThreadPoolBehaviorAttribute" lists most of the implementation of ThreadPoolBehaviorAttribute.

Example 8.15. Implementing ThreadPoolBehaviorAttribute

[AttributeUsage(AttributeTargets.Class)]
public class ThreadPoolBehaviorAttribute : Attribute,IContractBehavior,
                                                                   IServiceBehavior
{
   protected string PoolName
   {get;set;}
   protected uint PoolSize
   {get;set;}
   protected Type ServiceType
   {get;set;}

   public ThreadPoolBehaviorAttribute(uint poolSize,Type serviceType) :
                                                    this(poolSize,serviceType,null)
   {}
   public ThreadPoolBehaviorAttribute(uint poolSize,Type serviceType,
                                      string poolName)
   {
      PoolName    = poolName;
      ServiceType = serviceType;
      PoolSize    = poolSize;
   }
   protected virtual ThreadPoolSynchronizer ProvideSynchronizer(  )
   {
      if(ThreadPoolHelper.HasSynchronizer(ServiceType) == false)
      {
         return new ThreadPoolSynchronizer(PoolSize,PoolName);
      }
      else
      {
         return ThreadPoolHelper.GetSynchronizer(ServiceType);
      }
   }

   void IContractBehavior.ApplyDispatchBehavior(ContractDescription description,
                                                ServiceEndpoint endpoint,
                                                DispatchRuntime dispatchRuntime)
   {
      PoolName = PoolName ?? "Pool executing endpoints of " + ServiceType;

      lock(typeof(ThreadPoolHelper))
      {
         ThreadPoolHelper.ApplyDispatchBehavior(ProvideSynchronizer(  ),
                                    PoolSize,ServiceType,PoolName,dispatchRuntime);
      }
   }
   void IServiceBehavior.Validate(ServiceDescription description,
                                  ServiceHostBase serviceHostBase)
   {
      serviceHostBase.Closed += delegate
                                {
                                   ThreadPoolHelper.CloseThreads(ServiceType);
                                };
   }
   //Rest of the implementation
}
public static class ThreadPoolHelper
{
   static Dictionary<Type,ThreadPoolSynchronizer> m_Synchronizers =
                                     new Dictionary<Type,ThreadPoolSynchronizer>(  );

   [MethodImpl(MethodImplOptions.Synchronized)]
   internal static bool HasSynchronizer(Type type)
   {
      return m_Synchronizers.ContainsKey(type);
   }

   [MethodImpl(MethodImplOptions.Synchronized)]
   internal static ThreadPoolSynchronizer GetSynchronizer(Type type)
   {
      return m_Synchronizers[type];
   }
   [MethodImpl(MethodImplOptions.Synchronized)]
   internal static void ApplyDispatchBehavior(ThreadPoolSynchronizer synchronizer,
                                              uint poolSize,Type type,
                                              string poolName,
                                              DispatchRuntime dispatchRuntime)
   {
      if(HasSynchronizer(type) == false)
      {
         m_Synchronizers[type] = synchronizer;
      }
      dispatchRuntime.SynchronizationContext = m_Synchronizers[type];
   }
   [MethodImpl(MethodImplOptions.Synchronized)]
   public static void CloseThreads(Type type)
   {
      if(HasSynchronizer(type))
      {
         m_Synchronizers[type].Dispose(  );
         m_Synchronizers.Remove(type);
      }
   }
}

The constructors of the ThreadPoolBehavior attribute save the provided service type and pool name. The name is simply passed to the constructor of ThreadPoolSynchronizer.

Tip
The ApplyDispatchBehavior( ) method in Example 8.15, "Implementing ThreadPoolBehaviorAttribute" uses the ?? null-coalescing operator (introduced in C# 2.0) to assign a pool name if required. This expression:
PoolName = PoolName ??
        "Pool executing endpoints of " + ServiceType;
is shorthand for:
if(PoolName == null)
{
   PoolName = "Pool executing endpoints of " +
                                         ServiceType;
}

It is a best practice to separate the implementation of a WCF custom behavior attribute from the actual behavior: let the attribute merely decide on the sequence of events, and have a helper class provide the actual behavior. Doing so enables the behavior to be used separately (for example, by a custom host). This is why the ThreadPoolBehavior attribute does not do much. It delegates most of its work to a static helper class called ThreadPoolHelper. ThreadPoolHelper provides the HasSynchronizer( ) method, which indicates whether the specified service type already has a synchronization context, and the GetSynchronizer( ) method, which returns the synchronization context associated with the type. The ThreadPoolBehavior attribute uses these two methods in the virtual ProvideSynchronizer( ) method to ensure that it creates the pool exactly once per service type. This check is required because ApplyDispatchBehavior( ) may be called multiple times (once per endpoint). The ThreadPoolBehavior attribute is also a custom service behavior, because it implements IServiceBehavior. The Validate( ) method of IServiceBehavior provides the service host instance, which the ThreadPoolBehavior attribute uses to subscribe to the host's Closed event, where it asks ThreadPoolHelper to terminate all the threads in the pool by calling ThreadPoolHelper.CloseThreads( ).

ThreadPoolHelper associates all dispatchers of all endpoints of that service type with the same instance of ThreadPoolSynchronizer. This ensures that all calls are routed to the same pool. ThreadPoolHelper has to be able to map a service type to a particular ThreadPoolSynchronizer, so it declares a static dictionary called m_Synchronizers that uses service types as keys and ThreadPoolSynchronizer instances as values.

In ApplyDispatchBehavior( ), ThreadPoolHelper checks to see whether m_Synchronizers already contains the provided service type. If the type is not found, ThreadPoolHelper adds the provided ThreadPoolSynchronizer to m_Synchronizers, associating it with the service type.

The DispatchRuntime class provides the SynchronizationContext property ThreadPoolHelper uses to assign a synchronization context for the dispatcher:

public sealed class DispatchRuntime
{
   public SynchronizationContext SynchronizationContext
   {get;set;}
   //More members
}

Before making the assignment, ThreadPoolHelper verifies that the dispatcher has no other synchronization context, since that would indicate some unresolved conflict. After that, it simply assigns the ThreadPoolSynchronizer instance to the dispatcher:

dispatchRuntime.SynchronizationContext = m_Synchronizers[type];

This single line is all that is required to have WCF use the custom synchronization context from now on. In the CloseThreads( ) method, ThreadPoolHelper looks up the ThreadPoolSynchronizer instance in the dictionary and disposes of it (thus gracefully terminating all the worker threads in the pool). ThreadPoolHelper also verifies that the provided pool size value does not exceed the maximum concurrent calls value of the dispatcher's throttle (this is not shown in Example 8.15, "Implementing ThreadPoolBehaviorAttribute").

Thread Affinity

A pool size of 1 will in effect create an affinity between a particular thread and all service calls, regardless of the service's concurrency and instancing modes. This is particularly useful if the service is required not merely to update some UI but to also create a UI (for example, creating a pop-up window and then periodically showing, hiding, and updating it). Having created the window, the service must ensure that the creating thread is used to access and update it. Thread affinity is also required for a service that accesses or creates resources that rely on thread-local storage (TLS). To formalize such requirements, I created the specialized AffinitySynchronizer class, implemented as:

public class AffinitySynchronizer : ThreadPoolSynchronizer
{
   public AffinitySynchronizer(  ) : this("AffinitySynchronizer Worker Thread")
   {}
   public AffinitySynchronizer(string threadName): base(1,threadName)
   {}
}

While you can install AffinitySynchronizer, as shown in Example 8.14, "Using ThreadPoolSynchronizer", if by design the service is required to always execute on the same thread it is better not to be at the mercy of the host and the thread that happens to open it. Instead, use my ThreadAffinityBehaviorAttribute:

[ThreadAffinityBehavior(typeof(MyService))]
class MyService : IMyContract
{...}

ThreadAffinityBehaviorAttribute is a specialization of ThreadPoolBehaviorAttribute that hardcodes the pool size as 1, as shown in Example 8.16, "Implementing ThreadAffinityBehaviorAttribute".

Example 8.16. Implementing ThreadAffinityBehaviorAttribute

[AttributeUsage(AttributeTargets.Class)]
public class ThreadAffinityBehaviorAttribute : ThreadPoolBehaviorAttribute
{
   public ThreadAffinityBehaviorAttribute(Type serviceType) :
                                         this(serviceType,"Affinity Worker Thread")
   {}

   public ThreadAffinityBehaviorAttribute(Type serviceType,string threadName) :
                                                     base(1,serviceType,threadName)
   {}
}

When relying on thread affinity all service instances are always thread-safe, since only a single thread (and the same thread, at that) can access them.

When the service is configured with ConcurrencyMode.Single, it gains no additional thread safety because the service instance is single-threaded anyway. You do get double queuing of concurrent calls, though: all concurrent calls to the service are first queued in the lock's queue and then dispatched to the single thread in the pool one at a time. With ConcurrencyMode.Multiple, calls are dispatched to the single thread as fast as they arrive and are then queued up to be invoked later, in order and never concurrently. Finally, with ConcurrencyMode.Reentrant, the service is, of course, not reentrant, because the incoming reentering call will be queued up and a deadlock will occur while the single thread is blocked on the callout. It is therefore best to use the default of ConcurrencyMode.Single when relying on thread affinity.

The host-installed synchronization context

If the affinity to a particular synchronization context is a host decision, you can streamline the code in Example 8.14, "Using ThreadPoolSynchronizer" by encapsulating the installation of the synchronization context with extension methods. For example, the use of thread affinity is such a specialized case that you could define the following extension methods:

public static class HostThreadAffinity
{
   public static void SetThreadAffinity(this ServiceHost host,string threadName);
   public static void SetThreadAffinity(this ServiceHost host);
}

SetThreadAffinity( ) works equally well on ServiceHost and my ServiceHost<T>:

ServiceHost<MyService> host = new ServiceHost<MyService>(  );
host.SetThreadAffinity(  );

host.Open(  );

Example 8.17, "Adding thread affinity to the host" lists the implementation of the SetThreadAffinity( ) methods.

Example 8.17. Adding thread affinity to the host

public static class HostThreadAffinity
{
   public static void SetThreadAffinity(this ServiceHost host,string threadName)
   {
      if(host.State == CommunicationState.Opened)
      {
         throw new InvalidOperationException("Host is already opened");
      }

      AffinitySynchronizer affinitySynchronizer =
                                             new AffinitySynchronizer(threadName);

      SynchronizationContext.SetSynchronizationContext(affinitySynchronizer);

      host.Closing += delegate
                      {
                         using(affinitySynchronizer);
                      };
   }
   public static void SetThreadAffinity(this ServiceHost host)
   {
      SetThreadAffinity(host,"Executing all endpoints of " +
                                                    host.Description.ServiceType);
   }
}

HostThreadAffinity offers two versions of SetThreadAffinity( ): the parameterized version takes the thread name to provide for AffinitySynchronizer's worker thread, while the parameterless version calls the other SetThreadAffinity( ) method, specifying a thread name inferred from the hosted service type (such as "Executing all endpoints of MyService"). SetThreadAffinity( ) first checks that the host has not yet been opened, because you can only attach a synchronization context before the host is opened. If the host has not been opened, SetThreadAffinity( ) constructs a new AffinitySynchronizer, providing it with the thread name to use, and attaches it to the current thread. Finally, SetThreadAffinity( ) subscribes to the host's Closing event in order to call Dispose( ) on the AffinitySynchronizer, to shut down its worker thread. The Closing handler wraps the call in a using statement, which internally checks for a null reference before calling Dispose( ).
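That null-safety of the using statement is easy to verify in isolation; the compiler expands using into a try/finally block that calls Dispose( ) only on a non-null reference (the UsingNullDemo class below is my illustration, not part of ServiceModelEx):

```csharp
using System;

class UsingNullDemo
{
   //Returns a message proving the using statement tolerated a null resource.
   public static string Run()
   {
      IDisposable resource = null;

      //Compiles to: try {...} finally { if(resource != null) resource.Dispose(); }
      using(resource)
      {
         return "a null resource is simply skipped";
      }
   }
   static void Main()
   {
      Console.WriteLine(Run());
   }
}
```

No NullReferenceException is thrown: the generated finally block skips the Dispose( ) call when the resource is null.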

Priority Processing[6]

By default, all calls to your WCF service are processed in the order in which they arrive. This is true whether you use the I/O completion thread pool or a custom thread pool. Normally, this is exactly what you want. But what if some calls have higher priority and you want to process them as soon as they arrive, rather than in order? Even worse, when such calls arrive, what if the load on your service is such that the underlying service resources are exhausted? What if the throttle is maxed out? In these cases, your higher-priority calls will be queued just like all the other calls, waiting for the service or its resources to become available. Synchronization contexts offer an elegant solution to this problem: you can assign a priority to each call and have the synchronization context sort the calls as they arrive, before dispatching them to the thread pool for execution. This is exactly what my PrioritySynchronizer class does:

public enum CallPriority
{
   Low,
   Normal,
   High
}
public class PrioritySynchronizer : ThreadPoolSynchronizer
{
   public PrioritySynchronizer(uint poolSize);
   public PrioritySynchronizer(uint poolSize,string poolName);

   public static CallPriority Priority
   {get;set;}
}

PrioritySynchronizer derives from ThreadPoolSynchronizer and adds the sorting just mentioned. Since the Send( ) and Post( ) methods of SynchronizationContext do not take a priority parameter, the client of PrioritySynchronizer has two ways of passing the priority of the call: via the Priority property, which stores the priority (a value of the enum type CallPriority) in the TLS of the calling thread, or via the message headers. If unspecified, Priority defaults to CallPriority.Normal.
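The internals of PrioritySynchronizer are not listed here, but the sorting at its heart can be sketched independently (the PriorityWorkQueue class below is my illustration, not the ServiceModelEx code): keep one FIFO queue per priority level, and always dequeue from the highest non-empty level.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

//Mirrors the chapter's CallPriority enum, repeated here for self-containment.
public enum CallPriority
{
   Low,
   Normal,
   High
}
//Sketch: a dispatch queue that always hands out the highest-priority
//pending work item first (FIFO within each priority level).
public class PriorityWorkQueue
{
   readonly Queue<Action>[] m_Queues = {new Queue<Action>(),
                                        new Queue<Action>(),
                                        new Queue<Action>()};
   public void Enqueue(Action work,CallPriority priority)
   {
      lock(m_Queues)
      {
         m_Queues[(int)priority].Enqueue(work);
         Monitor.Pulse(m_Queues);
      }
   }
   public Action Dequeue()   //Blocks until an item is available
   {
      lock(m_Queues)
      {
         while(true)
         {
            for(int priority = 2;priority >= 0;priority--)   //High first
            {
               if(m_Queues[priority].Count > 0)
               {
                  return m_Queues[priority].Dequeue();
               }
            }
            Monitor.Wait(m_Queues);
         }
      }
   }
}
```

Enqueuing three items tagged Low, High, and Normal (in that order) dequeues them as High, Normal, Low. A pool of worker threads pulling from Dequeue( ) would therefore service the higher-priority calls first, which is the behavior PrioritySynchronizer layers on top of ThreadPoolSynchronizer.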

In addition to the PrioritySynchronizer class, I also provide the matching PriorityCallsBehaviorAttribute, shown in Example 8.18, "Implementing PriorityCallsBehaviorAttribute".

Example 8.18. Implementing PriorityCallsBehaviorAttribute

[AttributeUsage(AttributeTargets.Class)]
public class PriorityCallsBehaviorAttribute : ThreadPoolBehaviorAttribute
{
   public PriorityCallsBehaviorAttribute(uint poolSize,Type serviceType) :
                                                    this(poolSize,serviceType,null)
   {}
   public PriorityCallsBehaviorAttribute(uint poolSize,Type serviceType,
                             string poolName) : base(poolSize,serviceType,poolName)
   {}
   protected override ThreadPoolSynchronizer ProvideSynchronizer(  )
   {
      if(ThreadPoolHelper.HasSynchronizer(ServiceType) == false)
      {
         return new PrioritySynchronizer(PoolSize,PoolName);
      }
      else
      {
         return ThreadPoolHelper.GetSynchronizer(ServiceType);
      }
   }
}

Using the PriorityCallsBehavior attribute is straightforward:

[PriorityCallsBehavior(3,typeof(MyService))]
class MyService : IMyContract
{...}

PriorityCallsBehaviorAttribute overrides ProvideSynchronizer( ) and provides an instance of PrioritySynchronizer instead of ThreadPoolSynchronizer. Because PrioritySynchronizer derives from ThreadPoolSynchronizer, this is transparent as far as ThreadPoolHelper is concerned.

The real challenge in implementing and supporting priority processing is providing the call priority from the client to the service, and ultimately to PrioritySynchronizer. Using the Priority property of PrioritySynchronizer is useful only for non-WCF clients that interact directly with the synchronization context; it is of no use for a WCF client, whose thread is never used to access the service. While you could provide the priority as an explicit parameter in every method, I wanted a generic mechanism that can be applied on any contract and service. To achieve that goal, you have to pass the priority of the call out-of-band, via the message headers, using the techniques described in Appendix B, Headers and Contexts, which explains in detail the use of the incoming and outgoing headers, including augmenting WCF with general-purpose management of extraneous information sent from the client to the service. In effect, I provide a generic yet type-safe and application-specific custom context via my GenericContext<T> class, available in ServiceModelEx:

[DataContract]
public class GenericContext<T>
{
   [DataMember]
   public readonly T Value;

   public GenericContext(  );
   public GenericContext(T value);
   public static GenericContext<T> Current
   {get;set;}
}

Literally any data contract (or serializable) type can be used for the type parameter in the custom context, including of course the CallPriority enum.
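
CallPriority itself, defined earlier in this chapter, is a simple enum along these lines:

enum CallPriority
{
   Low,
   Normal,
   High
}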

On the service side, any party can read the value out of the custom headers:

CallPriority priority = GenericContext<CallPriority>.Current.Value;

This is exactly what PrioritySynchronizer does when looking for the call priority. It expects the client to provide the priority either in the TLS (via the Priority property) or in the form of a custom context that stores the priority in the message headers.
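
For illustration, the lookup PrioritySynchronizer performs could be sketched like this (a sketch only, not the actual ServiceModelEx code; it assumes the Priority TLS property and the GenericContext<CallPriority> custom context shown above):

CallPriority priority;

if(GenericContext<CallPriority>.Current != null)
{
   //WCF client: the priority arrives in the message headers
   priority = GenericContext<CallPriority>.Current.Value;
}
else
{
   //Non-WCF client: the priority comes from the TLS
   priority = PrioritySynchronizer.Priority;
}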

The client can use my HeaderClientBase<T,H> proxy class (also discussed in Appendix B, Headers and Contexts) to pass the priority to the service in the message headers, or, even better, define a general-purpose priority-enabled proxy class, PriorityClientBase<T>, shown in Example 8.19, "Defining PriorityClientBase<T>".

Example 8.19. Defining PriorityClientBase<T>

public abstract partial class PriorityClientBase<T> :
                                   HeaderClientBase<T,CallPriority> where T : class
{
   public PriorityClientBase(  ) : this(PrioritySynchronizer.Priority)
   {}

   public PriorityClientBase(string endpointName) :
                                   this(PrioritySynchronizer.Priority,endpointName)
   {}

   public PriorityClientBase(Binding binding,EndpointAddress remoteAddress) :
                          this(PrioritySynchronizer.Priority,binding,remoteAddress)
   {}

   public PriorityClientBase(CallPriority priority) : base(priority)
   {}

   public PriorityClientBase(CallPriority priority,string endpointName) :
                                                      base(priority,endpointName)
   {}

   public PriorityClientBase(CallPriority priority,Binding binding,
              EndpointAddress remoteAddress) : base(priority,binding,remoteAddress)
   {}
   /* More constructors */
}

PriorityClientBase<T> hardcodes the use of CallPriority for the type parameter H. PriorityClientBase<T> defaults to reading the priority from the TLS (yielding CallPriority.Normal when no priority is found), so it can be used like any other proxy class. With very minor changes to your existing proxy classes, you can now add priority-processing support:

class MyContractClient : PriorityClientBase<IMyContract>,IMyContract
{
   //Reads priority from TLS
   public MyContractClient(  )
   {}

   public MyContractClient(CallPriority priority) : base(priority)
   {}
   public void MyMethod(  )
   {
      Channel.MyMethod(  );
   }
}

MyContractClient proxy = new MyContractClient(CallPriority.High);
proxy.MyMethod(  );

There are quite a few cases when a client might receive concurrent callbacks. For instance, if the client has provided a callback reference to multiple services, those services could call back to the client concurrently. Even if it has only provided a single callback reference, the service might launch multiple threads and use all of them to call on that single reference. Duplex callbacks enter the client on worker threads, and if they are processed concurrently without synchronization they might corrupt the client's state. The client must therefore synchronize access to its own in-memory state, as well as to any resources the callback thread might access. Similar to a service, a callback client can use either manual or declarative synchronization. The CallbackBehavior attribute introduced in Chapter 6, Faults offers the ConcurrencyMode and the UseSynchronizationContext properties:

[AttributeUsage(AttributeTargets.Class)]
public sealed class CallbackBehaviorAttribute : Attribute,...
{
   public ConcurrencyMode ConcurrencyMode
   {get;set;}
   public bool UseSynchronizationContext
   {get;set;}
}

Both of these properties default to the same values as with the ServiceBehavior attribute and behave in a similar manner. For example, the default of the ConcurrencyMode property is ConcurrencyMode.Single, so these two definitions are equivalent:

class MyClient : IMyContractCallback
{...}

[CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Single)]
class MyClient : IMyContractCallback
{...}

Callbacks with ConcurrencyMode.Single

When the callback class is configured with ConcurrencyMode.Single (the default), only one callback at a time is allowed to enter the callback object. The big difference, compared with a service, is that callback objects often have an existence independent of WCF. While the service instance is owned by WCF and only ever accessed by worker threads dispatched by WCF, a callback object may also interact with local client-side threads. In fact, it always interacts with at least one additional thread: the thread that called the service and provided the callback object. These client threads are unaware of the synchronization lock associated with the callback object when it is configured with ConcurrencyMode.Single. All that ConcurrencyMode.Single does for a callback object is serialize the access by WCF threads. You must therefore manually synchronize access to the callback state and any other resource accessed by the callback method, as shown in Example 8.20, "Manually synchronizing the callback with ConcurrencyMode.Single".

Example 8.20. Manually synchronizing the callback with ConcurrencyMode.Single

interface IMyContractCallback
{
   [OperationContract]
   void OnCallback(  );
}
[ServiceContract(CallbackContract = typeof(IMyContractCallback))]
interface IMyContract
{
   [OperationContract]
   void MyMethod(  );
}

class MyClient : IMyContractCallback,IDisposable
{
   MyContractClient m_Proxy;

   public void CallService(  )
   {
      m_Proxy = new MyContractClient(new InstanceContext(this));
      m_Proxy.MyMethod(  );
   }
   //This method invoked by one callback at a time, plus client threads
   public void OnCallback(  )
   {
      //Access state and resources, synchronize manually
      lock(this)
      {...}
   }
   public void Dispose(  )
   {
      m_Proxy.Close(  );
   }
}

Callbacks with ConcurrencyMode.Multiple

When you configure the callback class with ConcurrencyMode.Multiple, WCF will allow concurrent calls on the callback instance. This means you need to synchronize access in the callback operations, as shown in Example 8.21, "Manually synchronizing the callback with ConcurrencyMode.Multiple", because they could be invoked concurrently both by WCF worker threads and by client-side threads.

Example 8.21. Manually synchronizing the callback with ConcurrencyMode.Multiple

[CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
class MyClient : IMyContractCallback,IDisposable
{
   MyContractClient m_Proxy;

   public void CallService(  )
   {
      m_Proxy = new MyContractClient(new InstanceContext(this));
      m_Proxy.MyMethod(  );
   }
   //This method can be invoked concurrently by callbacks,
   //plus client threads
   public void OnCallback(  )
   {
      //Access state and resources, synchronize manually
      lock(this)
      {...}
   }
   public void Dispose(  )
   {
      m_Proxy.Close(  );
   }
}

Callbacks with ConcurrencyMode.Reentrant

The callback object can perform outgoing calls over WCF, and those calls may eventually try to reenter the callback object. To avoid the deadlock that would occur when using ConcurrencyMode.Single, you can configure the callback class with ConcurrencyMode.Reentrant as needed:

[CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
class MyClient : IMyContractCallback
{...}

Configuring the callback for reentrancy also enables other services to call it when the callback object itself is engaged in WCF callouts.
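
For instance, a callback that calls out to another service from inside a callback operation requires reentrancy (a sketch; OtherServiceClient and its DoSomething( ) operation are assumptions introduced for the illustration):

[CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
class MyClient : IMyContractCallback
{
   OtherServiceClient m_OtherProxy = new OtherServiceClient(  );

   public void OnCallback(  )
   {
      //Outgoing WCF call from inside the callback: without reentrancy,
      //a resulting callback into this object would deadlock
      m_OtherProxy.DoSomething(  );
   }
}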

Like a service invocation, a callback may need to access resources that rely on some kind of thread affinity. In addition, the callback instance itself may require thread affinity for its own use of the TLS, or for interacting with a UI thread. While the callback can use techniques such as those in Example 8.4, "Calling a resource on the correct synchronization context" and Example 8.5, "Encapsulating the synchronization context" to marshal the interaction to the resource synchronization context, you can also have WCF associate the callback with a particular synchronization context by setting the UseSynchronizationContext property to true. However, unlike with a service, the client does not use any host to expose the endpoint. If the UseSynchronizationContext property is true, the synchronization context to use is locked in when the proxy is opened (or, more commonly, when the client makes the first call to the service using the proxy, if Open( ) is not explicitly called). If the client is using a channel factory, the synchronization context to use is locked in when the client calls CreateChannel( ). If the calling client thread has a synchronization context, this will be the synchronization context WCF uses for all callbacks to the client's endpoint associated with that proxy. Note that only the first call made on the proxy (or the call to Open( ) or CreateChannel( )) is given the opportunity to determine the synchronization context; subsequent calls have no say in the matter. If the calling client thread has no synchronization context, no synchronization context will be used for the callbacks, even if UseSynchronizationContext is true.
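
For example, a client using a duplex channel factory captures the synchronization context of the calling thread like this (a sketch; the duplex contract IMyContract, the callbackObject variable, and the endpoint name "MyEndpoint" are assumptions):

InstanceContext context = new InstanceContext(callbackObject);
DuplexChannelFactory<IMyContract> factory =
                      new DuplexChannelFactory<IMyContract>(context,"MyEndpoint");

//The synchronization context of this thread (if any) is locked in here:
IMyContract proxy = factory.CreateChannel(  );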

Callbacks and the UI Synchronization Context

If the callback object is running in a Windows Forms synchronization context, or if it needs to update some UI, you must marshal the callbacks or the updates to the UI thread. You can use techniques such as those in Example 8.6, "Using the form synchronization context" or Example 8.8, "Using a safe control". However, the more common use for UI updates over callbacks is to have the form itself implement the callback contract and update the UI, as in Example 8.22, "Relying on the UI synchronization context for callbacks".

Example 8.22. Relying on the UI synchronization context for callbacks

partial class MyForm : Form,IMyContractCallback
{
   MyContractClient m_Proxy;

   public MyForm(  )
   {
      InitializeComponent(  );
      m_Proxy = new MyContractClient(new InstanceContext(this));
   }
   //Called as a result of a UI event
   public void OnCallService(object sender,EventArgs args)
   {
      m_Proxy.MyMethod(  ); //Affinity established here
   }
   //This method always runs on the UI thread
   public void OnCallback(  )
   {
      //No need for synchronization and marshaling
      Text = "Some Callback";
   }
   public void OnClose(object sender,EventArgs args)
   {
      m_Proxy.Close(  );
   }
}

In Example 8.22, "Relying on the UI synchronization context for callbacks" the proxy is first used in the OnCallService( ) method, which is called by the UI thread as a result of some UI event. Calling the proxy on the UI synchronization context establishes the affinity to it, so the callback can directly access and update the UI without marshaling any calls. In addition, since only one thread (and the same thread, at that) will ever execute in the synchronization context, the callback is guaranteed to be synchronized.

You can also explicitly establish the affinity to the UI synchronization context by opening the proxy in the form's constructor without invoking an operation. This is especially useful if you want to dispatch calls to the service on worker threads (or perhaps even asynchronously as discussed at the end of this chapter) and yet have the callbacks enter on the UI synchronization context, as shown in Example 8.23, "Explicitly opening a proxy to establish a synchronization context".

Example 8.23. Explicitly opening a proxy to establish a synchronization context

partial class MyForm : Form,IMyContractCallback
{
   MyContractClient m_Proxy;

   public MyForm(  )
   {
      InitializeComponent(  );

      m_Proxy = new MyContractClient(new InstanceContext(this));

      //Establish affinity to UI synchronization context here:
      m_Proxy.Open(  );
   }
   //Called as a result of a UI event
   public void CallService(object sender,EventArgs args)
   {
      ThreadStart invoke = delegate
                           {
                              m_Proxy.MyMethod(  );
                           };
      Thread thread = new Thread(invoke);
      thread.Start(  );
   }
   //This method always runs on the UI thread
   public void OnCallback(  )
   {
      //No need for synchronization and marshaling
      Text = "Some Callback";
   }
   public void OnClose(object sender,EventArgs args)
   {
      m_Proxy.Close(  );
   }
}

UI thread callbacks and responsiveness

When callbacks are being processed on the UI thread, the UI itself is not responsive. Even if you perform relatively short callbacks, bear in mind that if the callback class is configured with ConcurrencyMode.Multiple, there may be multiple callbacks back-to-back in the UI message queue, and processing them all at once will degrade responsiveness. You should avoid lengthy callback processing on the UI thread, and opt for configuring the callback class with ConcurrencyMode.Single, so that the callback lock queues up the callbacks. They can then be dispatched to the callback object one at a time, giving them the chance to be interleaved among the UI messages.

UI thread callbacks and concurrency management

Configuring the callback for affinity to the UI thread may trigger a deadlock. Suppose a Windows Forms client establishes an affinity between a callback object (or even itself) and the UI synchronization context, and then calls a service, passing the callback reference. The service is configured for reentrancy, and it calls back to the client. A deadlock now occurs because the callback to the client needs to execute on the UI thread, and that thread is blocked waiting for the service call to return. For example, Example 8.22, "Relying on the UI synchronization context for callbacks" has the potential for this deadlock. Configuring the callback as a one-way operation will not resolve the problem here, because the one-way call still needs to be marshaled first to the UI thread. The only way to resolve the deadlock in this case is to turn off using the UI synchronization context by the callback, and to manually and asynchronously marshal the update to the form using its synchronization context. Example 8.24, "Avoiding a callback deadlock on the UI thread" demonstrates using this technique.

Example 8.24. Avoiding a callback deadlock on the UI thread

////////////////////////// Client Side /////////////////////
[CallbackBehavior(UseSynchronizationContext = false)]
partial class MyForm : Form,IMyContractCallback
{
   SynchronizationContext m_Context;
   MyContractClient m_Proxy;
   public MyForm(  )
   {
      InitializeComponent(  );
      m_Context = SynchronizationContext.Current;
      m_Proxy = new MyContractClient(new InstanceContext(this));
   }

   public void CallService(object sender,EventArgs args)
   {
      m_Proxy.MyMethod(  );
   }
   //Callback runs on worker threads
   public void OnCallback(  )
   {
      SendOrPostCallback setText = delegate
                                   {
                                      Text = "Manually marshaling to UI thread";
                                   };
      m_Context.Post(setText,null);
   }
   public void OnClose(object sender,EventArgs args)
   {
      m_Proxy.Close(  );
   }
}
////////////////////////// Service Side /////////////////////
[ServiceContract(CallbackContract = typeof(IMyContractCallback))]
interface IMyContract
{
   [OperationContract]
   void MyMethod(  );
}
interface IMyContractCallback
{
   [OperationContract]
   void OnCallback(  );
}
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
class MyService : IMyContract
{
   public void MyMethod(  )
   {
      IMyContractCallback callback = OperationContext.Current.
                                        GetCallbackChannel<IMyContractCallback>(  );
      callback.OnCallback(  );
   }
}

As shown in Example 8.24, "Avoiding a callback deadlock on the UI thread", you must use the Post( ) method of the synchronization context. Under no circumstances should you use the Send( ) method: even though the callback is executing on a worker thread, the UI thread is still blocked on the outbound call, and calling Send( ) would trigger the very deadlock you are trying to avoid, because Send( ) blocks until the UI thread can process the request. The callback in Example 8.24, "Avoiding a callback deadlock on the UI thread" cannot use any of the safe controls (such as SafeLabel) either, because those too use the Send( ) method.

Callback Custom Synchronization Contexts

As with a service, you can install a custom synchronization context for the use of the callback. All that is required is that the thread that opens the proxy (or calls it for the first time) has the custom synchronization context attached to it. Example 8.25, "Setting custom synchronization context for the callback" shows how to attach my ThreadPoolSynchronizer to the callback object by setting it before using the proxy.

Example 8.25. Setting custom synchronization context for the callback

interface IMyContractCallback
{
   [OperationContract]
   void OnCallback(  );
}
[ServiceContract(CallbackContract = typeof(IMyContractCallback))]
interface IMyContract
{
   [OperationContract]
   void MyMethod(  );
}

class MyClient : IMyContractCallback
{
   //This method always invoked by the same thread
   public void OnCallback(  )
   {....}
}

MyClient client = new MyClient(  );
InstanceContext callbackContext = new InstanceContext(client);
MyContractClient proxy = new MyContractClient(callbackContext);

SynchronizationContext synchronizationContext = new ThreadPoolSynchronizer(3);
SynchronizationContext.SetSynchronizationContext(synchronizationContext);

using(synchronizationContext as IDisposable)
{
   proxy.MyMethod(  );
   /* Some blocking operations until after the callback*/
   proxy.Close(  );
}

While you could manually install a custom synchronization context (as in Example 8.25, "Setting custom synchronization context for the callback") by explicitly setting it before opening the proxy, it is better to do so declaratively, using an attribute. To affect the callback endpoint dispatcher, the attribute needs to implement the IEndpointBehavior interface presented in Chapter 6, Faults:

public interface IEndpointBehavior
{
   void ApplyClientBehavior(ServiceEndpoint endpoint,ClientRuntime clientRuntime);
   //More members
}

In the ApplyClientBehavior method, the ClientRuntime parameter contains a reference to the endpoint dispatcher with the CallbackDispatchRuntime property:

public sealed class ClientRuntime
{
   public DispatchRuntime CallbackDispatchRuntime
   {get;}
   //More members
}

The rest is identical to the service-side attribute, as demonstrated by my CallbackThreadPoolBehaviorAttribute, whose implementation is shown in Example 8.26, "Implementing CallbackThreadPoolBehaviorAttribute".

Example 8.26. Implementing CallbackThreadPoolBehaviorAttribute

[AttributeUsage(AttributeTargets.Class)]
public class CallbackThreadPoolBehaviorAttribute : ThreadPoolBehaviorAttribute,
                                                                  IEndpointBehavior
{
   public CallbackThreadPoolBehaviorAttribute(uint poolSize,Type clientType) :
                                                     this(poolSize,clientType,null)
   {}
   public CallbackThreadPoolBehaviorAttribute(uint poolSize,Type clientType,
                              string poolName) : base(poolSize,clientType,poolName)
   {
      AppDomain.CurrentDomain.ProcessExit += delegate
                                             {
                                        ThreadPoolHelper.CloseThreads(ServiceType);
                                             };
   }
   void IEndpointBehavior.ApplyClientBehavior(ServiceEndpoint serviceEndpoint,
                                              ClientRuntime clientRuntime)
   {
      IContractBehavior contractBehavior = this;
      contractBehavior.ApplyDispatchBehavior(null,serviceEndpoint,
                                            clientRuntime.CallbackDispatchRuntime);
   }
   //Rest of the implementation
}

In fact, I wanted to reuse as much of the service attribute as possible in the callback attribute. To that end, CallbackThreadPoolBehaviorAttribute derives from ThreadPoolBehaviorAttribute. Its constructors pass the client type as the service type to the base constructors. The CallbackThreadPoolBehavior attribute's implementation of ApplyClientBehavior( ) queries its base class for IContractBehavior (this is how a subclass uses an explicit private interface implementation of its base class) and delegates the implementation to ApplyDispatchBehavior( ).

The big difference between a client callback attribute and a service attribute is that the callback scenario has no host object to subscribe to its Closed event. To compensate, the CallbackThreadPoolBehavior attribute monitors the process exit event to close all the threads in the pool.

If the client wants to expedite closing those threads, it can use ThreadPoolHelper.CloseThreads( ), as shown in Example 8.27, "Using the CallbackThreadPoolBehavior attribute".

Example 8.27. Using the CallbackThreadPoolBehavior attribute

interface IMyContractCallback
{
   [OperationContract]
   void OnCallback(  );
}

[ServiceContract(CallbackContract = typeof(IMyContractCallback))]
interface IMyContract
{
   [OperationContract]
   void MyMethod(  );
}

[CallbackThreadPoolBehavior(3,typeof(MyClient))]
class MyClient : IMyContractCallback,IDisposable
{
   MyContractClient m_Proxy;

   public MyClient(  )
   {
      m_Proxy = new MyContractClient(new InstanceContext(this));
   }

   public void CallService(  )
   {
      m_Proxy.MyMethod(  );
   }

   //Called by threads from the custom pool
   public void OnCallback(  )
   {...}

   public void Dispose(  )
   {
      m_Proxy.Close(  );
      ThreadPoolHelper.CloseThreads(typeof(MyClient));
   }
}

Callback thread affinity

Just like on the service side, if you want all the callbacks to execute on the same thread (perhaps to create some UI on the callback side), you can configure the callback class to have a pool size of 1. Or, better yet, you can define a dedicated callback attribute such as my CallbackThreadAffinityBehaviorAttribute:

[AttributeUsage(AttributeTargets.Class)]
public class CallbackThreadAffinityBehaviorAttribute :
                                                CallbackThreadPoolBehaviorAttribute
{
   public CallbackThreadAffinityBehaviorAttribute(Type clientType) :
                                          this(clientType,"Callback Worker Thread")
   {}
   public CallbackThreadAffinityBehaviorAttribute(Type clientType,
                                 string threadName) : base(1,clientType,threadName)
   {}
}

The CallbackThreadAffinityBehavior attribute makes all callbacks across all callback contracts the client supports execute on the same thread, as shown in Example 8.28, "Applying the CallbackThreadAffinityBehavior attribute".

Example 8.28. Applying the CallbackThreadAffinityBehavior attribute

[CallbackThreadAffinityBehavior(typeof(MyClient))]
class MyClient : IMyContractCallback,IDisposable
{
   MyContractClient m_Proxy;

   public void CallService(  )
   {
      m_Proxy = new MyContractClient(new InstanceContext(this));
      m_Proxy.MyMethod(  );
   }
   //This method invoked by same callback thread, plus client threads
   public void OnCallback(  )
   {
      //Access state and resources, synchronize manually
   }
   public void Dispose(  )
   {
      m_Proxy.Close(  );
   }
}

Note that although WCF always invokes the callback on the same thread, you still may need to synchronize access to it if other client-side threads access the method as well.

When a client calls a service, usually the client is blocked while the service executes the call, and control returns to the client only when the operation completes its execution and returns. However, there are quite a few cases in which you will want to call operations asynchronously; that is, you'll want control to return immediately to the client while the service executes the operation in the background and then somehow let the client know that the method has completed execution and provide the client with the results of the invocation. Such an execution mode is called asynchronous operation invocation, and the action is known as an asynchronous call. Asynchronous calls allow you to improve client responsiveness and availability.

Requirements for an Asynchronous Mechanism

To make the most of the various options available with WCF asynchronous calls, you should be aware of the generic requirements set for any service-oriented asynchronous call support. These requirements include the following:

  • The same service code should be used for both synchronous and asynchronous invocation. This allows service developers to focus on business logic and cater to both synchronous and asynchronous clients.

  • A corollary of the first requirement is that it should be the client's decision whether to call a service synchronously or asynchronously. That, in turn, implies that the client will have different code for each invocation mode.

  • The client should be able to issue multiple asynchronous calls and have multiple asynchronous calls in progress, and it should be able to distinguish between multiple methods' completions.

  • Since a service operation's output parameters and return values are not available when control returns to the client, the client should have a way to harvest the results when the operation completes.

  • Similarly, communication errors or errors on the service side should be communicated back to the client side. Any exception thrown during operation execution should be played back to the client later.

  • The implementation of the mechanism should be independent of the binding and transfer technology used. Any binding should support asynchronous calls.

  • The mechanism should not use technology-specific constructs such as .NET exceptions or delegates.

  • The asynchronous calls mechanism should be straightforward and simple to use (this is less of a requirement and more of a design guideline). For example, the mechanism should, as much as possible, hide its implementation details, such as the worker threads used to dispatch the call.

The client has a variety of options for handling operation completion. After it issues an asynchronous call, it can choose to:

  • Perform some work while the call is in progress and then block until completion.

  • Perform some work while the call is in progress and then poll for completion.

  • Receive notification when the method has completed. The notification will be in the form of a callback on a client-provided method. The callback should contain information identifying which operation has just completed and its return values.

  • Perform some work while the call is in progress, wait for a predetermined amount of time, and then stop waiting, even if the operation execution has not yet completed.

  • Wait simultaneously for completion of multiple operations. The client can also choose to wait for all or any of the pending calls to complete.

WCF offers all of these options to clients. The WCF support is strictly a client-side facility, and in fact the service is unaware it is being invoked asynchronously. This means that intrinsically any service supports asynchronous calls, and that you can call the same service both synchronously and asynchronously. In addition, because all of the asynchronous invocation support happens on the client side regardless of the service, you can use any binding for the asynchronous invocation.

Tip
The WCF asynchronous calls support presented in this section is similar but not identical to the delegate-based asynchronous calls support .NET offers for regular CLR types.

Proxy-Based Asynchronous Calls

Because the client decides if the call should be synchronous or asynchronous, you need to create a different proxy for the asynchronous case. In Visual Studio 2008, when adding a service reference, you can click the Advanced button in the Add Service Reference dialog to bring up the settings dialog that lets you tweak the proxy generation. Check the "Generate asynchronous operations" checkbox to generate a proxy that contains asynchronous methods in addition to the synchronous ones. For each operation in the original contract, the asynchronous proxy and contract will contain two additional methods of this form:

[OperationContract(AsyncPattern = true)]
IAsyncResult Begin<Operation>(<in arguments>,
                              AsyncCallback callback,object asyncState);
<returned type> End<Operation>(<out arguments>,IAsyncResult result);

The OperationContract attribute offers the AsyncPattern Boolean property, defined as:

[AttributeUsage(AttributeTargets.Method)]
public sealed class OperationContractAttribute : Attribute
{
   public bool AsyncPattern
   {get;set;}
   //More members
}

The AsyncPattern property defaults to false. AsyncPattern has meaning only on the client side; it is merely a validation flag instructing the proxy to verify that the method on which the flag is set to true has a Begin<Operation>( )-compatible signature, and that the defining contract has a matching method with an End<Operation>( )-compatible signature. These requirements are verified at proxy load time. AsyncPattern binds the underlying synchronous method with the Begin/End pair and correlates the synchronous execution with the asynchronous one. Briefly, when the client invokes a method of the form Begin<Operation>( ) with AsyncPattern set to true, this tells WCF not to try to directly invoke a method with that name on the service. Instead, WCF uses a thread from the thread pool to call the underlying operation synchronously. The synchronous call blocks the thread from the thread pool, not the calling client; the client is blocked only for the slightest moment it takes to dispatch the call request to the thread pool. The reply of the synchronous invocation is correlated with the End<Operation>( ) method.

Example 8.29, "Asynchronous contract and proxy" shows a calculator contract and its implementing service, and the generated asynchronous proxy.

Example 8.29. Asynchronous contract and proxy

////////////////////////// Service Side //////////////////////
[ServiceContract]
interface ICalculator
{
   [OperationContract]
   int Add(int number1,int number2);
   //More operations
}
class Calculator : ICalculator
{
   public int Add(int number1,int number2)
   {
      return number1 + number2;
   }
   //Rest of the implementation
}
////////////////////////// Client Side //////////////////////
[ServiceContract]
public interface ICalculator
{
   [OperationContract]
   int Add(int number1,int number2);

   [OperationContract(AsyncPattern = true)]
   IAsyncResult BeginAdd(int number1,int number2,
                         AsyncCallback callback,object asyncState);
   int EndAdd(IAsyncResult result);
   //Rest of the methods
}
partial class CalculatorClient : ClientBase<ICalculator>,ICalculator
{
   public int Add(int number1,int number2)
   {
      return Channel.Add(number1,number2);
   }
   public IAsyncResult BeginAdd(int number1,int number2,
                                AsyncCallback callback,object asyncState)
   {
      return Channel.BeginAdd(number1,number2,callback,asyncState);
   }
   public int EndAdd(IAsyncResult result)
   {
      return Channel.EndAdd(result);
   }
   //Rest of the methods and constructors
}

Asynchronous Invocation

Begin<Operation>( ) accepts the input parameters of the original synchronous operation, which may include data contracts passed by value or by reference (using the ref modifier). The original method's return values and any explicit output parameters (designated using the out and ref modifiers) are part of the End<Operation>( ) method. For example, for this operation definition:

[OperationContract]
string MyMethod(int number1,out int number2,ref int number3);

the corresponding Begin<Operation>( ) and End<Operation>( ) methods look like this:

[OperationContract(AsyncPattern = true)]
IAsyncResult BeginMyMethod(int number1,ref int number3,
                           AsyncCallback callback,object asyncState);
string EndMyMethod(out int number2,ref int number3,IAsyncResult result);

Begin<Operation>( ) accepts two additional input parameters that are not present in the original operation signature: callback and asyncState. The callback parameter is a delegate targeting a client-side method-completion notification event. asyncState is an object that conveys whatever state information the party handling the method completion requires. These two parameters are optional: the caller can choose to pass in null instead of either one of them. For example, you could use code like the following to asynchronously invoke the Add( ) method of the Calculator service from Example 8.29, "Asynchronous contract and proxy" using the asynchronous proxy, if you have no interest in the results or the errors:

CalculatorClient proxy = new CalculatorClient(  );
proxy.BeginAdd(2,3,null,null); //Dispatched asynchronously
proxy.Close(  );

As long as the client has the definition of the asynchronous contract, you can also invoke the operation asynchronously using a channel factory:

ChannelFactory<ICalculator> factory = new ChannelFactory<ICalculator>(  );
ICalculator proxy = factory.CreateChannel(  );
proxy.BeginAdd(2,3,null,null);
ICommunicationObject channel = proxy as ICommunicationObject;
channel.Close(  );

The problem with such an invocation is that the client has no way of getting its results.

The IAsyncResult interface

Every Begin<Operation>( ) method returns an object implementing the IAsyncResult interface, defined in the System namespace as:

public interface IAsyncResult
{
   object AsyncState
   {get;}
   WaitHandle AsyncWaitHandle
   {get;}
   bool CompletedSynchronously
   {get;}
   bool IsCompleted
   {get;}
}

The returned IAsyncResult implementation uniquely identifies the method that was invoked using Begin<Operation>( ). You can pass the IAsyncResult-implementation object to End<Operation>( ) to identify the specific asynchronous method execution from which you wish to retrieve the results. End<Operation>( ) will block its caller until the operation it's waiting for (identified by the IAsyncResult-implementation object passed in) completes and it can return the results or errors. If the method is already complete by the time End<Operation>( ) is called, End<Operation>( ) will not block the caller and will just return the results. Example 8.30, "Simple asynchronous execution sequence" shows the entire sequence.

Example 8.30. Simple asynchronous execution sequence

CalculatorClient proxy = new CalculatorClient(  );
IAsyncResult result1 = proxy.BeginAdd(2,3,null,null);
IAsyncResult result2 = proxy.BeginAdd(4,5,null,null);

/* Do some work */

int sum;

sum = proxy.EndAdd(result1); //This may block
Debug.Assert(sum == 5);
sum = proxy.EndAdd(result2); //This may block
Debug.Assert(sum == 9);

proxy.Close(  );

As simple as Example 8.30, "Simple asynchronous execution sequence" is, it does demonstrate a few key points. The first point is that the same proxy instance can invoke multiple asynchronous calls. The caller can distinguish among the different pending calls using each unique IAsyncResult-implementation object returned from Begin<Operation>( ). In fact, when the caller makes asynchronous calls, as in Example 8.30, "Simple asynchronous execution sequence", it must save the IAsyncResult-implementation objects. In addition, the caller should make no assumptions about the order in which the pending calls will complete. It is quite possible that the second call will complete before the first one.

Although it isn't evident in Example 8.30, "Simple asynchronous execution sequence", there are two important programming points regarding asynchronous calls:

  • End<Operation>( ) can be called only once for each asynchronous operation. Trying to call it more than once results in an InvalidOperationException.

  • You can pass the IAsyncResult-implementation object to End<Operation>( ) only on the same proxy object used to dispatch the call. Passing the IAsyncResult-implementation object to a different proxy instance results in an exception. This is because only the original proxy keeps track of the asynchronous operations it has invoked.

Asynchronous calls and transport sessions

If the proxy is not using a transport session, the client can close the proxy immediately after the call to Begin<Operation>( ) and still be able to call End<Operation>( ) later:

CalculatorClient proxy = new CalculatorClient(  );
IAsyncResult result = proxy.BeginAdd(2,3,null,null);
proxy.Close(  );

/*Do some work */

//Sometime later:
int sum = proxy.EndAdd(result);
Debug.Assert(sum == 5);

Polling or Waiting for Completion

When a client calls End<Operation>( ), the client is blocked until the asynchronous method returns. This may be fine if the client has a finite amount of work to do while the call is in progress, and if after completing that work the client cannot continue its execution without the returned value or the output parameters of the operation. However, what if the client only wants to check that the operation has completed? What if the client wants to wait for completion for a fixed timeout and then, if the operation has not completed, do some additional finite processing and wait again? WCF supports these alternative programming models to calling End<Operation>( ).

The IAsyncResult interface object returned from Begin<Operation>( ) has the AsyncWaitHandle property, of type WaitHandle:

public abstract class WaitHandle : ...
{
   public static bool WaitAll(WaitHandle[] waitHandles);
   public static int WaitAny(WaitHandle[] waitHandles);
   public virtual void Close(  );
   public virtual bool WaitOne(  );
   //More members
}

The WaitOne( ) method of WaitHandle returns only when the handle is signaled. Example 8.31, "Using IAsyncResult.AsyncWaitHandle to block until completion" demonstrates using WaitOne( ).

Example 8.31. Using IAsyncResult.AsyncWaitHandle to block until completion

CalculatorClient proxy = new CalculatorClient(  );
IAsyncResult result = proxy.BeginAdd(2,3,null,null);

/* Do some work */

result.AsyncWaitHandle.WaitOne(  ); //This may block
int sum = proxy.EndAdd(result); //This will not block
Debug.Assert(sum == 5);

proxy.Close(  );

Logically, Example 8.31, "Using IAsyncResult.AsyncWaitHandle to block until completion" is identical to Example 8.30, "Simple asynchronous execution sequence", which called only End<Operation>( ). If the operation is still executing when WaitOne( ) is called, WaitOne( ) will block. But if by the time WaitOne( ) is called the method execution is complete, WaitOne( ) will not block, and the client will proceed to call End<Operation>( ) for the returned value. The important difference between Example 8.31, "Using IAsyncResult.AsyncWaitHandle to block until completion" and Example 8.30, "Simple asynchronous execution sequence" is that the call to End<Operation>( ) in Example 8.31, "Using IAsyncResult.AsyncWaitHandle to block until completion" is guaranteed not to block its caller.

Example 8.32, "Using WaitOne( ) to specify wait timeout" demonstrates a more practical way of using WaitOne( ), by specifying a timeout (10 milliseconds in this example). When you specify a timeout, WaitOne( ) returns when the method execution is completed or when the timeout has elapsed, whichever condition is met first.

Example 8.32. Using WaitOne( ) to specify wait timeout

CalculatorClient proxy = new CalculatorClient(  );
IAsyncResult result = proxy.BeginAdd(2,3,null,null);
while(result.IsCompleted == false)
{
   result.AsyncWaitHandle.WaitOne(10,false); //This may block
   /* Do some optional work */
}
int sum = proxy.EndAdd(result); //This will not block

Example 8.32, "Using WaitOne( ) to specify wait timeout" uses another handy property of IAsyncResult, called IsCompleted. IsCompleted lets you check the status of the call without waiting or blocking. You can even use IsCompleted in a strict polling mode:

CalculatorClient proxy = new CalculatorClient(  );
IAsyncResult result = proxy.BeginAdd(2,3,null,null);

//Sometime later:
if(result.IsCompleted)
{
   int sum = proxy.EndAdd(result); //This will not block
   Debug.Assert(sum == 5);
}
else
{
  //Do some optional work
}
proxy.Close(  );

The AsyncWaitHandle property really shines when you use it to manage multiple concurrent asynchronous methods in progress. You can use WaitHandle's static WaitAll( ) method to wait for completion of multiple asynchronous methods, as shown in Example 8.33, "Waiting for completion of multiple methods".

Example 8.33. Waiting for completion of multiple methods

CalculatorClient proxy = new CalculatorClient(  );
IAsyncResult result1 = proxy.BeginAdd(2,3,null,null);
IAsyncResult result2 = proxy.BeginAdd(4,5,null,null);

WaitHandle[] handleArray = {result1.AsyncWaitHandle,result2.AsyncWaitHandle};

WaitHandle.WaitAll(handleArray);

int sum;
//These calls to EndAdd(  ) will not block

sum = proxy.EndAdd(result1);
Debug.Assert(sum == 5);

sum = proxy.EndAdd(result2);
Debug.Assert(sum == 9);

proxy.Close(  );

To use WaitAll( ), you need to construct an array of handles. Note that you still need to call End<Operation>( ) to access the returned values. Instead of waiting for all of the methods to return, you can choose to wait for any of them to return, using the WaitAny( ) static method of the WaitHandle class. Like WaitOne( ), both WaitAll( ) and WaitAny( ) have overloaded versions that let you specify a timeout to wait instead of waiting indefinitely.
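Since the WaitHandle mechanics are independent of WCF, the WaitAny( ) usage can be sketched with plain events standing in for the AsyncWaitHandle of each pending call. This is only an illustration of the handle-array and timeout overloads, not WCF code:

```csharp
using System;
using System.Threading;

class WaitAnyDemo
{
   static void Main(  )
   {
      //Two events stand in for the AsyncWaitHandles of two pending calls
      ManualResetEvent done1 = new ManualResetEvent(false);
      ManualResetEvent done2 = new ManualResetEvent(false);

      ThreadPool.QueueUserWorkItem(delegate{Thread.Sleep(200);done1.Set(  );});
      ThreadPool.QueueUserWorkItem(delegate{Thread.Sleep(50); done2.Set(  );});

      WaitHandle[] handleArray = {done1,done2};

      //WaitAny( ) returns the index of the first handle to be signaled
      int first = WaitHandle.WaitAny(handleArray);
      Console.WriteLine("First to complete: " + first);

      //Like WaitOne( ), WaitAny( ) accepts a timeout; WaitHandle.WaitTimeout
      //indicates the timeout elapsed before any handle was signaled
      ManualResetEvent never = new ManualResetEvent(false);
      int result = WaitHandle.WaitAny(new WaitHandle[]{never},10);
      Console.WriteLine(result == WaitHandle.WaitTimeout); //True
   }
}
```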

Completion Callbacks

Instead of blocking, waiting, and polling for asynchronous call completion, WCF offers another programming model altogether: completion callbacks. With this model, the client provides WCF with a method and requests that WCF call that method back when the asynchronous method completes. The client can have the same callback method handle completion of multiple asynchronous calls. When each asynchronous method's execution is complete, instead of quietly returning to the pool, the worker thread calls the completion callback. To designate a completion callback method, the client needs to provide Begin<Operation>( ) with a delegate of the type AsyncCallback, defined as:

public delegate void AsyncCallback(IAsyncResult result);

That delegate is provided as the penultimate parameter to Begin<Operation>( ).

Example 8.34, "Managing asynchronous call with a completion callback" demonstrates asynchronous call management using a completion callback.

Example 8.34. Managing asynchronous call with a completion callback

class MyClient : IDisposable
{
   CalculatorClient m_Proxy = new CalculatorClient(  );

   public void CallAsync(  )
   {
      m_Proxy.BeginAdd(2,3,OnCompletion,null);
   }
   void OnCompletion(IAsyncResult result)
   {
      int sum = m_Proxy.EndAdd(result);
      Debug.Assert(sum == 5);
   }
   public void Dispose(  )
   {
      m_Proxy.Close(  );
   }
}

Unlike the programming models described so far, when you use a completion callback method, there's no need to save the IAsyncResult-implementation object returned from Begin<Operation>( ). This is because when WCF calls the completion callback, WCF provides the IAsyncResult-implementation object as a parameter. Because WCF provides a unique IAsyncResult-implementation object for each asynchronous method, you can channel multiple asynchronous method completions to the same callback method:

m_Proxy.BeginAdd(2,3,OnCompletion,null);
m_Proxy.BeginAdd(4,5,OnCompletion,null);

Instead of using a class method as a completion callback, you can just as easily use a local anonymous method or a lambda expression:

CalculatorClient proxy = new CalculatorClient(  );
int sum;
AsyncCallback completion = (result)=>
                           {
                              sum = proxy.EndAdd(result);
                              Debug.Assert(sum == 5);
                              proxy.Close(  );
                           };
proxy.BeginAdd(2,3,completion,null);

Note that the anonymous method assigns to an outer variable (sum) to provide the result of the Add( ) operation.

Callback completion methods are by far the preferred model in any event-driven application. An event-driven application has methods that trigger events (or requests) and methods that handle those events and fire their own events as a result. Writing an application as event-driven makes it easier to manage multiple threads, events, and callbacks and allows for scalability, responsiveness, and performance.

The last thing you want in an event-driven application is to block, since then your application does not process events. Callback completion methods allow you to treat the completion of the asynchronous operation as yet another event in your system. The other options (waiting, blocking, and polling) are available for applications that are strict, predictable, and deterministic in their execution flow. I recommend that you use completion callback methods whenever possible.

Completion callbacks and thread safety

Because the callback method is executed on a thread from the thread pool, you must provide for thread safety in the callback method and in the object that provides it. This means you must use synchronization objects and locks to access the member variables of the client, and even outer variables captured by anonymous completion methods. You need to synchronize between client-side threads and the worker thread from the pool, and potentially between multiple worker threads calling concurrently into the completion callback method to handle their respective asynchronous call completions. In short, you need to make sure the completion callback method is reentrant and thread-safe.
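As a minimal sketch of such defensive locking (independent of WCF; the ThreadPool work items below stand in for the worker threads dispatching completion callbacks, and all names are illustrative):

```csharp
using System;
using System.Threading;

class CompletionTracker
{
   static readonly object s_Lock = new object(  ); //Guards s_Completed
   static int s_Completed;

   //Stand-in for a completion callback: it may run concurrently on several
   //pool threads, so the shared counter is touched only under the lock
   static void OnCompletion(object state)
   {
      lock(s_Lock)
      {
         s_Completed++;
      }
      ((ManualResetEvent)state).Set(  );
   }
   public static int Run(int calls)
   {
      lock(s_Lock)
      {
         s_Completed = 0;
      }
      ManualResetEvent[] done = new ManualResetEvent[calls];
      for(int i = 0;i < calls;i++)
      {
         done[i] = new ManualResetEvent(false);
         ThreadPool.QueueUserWorkItem(OnCompletion,done[i]);
      }
      WaitHandle.WaitAll(done); //Every callback has run and signaled
      lock(s_Lock)
      {
         return s_Completed;
      }
   }
   static void Main(  )
   {
      Console.WriteLine(CompletionTracker.Run(10)); //10
   }
}
```

The same discipline applies to outer variables captured by anonymous completion methods, since they are shared state just like member variables.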

Passing state information

The last parameter to Begin<Operation>( ) is asyncState. The asyncState object, known as a state object, is provided as an optional container for whatever need you deem fit. The party handling the method completion can access such a container object via the AsyncState property of IAsyncResult. Although you can certainly use state objects with any of the other asynchronous call programming models (blocking, waiting, or polling), they are most useful in conjunction with completion callbacks. The reason is simple: when you are using a completion callback, the container object offers the only way to pass in additional parameters to the callback method, whose signature is predetermined.

Example 8.35, "Passing an additional parameter using a state object" demonstrates how you might use a state object to pass an integer value as an additional parameter to the completion callback method. Note that the callback must downcast the AsyncState property to the actual type.

Example 8.35. Passing an additional parameter using a state object

class MyClient : IDisposable
{
   CalculatorClient m_Proxy = new CalculatorClient(  );

   public void CallAsync(  )
   {
      int asyncState = 4; //int, for example
      m_Proxy.BeginAdd(2,3,OnCompletion,asyncState);
   }
   void OnCompletion(IAsyncResult result)
   {
      int asyncState = (int)result.AsyncState;
      Debug.Assert(asyncState == 4);

      int sum = m_Proxy.EndAdd(result);
   }
   public void Dispose(  )
   {
      m_Proxy.Close(  );
   }
}

A common use for the state object is to pass the proxy used for Begin<Operation>( ) instead of saving it as a member variable:

class MyClient
{
   public void CallAsync(  )
   {
      CalculatorClient proxy = new CalculatorClient(  );
      proxy.BeginAdd(2,3,OnCompletion,proxy);
   }
   void OnCompletion(IAsyncResult result)
   {
      CalculatorClient proxy = result.AsyncState as CalculatorClient;
      Debug.Assert(proxy != null);

      int sum = proxy.EndAdd(result);
      Debug.Assert(sum == 5);

      proxy.Close(  );
   }
}

Completion callback synchronization context

The completion callback, by default, is called on a thread from the thread pool. This presents a serious problem if the callback is to access resources that have an affinity to a particular thread (or threads) and are required to run in a particular synchronization context. The classic example is a Windows Forms application that dispatches a lengthy service call asynchronously (to avoid blocking the UI) and then wishes to update the UI with the result of the invocation. Updating the UI directly from the completion callback of a raw Begin<Operation>( ) call is disallowed, since only the UI thread is allowed to update the UI. You must marshal the call from the completion callback to the correct synchronization context, using any of the techniques described previously (such as safe controls). Example 8.36, "Relying on completion callback synchronization context" demonstrates such a completion callback that interacts directly with its containing form, ensuring that the UI update will be in the UI synchronization context.

Example 8.36. Relying on completion callback synchronization context

partial class CalculatorForm : Form
{
   CalculatorClient m_Proxy;
   SynchronizationContext m_SynchronizationContext;

   public CalculatorForm(  )
   {
      InitializeComponent(  );
      m_Proxy = new CalculatorClient(  );
      m_SynchronizationContext = SynchronizationContext.Current;
   }
   public void CallAsync(object sender,EventArgs args)
   {
      m_Proxy.BeginAdd(2,3,OnCompletion,null);
   }
   void OnCompletion(IAsyncResult result)
   {
      SendOrPostCallback callback = delegate
                                    {
                                       Text = "Sum = " + m_Proxy.EndAdd(result);
                                    };
      m_SynchronizationContext.Send(callback,null);
   }
   public void OnClose(object sender,EventArgs args)
   {
      m_Proxy.Close(  );
   }
}

To better handle this situation, the ClientBase<T> base class in .NET 3.5 is extended with a protected InvokeAsync( ) method that picks up the synchronization context of the client and uses it to invoke the completion callback, as shown in Example 8.37, "Async callback management in ClientBase<T>".

Example 8.37. Async callback management in ClientBase<T>

public abstract class ClientBase<T> : ...
{
   protected delegate IAsyncResult BeginOperationDelegate(object[] inValues,
                                         AsyncCallback asyncCallback,object state);

   protected delegate object[] EndOperationDelegate(IAsyncResult result);

   //Picks up the sync context and uses it for the completion callback
   protected void InvokeAsync(BeginOperationDelegate beginOpDelegate,
                              object[] inValues,
                              EndOperationDelegate endOpDelegate,
                              SendOrPostCallback opCompletedCallback,
                              object userState);
   //More members
}

ClientBase<T> also provides an event arguments helper class and two dedicated delegates used to invoke and end the asynchronous call. The generated proxy class that derives from ClientBase<T> makes use of the base functionality. The proxy will have a public event called <Operation>Completed that uses a strongly typed event argument class specific to the results of the asynchronous method, and two methods called <Operation>Async that are used to dispatch the call asynchronously:

partial class AddCompletedEventArgs : AsyncCompletedEventArgs
{
   public int Result
   {get;}
}

class CalculatorClient : ClientBase<ICalculator>,ICalculator
{
   public event EventHandler<AddCompletedEventArgs> AddCompleted;

   public void AddAsync(int number1,int number2,object userState);
   public void AddAsync(int number1,int number2);

   //Rest of the proxy
}

The client can subscribe an event handler to the <Operation>Completed event to have that handler called upon completion. The big difference with using <Operation>Async as opposed to Begin<Operation> is that the <Operation>Async methods will pick up the synchronization context of the client and will fire the <Operation>Completed event on that synchronization context, as shown in Example 8.38, "Synchronization-context-friendly asynchronous call invocation".

Example 8.38. Synchronization-context-friendly asynchronous call invocation

partial class CalculatorForm : Form
{
   CalculatorClient m_Proxy;

   public CalculatorForm(  )
   {
      InitializeComponent(  );

      m_Proxy = new CalculatorClient(  );
      m_Proxy.AddCompleted += OnAddCompleted;
   }
   void CallAsync(object sender,EventArgs args)
   {
      m_Proxy.AddAsync(2,3); //Sync context picked up here
   }
   //Called on the UI thread
   void OnAddCompleted(object sender,AddCompletedEventArgs args)
   {
      Text = "Sum = " + args.Result;
   }
}

One-Way Asynchronous Operations

There is little sense in trying to invoke a one-way operation asynchronously, because while one of the main features of asynchronous calls is their ability to retrieve and correlate a reply message, no such message is available with a one-way call. If you do invoke a one-way operation asynchronously, End<Operation>( ) will return as soon as the worker thread has finished dispatching the call. Aside from communication errors, End<Operation>( ) will not encounter any exceptions. If a completion callback is provided for an asynchronous invocation of a one-way operation, the callback is called immediately after the worker thread used in Begin<Operation>( ) dispatches the call. The only justification for invoking a one-way operation asynchronously is to avoid the potential blocking of the one-way call, in which case you should pass a null for the state object and the completion callback, as shown in Example 8.39, "Invoking a one-way operation asynchronously".

Example 8.39. Invoking a one-way operation asynchronously

[ServiceContract]
interface IMyContract
{
   [OperationContract(IsOneWay = true)]
   void MyMethod(string text);

   [OperationContract(IsOneWay = true,AsyncPattern = true)]
   IAsyncResult BeginMyMethod(string text,
                              AsyncCallback callback,object asyncState);
   void EndMyMethod(IAsyncResult result);
}
MyContractClient proxy = new MyContractClient(  );
proxy.BeginMyMethod("Async one way",null,null);

//Sometime later:
proxy.Close(  );

The problem with Example 8.39, "Invoking a one-way operation asynchronously" is the potential race condition of closing the proxy. It is possible to push the asynchronous call with Begin<Operation>( ) and then close the proxy before the worker thread used has had a chance to invoke the call. If you want to close the proxy immediately after asynchronously invoking the one-way call, you need to provide a completion method for closing the proxy:

MyContractClient proxy = new MyContractClient(  );

AsyncCallback completion = (result)=>
                           {
                              proxy.Close(  );
                           };
proxy.BeginMyMethod("Async one way",completion,null);

Asynchronous Error Handling

Output parameters and return values are not the only elements unavailable at the time an asynchronous call is dispatched: exceptions are missing as well. After calling Begin<Operation>( ), control returns to the client, but it may be some time before the asynchronous method encounters an error and throws an exception, and some time after that before the client actually calls End<Operation>( ). WCF must therefore provide some way for the client to know that an exception was thrown and allow the client to handle it. When the asynchronous method throws an exception, the proxy catches it, and when the client calls End<Operation>( ) the proxy rethrows that exception object, letting the client handle the exception. If a completion callback is provided, WCF calls that method immediately after the exception is received. The exact exception thrown is compliant with the fault contract and the exception type, as explained in Chapter 6, Faults.
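The capture-and-rethrow behavior can be sketched outside WCF. In this hedged illustration, FakeProxy and its BeginDivide( )/EndDivide( ) pair are made-up stand-ins for a generated proxy: the worker thread catches the exception, and End rethrows it on the caller's thread:

```csharp
using System;
using System.Threading;

//Made-up stand-ins for a proxy's Begin/End pair; not WCF code
class SimpleAsyncResult : IAsyncResult
{
   public object AsyncState
   {get;set;}
   public ManualResetEvent Done = new ManualResetEvent(false);
   public WaitHandle AsyncWaitHandle
   {
      get
      {
         return Done;
      }
   }
   public bool CompletedSynchronously
   {
      get
      {
         return false;
      }
   }
   public bool IsCompleted
   {get;set;}
   public Exception Error;
   public int Result;
}
class FakeProxy
{
   public IAsyncResult BeginDivide(int number1,int number2,
                                   AsyncCallback callback,object asyncState)
   {
      SimpleAsyncResult result = new SimpleAsyncResult{AsyncState = asyncState};
      ThreadPool.QueueUserWorkItem(delegate
      {
         try
         {
            result.Result = number1 / number2; //May throw DivideByZeroException
         }
         catch(Exception error)
         {
            result.Error = error; //Caught on the worker thread, not thrown here
         }
         result.IsCompleted = true;
         result.Done.Set(  );
         if(callback != null)
         {
            callback(result); //Completion callback is called even after an error
         }
      });
      return result;
   }
   public int EndDivide(IAsyncResult asyncResult)
   {
      SimpleAsyncResult result = (SimpleAsyncResult)asyncResult;
      result.Done.WaitOne(  );
      if(result.Error != null)
      {
         throw result.Error; //Rethrown to the caller of End
      }
      return result.Result;
   }
}
class Program
{
   static void Main(  )
   {
      FakeProxy proxy = new FakeProxy(  );
      IAsyncResult result = proxy.BeginDivide(4,0,null,null);
      try
      {
         proxy.EndDivide(result);
      }
      catch(DivideByZeroException)
      {
         Console.WriteLine("Error surfaced at EndDivide");
      }
   }
}
```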

Tip
If fault contracts are defined on the service operation contract, the FaultContract attribute should be applied only on the synchronous operations.

Asynchronous calls and timeouts

Since the asynchronous invocation mechanism is nothing but a convenient programming model on top of the actual synchronous operation, the underlying synchronous call can still time out. This will result in a TimeoutException when the client calls End<Operation>( ). It is therefore wrong to equate asynchronous calls with lengthy operations. By default, asynchronous calls are still relatively short (under a minute), but unlike synchronous calls, they are non-blocking. For lengthy asynchronous calls you will need to provide an adequately long send timeout.
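For example, assuming the calculator client from Example 8.29, a longer send timeout could be configured in the client config file (the endpoint address and binding name here are placeholders):

```xml
<system.serviceModel>
   <client>
      <endpoint
         address  = "http://localhost:8000/Calculator"
         binding  = "basicHttpBinding"
         bindingConfiguration = "LongSendTimeout"
         contract = "ICalculator"
      />
   </client>
   <bindings>
      <basicHttpBinding>
         <binding name = "LongSendTimeout" sendTimeout = "00:10:00"/>
      </basicHttpBinding>
   </bindings>
</system.serviceModel>
```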

Cleaning up after End<Operation>( )

When the client calls Begin<Operation>( ), the returned IAsyncResult will have a reference to a single WaitHandle object, accessible via the AsyncWaitHandle property. Calling End<Operation>( ) on that object will not close the handle. Instead, the handle will be closed when the implementing object is garbage-collected. As with any other case of using an unmanaged resource, you have to be mindful of your application's deterministic-finalization needs. It is possible (in theory, at least) for the application to dispatch asynchronous calls faster than .NET can collect the handles, resulting in a resource leak. To compensate, you can explicitly close the handle after calling End<Operation>( ). For example, using the same definitions as those in Example 8.34, "Managing asynchronous call with a completion callback":

void OnCompletion(IAsyncResult result)
{
   int sum = m_Proxy.EndAdd(result);
   Debug.Assert(sum == 5);
   result.AsyncWaitHandle.Close(  );
}

Asynchronous Calls and Transactions

Transactions do not mix well with asynchronous calls, for a few reasons. First, well-designed transactions are of short duration, yet the main motivation for using asynchronous calls is because of the latency of the operations. Second, the client's ambient transaction will not by default flow to the service, because the asynchronous operation is invoked on a worker thread, not the client's thread. While it is possible to develop a proprietary mechanism that uses cloned transactions, this is esoteric at best and should be avoided. Finally, when a transaction completes, it should have no leftover activities to do in the background that could commit or abort independently of the transaction; however, this will be the result of spawning an asynchronous operation call from within a transaction. In short, do not mix transactions with asynchronous calls.

Synchronous Versus Asynchronous Calls

Although it is technically possible to call the same service synchronously and asynchronously, the likelihood that a service will be accessed both ways is low.

The reason is that using a service asynchronously necessitates drastic changes to the workflow of the client, and consequently the client cannot simply use the same execution sequence logic as with synchronous access. Consider, for example, an online store application. Suppose the client (a server-side object executing a customer request) accesses a Store service, where it places the customer's order details. The Store service uses three well-factored helper services to process the order: Order, Shipment, and Billing. In a synchronous scenario, the Store service first calls the Order service to place the order. Only if the Order service succeeds in processing the order (i.e., if the item is available in the inventory) does the Store service then call the Shipment service, and only if the Shipment service succeeds does the Store service access the Billing service to bill the customer. This sequence is shown in Figure 8.4, "Synchronous processing of an order".

Figure 8.4. Synchronous processing of an order


The downside to the workflow shown in Figure 8.4, "Synchronous processing of an order" is that the store must process orders synchronously and serially. On the surface, it might seem that if the Store service invoked its helper objects asynchronously, it would increase throughput, because it could process incoming orders as fast as the client submitted them. The problem in doing so is that it is possible for the calls to the Order, Shipment, and Billing services to fail independently, and if they do, all hell will break loose. For example, the Order service might discover that there were no items in the inventory matching the customer request, while the Shipment service tried to ship the nonexistent item and the Billing service had already billed the customer for it.

Using asynchronous calls on a set of interacting services requires that you change your code and your workflow. As illustrated in Figure 8.5, "Revised workflow for asynchronous processing of an order", to call the helper services asynchronously, you need to string them together. The Store service should call only the Order service, which in turn should call the Shipment service only if the order processing was successful, to avoid the potential inconsistencies just mentioned. Similarly, only in the case of successful shipment should the Shipment service asynchronously call the Billing service.

Figure 8.5. Revised workflow for asynchronous processing of an order


In general, if you have more than one service in your asynchronous workflow, you should have each service invoke the next one in the logical execution sequence. Needless to say, such a programming model introduces tight coupling between services (they have to know about each other) and changes to their interfaces (you have to pass in additional parameters, which are required for the desired invocation of services downstream).

The conclusion is that using asynchronous instead of synchronous invocation introduces major changes to the service interfaces and the client workflow. Asynchronous invocation on a service that was built for synchronous execution works only in isolated cases. When dealing with a set of interacting services, it is better to simply spin off a worker thread to call them and use the worker thread to provide asynchronous execution. This will preserve the service interfaces and the original client execution sequence.
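A minimal sketch of that alternative, with a made-up stand-in for the Store workflow (the point is only that the synchronous call sequence and the service interfaces are preserved while execution moves off the caller's thread):

```csharp
using System;
using System.Threading;

class StoreWorkflow
{
   //Stand-ins for synchronous calls to the Order, Shipment, and Billing services
   public string Log = "";

   public void ProcessOrder(  )
   {
      Log += "Order;";    //Only on success would the real workflow continue
      Log += "Shipment;";
      Log += "Billing;";
   }
}
class Program
{
   static void Main(  )
   {
      StoreWorkflow workflow = new StoreWorkflow(  );

      //Spin off a worker thread to run the unchanged synchronous sequence
      Thread worker = new Thread(workflow.ProcessOrder);
      worker.Start(  );

      /* The caller continues with other work here */

      worker.Join(  ); //Or coordinate completion with an event
      Console.WriteLine(workflow.Log); //Order;Shipment;Billing;
   }
}
```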


[6] I first presented my technique for priority processing of WCF calls in my article "Synchronization Contexts in WCF" (MSDN Magazine, November 2007).
