
Foundations

Workflow Tips and Tricks

Matt Milner

Code download available at: Foundations2008_08.exe (187 KB)

Contents

Creating Template Activities
Managing Client Endpoints with ChannelManagerService
Sharing Persistence Stores across Applications

As an author and instructor, I often get asked questions about Windows® Workflow Foundation (WF) that do not require enough depth to warrant an entire column. However, the responses and techniques have proven to be useful to many developers and often can be used in a wide variety of scenarios. This month I will cover several different topics, some that are intended to address specific reader questions, such as how to safely share a persistence database across apps, and others that show how to leverage parts of the framework that do not get much coverage, such as the ChannelManagerService.

Creating Template Activities

One common request from developers is to be able to create activity templates. People are looking for a way to create most of the logic in an activity but leave some extensibility points for consumers of the activity to fill in with their own activities. Unfortunately, Windows WF does not support that model directly, but there are two alternative options that work in many scenarios.

The first option is to create a Visual Studio® new-item template that contains the template structure of your activity, and then let the users add that new item to their project and customize it. In this scenario, developers are the expected user because the solution requires Visual Studio to create an activity from the template and a compiler to build the new activity so that it can be used.

I wrote about creating custom templates in Visual Studio in the January 2006 issue of MSDN® Magazine (msdn.microsoft.com/magazine/cc188697), so I will not spend time on the details here. The benefit of this model is that it leverages the tools available and provides a quick way to ensure a consistent base structure for your activities.

The second option for creating a template is to build a custom toolbox item that can create the template when the user drags an item onto the Workflow Designer surface. In the December 2006 issue of MSDN Magazine (msdn.microsoft.com/magazine/cc163504), I built a custom switch activity that was a composite activity with multiple child branches.

Without a custom toolbox item, when a user dragged my switch activity from the toolbox to the design surface, the default toolbox behavior was to simply add an instance of the switch activity. Someone who did not know about the activity would likely think there was a problem with it, as there would be no indication of where to put his custom logic. The toolbox item allowed me to initialize the switch activity with two branches so the user would know where to begin customizing the activity, as shown in Figure 1.

Figure 1 Switch Activity with and without a Custom Toolbox Item

The switch activity is a composite activity that provides control flow logic but not any custom business logic. Simple composite activities, the kind you get when you choose to add a new activity to your project, generally derive from the SequenceActivity class and rely on contained activities to define business process logic.

An example of this type of composite activity would be a manager-approval process where the general process is reusable, but the individual steps can vary for each process instance. You might, for example, want to create a template that consists of resolving manager details such as e-mail or instant messenger address, sending him a message, and handling all errors around those steps. You might also want to allow someone to customize this process so he can more easily manipulate the message being sent or the parameters around how it should be sent.

Included in the code download for this column are a few sample activities, including simple activity stubs for getting manager details and sending an instant message (IM). The included NotifyManagerActivityToolboxItem creates activities, adds error handling, and binds properties on several activities together. The key overrides in the ActivityToolboxItem base class are CreateComponentsCore and OnComponentsCreated.

CreateComponentsCore provides access to the IDesignerHost that can be used to retrieve a number of services and expects, in return, an array of components to be added to the design surface. Figure 2 shows the code to create the hierarchy for the custom activity to notify a manager.

Figure 2 Creating an Activity Hierarchy

protected override System.ComponentModel.IComponent[] 
  CreateComponentsCore(System.ComponentModel.Design.IDesignerHost host) {

  //configure the activities
  SequenceActivity parent = new SequenceActivity();
  GetManagerActivity getMgr = new GetManagerActivity();
  SendIMActivity sendIM = new SendIMActivity();

  //create the hierarchy
  parent.Activities.Add(getMgr);
  parent.Activities.Add(sendIM);

  //add fault handling
  FaultHandlersActivity handlers = new FaultHandlersActivity();
  FaultHandlerActivity handler = new FaultHandlerActivity();
  handler.FaultType = typeof(System.ArgumentException);

  WriteLineActivity errorWriteLine = new WriteLineActivity();

  handler.Activities.Add(errorWriteLine);
  handlers.Activities.Add(handler);
  parent.Activities.Add(handlers);

  //return the activities to be used
  return new System.ComponentModel.IComponent[] { parent };
}

It is important to note that in this example I created a single parent activity because my goal was to include error handling. By using a composite activity as a parent activity, I was able to add the fault-handling logic into the hierarchy. I could just as easily have returned an array containing the GetManagerActivity and the SendIMActivity with no root parent activity, and both would get added to the designer. Another option would be to have a custom parent activity that provides the ability to carry the customization beyond the initial adding of activities to the design surface.

Also note that I did not actually set up any activity binding in this method; I only created the activity tree. Because I have not yet given my activities names, relying instead on the designer services to do that for me, I have deferred any configuration among the activities until the OnComponentsCreated method. In this overridden method, I gain access to all of the activities created by my toolbox item and can then configure them with activity bindings, as shown in Figure 3.

Once the custom toolbox item class has been written, the ToolboxItem attribute is used to link it to a specific activity:

[ToolboxItem(typeof(NotifyManagerActivityToolboxItem))]
public partial class NotifyManager : SequenceActivity
{ ... }

Figure 3 Configuring Activity Binding

protected override void OnComponentsCreated(
  System.Drawing.Design.ToolboxComponentsCreatedEventArgs args) {

  base.OnComponentsCreated(args);

  //get a handle to the sequence
  SequenceActivity parent = args.Components[0] as SequenceActivity;

  if (parent != null) {
    GetManagerActivity getMgr = null;
    SendIMActivity sendIM = null;
    FaultHandlerActivity faultHandler = null;

    //get activity references
    foreach (Activity a in parent.Activities) {
      if (a is FaultHandlersActivity)
        faultHandler = 
          ((FaultHandlersActivity)a).Activities[0] as FaultHandlerActivity;

      if (a is GetManagerActivity)
        getMgr = a as GetManagerActivity;

      if (a is SendIMActivity)
        sendIM = a as SendIMActivity;
    }

    //bind send IM to get manager
    if (getMgr != null && sendIM != null) {
      ActivityBind sipBind = new ActivityBind(
        getMgr.QualifiedName, "ManagerSIPAddress");
      sendIM.SetBinding(SendIMActivity.SIPAddressProperty, sipBind);
    }

    //bind write line activity to fault message
    if (faultHandler != null) {
      ActivityBind faultBind = new ActivityBind(
        faultHandler.QualifiedName, "Fault.Message");
      WriteLineActivity errorWrite = 
        faultHandler.Activities[0] as WriteLineActivity;

      errorWrite.SetBinding(WriteLineActivity.OutputTextProperty, faultBind);
    }
  }
}

Notice that because, in this case, the toolbox item actually creates the structure of the activity, the definition of my activity structure is empty. The activity acts only as a shim for the toolbox item so that it can create the template.

In addition to being able to build and configure the activities themselves, it is often useful to work with the services available in the IDesignerHost that is passed to the CreateComponentsCore method. One service interface that is especially helpful in this regard is the IExtendedUIService, shown in Figure 4. This service provides useful methods for selecting or manipulating properties and adding assembly or Web service references.

Figure 4 IExtendedUIService Interface Definition

public interface IExtendedUIService {
  void AddAssemblyReference(AssemblyName assemblyName);
  void AddDesignerActions(DesignerAction[] actions);
  DialogResult AddWebReference(out Uri url, out Type proxyClass);
  Type GetProxyClassForUrl(Uri url);
  ITypeDescriptorContext GetSelectedPropertyContext();
  Uri GetUrlForProxyClass(Type proxyClass);
  Dictionary<string, Type> GetXsdProjectItemsInfo();
  bool NavigateToProperty(string propName);
  void RemoveDesignerActions();
  void ShowToolsOptions();
}

One flaw with both solutions to providing templates is that the created template is "white box" because once you create it, the user can easily change it. The best way to guard against this is to write a custom activity validator, as discussed in my column on building custom activities. The validator will execute both in the designer and at compilation, and will enforce any structure or relationships required by the template.

Managing Client Endpoints with ChannelManagerService

In the Launch 2008 issue of MSDN Magazine (msdn.microsoft.com/magazine/cc164251), I discussed how to use the new features in the Microsoft® .NET Framework 3.5 to integrate workflow with Windows Communication Foundation (WCF). One optional but very important class that was new in the .NET Framework 3.5 is ChannelManagerService. The ChannelManagerService class, as its name suggests, is a runtime service that you add into WorkflowRuntime in order to provide better support for the Send activity. Because it is an optional service, it does not get as much coverage as the activities it supports. But as you begin building real workflow applications with the Send and Receive activities, you may find its features to be very compelling. Two specific features that the service provides are client channel caching across workflow instances and the ability to add configured client endpoints through code, rather than through the standard WCF configuration file entries.

When using the Send activity without ChannelManagerService, a new ChannelFactory and new security session are negotiated for each call. Keep in mind that opening channels and renegotiating security is expensive. If you do not need new security contexts for each call, then the ideal solution would be to have a cache of client channels that the workflows can use, which is what ChannelManagerService provides. It keeps a pool of client channels for endpoints and allows the Send activity to use a client channel from the pool rather than having to create a new channel for each call. The following code creates the service and adds it to the workflow runtime:

ChannelManagerService channelMgr = 
  new ChannelManagerService();
workflowRuntime.AddService(channelMgr);

To understand the difference between using the Send activity with and without ChannelManagerService, I created a simple WCF echo service that returns the data sent and the session ID from the operation context. The service itself is configured to require sessions. In the client application, I added a service reference and then created a workflow with two Send activities configured to use the same ChannelToken. The client program runs the workflow twice in parallel, then once more after the first two have completed.
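A minimal sketch of the kind of echo service used for this test (the names here are illustrative, not the exact code from the download) looks like this:

```csharp
using System.ServiceModel;

// The service requires sessions so that each negotiated channel
// carries a distinct session id the client can observe.
[ServiceContract(SessionMode = SessionMode.Required)]
public interface IEchoService
{
    [OperationContract]
    string Echo(string data);
}

public class EchoService : IEchoService
{
    public string Echo(string data)
    {
        // Returning the session id lets the client see whether
        // successive calls reused a channel (same session) or not.
        return string.Format("{0} (session: {1})",
            data, OperationContext.Current.SessionId);
    }
}
```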

As you can see in Figure 5, when running without ChannelManagerService, each call, whether in the same workflow instance or a different one, uses a new channel factory and, therefore, gets a new session ID. (You also can use the tracing facilities of WCF to see that multiple channels are opened and secure sessions negotiated.) With the ChannelManagerService added to the runtime, the first two workflows each get a unique channel from the pool and use that channel for both of the service calls in the instance. The third workflow then is assigned one of the existing channels from the pool and uses that channel to call the service.

Figure 5 Service Calls with and without ChannelManagerService

When managing a pool of resources, it is important to have some control over the pool itself. ChannelManagerService can be constructed with a parameter of type ChannelPoolSettings that lets you configure three key throttling settings. The LeaseTimeout property indicates how long a channel may remain in the pool after being returned before it is removed (the default is 10 minutes). The IdleTimeout property indicates how long a channel can be idle before being closed (the default is 2 minutes). Finally, the MaxOutboundChannelsPerEndpoint property indicates how many channels can be in the cache for a given remote endpoint (the default is 10). The default values are intentionally set low and should be reviewed prior to deploying a solution in a test or production environment.
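For example, to loosen the pool settings for a host making many outbound calls (the specific values below are illustrative, not recommendations):

```csharp
//configure explicit pool settings rather than relying on the defaults
ChannelPoolSettings settings = new ChannelPoolSettings();
settings.LeaseTimeout = TimeSpan.FromMinutes(30);  //time in pool before removal
settings.IdleTimeout = TimeSpan.FromMinutes(5);    //idle time before close
settings.MaxOutboundChannelsPerEndpoint = 25;      //pool size per endpoint

ChannelManagerService channelMgr =
  new ChannelManagerService(settings);
workflowRuntime.AddService(channelMgr);
```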

The other feature of ChannelManagerService is the ability to manage endpoints in code much like you can with service endpoints when hosting the service. SendActivity requires you to configure ChannelToken, which in turn requires the name of an endpoint. By default, the framework will attempt to find the endpoint configuration information in the configuration file. ChannelManagerService allows you to pass a collection of named ServiceEndpoints into the constructor, and it will then use those names when looking for the endpoints configured on SendActivity.

This kind of freedom enables parity in the client and server programming model by allowing endpoints to be arbitrarily added to the runtime. If conflicting endpoint names are found between code and configuration, the configuration will take precedence. It's important to note that when using the context bindings to communicate with workflow services, ChannelToken helps manage the context identifiers, so it is not required that you use ChannelManagerService.
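A sketch of supplying a named endpoint in code follows; the contract type, address, and endpoint name are illustrative, and the binding is one of the context bindings typically used with workflow services:

```csharp
//define a named client endpoint in code instead of in the config file
ServiceEndpoint echoEndpoint = new ServiceEndpoint(
  ContractDescription.GetContract(typeof(IEchoService)),
  new BasicHttpContextBinding(),
  new EndpointAddress("http://localhost:8080/echo"));
echoEndpoint.Name = "EchoEndpoint";

List<ServiceEndpoint> endpoints = new List<ServiceEndpoint>();
endpoints.Add(echoEndpoint);

//a Send activity whose ChannelToken names "EchoEndpoint" will now
//resolve to this endpoint rather than to a configuration file entry
ChannelManagerService channelMgr =
  new ChannelManagerService(endpoints);
workflowRuntime.AddService(channelMgr);
```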

Sharing Persistence Stores across Applications

One powerful feature of Windows WF is that workflows can be persisted, their state stored, and the process resumed later. The framework ships with a SQL Server® persistence service and the scripts to create the related database structures and stored procedures. The simple database structure that's defined can either be created in a standalone database or included in an existing application database.

In addition to providing simple state persistence, one of the common use cases for persistence is to share a persistence database across processes to enable workflows to run in multiple hosts. For example, when creating a load-balanced Web application, both Web servers should be configured with the same connection string to allow either server to access the state of a workflow and resume processing when the user interacts with the app.

In some cases it is desirable to have a single persistence database and share it across instances of WorkflowRuntime, usually running in different applications. For example, in a business with several Web applications sharing databases for user login, sessions, and other infrastructure-related stores, it would be nice to be able to share a single database for storing workflow state.

The problem is that workflow persistence does not inherently understand the notion of different applications using the same database. As a result, a workflow that belongs to one application might get loaded by the persistence service running in another application.

To see how workflows might get loaded into the wrong application, it is important to understand how the persistence services work with regard to timers or delays. When a workflow has a delay activity in it that causes the workflow to become idle, the persistence service can be configured to indicate that the workflow should be unloaded from memory after being persisted to the data store. To correctly resume the workflow, the persistence service needs to know when the delay or timer expires in order to reload the workflow at that time. It is the job of the persistence service to track when the next timer expires for the workflow instance and to load that workflow into memory when the timer has, in fact, expired.

The SQL persistence service accomplishes this by saving the next timer in the database table along with the workflow state, then polling the table for expired timers on a configurable interval. When two applications share the same database, each application has a persistence provider polling for those expired timers. If a workflow is started and goes idle in the first application, it is not just possible but likely that the persistence service in the other application will eventually find the persisted instance while querying for expired timers and attempt to load it.

Several problems can occur when the persistence service attempts to load a workflow into an application that does not know about it. First, when a workflow is loaded from the database it must be deserialized. For that to succeed, the type information for the workflow and the activities within it must be discoverable in the host process. Since the workflow was built for another application, the load generally will fail because the workflow type information cannot be found in the second application. You can see this by registering for the ServicesExceptionNotHandled event on the WorkflowRuntime class:

workflowRuntime.ServicesExceptionNotHandled +=
  delegate(object sender, ServicesExceptionNotHandledEventArgs snhe) {
    Console.WriteLine(
      "Runtime Service exception: {0}",
      snhe.Exception.Message);
  };

In addition to failing to load the workflow and incurring the cost of an exception on each attempt, the persistence provider actually locks the workflow at the database level for its configured OwnershipTimeout. During that window, the workflow cannot be loaded by any other persistence provider or runtime, which makes it inaccessible and causes the original workflow instance's delay activity to last longer than expected.
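For reference, both the ownership duration and the polling interval are set through the persistence service constructor; the connection string and time values here are illustrative:

```csharp
//unload idle workflows, own instances for 30 seconds, poll every 5 seconds
SqlWorkflowPersistenceService persistence =
  new SqlWorkflowPersistenceService(
    "database=wfstate;Integrated Security=SSPI",
    true,                      //unload workflows when idle
    TimeSpan.FromSeconds(30),  //instance ownership duration
    TimeSpan.FromSeconds(5));  //loading (polling) interval
workflowRuntime.AddService(persistence);
```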

To address this scenario and allow multiple applications to share the same persistence database, it is essential that the persistence service have some knowledge of a logical application where the workflow is running. Fortunately, SQL Server provides the answer in a way with which almost all .NET developers are familiar: the connection string. One of the many parameters that can be included in the connection string is an application name, which can be used by SQL Server tools and, more important, queries to identify the client application when executing queries. When configuring the persistence service for a workflow runtime, the connection string can be augmented to include the logical application name, as shown here:

SqlWorkflowPersistenceService sql =
  new SqlWorkflowPersistenceService(
    "Application Name=HostA;database=wfstate;Integrated Security=SSPI");

Unfortunately, providing the application name is only half the answer because the tables and stored procedures that are included as part of the framework do not use the application information that is sent as part of the connection. To support the application separation, it is necessary to modify the structure and logic of the persistence database slightly.

First, the InstanceState table must be updated to include a column for the application name. In the simplest form, this is an NVarChar column added to the table, though it also would be possible to create a separate Applications table with an ID and name that would allow storing only the application ID in the InstanceState table. For ease of demonstration, I have chosen to show the simple solution in the code download.

Once the table has the ability to store the application information, the stored procedures that save and load workflow state need to be updated to use the application name. SQL Server makes the programming changes simple by providing the APP_NAME function, which returns the current application name for the session that will match the name specified in the connection string.
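As a sketch of these changes (the column name and size are my own choices, not from the shipped scripts), the table change and the filtering pattern applied inside the stored procedures look something like this:

```sql
-- Add an application name column to the instance state table
ALTER TABLE InstanceState
  ADD ApplicationName NVARCHAR(128) NULL;

-- Inside the modified stored procedures, filter on the caller's
-- application name as reported by the connection string
SELECT uidInstanceID
FROM InstanceState
WHERE ApplicationName = APP_NAME()
  AND ... -- the procedure's existing predicate (expired timers, and so on)
```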

The primary stored procedure that needs to be updated is InsertInstanceState. This procedure is responsible for inserting the current state of the workflow into the InstanceState table, and it must be updated to also insert or update the ApplicationName column. The procedure actually has four different statements that need to be updated, as it handles inserts and updates, each with and without locking.

The other stored procedures that need to be updated are shown in Figure 6 and are all queries to get the workflows for the current runtime. Each needs to be updated to filter the results to include only those rows from the InstanceState table where the application name is equal to the result of the APP_NAME function.

Figure 6 Stored Procedures that Must Filter Based on Application Name

RetrieveExpiredTimerIds: Called to get the expired timers. This is the most important query to update, as it is the most likely to cause problems.

RetrieveAllInstanceDescriptions: Gets the metadata about all instances stored in the persistence database. This should be filtered so that, when querying the runtime for instances, only those in the current application are returned.

RetrieveANonblockingInstanceStateId: Gets an individual identifier for a workflow that has work to do.

RetrieveNonblockingInstanceStateIds: Retrieves a list of workflow IDs for workflows that have work to do.

The combination of the application name being set on the connection string and the use of the APP_NAME function in the database provides the application concept and isolation needed to share a persistence database across applications. Now when a particular host queries for expired timers, it will only receive those results for workflows that belong to its logical application and that it should be able to successfully load and process. Notice that this solution continues to support the notion of load balancing, as a Web application hosted on two different servers need only use the same connection string, including the application name, to be able to share the workflow state across processes.

Send your questions and comments to mmnet30@microsoft.com.

Matt Milner is an independent software consultant specializing in Microsoft technologies including .NET, Web services, Windows Workflow Foundation, Windows Communication Foundation, and BizTalk Server. As an instructor for Pluralsight, Matt teaches courses on Workflow, BizTalk Server, and Windows Communication Foundation. Matt lives in Minnesota with his wife, Kristen, and his two sons. Contact Matt via his blog at mattmilner.com/blog/archive/Default.aspx.