
Introducing System.Transactions in the .NET Framework 2.0

 

Juval Lowy

December 2005

Applies to:
   Microsoft .NET Framework 2.0

Summary: Build robust, high-performance enterprise applications with System.Transactions, an innovative and practical architecture that is new in the .NET Framework 2.0. (35 printed pages)

Contents

Introduction
.NET 1.x Transaction Programming Models
Transaction Management in the .NET Framework Version 2.0
Working with System.Transactions
Advanced Topics
Conclusion

Introduction

Developers on the Microsoft Windows platform traditionally choose between two transactional programming models: explicit transaction management or declarative transaction flow and management. Both these programming models have their advantages and disadvantages and neither one is superior to the other in every respect. Version 2.0 of the .NET Framework introduces a new transactional programming model available in the System.Transactions namespace. The new model allows developers to easily write transactional code with the lowest overhead possible while minimizing the amount of hand-crafted code and separating it from the application hosting environment and instance management. This whitepaper starts by stating the problem with the traditional programming models and the motivation for the new model. The whitepaper then presents the new programming model, its features and its capabilities, and some advanced features such as asynchronous work, events, security, concurrency management and interoperability.

.NET 1.x Transaction Programming Models

ADO.NET 1.0 offers an explicit transaction management programming model. The developer is responsible for explicitly starting and managing the transaction, as shown in Example 1.

Example 1. Explicit transaction management in ADO.NET

using System.Data;
using System.Data.SqlClient;

string connectionString = "...";
IDbConnection connection = new SqlConnection(connectionString);
connection.Open();
IDbCommand command = new SqlCommand();
command.Connection = connection;

IDbTransaction transaction;
transaction = connection.BeginTransaction(); //Enlisting database
command.Transaction = transaction;
try
{
   /* Interact with database here, then commit the transaction */
   transaction.Commit();
}
catch
{
   transaction.Rollback(); //Abort transaction
   throw;                  //Propagate the error to the caller
}
finally
{
   connection.Close();
}

You obtain an object representing the underlying database transaction by calling BeginTransaction() on the connection object. BeginTransaction() returns an implementation of the interface IDbTransaction used to manage the transaction. If all updates or other changes made to the database are consistent, simply call Commit() on the transaction object. If any error occurred, you need to abort the transaction by calling Rollback().

While the explicit programming model is straightforward, it is best suited to a single object interacting with a single database (or a single transactional resource), as shown in Figure 1.

Figure 1. Single object / single resource transaction

The explicit model is specifically not well suited to transactions that involve multiple objects or multiple resources, because of the problem of transaction coordination. Consider, for example, an application in which multiple objects interact with each other and with a resource such as a database, as shown in Figure 2. Which of the participating objects is responsible for beginning the transaction and enlisting the resource? If all of them do so, you end up with multiple transactions. Furthermore, which of the objects is responsible for committing or rolling back the transaction? How would one object know how the rest of the objects feel about the transaction? How would the object managing the transaction inform the other objects of the transaction's outcome? The objects could also be deployed in different processes, or even across different machines, and issues such as network or machine crashes introduce yet more complexity. One possible solution is to couple the objects by adding transaction-coordination logic, but such an approach is fragile and would not withstand even minor changes to the business flow or to the number of participating objects. In addition, the objects in Figure 2 could have been developed by different vendors, which would preclude any such coordination.

Figure 2. Multiple objects / single resource transaction

The situation gets significantly more complex when multiple resources are involved (as shown in Figure 3).

Figure 3. Multiple objects accessing multiple resources

On top of the challenges involving multiple objects in a single transaction, the introduction of multiple resources introduces additional management requirements. A transaction that spans multiple resources must deliver all-or-nothing semantics: either all of the resources must commit their updates on behalf of the transaction, or none of them should. Coordination of updates across multiple participants requires a distributed transaction.

A distributed transaction coordinates updates that might include application code running in multiple objects, or durable data managed by multiple resource managers, or both multiple objects and multiple resources. It is impractical for applications to independently manage all the potential error cases of a distributed transaction. For a distributed transaction, you need to rely on a dedicated transaction manager.

In Windows, the Distributed Transaction Coordinator (DTC) system service provides transaction management capabilities to applications. DTC manages transactions across objects or components, across processes and machines, and across multiple resource managers. DTC implements a two-phase commit protocol, and can manage transactional resources such as Oracle or IBM DB2 databases running on any platform. DTC can also manage Windows-native transactional resource managers, such as SQL Server and MSMQ, using a protocol called OLE Transactions (OleTx). While it is possible to program directly against the DTC, in applications that use the .NET Framework version 1.x the most common and easiest way to use DTC transactions is through Enterprise Services, available via the System.EnterpriseServices namespace. Example 2 shows the use of an Enterprise Services transaction.

Example 2. Declarative transaction management via Enterprise Services

using System.EnterpriseServices;

[Transaction]
public class MyComponent : ServicedComponent
{
   [AutoComplete]
   public void MyMethod()
   {
      /*Interact with other serviced components
      and resource managers */
   }
}

.NET Enterprise Services offer a declarative programming model: any class that derives from the abstract class ServicedComponent can use the Transaction attribute. The attribute ensures that when any method of the class is called, that method executes inside a transactional context. A context is the innermost execution scope of a serviced component. .NET intercepts calls coming into the context and starts a transaction on behalf of the object. The application code need not explicitly enlist transactional resources in the transaction; this is done automatically by the resource manager. Resources that can automatically enlist in transactions are called transactional resource managers; these include most of the popular commercial databases and durable resources (such as Microsoft Message Queue or IBM MQSeries).

For a ServicedComponent that uses the Transaction attribute, the requirements on the application are minimal: all the object has to do is inform .NET whether it should commit or abort the transaction. It can do this either explicitly, using the methods of the ContextUtil helper class, or declaratively, via the AutoComplete method attribute. In a method marked with the AutoComplete attribute, if no exception is thrown, the application implicitly requests that the transaction commit. (Whether the transaction actually commits depends on the other participants and resources involved in the transaction.) On the other hand, if an exception occurs, the application requests that the transaction be aborted. Because the commit outcome of a transaction requires unanimity, this means the transaction will actually abort.
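To illustrate the explicit alternative to AutoComplete, the component from Example 2 can vote on the transaction itself via ContextUtil. The following is a minimal sketch only (the class name MyExplicitComponent is invented for the example, and running it requires a Windows machine with COM+ registration):

```csharp
using System;
using System.EnterpriseServices;

[Transaction]
public class MyExplicitComponent : ServicedComponent
{
   // No [AutoComplete] here: the method votes explicitly via ContextUtil
   public void MyMethod()
   {
      try
      {
         /* Interact with other serviced components and resource managers */
         ContextUtil.SetComplete(); //Vote to commit the transaction
      }
      catch
      {
         ContextUtil.SetAbort();    //Vote to abort the transaction
         throw;
      }
   }
}
```

The explicit style is useful when the commit decision depends on application logic rather than simply on whether an exception escaped the method.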

While the declarative model offers significant productivity benefits, it is not without flaws:

  • Forcing inheritance from ServicedComponent occupies the precious place of a base class normally reserved for internal application modeling.
  • Use of an Enterprise Services transaction always implies the use of a distributed DTC transaction, even when a single resource and a single object are involved. The two-phase commit protocol implies a cost, both at the transaction manager level and at the resource level since the resource has to keep logging its operations. The overhead could cause degradation in performance compared with explicit transaction management.
  • Implied with the use of Enterprise Services is the COM+ hosting model. In some cases developers may find this to be an unnecessary coupling, or an unnecessary complexity.
  • Enterprise Services transactions are tightly-coupled with Enterprise Services instance management strategies. All transactional objects are also just-in-time activated, and there are some issues when it comes to combining transactions with object pooling. While this coupling is well-appreciated in a scalable application, for all other applications it forces a state-aware programming model that most developers have difficulty with.
  • Enterprise Services transactions are always thread-safe—there is no way for multiple threads to participate in the same transaction. While this greatly simplifies transaction management especially in a multithreaded environment, in some edge cases it is a limitation.

In effect, .NET 1.0 and 1.1 equates the use of a non-distributed transaction with explicit transaction management, and equates the use of a distributed transaction with that of declarative transaction via Enterprise Services. There is no way of using a declarative transaction without using a DTC transaction, nor is there an easy way in managed code to perform explicit transaction management that utilizes the DTC. Choosing a programming model (explicit or declarative) invariably chooses a transaction manager as well, and vice-versa.

Transaction Management in the .NET Framework Version 2.0

To address the problems just described with both the explicit and the declarative transactional programming models, the .NET Framework Version 2.0 introduces a new, explicit transaction programming model, complemented by a new optimized transaction manager, called the Lightweight Transaction Manager (LTM). The new programming model allows explicit transaction demarcation, for distributed transactions as well as single-resource transactions, allowing greater flexibility than .NET v1.1. The LTM provides optimizations for transactions that involve only a single resource, allowing higher performance when possible. In addition to the new support for transaction demarcation, the .NET Framework Version 2.0 also introduces new support for building transactional resources themselves.

Developers get access to this capability through new classes and interfaces in the System.Transactions namespace. For example, to explicitly start a transaction, an application can instantiate a new TransactionScope. If the application code for that transaction runs inside a single app domain, involves at most a single durable resource, and that resource supports transaction promotion, then the LTM can coordinate the transaction. The distributed (OleTx) transaction manager is used instead when the transaction involves application code running in multiple app domains (including multi-process and multi-machine scenarios), when more than one durable resource is involved (even if all application code resides in a single app domain), or when the single transactional resource involved does not support transaction promotion. Application code need not concern itself with these optimizations; they just work.

Resources that can be transactionally managed in this way are called System.Transactions resource managers. As in Enterprise Services, a System.Transactions resource manager is a resource that can automatically enlist in an open transaction. Typically a resource does this via code in its client library that detects the current, or ambient, transaction scope.

Programming against a single, common transaction management namespace (System.Transactions) allows the transaction manager implementation to vary dynamically, without changing the application code. Transaction promotion is one example: in cases where an optimized transaction commit protocol can be used, it will be used (via the LTM); otherwise, the transaction is automatically promoted to the more general distributed transaction commit protocol.

For example, suppose that within the scope of a transaction, a single object interacts with a single SQL Server 2005 database. Any work performed on that instance of SQL Server can be managed by SQL Server's internal transactions, with the LTM acting only as a pass-through layer. This scenario provides optimal throughput and performance. If the application were to provide the transaction to another object in another app domain on the same machine, or to enlist a second durable resource manager, the transaction would automatically be promoted to a distributed transaction, and the involved participant, in this case SQL Server 2005, would be notified that the transaction had been promoted. Once promoted, the transaction remains in its elevated state until its completion, when a distributed two-phase commit protocol is used.

As another example, suppose an application initiates a transaction scope and then interacts with a single Oracle database. The System.Transactions runtime will automatically and transparently use a distributed transaction, because Oracle does not currently support transaction promotion. Even if the transaction ends after involving only a single object and a single instance of Oracle, a situation that would have allowed the use of the internal transaction management within the Oracle database, it will still be managed as a distributed transaction by the System.Transactions runtime. This is because System.Transactions cannot predict the future: the transaction manager must open a transaction at a durable resource manager before any work is performed on that resource manager, and at the time of the first operation on Oracle, the System.Transactions runtime cannot be certain that other resource managers will not be involved later in the transaction. Therefore, a distributed transaction must be employed for the initial work at Oracle. If the Oracle database provided support for promotion, then the LTM optimization could be used, as with SQL Server 2005.

A fundamental class in the System.Transactions namespace is Transaction. Transaction is used to enlist resources in the transaction, to abort the transaction, to set the isolation level, to obtain the transaction status and ID, and to clone the transaction. To commit the transaction, System.Transactions defines the CommittableTransaction class:

public interface ITransaction : IDisposable
{
   void Rollback();
}
[Serializable]
public class Transaction : ITransaction, ISerializable
{
   public void Rollback(); //Abort the transaction
   public static Transaction Current { get; set; }
   //Other members
}
[Serializable]
public sealed class CommittableTransaction : Transaction, IAsyncResult
{
   public void Commit();
   //Other members
}

The reason for two classes instead of one is discussed later on.

When using System.Transactions, applications should not directly use the transactional programming interfaces of resource managers, such as the T-SQL BEGIN TRANSACTION or COMMIT TRANSACTION verbs, or the MessageQueueTransaction object in the System.Messaging namespace when dealing with MSMQ. Those mechanisms bypass the transaction management handled by System.Transactions, and combining System.Transactions with these resource-manager "internal" transactions will lead to inconsistent results. As a rule, use System.Transactions in the general case, and use resource-manager internal transactions only in specific cases where you are certain the transaction will not span multiple resources and will not be composed into a larger transaction. Never mix the two.

System.Transactions defines a concept called an ambient transaction. The ambient transaction is the transaction that is present in the thread that the current application code is executing within. To obtain a reference to the ambient transaction call the static Current property of Transaction:

Transaction ambientTransaction = Transaction.Current;

If there is no ambient transaction, Current will return null.

The ambient transaction object is stored in the thread local storage (TLS). As a result, when the thread winds its way across multiple objects and methods, all objects and methods can access their ambient transaction.
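The following short console program, a sketch written for this article (the class and helper names are invented), illustrates how the ambient transaction appears inside a scope and is available to any method the thread calls, without parameter passing:

```csharp
using System;
using System.Transactions;

class AmbientDemo
{
   static void Main()
   {
      // Outside any scope there is no ambient transaction
      Console.WriteLine(Transaction.Current == null);     // True

      using (TransactionScope scope = new TransactionScope())
      {
         // Inside the scope the ambient transaction is set
         Console.WriteLine(Transaction.Current != null);  // True
         SomeHelper();                                    // sees the same transaction
         scope.Complete();
      }

      // Dispose() restores the previous (null) ambient transaction
      Console.WriteLine(Transaction.Current == null);     // True
   }

   static void SomeHelper()
   {
      // Same thread, same TLS slot: no transaction parameter needed
      Transaction ambient = Transaction.Current;
      Console.WriteLine(ambient.TransactionInformation.Status); // Active
   }
}
```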

Transaction Promotion

Unless specified otherwise, every System.Transactions transaction in .NET 2.0 starts as a transaction managed by the LTM. As long as at most a single durable resource manager is involved, there is nothing wrong with letting the underlying resource (such as Microsoft SQL Server 2005) manage the transaction. In such a case, the LTM does not need to actually manage the transaction at all; its role is reduced to monitoring the transaction for a need for promotion. This is exactly what the LTM does: it functions as an adapter for the underlying resource, converting calls on its managed transaction into calls on the underlying resource. If, on the other hand, promotion is required, the LTM must inform the resource manager to relinquish sole control over the transaction. The LTM then provides the resource manager with a distributed transaction identifier, and the resource manager then understands that for any work performed thus far, and for any future work performed on behalf of that transaction, the commit or rollback will be coordinated by the external distributed transaction coordinator. To support this interaction, the resource manager needs to implement the IPromotableSinglePhaseNotification interface, defined as:

public interface IPromotableSinglePhaseNotification
{
   void Initialize();
   Transaction Promote();
   void Rollback(SinglePhaseEnlistment singlePhaseEnlistment);
   void SinglePhaseCommit(SinglePhaseEnlistment singlePhaseEnlistment);
}

When an LTM transaction accesses a resource manager, the resource manager client library detects the ambient transaction, and enlists itself into the transaction. The LTM queries the resource manager for its implementation of IPromotableSinglePhaseNotification. Calls to commit or abort the LTM's transaction are converted to calls on IPromotableSinglePhaseNotification (SinglePhaseCommit() and Rollback() respectively).

The catch here is that LTM can only be used when the resource manager supports promotion. Consider the scenario in which, within the scope of a transaction, an application interacts only with an Oracle database. This is a good candidate for a one-participant optimization. However, the transaction manager (System.Transactions) has no way of knowing a priori that the application will not also include other durable resources in the transaction. For this reason, LTM can be used only when System.Transactions is assured that the single durable resource can support promotion if need be, at some later point during the life of the transaction.

The LTM overhead is extremely low. Performance benchmarking done by Microsoft with SQL Server 2005, comparing the use of an LTM transaction to using a native transaction directly, found no statistically significant difference between the two methods.

System.Transactions makes it easy for developers of resource managers to support the interaction just described. To obtain a reference to the transaction the resource manager calls the static Current property of Transaction. To enlist in a transaction, the resource manager calls the Transaction object's EnlistDurable() method for a durable resource or the EnlistVolatile() method for a volatile resource (a resource that stores state only in memory):

[Serializable]
public class Transaction : ITransaction, ISerializable
{
   public Enlistment EnlistDurable(...);
   public Enlistment EnlistVolatile(...);
   //Other members
}
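As an illustrative sketch of volatile enlistment (the TransactionalCounter class is invented for this example, and it ignores thread safety), a minimal volatile resource manager can keep its state in memory, enlist itself in the ambient transaction via EnlistVolatile(), and apply or discard its pending work when notified of the transaction's outcome through IEnlistmentNotification:

```csharp
using System;
using System.Transactions;

// Hypothetical volatile resource manager: all state lives in memory
class TransactionalCounter : IEnlistmentNotification
{
   public int Value;  //Committed state
   int pending;       //Work done inside the current transaction
   bool enlisted;

   public void Increment()
   {
      // Auto-enlist in the ambient transaction, as a client library would
      if (!enlisted && Transaction.Current != null)
      {
         Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
         enlisted = true;
      }
      pending++;
   }

   public void Prepare(PreparingEnlistment e) { e.Prepared(); }
   public void Commit(Enlistment e)
   {
      Value += pending; pending = 0; enlisted = false; e.Done();
   }
   public void Rollback(Enlistment e)
   {
      pending = 0; enlisted = false; e.Done();
   }
   public void InDoubt(Enlistment e) { e.Done(); }
}

class Demo
{
   static void Main()
   {
      TransactionalCounter counter = new TransactionalCounter();

      using (TransactionScope scope = new TransactionScope())
      {
         counter.Increment();
         counter.Increment();
         scope.Complete();               //Vote to commit
      }
      Console.WriteLine(counter.Value);  //Pending work committed

      using (TransactionScope scope = new TransactionScope())
      {
         counter.Increment();
         //No Complete(): the transaction aborts on Dispose()
      }
      Console.WriteLine(counter.Value);  //Pending work discarded
   }
}
```

Because the counter holds state only in memory, a volatile enlistment suffices; a durable resource, which must survive a crash, would use EnlistDurable() and recovery logic instead.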

When the LTM decides to promote the transaction to a distributed transaction, it simply calls the Promote() method of IPromotableSinglePhaseNotification. Internally the resource manager will convert the transaction from a local transaction to a distributed transaction. The net effect is as if the resource was enlisted from the very beginning in a distributed transaction.

Since implementing IPromotableSinglePhaseNotification is key to participating in a promotable transaction, current versions of transactional resource managers such as SQL Server 2000, MSMQ, Oracle Database or IBM DB2 cannot participate in an LTM transaction. When such resources are accessed by an LTM transaction, the transaction is automatically promoted to a distributed transaction, even if only one such resource is involved. Again, this is transparent to the application developer, although there is a performance cost, as discussed above. Microsoft is working with vendors to encourage their support for the LTM optimization in future releases of their resource managers.

Triggering Promotion

As stated above, every System.Transactions transaction in .NET 2.0 starts as a lightweight transaction managed by the LTM. There are two kinds of events that trigger promotion: the first is the enlistment into the transaction of a second durable resource manager. For example, suppose an application initiates a transaction and opens a connection to a SQL Server database. The transaction will be managed by LTM. Suppose then that the application opens a connection on an Oracle database, causing Oracle to enlist in the open transaction. At this point, the LTM will detect multiple durable resources involved in the transaction and will promote the transaction to a distributed transaction.

The second promotion trigger is the serialization and transmission of the transaction object across an app domain boundary. Transaction is a marshaled-by-value object, meaning that any attempt to pass it across an app domain boundary (even in the same process) results in serialization of the transaction object.

Applications can pass a transaction object explicitly, by invoking a remote method that takes a Transaction as a parameter. Applications can also serialize and transmit a transaction implicitly, by accessing a remote transactional ServicedComponent. Serializing the transaction implies promoting it: once the transaction crosses an app domain boundary you are in fact distributing it, and the LTM (which functions merely as a pass-through) is no longer adequate. The LTM transaction class uses custom serialization: in its handling of the serialization request it promotes the transaction, and in its handling of the deserialization request it enlists in a new OleTx transaction in the new app domain.

Working with System.Transactions

As described above, System.Transactions decouples the application programming model from transaction managers. Applications can manage transactions explicitly by utilizing the classes in the System.Transactions namespace. Applications can alternatively use declarative transactions, relying on the classes in System.EnterpriseServices.

Declarative Programming Model

The declarative transactional programming model from System.EnterpriseServices in .NET 1.x continues unchanged in .NET 2.0. The good news is that a ServicedComponent will enjoy the performance optimizations associated with System.Transactions and the LTM with no code changes. The Enterprise Services code presented in Example 2, if compiled and run under .NET 2.0, will automatically use the LTM when possible, and the more general distributed transaction protocol when required. This maintains the productivity and application-modeling advantages of Enterprise Services while providing a performance optimization where possible. In addition, this change will not affect existing applications, because it is decoupled from the application logic itself.

Explicit Programming Model

The alternative to declarative transactions via .NET Enterprise Services is explicit transaction management via System.Transactions. The most common and easiest way to do so is to use the TransactionScope class:

public class TransactionScope : IDisposable
{
   public void Complete();
   public void Dispose();
   public TransactionScope();
   public TransactionScope(Transaction transactionToUse);
   public TransactionScope(TransactionScopeOption scopeOption);
   public TransactionScope(TransactionScopeOption scopeOption,
                           TimeSpan scopeTimeout);
   //Additional constructors
}

As the name implies, the TransactionScope class is used to create and manage a transactional scope, as demonstrated in Example 3. Internally in its constructor, the TransactionScope object creates a transaction (LTM by default), and assigns it as the ambient transaction by setting the Current property of the Transaction class. TransactionScope is a disposable object—the transaction will end once the Dispose() method is called (the end of the using statement in Example 3).

Example 3. Using the TransactionScope class.

using(TransactionScope scope = new TransactionScope())
{
   /* Perform transactional work here */
   //No errors-commit transaction
   scope.Complete();
}

The Dispose() method also restores the ambient transaction to its original state, null in the case of Example 3.

If the code inside the transactional scope (typically inside the using statement) takes a long time to complete, it may be indicative of a transactional deadlock. To address that, the transaction automatically aborts if it executes for longer than a pre-configured timeout. The default timeout is 60 seconds, but applications can specify a different timeout using one of the alternative TransactionScope constructors. Administrators can also modify the default timeout via a configuration file, so the timeout is configurable both programmatically and administratively.
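For example, a scope can be given an explicit 30-second timeout via the constructor that takes a TimeSpan (the surrounding program is a sketch written for this article):

```csharp
using System;
using System.Transactions;

class TimeoutDemo
{
   static void Main()
   {
      // Override the default 60-second timeout for this scope only
      TimeSpan timeout = TimeSpan.FromSeconds(30);
      using (TransactionScope scope = new TransactionScope(
                TransactionScopeOption.Required, timeout))
      {
         /* Transactional work must finish within 30 seconds,
            or the transaction aborts automatically */
         scope.Complete();
      }
      Console.WriteLine("Committed");
   }
}
```

Administratively, the default can be changed in a configuration file via the timeout attribute of the defaultSettings element in the system.transactions section.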

Finally, if the TransactionScope object is not disposed of (for example, if it is not used inside a using statement), it becomes garbage once the transaction timeout expires and the transaction is aborted.

The TransactionScope object has no way of knowing whether the transaction should commit or abort, yet the main objective of TransactionScope is to shield developers from the need to interact with the transaction directly. To address this, every TransactionScope object has a consistency bit, which is set to false by default. You can set the consistency bit to true by calling the Complete() method. Note that you can call Complete() only once; subsequent calls raise an InvalidOperationException. This is deliberate, to encourage developers to have no transactional code after the call to Complete().
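A small sketch demonstrates the single-call rule (the class name is invented for the example):

```csharp
using System;
using System.Transactions;

class CompleteOnceDemo
{
   static void Main()
   {
      using (TransactionScope scope = new TransactionScope())
      {
         scope.Complete();        //First call: sets the consistency bit
         try
         {
            scope.Complete();     //Second call is rejected
         }
         catch (InvalidOperationException)
         {
            Console.WriteLine("Complete() may only be called once");
         }
      }
   }
}
```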

If the transaction ends (due to calling Dispose() or garbage collection) and the consistency bit is set to false, the transaction aborts. For example, the following scope object will always roll back its transaction, because the consistency bit is never changed from its default value:

using(TransactionScope scope = new TransactionScope())
{
}

On the other hand, if you do call Complete() and the transaction ends with the consistency bit set to true, as in Example 3, the transaction will commit, assuming all transactional resource managers can commit. After calling Complete() you cannot access the ambient transaction; trying to do so results in an InvalidOperationException. You can access the ambient transaction (via Transaction.Current) again once the scope object on which you called Complete() has been disposed. Keep in mind that even when the application calls Complete(), the transaction may yet abort. In that case, Dispose() throws a TransactionAbortedException. You can catch and handle that exception, as shown in Example 4.

Example 4. TransactionScope and Error handling

try
{
   using(TransactionScope scope = new TransactionScope())
   {
      /* Perform transactional work here */
      //No errors-commit transaction
      scope.Complete();
   }
}
catch(TransactionAbortedException e)
{
   // handle abort condition
}
catch //Any other exception took place
{
   Trace.WriteLine("Cannot complete transaction");
   throw;
}

Transaction Flow Management

Transaction scopes can nest both directly and indirectly. A direct scope nesting is simply one scope nested inside another, as shown in Example 5.

Example 5. Direct scope nesting

using(TransactionScope scope1 = new TransactionScope())
{
   using(TransactionScope scope2 = new TransactionScope())
   {
      scope2.Complete();
   }
   scope1.Complete();
}

An indirect scope nesting occurs when calling a method that uses a TransactionScope from within a method that uses its own scope, as is the case with the RootMethod() in Example 6.

Example 6. Indirect scope nesting

void RootMethod()
{
   using(TransactionScope scope = new TransactionScope())
   {
      /* Perform transactional work here */
      SomeMethod();
      scope.Complete();
   }
}

void SomeMethod()
{
   using(TransactionScope scope = new TransactionScope())
   {
      /* Perform transactional work here */
      scope.Complete();
   }
}

You can have multiple levels of scope nesting, involving both direct and indirect nesting. The topmost scope is referred to as the root scope. The questions, of course, are: What is the relationship between the root scope and the nested scopes? How does nesting a scope affect the ambient transaction? Do all the scopes participate in the same transaction? Does voting on the transaction in a nested scope affect its containing scope?

There are options. The TransactionScope class provides several overloaded constructors that accept an enumeration of the type TransactionScopeOption, defined as:

public enum TransactionScopeOption
{
   Required,
   RequiresNew,
   Suppress
}

The value of TransactionScopeOption lets you control whether the scope takes part in a transaction, and if so, whether it will join the ambient transaction or become the root scope of a new transaction.

The default value for the scope option is TransactionScopeOption.Required. This is the value used when you use one of the constructors which do not accept a TransactionScopeOption parameter. Here is how an application can explicitly specify the value of the TransactionScopeOption in the scope's constructor:

using (TransactionScope scope = new 
TransactionScope(TransactionScopeOption.Required))
{...}

The TransactionScope object determines which transaction to belong to when it is constructed. Once determined, the scope will always belong to that transaction. TransactionScope bases its decision on two factors: whether an ambient transaction is present and the value of the TransactionScopeOption parameter.

A TransactionScope object has three options:

  • Join the ambient transaction.
  • Be the root scope of a new transaction; that is, start a new transaction and have that transaction become the new ambient transaction inside its own scope.
  • Not take part in a transaction at all.

If the scope is configured with TransactionScopeOption.Required, and an ambient transaction is present, the scope will join that transaction. If on the other hand there is no ambient transaction, then the scope will create a new transaction, and become the root scope.

If the scope is configured with TransactionScopeOption.RequiresNew then it will always be the root scope. It will start a new transaction, and its transaction will be the new ambient transaction inside the scope.

If the scope is configured with TransactionScopeOption.Suppress it will never be part of a transaction, regardless of whether an ambient transaction is present. A scope configured with TransactionScopeOption.Suppress will always have null as its ambient transaction.

Table 1 summarizes the transaction allocation decision for each combination of TransactionScopeOption value and ambient transaction.

Table 1. TransactionScopeOption decision truth table

TransactionScopeOption   Ambient Transaction   The scope will take part in
Required                 No                    New transaction (will be the root)
RequiresNew              No                    New transaction (will be the root)
Suppress                 No                    No transaction
Required                 Yes                   Ambient transaction
RequiresNew              Yes                   New transaction (will be the root)
Suppress                 Yes                   No transaction

When a TransactionScope object joins an existing ambient transaction, disposing of the scope object does not end the transaction. If the ambient transaction was created by a root scope, only when the root scope is disposed of will the transaction end. If the ambient transaction was created manually, it will end when it is committed or aborted by its creator, or when it times out.

Example 7 demonstrates a TransactionScope object that creates three other scope objects, each configured with a different TransactionScopeOption value, and Figure 4 shows the resulting transactions graphically.

Example 7. Transaction flow across scopes

using(TransactionScope scope1 = new TransactionScope())
//Default is Required
{
   using(TransactionScope scope2 = new 
TransactionScope(TransactionScopeOption.Required))
   {...}

   using(TransactionScope scope3 = new 
TransactionScope(TransactionScopeOption.RequiresNew))
   {...}

   using(TransactionScope scope4 = new 
TransactionScope(TransactionScopeOption.Suppress))
   {...}

   ...
}

Figure 4. Transaction flow across transaction scopes

In Example 7, a code block with no ambient transaction creates a new TransactionScope (scope1) with TransactionScopeOption.Required. scope1 becomes a root scope: it creates a new transaction (Transaction A) and makes Transaction A the ambient transaction. scope1 then goes on to create three more scope objects, each with a different TransactionScopeOption value. For example, scope2 is configured for required support, and since there is an ambient transaction, it joins Transaction A. Note that scope3 is the root scope of a new transaction, Transaction B, and that scope4 has no transaction.

More on TransactionScopeOption

Although the default and most commonly used value of TransactionScopeOption is TransactionScopeOption.Required, each of the other values has its use.

TransactionScopeOption.Suppress is useful when the operations performed by the code section are nice to have but should not abort the ambient transaction if they fail. TransactionScopeOption.Suppress allows you to have a non-transactional code section inside a transactional scope, as shown in Example 8.

Example 8. Using TransactionScopeOption.Suppress

using(TransactionScope scope1 = new TransactionScope())
{
   try
   {
      //Start of non-transactional section
      using(TransactionScope scope2 = new 
 TransactionScope(TransactionScopeOption.Suppress))
      {
         //Do non-transactional work here
      }
      //Restores ambient transaction here
   }
   catch
   {}
   //Rest of scope1
}

Another example where TransactionScopeOption.Suppress is useful is when you want to provide some custom behavior, and you need to perform your own programmatic transaction support or manually enlist resources.

That said, you should be careful when mixing transactional scopes with non-transactional scopes, as doing so can jeopardize isolation and consistency. The non-transactional scope may encounter errors that do not affect the transaction's outcome (threatening consistency), or it may act on information that has not yet been committed (threatening isolation).

When TransactionScopeOption.Required is used, the code inside the TransactionScope must not behave differently when it is the root than when it is merely joining the ambient transaction. It should operate identically in both cases; there is no way your code can tell the difference anyway.

Configuring the scope with TransactionScopeOption.RequiresNew is useful when you want to perform transactional work outside the scope of the ambient transaction: for example, when you want to perform some logging or audit operations, or when you want to publish events to subscribers, regardless of whether the ambient transaction commits or aborts.

You should be extremely careful when using the TransactionScopeOption.RequiresNew value, and verify that the two transactions (the ambient transaction and the one created for your scope) do not introduce an inconsistency if one aborts and the other commits.

Voting Inside a Nested Scope

It is important to realize that although a nested scope can join the ambient transaction of its parent scope, the two scope objects have two distinct consistency bits. Calling Complete() in the nested scope has no effect on the parent scope:

using(TransactionScope scope1 = new TransactionScope())
{
   using(TransactionScope scope2 = new TransactionScope())
   {
      scope2.Complete();
   }
   //Consistency bit of scope1 is still false
}

Only if all the scopes, from the root scope down to the last nested scope, vote to commit the transaction will the transaction commit.

Setting the TransactionScope Timeout

Some of the overloaded constructors of TransactionScope accept a value of type TimeSpan, used to control the timeout of the transaction, for example:

public TransactionScope(TransactionScopeOption scopeOption, TimeSpan  scopeTimeout);

To specify a timeout different from the default of 60 seconds, simply pass in the desired value:

TimeSpan timeout = TimeSpan.FromSeconds(30);
using(TransactionScope scope = new 
TransactionScope(TransactionScopeOption.Required, timeout))
{...}

A timeout set to zero means an infinite timeout. An infinite timeout is useful mostly for debugging, when you want to isolate a problem in your business logic by stepping through your code and you do not want the transaction you are debugging to time out while you figure out the problem. Be extremely careful with infinite timeouts in all other cases, because they mean there are no safeguards against transactional deadlocks.

You typically set the TransactionScope timeout to values other than default in two cases. The first is during development, when you want to test the way your application handles aborted transactions. By setting the TransactionScope timeout to a small value (such as one millisecond), you cause your transaction to fail and can thus observe your error handling code. The second case in which you set the TransactionScope transaction timeout to be less than the default timeout is when you want to more tightly constrain the period over which locks may be held on behalf of open transactions. In a highly concurrent system, application architects may want to specify that no single transaction should be open for more than, say, 5 seconds, to minimize lock contention.

In a nested TransactionScope hierarchy, the effective timeout is the most restrictive of all the timeouts: the smallest timeout of all the scopes in the hierarchy takes precedence.
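
For example, in the sketch below (with the transactional work elided), the nested scope joins the ambient transaction but specifies a shorter timeout, so the shorter value governs the transaction:

```csharp
using(TransactionScope scope1 = new TransactionScope(
   TransactionScopeOption.Required, TimeSpan.FromSeconds(30)))
{
   //The nested scope joins the ambient transaction, but its
   //smaller timeout takes precedence: the transaction aborts
   //if it executes for more than 10 seconds
   using(TransactionScope scope2 = new TransactionScope(
      TransactionScopeOption.Required, TimeSpan.FromSeconds(10)))
   {
      /* Perform transactional work here */
      scope2.Complete();
   }
   scope1.Complete();
}
```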

Setting the TransactionScope Isolation Level

Some of the overloaded constructors of TransactionScope accept a structure of the type TransactionOptions, defined as:

public struct TransactionOptions
{
   public IsolationLevel IsolationLevel{get;set;}
   public TimeSpan Timeout{get; set;}
   //Other members
}

While you can use the TransactionOptions.Timeout property to specify a timeout, the main use for TransactionOptions is specifying the isolation level. By default, the transaction executes with the isolation level set to Serializable. However, you can assign to the TransactionOptions.IsolationLevel property a value of the enum type IsolationLevel, defined as:

public enum IsolationLevel
{
   ReadUncommitted,
   ReadCommitted,
   RepeatableRead,
   Serializable,
   Unspecified,
   Chaos,   //No isolation whatsoever
   Snapshot //Special form of read committed supported by SQL Server 2005
}

Example 9 shows how to specify an isolation level.

Example 9. Specifying an isolation level

TransactionOptions options = new TransactionOptions();
options.IsolationLevel = IsolationLevel.ReadCommitted;
options.Timeout = TransactionManager.DefaultTimeout;

using(TransactionScope scope = new 
TransactionScope(TransactionScopeOption.Required, options))
{...}

Selecting an isolation level other than Serializable is common for read-intensive systems, and it requires a solid understanding of transaction processing theory, the semantics of the transaction itself, the concurrency issues involved, and the consequences for system consistency. In addition, not all resource managers support all isolation levels, and they may elect to take part in the transaction at a higher level than the one configured. Every isolation level besides Serializable is susceptible to some sort of inconsistency resulting from other transactions accessing the same information. The difference between the four isolation levels is in the way they use read and write locks. A lock can be held only while the transaction accesses the data in the resource manager, or it can be held until the transaction is committed or aborted. The former is better for throughput, the latter for consistency. The two kinds of locks and the two kinds of operations (read/write) yield four basic isolation levels. See a transaction-processing textbook for a comprehensive description of isolation levels.

When using nested TransactionScope objects, all nested scopes must be configured to use exactly the same isolation level if they want to join the ambient transaction. If a nested TransactionScope object tries to join the ambient transaction yet it specifies a different isolation level, an ArgumentException is thrown.
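
As a sketch of that failure mode: the outer scope below uses the default Serializable isolation, so the nested scope, which tries to join the ambient transaction while specifying ReadCommitted, throws an ArgumentException from its constructor:

```csharp
using(TransactionScope scope1 = new TransactionScope())
{
   TransactionOptions options = new TransactionOptions();
   options.IsolationLevel = IsolationLevel.ReadCommitted;

   //Throws ArgumentException: the ambient transaction uses
   //Serializable, and a joining scope must specify the same level
   using(TransactionScope scope2 = new TransactionScope(
      TransactionScopeOption.Required, options))
   {
      scope2.Complete();
   }
   scope1.Complete();
}
```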

TransactionScope Benefits

The TransactionScope object offers clear advantages and benefits compared with the programming model of Example 1:

  • The code inside the transactional scope is not only transactional, it is also promotable. The transaction starts with the LTM and System.Transactions will promote it as required, according to the nature of its interaction with the resources or remote objects.
  • The scope is independent of the application object model—any piece of code can use the TransactionScope and thus become transactional. There is no need for a special base class or attributes.
  • There is no need to enlist resources explicitly with the transaction. Any System.Transactions resource manager will detect the ambient transaction created by the scope and automatically enlist.
  • Overall, it is a simple and intuitive programming model even for the more complex scenarios that involve transaction flow and nesting.

Advanced Topics

The TransactionScope class wraps the underlying System.Transactions transaction managers and the committable transaction object. Everything you have seen so far using the TransactionScope class can also be achieved by working with these objects directly. Example 10 demonstrates this point.

Example 10. Requiring a transaction manually

Transaction oldAmbient = Transaction.Current;
CommittableTransaction committableTransaction;
committableTransaction = oldAmbient as CommittableTransaction;
if(committableTransaction == null)
{
   committableTransaction = new CommittableTransaction();
   Transaction.Current = committableTransaction;
}

try
{
   /* Perform transactional work here */
   //No errors-commit transaction
   committableTransaction.Commit();
}
finally
{
   committableTransaction.Dispose();
   //Restore the ambient transaction
   Transaction.Current = oldAmbient;
}

Example 10 is the equivalent of Example 3, which uses a TransactionScope set to TransactionScopeOption.Required. Example 10 starts by obtaining a reference to the ambient transaction object via the static Current property of the Transaction class. Current is of the type Transaction, and while Transaction has many essential methods, it does not have a Commit() method.

This is by design, so that you can pass the transaction object (or clones of it) to other parties (potentially on other threads) and have them vote on the transaction, while you reserve the act of committing the transaction to yourself. To commit a transaction, you need to use the CommittableTransaction class mentioned earlier, listed in Example 11.

Example 11. The CommittableTransaction class

[Serializable]
public sealed class CommittableTransaction : Transaction, IAsyncResult
{
   public CommittableTransaction();
   public CommittableTransaction(TimeSpan timeout);
   public CommittableTransaction(TransactionOptions transactionOptions);

   public void Commit();
   public IAsyncResult BeginCommit(AsyncCallback asyncCallback, object asyncState);
   public void EndCommit(IAsyncResult asyncResult);
}

Instantiating a CommittableTransaction does not set the ambient transaction. If you want the newly created transaction to become the ambient transaction, the application must assign to Transaction.Current:

Transaction.Current = committableTransaction;

If you explicitly assign to Transaction.Current, you should save the prior value, and restore it when your application is finished using the CommittableTransaction object. Often this can be done within a finally clause.

While Example 10 helps to demystify how TransactionScope actually works, there is one particularly interesting use for CommittableTransaction: asynchronous commit. Committing a transaction may take a while, because it can involve multiple database accesses, network latency, and so on. While you do care how long a transaction executes (especially when resolving deadlocks), in high-throughput applications you may want to finish the transactional work as soon as possible and have the actual commit carried out in the background. The BeginCommit() and EndCommit() methods of CommittableTransaction serve just such a purpose. You use these methods as in any other case of asynchronous invocation in .NET, as shown in Example 12. Calling BeginCommit() dispatches the commit request to a thread from the thread pool.

To be correct, the application must call EndCommit(). Certainly, to determine the outcome of the transaction, the application must call EndCommit(), but as with any asynchronous Begin/End pattern, the application must call EndCommit() even if it does not care about the outcome. If the transaction failed to commit (for whatever reason), EndCommit() will raise a transaction exception. EndCommit() blocks its caller until the transaction is committed (or aborted). The easiest way of committing asynchronously is to provide a callback method, to be called when committing is finished. You must call EndCommit() on the original committable transaction object used to issue the call. Fortunately, you can easily obtain that object by downcasting the IAsyncResult parameter of the callback method, since CommittableTransaction derives from IAsyncResult and it is the same object. Also worth mentioning is that until the commit is done, the transaction maintains its locks in the resource managers (according to its isolation level).

Example 12. Asynchronously commit a transaction

public void DoTransactionalWork()
{
   Transaction oldAmbient = Transaction.Current;
   CommittableTransaction committableTransaction = new 
CommittableTransaction();
   Transaction.Current = committableTransaction;

   try
   {
      /* Perform transactional work here */
      //No errors-commit transaction asynchronously.
      committableTransaction.BeginCommit(OnCommitted,null);
   }
   finally
   {
      //Restore the ambient transaction
      Transaction.Current = oldAmbient;
   }
}
void OnCommitted(IAsyncResult asyncResult)
{
   CommittableTransaction committableTransaction;
   committableTransaction = asyncResult as CommittableTransaction;
   Debug.Assert(committableTransaction != null);
   try
   {
      using(committableTransaction)
      {
          committableTransaction.EndCommit(asyncResult);
      }
   }
   catch(TransactionException e)
   {
      //Handle the failure to commit
   }
}

Transaction Events

The Transaction class provides a public event called TransactionCompleted, defined as:

public delegate void TransactionCompletedEventHandler(object sender,
TransactionEventArgs e);
public class TransactionEventArgs: EventArgs
{
   public TransactionEventArgs();
   public Transaction Transaction{get;}
}
[Serializable]
public class Transaction : ITransaction,ISerializable
{
   public event TransactionCompletedEventHandler TransactionCompleted;
   //Rest of the members
}

TransactionCompleted is raised after the transaction is completed (either committed or aborted). The event is of the delegate type TransactionCompletedEventHandler, which takes two parameters: sender is the transaction which just completed, and e is of the type TransactionEventArgs, which also provides access to the same transaction.

Other parties that want to be notified when the transaction completes can subscribe to the TransactionCompleted event, as shown in Example 13.

Example 13. Using the transaction completed event

public void DoTransactionalWork()
{
   using(TransactionScope scope = new TransactionScope())
   {
      Transaction transaction = Transaction.Current;
      transaction.TransactionCompleted += OnCompleted;
      /* Perform transactional work here */
      scope.Complete();
   }
}
void OnCompleted(object sender, TransactionEventArgs e)
{
   Debug.Assert(sender.GetType() == typeof(Transaction));
   Debug.Assert(Object.ReferenceEquals(sender,e.Transaction));
   Transaction transaction = e.Transaction;
   switch(transaction.TransactionInformation.Status)
   {
      case TransactionStatus.Aborted:
      {
         Trace.WriteLine("Transaction Aborted!");
      break;
      }
      case TransactionStatus.Committed:
      {
         Trace.WriteLine("Transaction Committed!");
      break;
      }
   }
}

While developers know when an LTM transaction starts (when the scope is constructed), your code may also want to know when that LTM transaction is promoted to a distributed transaction. The static class TransactionManager provides the static event DistributedTransactionStarted, defined as:

public delegate void TransactionStartedEventHandler(object sender,
TransactionEventArgs e);

public static class TransactionManager
{
   public static event TransactionStartedEventHandler 
DistributedTransactionStarted;
   //Rest of the members
}

The DistributedTransactionStarted event is raised whenever a distributed transaction starts. You can subscribe to both a distributed transaction's start and completion events, as shown in Example 14.

Example 14. Subscribing to the transaction start events

public void DoTransactionalWork()
{
   TransactionManager.DistributedTransactionStarted += 
OnDistributedStarted;

   using(TransactionScope scope = new TransactionScope())
   {
      Transaction transaction = Transaction.Current;
      transaction.TransactionCompleted += OnCompleted;
      /* Perform transactional work here */
      scope.Complete();
   }
}
void OnDistributedStarted(object sender,TransactionEventArgs e)
{...}
void OnCompleted(object sender,TransactionEventArgs e)
{...}

Be mindful of the work you do in the distributed transaction's start event handlers. The event handling methods should be of short duration, because the distributed transaction will not start until all the event subscribers have been notified.

Code-Access Security

An application that uses an LTM transaction can consume resources from at most a single durable resource, such as SQL Server 2005. This, however, is not the case with a distributed transaction, which can interact with multiple resources, potentially across the network. This opens the way for denial-of-service attacks by malicious code, as well as accidental excessive use of such resources. To prevent that, System.Transactions defines the DistributedTransaction security permission. Whenever a transaction is promoted from an LTM transaction to a distributed transaction, the code that triggered the promotion is verified to have the DistributedTransaction permission. Verification of the security permission is done like any other code-access security verification, using a stack walk that demands the DistributedTransaction permission from every caller up the stack. Note again that the security demand affects the code that triggered the promotion, not necessarily the code that created the LTM transaction in the first place (although that can certainly be the case if they are on the same call stack).

This permission demand is of particular importance for Smart Client applications deployed in a partial-trust environment, such as the LocalIntranet zone, that want to perform transactional work against multiple resources. None of the pre-defined partial-trust zones grant the DistributedTransaction permission. You will have to grant that permission using a custom code group, or manually list that permission in the application's ClickOnce deployment manifest. Another solution altogether is to introduce a middle tier between the client application and the resources, and have the middle tier encapsulate accessing these resources transactionally.

Concurrency Management and Cloning

Imagine a transactional client that creates a worker thread to perform work concurrently to the client. The client would like to compose the work of the worker thread into its own transaction, meaning, only if the work done on the worker thread is consistent, should the client's transaction commit. However, there are two problems with this scenario. The first is the classic problem of mixing transactions and multithreading—if concurrent work is allowed in a transaction, then there could be a situation where one thread tries to commit the transaction while another tries to abort it. Second, the ambient transaction of the client is stored in the TLS, and as a result, it will not propagate to the worker thread. System.Transactions has built-in support to address the issues involving transactional concurrent work.

The Transaction class provides the method DependentClone(), defined as:

public enum DependentCloneOption
{
   BlockCommitUntilComplete,
   RollbackIfNotComplete
}
[Serializable]
public class Transaction : ITransaction, ISerializable
{
   public DependentTransaction DependentClone(DependentCloneOption cloneOption);
   //Rest of the members
}

DependentClone() returns an instance of the sealed class DependentTransaction defined as:

[Serializable]
public sealed class DependentTransaction : Transaction
{
   public void Complete();
}

DependentTransaction derives from Transaction, and its sole purpose is to indicate (potentially to the creator of the transaction) that the work done with the cloned transaction is complete and is ready to be committed. Example 15 demonstrates cloning a dependent transaction and passing it to a worker thread.

Example 15. Using dependent clone by another thread

public class WorkerThread
{
   public void DoWork(DependentTransaction dependentTransaction)
   {
      Thread thread = new Thread(ThreadMethod);
      thread.Start(dependentTransaction);
   }
   public void ThreadMethod(object transaction)
   {
      DependentTransaction dependentTransaction;
      dependentTransaction = transaction as DependentTransaction;
      Debug.Assert(dependentTransaction != null);
      Transaction oldTransaction = Transaction.Current;
      try
      {
         Transaction.Current = dependentTransaction;
         /* Perform transactional work here */
         dependentTransaction.Complete();
      }
      finally
      {
         dependentTransaction.Dispose();
         Transaction.Current = oldTransaction;
      }
   }
}
//Client code
using(TransactionScope scope = new TransactionScope())
{
   Transaction currentTransaction = Transaction.Current;
   DependentTransaction dependentTransaction;
   dependentTransaction = currentTransaction.DependentClone(
      DependentCloneOption.BlockCommitUntilComplete);
   WorkerThread workerThread = new WorkerThread();
   workerThread.DoWork(dependentTransaction);
   /* Do some transactional work here, then: */
   scope.Complete();
}

The client code creates a transactional scope which also sets the ambient transaction. Although the ambient transaction is a committable transaction, you don't want to pass that transaction to the worker thread. Instead, the client clones the current (ambient) transaction by calling DependentClone() on the current transaction. DependentClone() creates a dependent-transaction object—the underlying returned object is DependentTransaction not CommittableTransaction. Note that the Transaction class also provides a raw Clone() method, which returns a true clone of the transaction, including CommittableTransaction (if applicable). Avoid passing that dangerous clone to worker threads for the reasons listed at the beginning of this section.

The class WorkerThread provides the ThreadMethod() that will execute on the new thread. The client starts a new thread passing the dependent transaction as the thread method parameter. The problem now is synchronization on completion—what if the client reaches the end of the transactional scope before the worker thread is done? How could the client try to commit its transaction in that case?

To address this, the transaction object keeps track of all the dependent clones it creates. The DependentClone() method accepts a parameter called cloneOption of the enum type DependentCloneOption. If cloneOption equals DependentCloneOption.BlockCommitUntilComplete, a client that tries to commit the transaction is blocked until all dependent transactions have completed. In Example 15, the client blocks when it tries to dispose of the transaction object at the end of the using statement. If cloneOption is DependentCloneOption.RollbackIfNotComplete, the client is not blocked when trying to commit. Instead, if there are still active dependent transactions when the client tries to commit, the transaction is aborted and a TransactionAbortedException is thrown. Worse, the worker thread is not notified, and it continues to work on the doomed transaction in vain. Unless you have a need for some custom manual synchronization, always set cloneOption to DependentCloneOption.BlockCommitUntilComplete.

Even with cloneOption set to DependentCloneOption.BlockCommitUntilComplete, there are a few additional concurrency issues you need to be aware of:

  • If the worker thread rolls back the transaction but the client tries to commit it, a TransactionAbortedException is thrown.
  • The worker thread can call Complete() only once; subsequent calls raise an InvalidOperationException.
  • Make sure to create a new dependent clone for each worker thread in the transaction. Never pass the same dependent clone to multiple threads, because only one of them can call Complete() on it.
  • If a worker thread spawns a new worker thread, make sure to create a dependent clone from the dependent clone and pass that to the new thread.

Interoperability

System.Transactions natively supports .NET Enterprise Services: any serviced component can take advantage of the new transaction managers and transaction promotion with no code change (see Example 2).

The interesting questions are what happens when a transactional serviced component creates a TransactionScope object, or when a transactional scope creates serviced components. If the TransactionScope object executes in an Enterprise Services transactional context, should it use the Enterprise Services transaction as the ambient transaction? Which Enterprise Services context should the TransactionScope object use? What issues arise from such mix-and-match scenarios? The Enterprise Services transaction programming model is coupled to the object lifecycle and state management, and combining that with transactional scopes that are not even object-based may lead to some complicated side effects.

System.Transactions defines three levels of interoperability between itself and Enterprise Services: None, Automatic, and Full. The enum EnterpriseServicesInteropOption defined as:

public enum EnterpriseServicesInteropOption
{
   Automatic,
   Full,
   None
}

allows you to specify the interoperability level. The TransactionScope class provides constructors that accept EnterpriseServicesInteropOption, for example:

public TransactionScope(
   TransactionScopeOption scopeOption,
   TransactionOptions transactionOptions,
   EnterpriseServicesInteropOption interopOption
);

EnterpriseServicesInteropOption.None, as the name implies, means that there is no interoperability between Enterprise Services contexts and transactional scopes. Such transactional scopes will completely ignore the transactional context of their creating client, and will use their own ambient transaction. The ambient transaction will be distinct from the Enterprise Services-managed transaction. As a result, you could have the transactional scope's transaction abort while the Enterprise Services transaction around it commits, as shown in Example 16.

Example 16. Using EnterpriseServicesInteropOption.None

[Transaction]
public class MyService : ServicedComponent
{
   [AutoComplete]
   public void DoSomething()
   {
      TransactionOptions options = new TransactionOptions();
      options.IsolationLevel = IsolationLevel.Serializable;
      options.Timeout = TransactionManager.DefaultTimeout;
      using(TransactionScope scope = new TransactionScope(
         TransactionScopeOption.Required,
         options,
         EnterpriseServicesInteropOption.None))
      {
         //No call to scope.Complete(), yet the COM+
         //transaction still can commit
      }
   }
}

EnterpriseServicesInteropOption.None eliminates any side effects resulting from mixing Enterprise Services transactions and ambient transactions, and as such it is the safest option. EnterpriseServicesInteropOption.None is the default used by TransactionScope with all constructors that do not accept an EnterpriseServicesInteropOption value.

If you do want to combine Enterprise Services transactions with your ambient transaction, you need to use either EnterpriseServicesInteropOption.Automatic or EnterpriseServicesInteropOption.Full. Both of these options rely on the ServiceDomain functionality, and therefore require Windows XP Service Pack 2 or Windows Server 2003. EnterpriseServicesInteropOption.Full will try to behave as much as possible like a serviced component would. If the TransactionScope object needs a transaction (either joining an existing ambient transaction or creating a new one), EnterpriseServicesInteropOption.Full will create a new Enterprise Services transactional context, flow into it any existing Enterprise Services transaction (or create a new Enterprise Services transaction), and the ambient transaction used will be the same transaction used by that Enterprise Services context. If the TransactionScope object does not require a transaction, the scope will be placed in the default Enterprise Services context.

As an example, consider the application that creates a TransactionScope, then interacts with a transactional resource manager. Suppose that the client library for this RM supports COM+ or System.EnterpriseServices transactions, but has not been updated to interrogate the Transaction.Current object provided by System.Transactions. In this case, creating the scope with EnterpriseServicesInteropOption.Full allows the resource to participate in the transaction.
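In that scenario, wrapping the call to the legacy resource manager in a scope created with EnterpriseServicesInteropOption.Full lets it enlist in the same transaction. A sketch, where legacyResource and its DoWork() method are hypothetical stand-ins for the RM's client library:

```csharp
TransactionOptions options = new TransactionOptions();
options.IsolationLevel = IsolationLevel.Serializable;

using(TransactionScope scope = new TransactionScope(
   TransactionScopeOption.Required,
   options,
   EnterpriseServicesInteropOption.Full))
{
   //Inside this scope, the legacy library sees the same transaction
   //via the Enterprise Services context that Transaction.Current exposes
   legacyResource.DoWork(); //hypothetical legacy RM call
   scope.Complete();
}
```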

EnterpriseServicesInteropOption.Automatic combines EnterpriseServicesInteropOption.None and EnterpriseServicesInteropOption.Full. EnterpriseServicesInteropOption.Automatic checks whether the scope is constructed in the default Enterprise Services context or not.

If the scope is constructed in the default Enterprise Services context, then there is no Enterprise Services transaction. EnterpriseServicesInteropOption.Automatic will behave like EnterpriseServicesInteropOption.None and will not create a new Enterprise Services context. If the scope is constructed in any Enterprise Services context besides the default context, then EnterpriseServicesInteropOption.Automatic will behave like EnterpriseServicesInteropOption.Full.

Tables 2 and 3 below illustrate the Enterprise Services context and the ambient transaction used by a TransactionScope object which requires a transaction.

Table 2. Enterprise Services contexts and a transactional scope that requires a transaction.

ES Context          | None                      | Automatic                        | Full
Default context     | Default context           | Default context                  | Create new transactional context
Non-default context | Maintain client's context | Create new transactional context | Create new transactional context

Table 3. Ambient transaction used by a transactional scope that requires a transaction.

ES Context          | None | Automatic | Full
Default context     | ST   | ST        | ES
Non-default context | ST   | ES        | ES

ST—Scope's ambient transaction is managed by System.Transactions, separate from any transactions with Enterprise Services contexts that may be present.

ES—Scope's ambient transaction is same as the Enterprise Services context's transaction.

Implementing a Resource Manager

System.Transactions provides new support for developers who wish to build transactional resource managers—any artifact whose actions are coordinated within the scope of a transaction.

A resource manager is typically exposed to an application via a library. Within that library, in order to be transactional, the RM code must interrogate the Current transaction, and enlist in the ambient transaction, if it exists. The RM may enlist as a durable or volatile participant. Durable implies that the RM manages durable state and supports failure recovery—for example recovery after a failure while one or more transactions were in doubt or not yet resolved. Volatile RMs manage volatile resources such as in-memory data structures, and need not perform recovery after application restart.

The RM should call EnlistVolatile or EnlistDurable on the transaction object, as appropriate. The enlisted RM must also support the IEnlistmentNotification interface:

public interface IEnlistmentNotification
{
   void Commit(Enlistment enlistment);
   void Prepare(PreparingEnlistment enlistment);
   void Rollback(Enlistment enlistment);
   void InDoubt(Enlistment enlistment);
}

Via this interface, the transaction manager notifies the resource manager of transaction lifecycle events. When the transaction reaches the prepare phase, the transaction manager calls the Prepare() method on the RM. A durable resource manager should log a prepare record during this phase. The record should contain all the information necessary to perform recovery, including the recovery information obtained from the enlistment (PreparingEnlistment.RecoveryInformation()). This recovery information must be passed to the transaction manager in the Reenlist() method during a subsequent recovery. The RM then votes on the transaction by calling either Prepared() or ForceRollback() on the enlistment.

If and when the transaction reaches the commit stage, the TM calls the Commit() method. The RM may use this callback to release locks and log records. At completion, the RM should call Done() on the enlistment to acknowledge receipt of the commit.

The InDoubt() method is invoked on a volatile RM when the transaction manager has issued a single-phase commit operation to a single durable resource, and the connection to the durable resource was lost before the transaction result was received. At that point, the transaction outcome cannot be safely determined. The implementation of this method should perform whatever recovery or containment operations are necessary, and then must call the Done method on the enlistment parameter when it has finished its work.

Finally, the transaction manager calls Rollback() when the transaction needs to roll back. RMs should undo their work, release locks, and then call Done on the enlistment object.
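Putting these notifications together, a minimal volatile resource manager might look like the following sketch. TransactionalValue is a hypothetical type that buffers a tentative value and publishes it only if the transaction commits:

```csharp
using System;
using System.Transactions;

public class TransactionalValue : IEnlistmentNotification
{
   int m_Value;        //Committed state
   int m_PendingValue; //Tentative state

   public void SetValue(int value)
   {
      m_PendingValue = value;
      if(Transaction.Current != null)
      {
         //Volatile enlistment: no recovery log required
         Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
      }
      else
      {
         m_Value = value; //No ambient transaction—apply immediately
      }
   }

   void IEnlistmentNotification.Prepare(PreparingEnlistment enlistment)
   {
      enlistment.Prepared(); //Vote to commit
   }
   void IEnlistmentNotification.Commit(Enlistment enlistment)
   {
      m_Value = m_PendingValue; //Publish the tentative state
      enlistment.Done();
   }
   void IEnlistmentNotification.Rollback(Enlistment enlistment)
   {
      m_PendingValue = m_Value; //Discard the tentative state
      enlistment.Done();
   }
   void IEnlistmentNotification.InDoubt(Enlistment enlistment)
   {
      enlistment.Done();
   }
}
```

A durable RM would follow the same pattern but use EnlistDurable(), log a prepare record in Prepare(), and support recovery via Reenlist().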

If the RM supports transaction promotion as described earlier, it should enlist by calling the EnlistPromotableSinglePhase method. In this case the RM must implement the IPromotableSinglePhaseNotification interface to receive notifications from the transaction manager.

Conclusion

System.Transactions is an innovative and practical architecture that is long overdue. It separates the application programming model from transaction management, it enables automatic promotions of transaction managers for optimized performance, it allows the construction of transactional resource managers, and it is an extensible architecture that will accommodate future transaction managers. Technologies such as Windows Communication Foundation (codenamed "Indigo") and platforms such as Windows Vista (codenamed "Longhorn") will rely on System.Transactions as the foundation for consistent transaction management, offering new transaction managers in a pluggable provider model. The first release of System.Transactions in the .NET Framework v2.0 is an important addition to your development arsenal in building robust, high-performance enterprise applications.

 

About the author

Juval Lowy is a software architect and the principal of IDesign, specializing in .NET architecture consulting and advanced .NET training. Juval is Microsoft's Regional Director for Silicon Valley, working with Microsoft on helping the industry adopt .NET. His latest book is Programming .NET Components, published by O'Reilly. Juval participates in the Microsoft internal design reviews for future versions of .NET. He has published numerous articles covering almost every aspect of .NET development, and is a frequent presenter at development conferences. Microsoft recognized Juval as a Software Legend—one of the world's top .NET experts and industry leaders.
