
Performance Comparison: Transaction Control

 

Priya Dhawan
Microsoft Developer Network

February 2002

Summary: Compares the relative performance of various transaction models, including database transactions, Microsoft ADO.NET manual transactions, and ADO.NET automatic transactions, in common application scenarios with a Microsoft SQL Server 2000 database. (11 printed pages)

Contents

Introduction
Architectural Options
Test Scenarios
Test Tools and Strategy
Machine Configuration
Performance Test Results
Conclusion

Introduction

Architectural choices for transaction control affect performance, scalability, and maintainability. This article focuses on the performance aspects of these choices by comparing relative performance of various transaction models, including database transactions, Microsoft® ADO.NET manual transactions, and ADO.NET automatic transactions in common application scenarios with a Microsoft SQL Server™ 2000 database.

Code examples showing the techniques compared here are available in a related article on Transaction Control.

Architectural Options

Transaction control models include database transactions, manual transactions, and automatic transactions. Although all of these models accomplish the same task of maintaining consistency across resources within a transaction boundary, each has its own advantages and disadvantages and is therefore suitable for different purposes.

Database Transaction

Database transactions are implemented in Transact-SQL code that wraps the required operations within BEGIN TRANSACTION and COMMIT/ROLLBACK TRANSACTION statements.
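
The following minimal sketch shows this model driven from ADO.NET. The table columns and the use of an IDENTITY OrderId are assumptions for illustration, not the article's actual schema; the point is that the entire transaction boundary lives in Transact-SQL, so the whole batch executes in a single round trip.

using System.Data;
using System.Data.SqlClient;

public class DatabaseTxn
{
    public static void InsertOrder(string connString, int customerId)
    {
        // The BEGIN/COMMIT/ROLLBACK statements run on the server; the client
        // sends one batch and gets back one result.
        const string sql = @"
BEGIN TRANSACTION
INSERT INTO Orders (CustomerId, OrderDate) VALUES (@CustomerId, GETDATE())
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    RETURN
END
INSERT INTO OrderDetails (OrderId, ProductId, Quantity)
VALUES (SCOPE_IDENTITY(), @ProductId, @Quantity)
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    RETURN
END
COMMIT TRANSACTION";

        using (SqlConnection conn = new SqlConnection(connString))
        {
            SqlCommand cmd = new SqlCommand(sql, conn);
            cmd.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;
            cmd.Parameters.Add("@ProductId", SqlDbType.Int).Value = 1;  // placeholder detail
            cmd.Parameters.Add("@Quantity", SqlDbType.Int).Value = 1;   // placeholder detail
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}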

Manual Transaction

A set of ADO.NET objects is used to manually control the transaction boundary with explicit instructions to begin and end the transaction.
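
A minimal sketch of this model follows; the schema and inline SQL are placeholders, not the article's actual test code. Note how the begin and end of the transaction are explicit client-side calls.

using System.Data.SqlClient;

public class ManualTxn
{
    public static void InsertOrder(string connString)
    {
        SqlConnection conn = new SqlConnection(connString);
        conn.Open();
        SqlTransaction txn = conn.BeginTransaction();  // explicit begin: an extra round trip
        try
        {
            SqlCommand cmd = conn.CreateCommand();
            cmd.Transaction = txn;                     // each command must be enlisted explicitly

            cmd.CommandText = "INSERT INTO Orders (CustomerId, OrderDate) VALUES (1, GETDATE())";
            cmd.ExecuteNonQuery();

            cmd.CommandText = "INSERT INTO OrderDetails (OrderId, ProductId, Quantity) VALUES (1, 1, 1)";
            cmd.ExecuteNonQuery();

            txn.Commit();                              // explicit end: another round trip
        }
        catch
        {
            txn.Rollback();                            // any failure undoes both inserts
            throw;
        }
        finally
        {
            conn.Close();
        }
    }
}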

Automatic Transaction

The .NET class is registered with Microsoft Windows® 2000 Component Services (COM+) to participate in a transaction. The class is marked with declarative attributes that specify how it participates in the transaction.
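
A minimal sketch of a transactional serviced component follows; the class name and method body are illustrative, and the assembly must also be strong-named and registered with Component Services (for example, with regsvcs.exe).

using System.EnterpriseServices;

[Transaction(TransactionOption.Required)]  // declarative attribute: COM+ supplies the transaction
public class OrderComponent : ServicedComponent
{
    public void InsertOrder()
    {
        try
        {
            // ... perform the Orders and OrderDetails insertions here ...
            ContextUtil.SetComplete();  // vote to commit
        }
        catch
        {
            ContextUtil.SetAbort();     // vote to abort; the DTC rolls back all enlisted work
            throw;
        }
    }
}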

Test Scenarios

To compare the transaction models, we used a method that inserted an order header and its details into the appropriate tables in the database. If any of the inserts failed, we rolled the transaction back. When all the inserts were successful, we committed the transaction.

To make the test as realistic as possible, the database was loaded with over 100,000 rows of Customer accounts, one million rows of Orders (10 orders per Customer), and over five million rows of Order details (5 details per Order). The data resides in a SQL Server 2000 database, and the SQL Server .NET data provider is used to connect to SQL Server. Some of the approaches compared here use the XML features of SQL Server 2000.

InsertOrder

The InsertOrder method accepts data representing an order with multiple details and makes the appropriate insertions into the Orders and OrderDetails tables in the database as part of the same transaction.

Test Tools and Strategy

In our tests, an ASPX Web page calls a .NET assembly containing the test code. Although we would get better absolute performance if we tested the transaction control techniques directly rather than behind a Web server, testing in a stateless environment is realistic for common application scenarios. Moreover, a number of test tools are available that can appropriately stress Web pages and provide multithreaded testing.

For the purpose of our tests, we used Microsoft Application Center Test (ACT), which is designed to stress test Web servers and analyze performance and scalability problems with Web applications, including ASP pages and the components they use. Refer to the ACT documentation for details on how to create and run tests. Application Center Test can simulate a large group of users by opening multiple connections to the server and rapidly sending HTTP requests. It also allows us to build realistic test scenarios in which the same method is called with a randomized set of parameter values. This is important because real users cannot be expected to call the same method with the same parameter values over and over again. Another useful feature is that Application Center Test records test results that provide the most important information about the performance of the Web application.

Applications may have to operate at different transaction isolation levels depending on their business needs. The isolation level is the degree to which one transaction must be isolated from other transactions. A higher isolation level ensures data consistency, but can significantly impact your application's ability to scale; lower isolation levels increase an application's scalability, but at the expense of data correctness. Microsoft SQL Server 2000 supports four isolation levels: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, and SERIALIZABLE. By default, it operates at an isolation level of READ COMMITTED. Refer to the Microsoft SQL Server 2000 documentation for details.

COM+ (version 1.0) running on Windows 2000 supports only SERIALIZABLE isolation, which is the highest degree of isolation, and such a high level of data protection comes at the cost of performance. (Note: COM+ 1.5, which ships with Microsoft Windows Server 2003, allows you to configure the isolation level of a transactional component.) For a fair comparison between the transaction models, we ran all the transactions at the SERIALIZABLE isolation level, as shown in the sketch below.
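
For illustration, this is how a manual transaction can request the same isolation level that COM+ 1.0 always uses (a sketch; the enclosing test code is omitted):

using System.Data;
using System.Data.SqlClient;

public class IsolationExample
{
    public static SqlTransaction BeginSerializable(SqlConnection conn)
    {
        // ADO.NET: request SERIALIZABLE when beginning the manual transaction.
        return conn.BeginTransaction(IsolationLevel.Serializable);
        // The Transact-SQL equivalent, e.g. at the top of a stored procedure:
        //   SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    }
}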

Machine Configuration

The following tables provide a brief summary of the test bed configuration used to perform the tests.

Table 1. Client Machine Configuration

Number of clients: 1
Machine/CPU: Dell Precision WorkStation 530 MT, 1694 MHz
# of CPUs: 1
Memory: 512 MB
Disk: 16.9 GB
Software:
  • Windows XP
  • Application Center Test

Table 2. Web Server Configuration

Number of servers: 1
Machine/CPU: Compaq Proliant, 400 MHz
# of CPUs: 4
Memory: 640 MB
Disk: 50 GB
Software:
  • Windows 2000 Advanced Server SP 2
  • Release version of .NET Framework

Table 3. Database Server Configuration

Number of servers: 1
Machine/CPU: American Megatrends Atlantis, 800 MHz
# of CPUs: 2
Memory: 1 GB
Disk: 28 GB
Software:
  • Windows 2000 Advanced Server SP 2
  • SQL Server Enterprise Edition SP 2

Performance Test Results

Throughput and latency are the key performance indicators. For a given amount of data being returned, throughput is the number of client requests processed within a certain unit of time, typically a second. Because peak throughput may occur at a response time that is unacceptable from a usability standpoint, we tracked latency, measured as response time using the report generated by Application Center Test for each test run, and capped the testing of a given method once the response time exceeded one second.

InsertOrder using OPENXML

In the first set of tests, the order and its details are passed from DataSet tables as XML to a Microsoft SQL Server 2000 stored procedure. Transact-SQL code in the stored procedure uses the OPENXML method to make the appropriate insertions into the Orders and OrderDetails tables in a single round trip to the database, as sketched below. The test is first run using one order with 10 details.
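
The following sketch shows the client side of this approach; the stored procedure name (InsertOrderXml) and its parameter are assumptions for illustration.

using System.Data;
using System.Data.SqlClient;

public class OpenXmlTxn
{
    public static void InsertOrder(SqlConnection conn, DataSet orderData)
    {
        SqlCommand cmd = new SqlCommand("InsertOrderXml", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        // Ship the order header and all its details as one XML document,
        // so the whole transaction needs a single round trip.
        cmd.Parameters.Add("@xmlOrder", SqlDbType.NText).Value = orderData.GetXml();
        cmd.ExecuteNonQuery();
        // Inside the procedure, sp_xml_preparedocument parses @xmlOrder, and
        // OPENXML rowsets feed INSERT ... SELECT statements for both tables,
        // all between BEGIN TRANSACTION and COMMIT/ROLLBACK.
    }
}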


Figure 1. InsertOrder_OpenXml(Order=1, Details=10)

Note

  • In the DatabaseTxn approach, the stored procedure wraps the operations with the BEGIN TRANSACTION and COMMIT/ROLLBACK TRANSACTION statements.
  • In the ManualTxn approach, the transaction is controlled using the ADO.NET SqlTransaction object.
  • In ManualTxn_COM+_IP and ManualTxn_COM+_OP, the transaction is controlled using the ADO.NET SqlTransaction object, but the assembly is configured with COM+ as a library and server package, respectively.
  • The .NET assembly containing the AutomaticTxn and AutoCompleteTxn implementations is registered with COM+. In AutomaticTxn, we explicitly vote to commit or abort the transaction, whereas in AutoCompleteTxn, the method automatically votes to commit when it returns successfully and to abort when it throws an exception.
  • In AutomaticTxn_IP and AutoCompleteTxn_IP, the COM+ application containing the assembly is a library-activated application, so that it runs in the process of the client that creates it.
  • In AutomaticTxn_OP and AutoCompleteTxn_OP, the COM+ application containing the assembly is a server-activated application, so that it runs in a surrogate process (dllhost.exe). The sketch after this list shows the assembly-level attributes that select between these two activation modes.
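
A minimal sketch of the assembly-level attributes that select the COM+ activation mode; the application name is illustrative.

using System.EnterpriseServices;

[assembly: ApplicationName("TxnPerfTest")]

// Library activation (the _IP cases): the components run inside the calling
// process (aspnet_wp.exe here), so no cross-process marshaling is needed.
[assembly: ApplicationActivation(ActivationOption.Library)]

// Server activation (the _OP cases) would instead run them in a surrogate
// process (dllhost.exe):
// [assembly: ApplicationActivation(ActivationOption.Server)]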

As shown in Figure 1, the DatabaseTxn, ManualTxn and ManualTxn_COM+_IP approaches offer similar performance, although the DatabaseTxn approach maintains a slight lead in throughput over the manual transaction model. The reason is that the ADO.NET manual transaction requires additional round trips to the database to begin and end the transaction.

The COM+ interoperability overhead incurred by ManualTxn_COM+_IP is minimal because none of the COM+ services are used by this approach, which explains why its behavior is very similar to that of ManualTxn, where COM+ does not come into the picture at all.

If you compare ManualTxn and ManualTxn_COM+_IP with AutomaticTxn_IP, you will see that automatic transactions are slower by around 30 percent on average. This is because the cost of setting up the transaction using the Distributed Transaction Coordinator (DTC) is high compared to the total amount of work done in the transaction (11 insertions). AutomaticTxn_IP also involves a cross-context method call requiring a context transition, and hence some additional cost.

AutomaticTxn_IP offers better performance than ManualTxn_COM+_OP and AutomaticTxn_OP because the library package in the AutomaticTxn_IP case causes the assembly to run inside the calling application's process (aspnet_wp.exe, in our case), avoiding the need for any marshaling. The overall overhead in the AutomaticTxn_IP case, including the DTC cost, is less than the data-marshaling overhead incurred by ManualTxn_COM+_OP, which is why AutomaticTxn_IP maintains a slight performance lead over ManualTxn_COM+_OP.

As you can see, the AutoCompleteTxn_IP and AutoCompleteTxn_OP versions behave exactly like the AutomaticTxn_IP and AutomaticTxn_OP versions, respectively. The disadvantage of using AutoComplete is that it may decrease the performance of the application by lengthening the time it takes the transaction to release server resources, because the commit vote is cast only on the successful return of the method. It also gives you no opportunity to return a user-friendly message when a transaction fails, as the sketch below illustrates.
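
The following sketch contrasts the two voting styles; the class, method bodies, and error message are illustrative.

using System;
using System.EnterpriseServices;

[Transaction(TransactionOption.Required)]
public class VotingStyles : ServicedComponent
{
    public void InsertOrderExplicit()
    {
        try
        {
            // ... insertions ...
            ContextUtil.SetComplete();  // can vote as soon as the work is done
        }
        catch (Exception e)
        {
            ContextUtil.SetAbort();
            // Explicit voting lets you surface a friendlier error to the caller.
            throw new ApplicationException("The order could not be saved.", e);
        }
    }

    [AutoComplete]
    public void InsertOrderAuto()
    {
        // ... insertions ...
        // The commit vote is cast only when the method returns; an exception
        // aborts the transaction and propagates to the caller unchanged.
    }
}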


Figure 2. InsertOrder_OpenXml(Order=1, Details=100)

The performance differences between the approaches nearly disappear when the test is run with 1 order and 100 details.

The database transaction model still maintains a slight lead in throughput over other approaches.

The reason all the models have essentially the same performance in this test is the larger amount of work being done in each transaction (101 insertions). Here the overheads, such as interprocess communication with the DTC and data marshaling, are amortized over the length of the transaction and become insignificant.

InsertOrder using multiple DataAdapters

In the second set of tests, we associated separate DataAdapters with the Orders and OrderDetails tables in the database. Stored procedures are specified for the InsertCommand on the DataAdapters. The Update method on the DataAdapter corresponding to the Orders table is called first, which returns the OrderId of the newly inserted order. The Update method on the DataAdapter corresponding to the OrderDetails table is called next. Because the Update method does not send the changes to the database as a batch, this technique uses multiple round trips to the database to do the insertions, as sketched below.
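
A minimal sketch of this approach under a manual transaction follows. The stored procedure names, parameters, and column mappings are assumptions, and a DataRelation between the two DataSet tables is assumed to cascade the new OrderId into the detail rows.

using System.Data;
using System.Data.SqlClient;

public class DataAdapterTxn
{
    public static void InsertOrder(SqlConnection conn, DataSet orderData)
    {
        SqlTransaction txn = conn.BeginTransaction();
        try
        {
            SqlDataAdapter orderAdapter = new SqlDataAdapter();
            orderAdapter.InsertCommand = new SqlCommand("InsertOrder", conn, txn);
            orderAdapter.InsertCommand.CommandType = CommandType.StoredProcedure;
            orderAdapter.InsertCommand.Parameters.Add("@CustomerId", SqlDbType.Int, 0, "CustomerId");
            // @OrderId is an output parameter whose value flows back into the
            // DataSet row, so the detail rows can pick up the new key.
            SqlParameter orderId =
                orderAdapter.InsertCommand.Parameters.Add("@OrderId", SqlDbType.Int, 0, "OrderId");
            orderId.Direction = ParameterDirection.Output;
            orderAdapter.InsertCommand.UpdatedRowSource = UpdateRowSource.OutputParameters;
            orderAdapter.Update(orderData.Tables["Orders"]);        // one round trip per order row

            SqlDataAdapter detailAdapter = new SqlDataAdapter();
            detailAdapter.InsertCommand = new SqlCommand("InsertOrderDetail", conn, txn);
            detailAdapter.InsertCommand.CommandType = CommandType.StoredProcedure;
            detailAdapter.InsertCommand.Parameters.Add("@OrderId", SqlDbType.Int, 0, "OrderId");
            detailAdapter.InsertCommand.Parameters.Add("@ProductId", SqlDbType.Int, 0, "ProductId");
            detailAdapter.InsertCommand.Parameters.Add("@Quantity", SqlDbType.Int, 0, "Quantity");
            detailAdapter.Update(orderData.Tables["OrderDetails"]); // one round trip per detail row

            txn.Commit();
        }
        catch
        {
            txn.Rollback();
            throw;
        }
    }
}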

You may refer to the Performance Comparison: Data Access Techniques article, which compares various data access techniques, including the OPENXML and multiple DataAdapters approaches.

The test is first run using one order and 10 details.


Figure 3. InsertOrder_DataSet(Order=1, Details=10)

Note

  • In the ManualTxn approach, the transaction is controlled using the ADO.NET SqlTransaction object.
  • In ManualTxn_COM+_IP and ManualTxn_COM+_OP, the transaction is controlled using the ADO.NET SqlTransaction object, but the assembly is configured with COM+ as a library and server package, respectively.
  • The .NET assembly containing the AutomaticTxn and AutoCompleteTxn implementations is registered with COM+. In AutomaticTxn, we explicitly vote to commit or abort the transaction, whereas in AutoCompleteTxn, the method automatically votes to commit when it returns successfully and to abort when it throws an exception.
  • In AutomaticTxn_IP and AutoCompleteTxn_IP, the COM+ application containing the assembly is a library-activated application, so that it runs in the process of the client that creates it.
  • In AutomaticTxn_OP and AutoCompleteTxn_OP, the COM+ application containing the assembly is a server-activated application, so that it runs in a surrogate process (dllhost.exe).

ManualTxn and ManualTxn_COM+_IP perform very similarly because of the minimal cost associated with COM+ interoperability.

AutomaticTxn_IP is significantly slower than the above-mentioned approaches because the cost of setting up the transaction using the Distributed Transaction Coordinator (DTC) is high compared to the total amount of work done in the transaction (11 insertions). Besides the DTC overhead, there are some additional costs due to COM+ interoperability and context transition in AutomaticTxn_IP, as we saw earlier.

AutomaticTxn_IP offers better performance than ManualTxn_COM+_OP and AutomaticTxn_OP because in the latter cases the COM+ application is configured as a server package, which runs in a separate surrogate process (dllhost.exe) and incurs the overhead of data marshaling.

The AutoCompleteTxn_IP and AutoCompleteTxn_OP versions behave exactly like the AutomaticTxn_IP and AutomaticTxn_OP versions, respectively, as expected.


Figure 4. InsertOrder_DataSet(Order=1, Details=100)

As in the first test, the performance difference between transaction control models is substantially reduced with a larger transaction (101 insertions). The costs associated with the DTC and data marshaling are mitigated by the length of the transaction in this case.

Conclusion

Each transaction model offers trade-offs with respect to application performance and code maintainability, so each is suitable for different scenarios.

Running a database transaction implemented in a stored procedure offers the best performance because it needs only a single round trip to the database. Although this model provides good performance and flexibility, you must write the transaction logic in Transact-SQL, which is not as easy to work with as .NET code.

Manual transactions using ADO.NET transaction objects are easy to code and give you the flexibility of controlling the transaction boundary with explicit instructions to begin and end the transaction. The trade-off for this ease and flexibility is that ADO.NET manual transactions require at least as many round trips to the database as there are operations in the transaction, in addition to the trips that begin and end the transaction.

The automatic transaction model incurs some extra overhead because of the interaction with the DTC and the interoperation with COM. As the tests show, the costs associated with the DTC are amortized over the length of the transaction. The advantage of this model is that it greatly simplifies the application design and reduces coding requirements. Automatic transactions are the only choice when a transaction spans multiple transaction-aware resource managers, such as SQL Server databases and MSMQ message queues.
