SQL Server 2005 Row Versioning-Based Transaction Isolation

 

SQL Server Technical Article

Kimberly L. Tripp, Neal Graves

August 2006

Applies to:
   SQL Server 2005

Summary: Microsoft SQL Server 2005 provides nonlocking, nonblocking read consistency to your users through snapshot isolation and read committed isolation using row versioning. Find out when you can use these features to improve performance and reduce latency in your applications. (59 printed pages)

Download the associated sample code, SQLServer2005RVTI.exe.

Download the Word version of this article, RowVersioningBasedIsolation.doc.

Contents

Introduction
Data Access Patterns and Usage
Usage Scenarios
Understanding Concurrency Control
Understanding Isolation
Definitions, Terminology, and Syntax for Row Versioning-Based Isolation
Development Best Practices
Administrative Best Practices
Conclusion
For More Information

Introduction

In many systems today, significant read activity is isolated from write activity in the form of a data warehouse or separate system. There are many advantages to this approach: read-intensive applications typically want more index structures, data redundancies, and even other views of data. Transaction processing systems want the best write throughput while incurring only minimal overhead. The access patterns of readers and writers typically differ: readers are more prone to larger analysis-type queries, and writers are more prone to singleton inserts, updates, and deletes. When these activities are separated, the administrator can focus on recovery strategies for a smaller, more manageable transaction processing system; OLTP databases tend to be much smaller than data-redundant decision-support or analysis-oriented databases. However, this distinction cannot always be made so clearly. After data is copied, transformed, and archived to an analysis-oriented database, the data must be maintained and/or rebuilt periodically. Users definitely benefit from looking at a transactionally consistent version of the database; however, that version no longer has current data, may take many hours to build and index, and might not be what the user really wants. In this situation, snapshot isolation and read committed isolation using row versioning may be useful.

The primary focus of this paper is to discuss when using these isolation levels is appropriate, what the possible trade-offs are, and what the best practices are for using them.

**Note**   Before you read this document, we recommend that you read the topic Concurrency Effects in SQL Server Books Online.

Data Access Patterns and Usage

Production databases are quickly growing in size, and data-retention periods are increasing with changing business and regulatory requirements. Additionally, with drive capacity doubling every 12 to 18 months and storage costs falling, the amount of data users want to keep "online" is increasing. One solution is to separate analysis from transaction processing, and although that may have many benefits for complex detailed analysis and business intelligence probing, it does not always work with regard to disk space and manageability. With more data kept online, more active queries executing against it, and a growing need for current, real-time analysis, contention for data exists.

In Microsoft SQL Server 2000, contention can be minimized under read committed transaction isolation when an active SELECT statement releases read locks after a resource is read. The default environment follows the standard SQL-92 definition in that only committed data is read and uncommitted changes are not visible. However, although only committed data can be read, the standard does not guarantee read consistency even within the life of a statement (for example, a subset of rows that are returned by a SELECT statement may include changes of a transaction that started after the SELECT but committed before the SELECT could end). The resource lock (a shared lock) is released immediately after processing the row, and that data row can be immediately modified, even while the read is still processing other rows. This modified row might reappear in the same SELECT statement if some concurrent transaction moves the row to the set of rows that have not yet been read (for example, an index key change).

**Note**   If data movement is not likely (that is, page splitting is reduced through thorough and appropriate index creation and maintenance), the chance of rereading a row within a single statement is so low that it is hard to produce this anomaly.

In many situations, this is the correct and performant choice. Only committed changes are visible, and they are quickly visible with minimal resource locking. For example, when querying the current total of sales in a continuously processing system, only an estimate is possible: the value becomes "stale" only moments after being read, because transactions continue to be processed. In fact, an even less restrictive transaction isolation level, called read uncommitted, is often specified with a lock hint by using either the WITH NOLOCK or WITH READUNCOMMITTED hints (these are synonymous). This environment allows uncommitted data to be read; however, when the count of sales and/or total sales is only an estimate, seeing data that is "in progress" may be acceptable. When this is not acceptable, a change in isolation level, made by the programmer to ensure consistency through read repeatability of the data, must be used.
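
Such an estimate query might look like the following minimal sketch, which assumes the Sales.SalesOrderHeader table from the AdventureWorks sample database:

-- Approximate, nonblocking count; the result may include uncommitted rows.
SELECT COUNT(*) AS order_count
FROM Sales.SalesOrderHeader WITH (NOLOCK);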

So where do you draw the line? Can statement-level or transaction-level read-consistent data be returned while a system is actively processing? Can you write a long-running query in a production environment, ask for consistency, and not block writers? You cannot address these questions in SQL Server 2000; in SQL Server 2005, you can, through two optional database-level settings. For statement-level consistency, enabling READ_COMMITTED_SNAPSHOT automatically changes the behavior of read committed isolation. This new behavior offers nonlocking, nonblocking, statement-level read consistency. In this paper, the traditional read committed isolation is referred to as read committed, and the optional new behavior as read committed using row versioning. For transaction-level consistency, the new snapshot isolation level has been added. Changing to this isolation level makes transaction-level consistency a controllable setting. When these new options are not set, SQL Server 2005 default database behavior works as in earlier versions. This default behavior will continue to be the preferred behavior in many systems in which transaction processing throughput and performance are the highest goals.

If you want a form of nonlocking snapshot (either statement level or transaction level), row versioning will be used to track row modifications. To enable this, data writers pay the cost when an update or delete operation executes, even if there is no reader at the time. The version store in the tempdb database retains version records until all active transactions that require those row versions commit (assuming that the UPDATE or DELETE statement has already committed). Or more accurately, the version store must retain specific version records until the commit or rollback of transactions that are running under row versioning-based isolation that started before the commit of the transaction that made the change. Although this cost of taking a version is minimal, the decision to implement this should not be made without careful consideration and many best practices in place.
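
Both settings are enabled with ALTER DATABASE. The following is a minimal sketch, assuming the AdventureWorks sample database; each option can be enabled independently:

-- Statement-level read consistency: changes the behavior of read committed
ALTER DATABASE AdventureWorks
    SET READ_COMMITTED_SNAPSHOT ON;

-- Transaction-level consistency: permits sessions to request SNAPSHOT
ALTER DATABASE AdventureWorks
    SET ALLOW_SNAPSHOT_ISOLATION ON;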

Usage Scenarios

This section explores how the SQL Server 2005 snapshot isolation and read committed isolation using row versioning can help deliver improved performance, reduced latency, and greater developer and database administrator productivity in your organization. The following common business scenarios are discussed:

  • Application in online transaction processing
  • Ad hoc reporting against live data
  • Ad hoc reporting against a copy-managed database
  • Overnight reporting against live data
  • Migration to a common database technology

Application in Online Transaction Processing

At first glance, the primary use of row versioning-based isolation might seem to be in read-intensive workloads, such as data warehousing and operational reporting systems. In these systems, complex, long-running queries (especially aggregations) against large tables may require a transactionally consistent view of the database, and the table-level read locks they take can effectively lock out transactions that need to update the data. However, this is not the only application of row versioning-based isolation. The optional new behavior of the read committed isolation level, which works with a snapshot of the data at the statement level, can significantly improve the throughput of mixed-workload systems and still offer transactionally consistent data for large joins and aggregations. Because the data snapshot guarantees the consistency of the read, lock conflicts with long-running readers cannot occur. Additionally, in this environment, application changes are not required; the change is made at the database options level.

When pessimistic locking (the way most database vendors traditionally implement the full ANSI standard for levels of transaction isolation) is used, applications typically exhibit blocking. Simultaneous data-access requests from readers and writers within transactions request conflicting locks. This is entirely normal and is not a significant performance bottleneck, provided the blocking is short lived. However, reader/writer contention significantly reduces performance in systems under stress, because any increase in the time it takes to process a transaction (for example, delays that are caused by over-utilized system resources, such as disk I/O, RAM, or CPU, and also delays that are caused by poorly written transactions, such as those that hold locks across user interaction) can have a disproportionate effect on blocking. The longer a transaction takes to execute, the longer locks are held and the greater the likelihood of blocking.

An example of this might be a car rental company that uses both an internal and a Web-based reservation application to book cars on behalf of its customers. Systems such as these have transactions that contend for the same data (that is, cars). The system will offer short-running queries that allow the customer service representative to check availability of cars in certain locations before booking them for the customer. This is an area where programming techniques, such as disconnected datasets, are often used to provide optimistic concurrency control, specifically:

  1. The application queries for all available cars of a certain class, in a specific date range, at a rental location. This query is probably a join of at least a few tables such as Car, Class, Reservation, and Location. Additionally, this query will run under the read committed isolation level to ensure that only committed data is returned to the user.
  2. The recordset or dataset that is obtained by the query will be disconnected from the database so as to remove any locks held on the data while the data is displayed in the caller's application. This is often called batch optimistic because it emulates the optimistic forms of database concurrency control. It is optimistic in that although the data is active, the likelihood for conflict should be low. The use of row-level timestamps enables the programmer to identify data change and manage conflicting updates with appropriate messages to the user interface.
  3. The caller will select a specific car, and the dataset will be edited to reflect the reservation.
  4. The application will then reconnect and try to synchronize the change to the database, using the row-level SQL Server timestamp column to ensure that the data has not been changed by other callers while the data was disconnected (a sketch of this check follows the list).
  5. The application then reports back to the caller to either report success (the reservation was taken) or to indicate a conflict (the car was taken by another caller) and to offer the chance to try to book another car.
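
The synchronization step might look like the following minimal sketch; the table, column, and variable names are hypothetical, and it assumes a timestamp (rowversion) column named RowVer whose original value traveled with the disconnected dataset:

DECLARE @ReservationID INT, @CarID INT, @CustomerID INT;
DECLARE @OriginalRowVer BINARY(8);
-- (these values would come from the disconnected dataset)

UPDATE dbo.Reservation
SET CarID = @CarID, CustomerID = @CustomerID
WHERE ReservationID = @ReservationID
  AND RowVer = @OriginalRowVer;   -- matches only if the row is unchanged

IF @@ROWCOUNT = 0
    -- another caller changed or took the reservation: report the conflict
    RAISERROR('The car was taken by another caller.', 16, 1);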

Be aware that this technique is not truly optimistic. In this design pattern, a significant amount of contention can take place while the query in step 1 is running to find candidate cars. With SQL Server 2005 read committed isolation using row versioning, these requests are given a nonlocking, nonblocking, statement-level consistent view of the data while the query runs. With this isolation, the locking/blocking load on the server can be reduced, and the live data is not blocked for other customers who want to reserve cars. While this isolation can improve the end-to-end performance for the transactions that are booking cars by eliminating lock waits, it does not necessarily improve the chances that a car that is viewed by the long-running query will be available. However, this is an acceptable trade-off. The reservations occur faster and are not blocked by simultaneous requests for car rental data. This leads to increased throughput of transactions, especially under peak workloads, such as those caused by holiday bookings and business travel peak times.

After the new read committed isolation level using row versioning has been enabled by the database administrator at the database level, the programming logic that was used previously in steps 1 through 5 can take advantage of this new behavior without changing a line of code. In fact, after the database setting has been set, by default, all queries will use this form of statement-level read consistency.

Ad Hoc Reporting Against Live Data

All companies are continually striving to reduce costs while expanding the capabilities of their information systems. One of the guiding targets for SQL Server 2005 is the elimination of the latency between data being captured within a database and its being available for reporting by the organization. This reduction in latency enables developers to build systems that provide data outside of the traditional batch-reporting schedule.

Consider the scenario of a food retailer who is trying to balance the need to minimize the stock of fast moving consumer goods—such as sandwiches, milk, and other perishables that are held at each store—with the need to ensure that the shelves in the supermarket are stocked with items that customers want to buy. Many of these kinds of items are very sensitive to the weather. For example, barbeque items and ice cream sell more on sunny days; comfort foods sell more on rainy days.

Before the introduction of row versioning-based isolation levels, the developers of the supermarket application might have avoided long blocks on the live data by using the read uncommitted isolation level. This can be difficult to use, especially when joining across multiple tables, because the read uncommitted isolation level provides nonblocking access to a statement-level transactionally inconsistent view of the database. This is a view in which the data that is related to a business transaction may have only partially arrived in the database.

Furthermore, the practice of analyzing sales to look at the mix of other items sold together with the fastest selling items (also known as basket analysis) can be very data-intensive and, therefore, long running. This increases the chance of inconsistencies as data continues to arrive in the database.

For situations in which a transactionally consistent view of the data is a necessity, the system designers would typically design for these types of reports to run out of hours to avoid affecting the concurrency of the live system. For example, a long-running, read-only report that was started during peak usage could ultimately block all writers from updating the system, as shown in Figure 1.

An IT infrastructure that provides only for preplanned overnight reporting could hinder the supermarket manager's ability to react to unexpected demand and to review which products are at risk of selling out. The manager could thus miss an opportunity to order a second delivery from the warehouse to restock and meet demand, leading to loss of potential revenue or, even worse, loss of customers to competitors.

The new isolation levels provide applications with nonlocking access to a transaction-level or statement-level transactionally consistent view of the whole database, thereby making the report writer's job much easier and also much more rewarding. In SQL Server 2005, the power of the Database Engine has been even more tightly integrated with the advanced aggregation and analytical capabilities of the SQL Server 2005 Analysis Services component, which introduces the Unified Dimensional Model and enables full analytical reporting without having to extract and transform data into a star schema. Snapshot isolation technology has a major role to play in making data more accessible to this kind of application; being able to combine the power of cross-selling reports with live data has the potential to change the way these business processes work.


Figure 1. Tills Blocked by a Reporting User

The new read committed isolation level using row versioning is best used for existing report systems (or systems that are purchased from third parties in which the isolation level cannot be changed), as it is likely that no application change will be required to take advantage of nonlocking reads, especially because most of these applications populate reports with the results of a single query. In this scenario, the row version (or the data snapshot) needs to be retained only for the length of each statement.

The new snapshot isolation level is suited to more complex requirements, such as running a series of reports that must run within the same transaction so that the same transactionally consistent view of the data is seen by all of them. This solution is more likely to be used in complex financial reporting systems. In these systems, you do not want data changes to be picked up while the report suite is running, because this could easily cause anomalies in totals and checksums between reports. In this scenario, the row version would be retained for the length of the transaction.

SQL Server 2005 makes it simple to enable these new isolation levels for a database. When the new form of the read committed isolation level is configured, SQL Server automatically uses it without requiring any application or transactional code changes. To use transaction-level snapshot isolation, the application must explicitly request it.
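
The request is made per session. The following is a minimal sketch, assuming ALLOW_SNAPSHOT_ISOLATION is ON in the database and using a table from the AdventureWorks sample database:

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;
    -- Both statements see the same committed, point-in-time view of the data.
    SELECT COUNT(*) FROM Sales.SalesOrderDetail;
    SELECT SUM(LineTotal) FROM Sales.SalesOrderDetail;
COMMIT TRANSACTION;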

After either of these capabilities is enabled, it is then safe to provide the supermarket manager with a series of parameterized reports that can be run when unexpected demand takes place in the store. This is done without blocking the data coming in from the store's tills, thus helping the manager ensure that the needs of the store's customers are anticipated and met, leading to high customer satisfaction.

Enabling row versioning-based isolation levels makes additional demands on the database server. In the scenario described earlier, it is assumed that the back office server that was used to collect data arriving from the supermarket's tills had enough spare capacity to support the occasional requirement to run ad hoc reports against the live data. Row versioning, which supports row versioning-based isolation, imposes added load on a server running update transactions, both for data writers and data readers. For data writers, their changes must be versioned. For data readers, their reads must traverse the version chain to obtain the appropriate row version.

The additional load applies to tempdb, the database in which SQL Server stores the version store that is used to provide a transactionally consistent view of the changing data. Before deploying this new technology on production systems, we recommend that the database administrator test it on preproduction systems under simulated load, especially including long-running transactions or queries.

Be aware that simple measures such as providing more I/O bandwidth for tempdb, together with the scalability improvements made to tempdb in SQL Server 2005, should offset the effect of enabling row versioning-based isolation levels. However, if the system is already heavily loaded with a mixed update-and-read workload, the configurations that are discussed in the following scenarios might be more appropriate. This can be especially true when snapshot isolation is required instead of statement-level read committed using row versioning.

Ad Hoc Reporting Against a Copy-Managed Database

In systems that have a high percentage of data changing, enabling the use of the new snapshot isolation level may have a negative effect on overall performance, because the overhead of creating and managing the previous versions of a row can slow down transactions, particularly when tempdb or the disk subsystem is already close to being a system bottleneck. In this situation, the performance cost of enabling the new infrastructure may not be worth the value of reporting against the real-time data, especially because any reporting will likely add even more load to an already busy system.

This scenario is typical of reservation systems (such as airline and hotel reservation systems) and also order entry systems, including online systems such as Web shopping sites. The performance of updates during peak periods of load is critical. A slow update can cause a consumer to give up his or her purchase and go to another site. Conversely, the customer service departments and demand-forecasting staff must access reports that contain live data to help interact with customers, and to plan.

These conflicting requirements might best be served by creating a copy-managed database: a near real-time replica that lags behind the live system but is "live enough" for reporting to take place. The goal of this replica is to offload the reporting users to another server (or even a set of servers) so that they do not add to the workload of the live system.

SQL Server 2005 provides two options for automating the maintenance of a replica database, both of which operate within the transaction logging mechanism and, therefore, on committed data: database mirroring and transactional replication.

Database Mirroring

Database mirroring is primarily designed to provide a database replica to be used as a hot standby of the production database, known as the principal database. The server instance on which the principal database resides is known as the principal server. The hot standby database is known as the mirror database, and the server instance on which it resides is known as the mirror server instance. Database mirroring is extremely easy to set up and manage, and once established, all data is transferred from the principal server to the mirror server without the database administrator having to select specific tables. In fact, changes made to the principal database automatically occur on the mirror database.

SQL Server 2005 database snapshots on the mirror database can support transactionally consistent point-in-time reporting. However, database snapshots must be created manually, and it may be unrealistic to maintain one for each report (assuming each report requires access to the most current data). Furthermore, having more than a few database snapshots at a time can slow performance of the mirror server.
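
For example, a point-in-time reporting snapshot might be created on the mirror server with a statement like the following minimal sketch; the snapshot name and file path are illustrative, and the logical name must match the mirror database's data file:

CREATE DATABASE AdventureWorks_Report_0800
ON ( NAME = AdventureWorks_Data,
     FILENAME = 'E:\Snapshots\AdventureWorks_Report_0800.ss' )
AS SNAPSHOT OF AdventureWorks;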

Other disadvantages include the fact that it is not possible to make changes solely to the mirror database; for example, filtering a subset of the data, adding reporting-only users with read-only privileges, or adding additional table indexes and indexed views designed to aid reporting performance. These changes can only be made to the principal, which may experience degradation in update performance as a result.

During each transaction commit process, the principal server sends log records to a mirror server. A database mirroring session can be configured for either synchronous or asynchronous operation:

  • With synchronous operation, the commit does not complete until the log records are written to disk on both the principal and mirror servers. Performance on the principal server is thus sensitive to the ability of the mirror server to write the log to disk. For this reason, database mirroring is not suited to offloading reporting workload as any spikes in reporting workload can directly affect production system performance. For systems that must sustain continuous high rates of update transactions, database mirroring should be seen more as an availability feature than as a tool for creating reporting databases.
  • With asynchronous operation, after the principal server writes its log to disk, it sends a confirmation to the client; and transactions commit without waiting for the mirror server to write the log to disk. Therefore, the performance impact on the principal server is typically less than for the synchronous mode. However, using a mirror server for reporting might cause it to lag behind the principal server during periods of heavy activity.

Transactional Replication

Transactional replication imposes only light overhead on the live system. This overhead can be mitigated by improving database log file I/O bandwidth. Committed transactions are read asynchronously from the publication database transaction log file, and the data associated with these transactions is copied to the distribution database. From the distribution database, the data can be fanned out to multiple Subscribers. Figure 2 illustrates a typical transactional replication setup across two servers.


Figure 2. Replication Data

This technology can be more difficult to manage than database mirroring, but it is often well known by database administrators because it has been a core component of SQL Server for many years. The data that is replicated can be a subset of the live system: you publish only the tables you require, and you can apply row filters and column filters to these tables. Transactional replication also has the advantage of allowing different users, indexes, and views to be present in the subscription (reporting) database.

Disadvantages are few. There is always a lag, or latency, between a transaction being committed at the publication database and the corresponding transaction being committed at the subscription database (but the latency can be as low as several seconds). Also, an additional copy of each transaction must be maintained in the distribution database until the transaction is propagated to all Subscribers.

In SQL Server 2005, schema changes on replicated objects rarely require the sort of reworking that is required in earlier versions, and they no longer require the use of replication-specific stored procedures to add or remove columns from tables. In earlier versions of SQL Server, the limited support for schema changes often made it difficult to use replication with third-party applications, because upgrades to those applications often required schema changes to the underlying database.

The main issue with the use of transactional replication in earlier versions is that the movement of data from the distribution database to the subscription database can experience the same issues that occur in systems that try to support OLTP and reporting on the same server: long-running reports in the subscription database can block the replicated data that is arriving from the distribution database. This blocking can cause the subscription database to fall increasingly out of sync with, and behind, the publication database. This could frustrate call center employees who are trying to help customers whose recent purchases or reservations have not yet arrived in their system. The blocking problem can be lessened by using the new SQL Server 2005 behavior for the read committed isolation level: read committed using row versioning.

The subscription database or databases can be enabled for READ_COMMITTED_SNAPSHOT so that shared locks are not required for read transactions against the subscription database, for example:

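-- Changing this option requires exclusive access: the statement completes
-- only when this is the only active connection in the database.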
ALTER DATABASE AdventureWorks
    SET READ_COMMITTED_SNAPSHOT ON;

This setting helps remove blocking that is caused by database readers and, therefore, prevents the mostly read database from getting significantly behind the live system. Incoming data is not blocked behind long-running read transactions, and queries will execute against a transactionally consistent view of the database. Additionally, replication can maintain better transactional fidelity as it moves data around the system.

Transactional replication provides a very scalable solution. As the reporting workload grows, you can add more subscription databases on new servers. The system can cope with additional loads without any further effect on the live system, because data is fanned out from the distribution database instead of the live system. Now a customer service representative should be able to see transactions that are made by the customer in the live system while accessing the Subscriber to assist the customer as necessary.

Overnight Reporting Against Live Data

This scenario considers the classic data-processing model of an Online Day and an Overnight Batch. The Online Day matches a defined set of typical in-office "business hours" where data is entered into the system with a workload exclusively composed of short transactions. Overnight batch is where long-running processes move and report against the data that arrived during the day. This scenario is very typical in mainframe applications, with transaction processing (TP) monitors running in the day, and batch jobs running at night.

The growth of customer-facing, Internet-aware applications as well as increasing globalization of companies (with some offices coming online as others go offline) means that this model is less relevant to modern datacenters. However, there are still some lessons that can be learned by looking at this old technology:

  • User-centric workloads tend to have peaks and troughs.
  • Reports tend to be run at specific times to allow comparison with reports that were run previously at that time.
  • The workload of most databases has peaks of updates (such as data loading) and peaks of reads (such as reporting).

Consider Litwareinc.com, a fictitious Internet-facing company that supports and sells personal audio technology. It has a datacenter in New York that services its U.S. business and also several smaller offices across seven countries. Like most companies, its online systems have definite usage patterns. Here, the peak load coincides with U.S. patterns with the arrival of staff in offices and the arrival of their primary customers at the Web site.

Transaction Processing System Workload

Table 1

| Time (Eastern) | Business Events | Datacenter Events |
| --- | --- | --- |
| 08:00 A.M. | U.S. offices coming online | Any remaining reporting is halted. |
| 12:00 P.M. | All U.S. offices online, European offices closing | Peak office load. |
| 06:00 P.M. | U.S. offices start closing, smaller Asian and Australian offices coming online | Peak online load. |
| 10:00 P.M. | | Lowest office and online load. Snapshot isolation enabled; data extracts begin, followed by the main operational reporting suite. |
| 02:00 A.M. | European offices coming online | Snapshot isolation disabled; some U.S.-specific operational reporting continues. |

For this scenario, snapshot isolation is preferred to read committed isolation using row versioning, because the READ_COMMITTED_SNAPSHOT database option cannot be enabled or disabled during peak usage without disrupting user connections: while READ_COMMITTED_SNAPSHOT is being disabled or enabled, only the user connection that sets this option is allowed in the database. In contrast, the datacenter can manage the state of snapshot isolation while the database is online; there is no need to restart the database to pick up different settings. By activating snapshot isolation only in a narrow window, Litwareinc.com continues to offer service to online and global office users, but also ensures that long-running reports that require a transactionally consistent view of the database across queries will not block those users. By disabling snapshot isolation during peak usage, Litwareinc.com also ensures that maximum throughput is available to its primary users and customers.
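
The scheduled jobs amount to two statements, as in this minimal sketch (the database name is illustrative); the option can be switched while users remain connected:

-- 10:00 P.M. job: open the reporting window
ALTER DATABASE LitwareSales SET ALLOW_SNAPSHOT_ISOLATION ON;

-- 02:00 A.M. job: close the reporting window; new snapshot transactions can
-- no longer start, and those already in progress run to completion
ALTER DATABASE LitwareSales SET ALLOW_SNAPSHOT_ISOLATION OFF;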

Litwareinc.com also runs a complex data warehouse that is used to provide information about customer and stock trends, and to run larger reports that look for other patterns in the data over time. This system is primarily read-only; however there are a limited number of users who must have update access to the database to perform accounting-style journal adjustments as well as stock adjustments following audits. This does not present a problem for Litwareinc.com because the company has adapted its strategy for using row versioning-based isolation to the requirements of this system.

Data Warehousing System Workload

Litwareinc.com runs its data warehouse system 24 hours a day, seven days a week (24x7). Litwareinc.com uses snapshot isolation to provide its report consumers with high-performance access to transactionally consistent data. If transactional consistency is not needed across multiple queries, read committed isolation using row versioning would also apply to this scenario, because, unlike the previous scenario, the READ_COMMITTED_SNAPSHOT database option is continuously enabled.

Table 2

| Time (Eastern) | Business Events | Datacenter Events |
| --- | --- | --- |
| 08:00 A.M. | U.S. offices coming online | Peak online reporting load. |
| 12:00 P.M. | All U.S. offices online, European offices closing | Peak adjustment load (but minor). |
| 06:00 P.M. | U.S. offices start closing, smaller Asian and Australian offices coming online | Reporting still online. |
| 10:00 P.M. | | Lowest reporting load. Database placed into the bulk-logged recovery model. The reporting application queues incoming report requests, the data loads, and then data transformations begin. |
| 02:00 A.M. | European offices coming online | System placed into the full recovery model. A log backup begins; long-running and queued reports start; and then, finally, the ad hoc workload begins. |

Litwareinc.com decided to continually operate this database with snapshot isolation enabled. In addition, it looked at the amount of data it had to take into the system each day and decided to maximize the performance of the load by using a mix of the full and bulk-logged recovery models. The full recovery model is used to protect the ad hoc adjustments that are made to data by the warehouse administrators. The bulk-logged recovery model is used to reduce the logging when loading data.

In the data-loading case, snapshot isolation settings did not have to be disabled, because row insertion does not generate a version-chain entry (there is no older data to version). By running with 24x7 snapshot isolation, Litwareinc.com can enjoy fast data loading and allow ad hoc data adjustments to continue without being affected by long-running reports, which could otherwise block the data-load process. The only operational adjustment that was made was to switch the recovery models to reduce logging and improve the data load, as sketched after the following list:

  • Bulk-logged recovery model during data load because the system is fully recoverable by using the previous full backup, its associated logs and the incoming data extract files. The load is followed by changing the recovery model to full and then performing a log backup.
  • Full recovery model the rest of the time. This allows for log backups to be taken so that accounting adjustments are not lost by hardware failure or media corruption.
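
In Transact-SQL, that nightly switch is just a few statements; this is a minimal sketch, with the database name and backup path illustrative:

-- 10:00 P.M.: minimize logging for the nightly load
ALTER DATABASE LitwareDW SET RECOVERY BULK_LOGGED;

-- ... the bulk data load runs here (for example, BULK INSERT or bcp) ...

-- After the load: return to full recovery, then back up the log
ALTER DATABASE LitwareDW SET RECOVERY FULL;
BACKUP LOG LitwareDW TO DISK = 'L:\Backups\LitwareDW_log.bak';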

The previous scenarios show how snapshot isolation can be deployed in systems that have variable transaction workloads: both online transaction processing and data warehousing. Snapshot isolation can be left active so that its benefits are realized without significant impact on any of the key activities of these systems, which underlines the utility of this technology.

Migration to a Common Database Technology

Before SQL Server 2005, there were two camps in the commercial relational database management system world. The first were the systems that implemented a pessimistic concurrency model based on locking schemes that support the four ANSI-standard isolation levels as defined in the SQL-92 standard (ANSI X3.135-1992, American National Standard for Information Systems—Database Language—SQL, November 1992). These systems include Microsoft SQL Server, IBM DB2 (all of its many code bases/platforms and variants), and Sybase Adaptive Server. The second camp implemented a nonstandard transaction isolation model with optimistic concurrency based on retaining a view of the data as of the start of the transaction; the only commercial system in this camp was Oracle. This division has led to three types of software developer:

  • Develops on Oracle, ports to Microsoft SQL Server
  • Develops on Microsoft SQL Server, ports to Oracle
  • Develops and optimizes for both camps

Generally, only the largest software companies can afford to be "type 3"—companies such as SAP, Siebel, and PeopleSoft. Most developers must pick between type 1 and type 2, their choice normally predicated on the degree to which the datacenter Unix market matters to their sales.

With SQL Server 2005 and the introduction of optimistic concurrency control using the row versioning-based snapshot isolation level, it is now much easier for type 1 application vendors to make a direct port to SQL Server and extend their market beyond the Oracle/Unix platform. Also, both row versioning-based isolation levels make it easier for IT departments to reduce the complexities that are associated with supporting multiple database platforms and to avoid costs such as:

  • Multiple database teams
  • Increased training costs
  • Reduced-volume software licensing costs
  • Management time spent interacting with multiple vendors
  • Matching differing supplier service levels

SQL Server 2005 offers the opportunity for customers to eliminate these additional costs, without having to change application vendor or experience a drop in optimal performance that is caused by the paradigm shift in the transaction isolation model.

The implementation of optimistic concurrency in SQL Server 2005 is different from Oracle. The SQL Server implementation is designed to be more controllable by the database administrator. Optimistic concurrency can be enabled and disabled on command, as shown in the scenario presented earlier in this paper. The SQL Server implementation is also more manageable. To help detect and decipher what is happening with the database, there are many Microsoft Windows System Monitor performance counters as well as SQL Server 2005 virtual tables that can be accessed through dynamic management views.

To assist the database administrator, the following table describes some differences between the SQL Server 2005 implementation of snapshot and that of Oracle.

SQL Server and Oracle Differences in Snapshot

Table 3

**Microsoft SQL Server 2005:** No table modifications are required. The snapshot version store and the version chains of changed and deleted records are completely independent of the table definition. This is something the system manages on your behalf.

**Oracle:** Requires use of INITRANS >= 3 and MAXTRANS on CREATE/ALTER TABLE statements to enable space for on-page transaction information before SERIALIZABLE can be used. You must get it correct before you start using the table or face a costly data definition language (DDL) change after your users start complaining about ORA-08177: "Can't serialize access for this transaction."

**Microsoft SQL Server 2005:** The version store is held in memory and tempdb. The database administrator must make sure that tempdb is optimized for increased I/O bandwidth based on the version store workload. The size of the tempdb database must also be monitored (especially if the application has long-running transactions). For many releases, SQL Server has supported DBA-friendly percentage and absolute autogrow settings for database and log files, but these are obviously constrained by the physical availability of disk space. Long-running transactions can lead to long version chains in SQL Server. The version store keeps a full copy of the data row, which saves the expense of reconstructing the row when it is accessed by another transaction. The integrated SQL Server Agent event management and job scheduling subsystem can be programmed to react automatically to an out-of-physical-space condition and take corrective action (such as forcing rollback of any transaction that is contributing to version store space usage).

**Oracle:** Can require complex configuration of ROLLBACK SEGMENTS (creation and on/offline status) and defining transaction-level USE ROLLBACK SEGMENT statements to avoid ORA-01555: "Snapshot too old," which is caused by long-running transactions overwriting their versioned pages in the rollback segment. (**Note**   Oracle does not have a definition for long-running transactions.) In recent versions (starting with Oracle9i), Oracle has introduced a technology similar to that used in SQL Server 2005, called Automatic Undo Management Mode. This new method is incompatible with the previous manual method and requires code changes if USE ROLLBACK SEGMENT statements have been used. The rollback segment/undo tablespace stores only the "changed" value of the row (therefore saving space) at the expense of reconstructing the versioned row at run time. Oracle9i users still experience ORA-01555 (see footnote 1); in response, Oracle Database 10g introduced the new UNDO_RETENTION_PERIOD initialization setting, used together with the RETENTION GUARANTEE property of undo tablespaces, which allows the Oracle database administrator to specify how long undo data is retained. This is not an automatic setting, and changes are picked up by stopping and starting the Oracle instance. To tune this setting, the Oracle database administrator can monitor V$ROLLSTAT to track "wrapping" (the reuse of undo tablespace storage), and the application can detect and report either ORA-01555 (no retention guarantee) or out-of-space conditions.

**Microsoft SQL Server 2005:** tempdb can autogrow as a percentage of current size (to elastically reduce the number of autogrow attempts) or as an absolute value.

**Oracle:** ROLLBACK SEGMENTS does not support PCTINCREASE and so does not "autogrow"; therefore, you must get the size correct when rollback segments are created. If you are using the automatic mode, the undo tablespace behavior is similar to that of SQL Server 2005 tempdb.

**Microsoft SQL Server 2005:** Supports row-based data versioning, which means that smaller amounts of data are written to or read from the version store. Row-level versioning also means actual row-level serialization of transacted data access.

**Oracle:** The INITRANS setting determines how many changes can be tracked in any one data block. If this value is exceeded, the next transaction using SERIALIZABLE that accesses rows on a data block updated by other transactions causes ORA-08177: "Can't serialize access for this transaction." This is why Oracle recommends the use of SERIALIZABLE only for systems with few, short update transactions, where the chance of filling the INITRANS area is low.

**Microsoft SQL Server 2005:** Snapshot isolation and read committed using row versioning are enabled at the database level. Only databases that require this option must enable it and incur the overhead associated with it. For cross-database transactions that run under the snapshot isolation level, you must enable the snapshot database option on all participating databases.

**Oracle:** Data versioning is not optional; it is always enabled.

**Microsoft SQL Server 2005:** Extensive operational performance counters, especially the SQLServer:Transactions set of counters, enable the database administrator to monitor the state of the version store, including the following:
  • Free space in tempdb
  • Size of version store
  • Rate of growth
  • Number of conflicts
  • Longest-running active transactions (excluding transactions that neither consume nor generate versions)

**Oracle:** Implements a platform-independent approach. This approach has benefits for customers who are familiar with the Oracle toolset, but excludes customers from easily integrating Oracle performance counters with the wealth of systems management products and expertise that is available on the Windows operating system.

**Microsoft SQL Server 2005:** Implements virtual tables that allow the database administrator to see whether snapshot transactions have occurred, and to monitor the size of the version store and the earliest record in the version store:
  • sys.dm_tran_active_snapshot_database_transactions
  • sys.dm_tran_active_transactions
  • sys.dm_tran_current_snapshot
  • sys.dm_tran_current_transaction
  • sys.dm_tran_database_transactions
  • sys.dm_tran_locks
  • sys.dm_tran_session_transactions
  • sys.dm_tran_top_version_generators
  • sys.dm_tran_transactions_snapshot
  • sys.dm_tran_version_store

These virtual system tables are also called dynamic management views and can be used to monitor and report on the state of active transactions and their version and lock usage.

**Oracle:** Implements Dynamic Performance Views (virtual tables and functions): typically V$UNDOSTAT (a histogram-like record of undo statistics), V$WAITSTAT (with the Undo class), and V$TRANSACTION for per-transaction undo space usage.

1 From: http://www.oracle.com/technology/pub/articles/10gdba/week20_10gdba.html
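
For example, a database administrator might poll the version store with queries like the following minimal sketch, which uses two of the SQL Server 2005 dynamic management views listed above:

-- How many version records are currently retained?
SELECT COUNT(*) AS version_records
FROM sys.dm_tran_version_store;

-- Which rowsets are generating the most version data?
SELECT TOP (5) database_id, rowset_id, aggregated_record_length_in_bytes
FROM sys.dm_tran_top_version_generators
ORDER BY aggregated_record_length_in_bytes DESC;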

To assist the database developer in porting an application from Oracle to Microsoft SQL Server 2005, the following table describes some similarities between the implementation of snapshot in SQL Server 2005 and that in Oracle.

SQL Server and Oracle Similarities in Snapshot

Table 4

| Microsoft SQL Server 2005 | Oracle |
| --- | --- |
| SELECT ... WITH (UPDLOCK). Performs conflict checks immediately. | SELECT ... FOR UPDATE. Locks a record within a transaction to prevent conflicts. |
| READ COMMITTED | No equivalent |
| READ COMMITTED using row versioning | READ COMMITTED |
| SNAPSHOT | SERIALIZABLE |
| SNAPSHOT | READ ONLY |
| READ UNCOMMITTED. Provides access to uncommitted data. | No equivalent |
| REPEATABLE READ | No equivalent |
| SERIALIZABLE | No equivalent. The lack of read locking can cause design challenges for the developer, as outlined in Oracle9i Application Developer's Guide - Fundamentals, Release 2 (9.2), Part Number A96590-01: "Because Oracle does not use read locks, even in SERIALIZABLE transactions, data read by one transaction can be overwritten by another. Transactions that perform database consistency checks at the application level should not assume that the data they read do not change during the execution of the transaction (even though such changes are not visible to the transaction). Database inconsistencies can result unless such application-level consistency checks are coded carefully, even when using SERIALIZABLE transactions." |
| Can use blocking in pessimistic isolation levels, or must handle conflicts (data row updated outside of the transaction) and retry failed transactions. Row-level versioning reduces the chance of conflict. | Must handle conflicts (ORA-08177: data page updated outside of the transaction) and retry failed transactions. |
| The application can choose an appropriate concurrency model. | The application always sees potentially stale data unless manual table locking or SELECT ... FOR UPDATE is used, because there is no choice between concurrency models. |
| Transact-SQL TRY...CATCH logic handles conflict errors, but does not handle out-of-space issues with tempdb. | PL/SQL error handling can trap ORA-08177 (conflict), but does not handle ORA-01555 (rollback segment space issue); with undo tablespaces, a similar out-of-space issue can arise. |

Because of these similarities to Oracle, SQL Server 2005 significantly eases the porting of applications that are built to run against databases that support optimistic concurrency, compared with earlier releases of SQL Server. Additionally, SQL Server 2005 introduces a programming model that allows a choice between pessimistic and optimistic concurrency control, and also numerous mechanisms for implementation. Database administrator tasks are eased by having a simple, easily configured version store that is online and enabled at the database level. The process of porting code is also simpler for developers because of the close functional match between the Oracle and the SQL Server 2005 schemes. However, SQL Server 2005 exhibits more granular consistency behavior in that it manages versions at the row level instead of at the data-block level.

Understanding Concurrency Control

As seen within the usage scenarios provided earlier in this paper, there are two primary models that are used in controlling concurrency: pessimistic concurrency and optimistic concurrency.

In a pessimistic concurrency control-based system, locks are used to prevent users from modifying data in a way that affects other users. After a lock has been applied, other users cannot perform actions that would conflict with the lock until the owner releases it. This level of control is used in environments where there is high contention for data, and where the cost of protecting the data by using locks is less than the cost of rolling back transactions if or when concurrency conflicts occur.

Conversely, in an optimistic concurrency control-based system, users do not lock data when they read it. When an update is performed, the system checks to see whether another user has changed the data after it was read. If another user updated the data, an error is raised. Typically, the user that receives the error rolls back the transaction and then resubmits it. This is called optimistic concurrency because it is mainly used in environments where there is low contention for data, and where the cost of occasionally rolling back a transaction is lower than the cost of locking data when it is read.

Read committed isolation using row versioning is somewhere in between pessimistic and optimistic concurrency. Under this isolation level, read operations do not acquire locks against the live data. However, with update operations the process is the same for this isolation level as it is for the default read committed isolation level: The selection of rows to update is done by using a blocking scan where an update lock is taken on the data row as data values are read.

Snapshot isolation, on the other hand, is truly optimistic because data that is to be modified is not actually locked in advance, but the data is locked when it is selected for modification. When a data row meets the update criteria, the snapshot transaction verifies that the data has not been modified by another transaction after the snapshot transaction started. If the data has not been modified by another transaction, the snapshot transaction locks the data, updates the data, releases the lock, and moves on. If the data has been modified by another transaction, an update conflict occurs and the snapshot transaction rolls back.

Before SQL Server 2005, transactions were always controlled in a pessimistic manner; this means all transactions acquired locks. Although locking can be the best concurrency-control choice for applications that require data consistency and inter-transaction protection, it can cause writers to block readers. If a transaction changes a row, another transaction cannot read the row until the writer commits. There are cases where waiting for the change to complete is the correct response; however, there are cases where the previous transactionally consistent state of the row is enough.

Row versioning-based isolation levels allow the reader to obtain the previously committed value of the row, at the cost of having to keep this version when the row is modified (even if no one is "currently" accessing the data). This means that all SELECT, UPDATE, and DELETE statements (but not INSERT statements, unless reinserting over a recently deleted record) might have to pay the cost of versioning with additional I/O into or from the version store. You must decide whether to trade this overhead (and therefore performance) for improved concurrency. It is important to state that although each query might cost more to execute because of versioning, the result might be that you can support more throughput because of reduced contention. For this reason, it is important that row versioning-based isolation levels be enabled where contention was costing you throughput. If you use this as a solution to a performance problem that is not caused by contention, you might be solving the wrong problem and, in fact, degrading your system throughput. This is similar to throwing hardware at a problem in which the performance issue is caused by poor application design and locking conflicts.

Generally, application programming is easier when the database automatically controls the view of the data through versioning-based isolation. In this environment, you worry less about deadlocks and blocking and pay a slight additional cost in administrative management overhead and performance. In many cases, paying a cost in administrative overhead and in providing more disk throughput for tempdb can be an easier choice—this is often known as "killing it with iron" and has the benefit that programmers do not have to worry about complex programming logic. If all queries that are running under row versioning-based isolation are solely for read consistency and not the basis for later modifications, no application retry logic is necessary. However, you might end up with conflicts in transactions that use the snapshot isolation level and later perform updates. If the version is "stale," it is likely you will have to use transaction retry logic for updates. This is exactly what we described earlier with optimistic concurrency control. However, the updates that are performed under read committed isolation using row versioning do not conflict and, therefore, will never incur the cost of rollback.
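
Where snapshot transactions do perform updates, the retry logic can be centralized with Transact-SQL TRY...CATCH. The following is a minimal sketch under stated assumptions: the table, column, and key value are hypothetical, ALLOW_SNAPSHOT_ISOLATION is ON, and error 3960 is the snapshot update-conflict error:

DECLARE @ReservationID INT;
DECLARE @retries INT;
SET @ReservationID = 42;            -- hypothetical key value
SET @retries = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRANSACTION;

        UPDATE dbo.Reservation      -- hypothetical table
        SET Status = 'Booked'
        WHERE ReservationID = @ReservationID;

        COMMIT TRANSACTION;
        BREAK;                      -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;   -- a conflict dooms the transaction

        IF ERROR_NUMBER() = 3960    -- snapshot update conflict: try again
            SET @retries = @retries - 1;
        ELSE
            BREAK;                  -- some other error: stop retrying
    END CATCH
END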

In situations where blocking is required for the correctness of the application (such as queues implemented in tables), if you have enabled read committed using row versioning, you must use the WITH (READCOMMITTEDLOCK) locking hint to get the expected "classic" behavior, because the row versioning-based read will never block.
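
For example, a destructive read from a hypothetical table-based queue might look like the following sketch; a production queue would typically also add hints such as UPDLOCK and READPAST so that competing consumers each claim a different row:

DECLARE @WorkItemID INT;

BEGIN TRANSACTION;

-- READCOMMITTEDLOCK restores the blocking (shared lock) read even when
-- READ_COMMITTED_SNAPSHOT is enabled for the database.
SELECT TOP (1) @WorkItemID = WorkItemID
FROM dbo.WorkQueue WITH (READCOMMITTEDLOCK)
ORDER BY WorkItemID;

DELETE FROM dbo.WorkQueue WHERE WorkItemID = @WorkItemID;

COMMIT TRANSACTION;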

The programmer now has the option of using SQL Server 2005 conflict resolution together with application error handling or Transact-SQL error handling in place of previous timestamp management techniques. Additionally, when a workload consists of batch-style updates in which many rows are modified, we do not recommend using snapshot isolation or read committed isolation using row versioning, because the chance of conflict can be significantly larger. In that case, you should choose a lock-based isolation level (READ COMMITTED, REPEATABLE READ, or SERIALIZABLE), keep your transactions short, and carefully design your transactions to minimize resource conflicts so that you minimize deadlocks.

Understanding Isolation

Because transaction isolation levels are completely controllable in SQL Server 2005, it is important to understand the most appropriate isolation for your application: one that balances concurrency and performance while still maintaining the appropriate level of accuracy. The concept of isolation level is not new. Information about the ANSI specifications for isolation can be found at http://www.ansi.org, and the current specification to review is ANSI INCITS 135-1992 (R1998). However, the standard is intended to be implementation-independent, which makes it somewhat ambiguous about the exact trade-offs in consistency and performance, as well as about how to achieve these goals and standards. Therefore, numerous papers have been written to further clarify the standards, such as Generalized Isolation Level Definitions, or even to critique them, such as A Critique of ANSI SQL Isolation Levels. Based on the philosophies that these works represent and the ambiguity in the ANSI standard, SQL Server 2005 offers many of the possible combinations that are typically wanted.

Isolation Levels Offered in SQL Server 2005

Table 5

Dirty read, nonrepeatable read, and phantom are the possible phenomena as defined in the ANSI SQL standard.

| Isolation level | Dirty read | Nonrepeatable read | Phantom | Concurrency control |
| --- | --- | --- | --- | --- |
| Read uncommitted | Yes | Yes | Yes | (None) |
| Read committed | No | Yes | Yes | Pessimistic |
| Read committed using row versioning | No | Yes | Yes | Optimistic reads, pessimistic updates |
| Repeatable read | No | No | Yes | Pessimistic |
| Snapshot | No | No | No | Optimistic |
| Serializable | No | No | No | Pessimistic |

The application usage for each of the above varies based on the required level of "correctness" and the trade-off chosen in performance and administrative overhead.

Isolation Level and Best-Suited Application

Table 6

Isolation level, and when it is best suited for an application:

Read uncommitted
   The application does not require absolute accuracy of data (it could get a larger or smaller number than the final value) and wants the performance of OLTP operations above all else. There is no version store to maintain, and no locks are acquired or honored when reading data. Queries under this isolation level may see uncommitted changes.

Read committed
   The application does not require point-in-time consistency for long-running aggregations or long-running queries, yet wants the data values that are read to be transactionally consistent. The application does not want the overhead of the version store when reading data, and accepts the potential incorrectness of long-running queries because of nonrepeatable reads. This isolation level is ideally suited to transactions that rely on the blocking behavior of locks to implement queuing applications or other ordered access to data.

Read committed using row versioning
   The application requires point-in-time consistency for long-running aggregations or queries: all data values must be transactionally consistent as of the point in time at which the query begins. The database administrator accepts the overhead of the version store for the benefit of increased throughput because of reduced lock contention. Additionally, the application wants transactional consistency for large queries, not for transactions.

Repeatable read
   The application requires absolute accuracy for long-running multistatement transactions and must protect all data that it has read from other modifications until the transaction completes. The application requires consistency for all data that is read repeatedly within the transaction and requires that no other modifications be allowed. This can affect concurrency in a multiuser system if other transactions are trying to update data that the reader has locked. This level is best when the application relies on consistent data and plans to modify it later within the same transaction.

Snapshot
   The application requires absolute accuracy for long-running multistatement transactions but does not plan to modify the data. The application requires consistency for all data that is read repeatedly within the transaction but plans only to read it. Shared locks are not necessary to prevent modifications by other transactions, because changes made elsewhere will not be seen until the data modification transactions commit or roll back and the snapshot transaction finishes. Data can be modified within this isolation level, at the risk of conflicts with transactions that have updated the same data after the snapshot transaction started; each updating transaction must handle this conflict. A system that has multiple readers and a single writer (such as the replicated reporting system in the scenario section earlier in this paper) will not encounter conflicts.

Serializable
   The application requires absolute accuracy for long-running multistatement transactions and must protect all data that it has read from other modifications until the transaction finishes. Additionally, the transactions request sets of data, not just singleton rows. Each set must produce the same output at each request within the transaction; other users cannot modify the data that has been read and cannot insert new rows into the set. This level is best when the application relies on consistent data, plans to modify it later within the same transaction, and requires absolute accuracy and data consistency up to the end of the transaction.

Considerations for Row Versioning-Based Isolations

Although changing to read committed using row versioning does not require application changes (unless the application depends on the underlying locking behavior), it does require an administrative change: the option must be activated per database. Enabling a database to allow snapshot isolation requires administrative planning and possibly application planning; actually using snapshot isolation requires application changes. In both cases, the option to enable row versioning-based isolation levels is set at the database level, and row-versioning data is stored in memory (for short-lived versions) and in tempdb.

Be aware that, like other database-level settings, both row versioning-based isolation settings can be made on the model system database. These settings are then propagated to databases as they are created; model acts as a template that is applied at create time. This is useful if you want to establish a set of standard database settings, because it avoids the administrative task of connecting to and updating each database in turn.
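To apply both settings through model, something like the following can be used (a minimal sketch; enabling the READ_COMMITTED_SNAPSHOT option succeeds only while no other users are connected to model):

ALTER DATABASE model SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE model SET READ_COMMITTED_SNAPSHOT ON;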

**Note   **You cannot enable the READ_COMMITTED_SNAPSHOT database option for tempdb and for master. The master database is always enabled for snapshot isolation.

Definitions, Terminology, and Syntax for Row Versioning-Based Isolation

To implement row versioning–based isolation in SQL Server 2005, you must be familiar with a few new concepts, terms, and syntax. In earlier versions of SQL Server, isolation level was controlled only by a session setting (SET TRANSACTION ISOLATION LEVEL or the equivalent settings on the ADO or ADO.NET call) or by a query hint (FROM table name WITH (isolation hint)).

In SQL Server 2005, to use row versioning–based isolation, one of the following database options must already be set (and not pending):

  • Read committed isolation using row versioning for statement-level read consistency
  • Snapshot isolation for transaction-level read consistency

If snapshot isolation is requested and the database is not ready to handle snapshots (for example, still pending), the statement that is requesting snapshot will fail. It is important to make changes at the appropriate time when you are enabling and disabling snapshot isolation, and to understand the state of the database and client requests at the time of the change.

Read Committed Isolation Using Row Versioning (Statement-Level Read Consistency)

When set, statement-level read consistency guarantees that each statement under read committed isolation sees only committed changes that occurred before the start of the statement. Each new statement within the transaction picks up the most recent committed changes; the version "refresh" occurs at the beginning of each SELECT statement. In other words, this version of read committed is semantically similar to traditional read committed in that only committed changes are visible, but the timing of when those changes committed differs: each statement sees the changes that were committed before the statement began, instead of when the resource is read. This is a completely new behavior of read committed that is nonlocking and nonblocking, and that gives each statement a consistent point-in-time view of the data as of the start of the statement.

Statement-level snapshot is enabled by setting the READ_COMMITTED_SNAPSHOT database option to ON. After it is set to ON, no other application changes are necessary.

ALTER DATABASE <database name>
   SET READ_COMMITTED_SNAPSHOT ON
   WITH <termination>;
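
For example, assuming a database named Sales, the following form applies the option and uses the ROLLBACK IMMEDIATE termination clause to end other user sessions immediately and roll back their incomplete transactions:

ALTER DATABASE Sales
   SET READ_COMMITTED_SNAPSHOT ON
   WITH ROLLBACK IMMEDIATE;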

Executing this statement requires that only one user connection (the one that is enabling the READ_COMMITTED_SNAPSHOT database option) exist in the database. Be aware that this does not mean that the database must be in single-user mode. Use the WITH <termination> clause of the ALTER DATABASE statement to end other user sessions in the database and roll back their incomplete transactions. Ideally, this change should be made during off hours, and it is likely to be a permanent change. To see whether a database has this option set, use the sys.databases catalog view.

SELECT sd.is_read_committed_snapshot_on
FROM sys.databases AS sd
WHERE sd.[name] = '<database name>';

The value that is returned for is_read_committed_snapshot_on will be either true (1) or false (0). When the READ_COMMITTED_SNAPSHOT option is set to ON, read operations under the read committed isolation level are based on row versions and are executed in a nonlocking mode. When READ_COMMITTED_SNAPSHOT is set to OFF, the scans under read committed isolation are executed in a short-term locking mode where locks are only held for the duration of the read request.
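The statement-level behavior can be illustrated with a sketch (the dbo.Orders table and the counts are hypothetical):

-- Session 1, in a database with READ_COMMITTED_SNAPSHOT ON
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRAN;
SELECT COUNT(*) FROM dbo.Orders;  -- returns, say, 10

-- Session 2 inserts a row into dbo.Orders and commits here

SELECT COUNT(*) FROM dbo.Orders;  -- returns 11; the version "refresh"
                                  -- occurred at the start of this statement
COMMIT TRAN;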

Snapshot Isolation (Transaction-Level Read Consistency)

When snapshot isolation is set, it guarantees transaction-level read consistency: every statement within a snapshot isolation transaction sees only committed changes that occurred before the start of the transaction. Effectively, each statement in the transaction sees the same set of data, while the data remains available for modification outside of the transaction. Concurrent modifications are not prevented, and the "snapshot" transaction is unaware of the changes that are made by other transactions. The version "refresh" occurs only at the start of each transaction, for as long as you run under snapshot isolation. Concurrent updates can cause a transaction that is running under snapshot isolation to quit because of conflicts.

To achieve transaction-level snapshots, there are two changes that are required:

  • First, the database must allow for it by setting the ALLOW_SNAPSHOT_ISOLATION database option to ON.
  • Second, the application or user, or both, must explicitly request a snapshot transaction.

Allowing Snapshot Isolation

Administrators must set a database option to allow snapshot isolation. This database option might not take effect immediately; however, the option can be changed while users are connected to the database. If users are currently processing transactions when the state change is made, all transactions must finish before snapshot transactions can occur. This is because row versions have not been maintained for those transactions that are currently executing.

If changing the state takes a long time and sessions request a snapshot transaction while the database is still "pending," those transactions will receive an error. If long-running transactions are executing at the time of the change, the change to a versioning state can take a long time to finish. The database administrator can cancel the request; if the request is canceled, the versioning state rolls back to the previous versioned (or nonversioned) state. To request snapshot isolation for the database, change the database state by using the following ALTER DATABASE statement:

ALTER DATABASE <database name>
   SET ALLOW_SNAPSHOT_ISOLATION ON;

To see whether the option has taken effect, you can examine the sys.databases catalog view. The two columns to look at are: snapshot_isolation_state and snapshot_isolation_state_desc.

snapshot_isolation_state returns a tinyint value between 0 and 3:

  • 0 = Snapshot isolation is OFF.
  • 1 = Snapshot isolation is ON.
  • 2 = Snapshot isolation state is in transition to OFF state.
  • 3 = Snapshot isolation state is in transition to ON state.

snapshot_isolation_state_desc returns an nvarchar(60) character description of the state:

  • OFF = Snapshot isolation is OFF.
  • ON = Snapshot isolation is ON.
  • IN_TRANSITION_TO_OFF = Snapshot isolation state is in transition to OFF state.
  • IN_TRANSITION_TO_ON = Snapshot isolation state is in transition to ON state.

SELECT sd.snapshot_isolation_state
    , sd.snapshot_isolation_state_desc
FROM sys.databases AS sd
WHERE sd.[name] = '<database name>';

Table 7

Snapshot isolation state and description:

OFF
   Snapshot isolation is disabled in the database; transactions that run under snapshot isolation are not allowed. The database versioning state is initially set to OFF during restart recovery. (A new SQL Server 2005 feature is that the database is made available after the REDO phase of recovery.) If versioning is enabled, the versioning state is set to ON after recovery finishes.

PENDING_ON
   Snapshot isolation is in the process of being enabled. Before it is enabled, the Database Engine waits for the completion of all update transactions that were active when the ALTER DATABASE statement was issued. New update transactions in this database start paying the cost of versioning by generating row versions. Transactions under snapshot isolation cannot start yet.

ON
   Snapshot isolation is enabled; new snapshot transactions can start in this database. Existing snapshot transactions (in other snapshot-enabled databases) that started before the versioning state was set to ON cannot do a snapshot scan in this database, because the row versions that those transactions would need were not generated by the update transactions.

PENDING_OFF
   Snapshot isolation is in the process of being disabled. New snapshot transactions cannot start. Update transactions still pay the cost of versioning in this database, and existing snapshot transactions can still do snapshot scans. PENDING_OFF does not become OFF until all existing snapshot transactions finish.

If the database option is taking a long time to move out of the pending state, you can use the following statement to return a virtual table that shows the transactions that are active and blocking the database state change.

SELECT stx.session_id
    , atx.[name]
    , stx.transaction_sequence_num
    , stx.first_snapshot_sequence_num
    , stx.commit_sequence_num
FROM sys.dm_tran_active_transactions AS atx
INNER JOIN sys.dm_tran_active_snapshot_database_transactions AS stx
ON atx.transaction_id = stx.transaction_id;
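
To see which sessions own those transactions, the same output can be joined to the sys.dm_exec_sessions view; a sketch:

SELECT stx.session_id
    , es.login_name
    , es.host_name
FROM sys.dm_tran_active_snapshot_database_transactions AS stx
INNER JOIN sys.dm_exec_sessions AS es
ON es.session_id = stx.session_id;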

Requesting Snapshot Transactions

As mentioned previously in this paper, after the database has been enabled for snapshot isolation, developers and users must then request that their transactions be run in this snapshot mode. This must be done before starting a transaction, either by a client-side directive on the ADO.NET transaction object or within their Transact-SQL query by using the following statement:

SET TRANSACTION ISOLATION LEVEL SNAPSHOT

If users execute this session setting change before the database has completed this change, the user's transaction will fail with error 3956: "Snapshot isolation transaction failed to start in database '%.*ls' because the ALTER DATABASE command which enables snapshot isolation for this database has not finished yet. The database is in transition to pending ON state. You must wait until the ALTER DATABASE Command completes successfully."

Understanding the "Beginning" of a Transaction

Versioning is performed for all updates when the database allows snapshot isolation; however, the version that a transaction will use is based on the first statement that accesses data, not on the BEGIN TRAN that creates the transaction. If the transaction isolation level is to be set to snapshot, this must be done before any transaction is created (with BEGIN TRAN or the ADO.NET equivalent). Otherwise, an error message like the following is returned:

Msg 3951, Level 16, State 1, Line 7
The transaction fails in the database because the statement was run
under snapshot isolation but the transaction did not start in snapshot
isolation. You cannot change the isolation level of the transaction to
snapshot after the transaction has started unless the transaction was
originally started under snapshot isolation level.

To avoid this error message, move the SET TRANSACTION ISOLATION LEVEL SNAPSHOT statement so that it executes before the transaction is created.

For example:

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT getdate(); -- (T1) snapshot transaction has not "officially" begun
SELECT * FROM <tablename>; -- (T2) snapshot transaction begins here
SELECT ... -- will see all committed changes as of (T2)
SELECT ... -- will see all committed changes as of (T2)
COMMIT TRAN;

Understanding Row Versioning

Versioning effectively starts with a copy-on-write mechanism that is invoked when a row is modified or deleted. While the transaction is running, the old version of the row must remain available for transactions that require an earlier transactionally consistent state. Row versioning–based transactions can effectively "view" the consistent version of the data through these previous row versions. Row versions are stored within the version store, which is housed in the tempdb database.

More specifically, when a record in a table or index is modified, the new record is stamped with the "sequence_number" of the transaction that is performing the modification. The old version of the record is copied to the version store, and the new record contains a pointer to the old record in the version store. If multiple long-running transactions exist and multiple "versions" are required, records in the version store might contain pointers to even earlier versions of the row. All the earlier versions of a particular record are chained in a linked list; and in the case of long-running row versioning–based transactions, the link will have to be traversed on each access to reach the transactionally consistent version of the row. Version records must be kept in the version store only as long as there are row versioning–based queries that might be interested in them.
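
The version records themselves can be inspected through the sys.dm_tran_version_store dynamic management view. For example, the following counts the version records currently held for each database (a diagnostic sketch; on a busy system this view can be large and expensive to scan):

SELECT database_id
    , COUNT(*) AS version_records
FROM sys.dm_tran_version_store
GROUP BY database_id;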

Row Versioning in Read Committed Using Row Versioning

For queries that are running under read committed using row versioning, row versions are necessary only for the duration of each SELECT statement in the transaction. However, row versions are maintained until the end of the transaction, which is an important point to consider for tempdb storage: the storage impact will be the same as it would be for snapshot isolation.

Row Versioning in Snapshot Isolation

For queries that are running under snapshot isolation, the row versions must be kept until the end of the transaction. Because a transaction might span multiple statements and a longer period of time, the version store must potentially accommodate multiple versions of a row for a longer period.

In the following figure, the current version of the record is generated by transaction T3, and it is stored in the usual data page. The previous versions of the record that were generated by transaction T2 and transaction T1 are stored in pages in the version store because there are still transactions running under snapshot isolation that are accessing the previous state of the data.


Figure 3. Versions of a Row

Row versioning slows update performance because of the extra work that is involved in keeping old versions; however, in cases where contention is costly, you might see improved overall performance from the reduction in contention. Additionally, row versioning–based statements and transactions (also known as version readers) have the extra cost of traversing the version link pointers. If many row versioning–based transactions exist and are long-running, a larger and faster tempdb may be necessary, and performance may degrade if tempdb is not configured properly.

DDL Statements Within Snapshot Isolation

Certain DDL statements that modify the structure of an object will be disallowed because their changes cannot be seen through row versioning. For example, consider a snapshot transaction that has read table1 and found 6 rows:

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT count(*) FROM table1; -- (Returns 6 rows) 
...

A second transaction adds rows to this table that are not visible to the first transaction, because that transaction is using row versioning to maintain transaction-level read consistency. If this transaction were allowed to execute a CREATE INDEX statement, how would this work? Would the index be created on the snapshot view of the data, or would it include all rows, as CREATE INDEX typically would? Considering the first part of the question, how would the Database Engine reconcile the changes that are occurring simultaneously with DDL operations that are executed by other transactions? Considering the second part of the question, would multiple row versions require their own versions of the index? Instead, CREATE INDEX is disallowed within a snapshot transaction. In fact, numerous DDL statements are disallowed because they violate the concept of "snapshot" and must be invoked against the actual base object, not a "version" of it.

Be aware that these restrictions do not apply to read committed using row versioning; there, DDL statements (and data access queries) will block and queue rather than fail.

DDL Statements That Are Not Allowed Within Snapshot Isolation

The following statements are not allowed within a transaction that is running under snapshot isolation because of their disruptive potential on the snapshot copies of the data:

  • CREATE INDEX
  • CREATE XML INDEX
  • ALTER INDEX
  • ALTER TABLE
  • DBCC DBREINDEX
  • ALTER PARTITION FUNCTION
  • ALTER PARTITION SCHEME
  • DROP INDEX
  • Common language runtime (CLR) DDL

An attempt to run one of these statements will cause a severity level 16 message to be returned, such as:

Msg 3964, Level 16, State 1, Line 1
Transaction failed because this DDL statement is not allowed
inside a snapshot isolation transaction. Since metadata is not
versioned, a metadata change can lead to inconsistency
if mixed within snapshot isolation.

Other DDL statements that are not listed here, such as CREATE TABLE, are allowed because a new object has no earlier versions of data for other transactions to view. This does not violate the rules listed previously.
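
The following sketch shows both behaviors together; the table dbo.table1 and its column col1 are hypothetical:

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT COUNT(*) FROM dbo.table1;            -- the snapshot transaction begins
CREATE TABLE dbo.table2 (col1 int);         -- allowed: table2 is a new object
CREATE INDEX ix_col1 ON dbo.table1 (col1);  -- disallowed: fails with Msg 3964
ROLLBACK TRAN;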

Other DDL Statement Changes After Snapshot Isolation Started

In most production databases, the schema is relatively stable, but changes sometimes have to be made. If the system is highly available and user activity occurs concurrently with schema changes, programmers should be prepared for errors that these changes raise. Because row versioning exists only for data rows, not for metadata, one way of keeping a consistent view would be to block all DDL operations in the server instance while a snapshot transaction is running. That would be too restrictive, because a long-running snapshot transaction would prevent a database administrator from performing any DDL operation in a database for potentially a long time. Instead, DDL operations are supported while snapshot transactions are running; however, a snapshot transaction might encounter a failure if it tries to access objects that have changed since the start of the transaction. The following shows a timeline (time flows from left to right):

T1|--- Snapshot transaction ------------------------ Use object (fail) -----------|

T2 |---DDL, change the object ---commit --|

The application programmer should put in retry logic for snapshot transactions to deal with this kind of error, and administrators should try to minimize DDL changes during highly active times of day.

Snapshot Transaction Failure Due to DDL Changes Outside of the Transaction

Not all DDL changes will cause a snapshot transaction to fail. Use the following table to see the likely impact of DDL changes that take place while snapshot transactions are running. It is important to realize that a stable schema avoids transaction failures under snapshot isolation.

Be aware that this behavior is expected, because system objects are not covered by row versioning. If they were, the system would potentially have to maintain multiple copies of every index (and its accompanying data), which would incur a large overhead.

Table 8

DDL change outside of the snapshot transaction, and when the snapshot transaction will fail:

CREATE TABLE, ALTER TABLE (including column change, type change, XML type binding change, constraint change, and so on), DROP TABLE
   The snapshot transaction fails when it tries to use the table after the modification has occurred. A sample error message:

   Msg 3961, Level 16, State 1, Line 1
   Snapshot isolation transaction failed in database '%.*ls' because the object accessed by the statement has been modified by a DDL statement in another concurrent transaction since the start of this transaction. It is disallowed because the metadata is not versioned. A concurrent update to metadata can lead to inconsistency if mixed with snapshot isolation.

CREATE STATISTICS, UPDATE STATISTICS, DROP STATISTICS
   Allowed. These statements are not affected by snapshot isolation or read committed isolation using row versioning settings.

CREATE INDEX (all index types: clustered, nonclustered, XML indexes, full-text indexes, and so on), ALTER INDEX, DROP INDEX
   The snapshot transaction fails when it tries to use the table or view after one of the associated indexes of the table has been modified.

CREATE TYPE, DROP TYPE
   The snapshot transaction fails when it tries to use the type after the type modification has occurred.

CREATE PROCEDURE, CREATE FUNCTION, CREATE VIEW, ALTER PROCEDURE, ALTER FUNCTION, ALTER VIEW, DROP PROCEDURE, DROP FUNCTION, DROP VIEW (includes both Transact-SQL and CLR procedures and functions, including user-defined aggregate functions)
   The snapshot transaction fails when it tries to use the procedure, function, or view after the modification has occurred.

CREATE TRIGGER, ALTER TRIGGER, DROP TRIGGER (includes both Transact-SQL and CLR triggers)
   The snapshot transaction fails when it tries to use the table after the change has occurred.

sp_addextendedproc, sp_dropextendedproc (refers to extended stored procedures)
   The snapshot transaction fails when it tries to use the procedure after the modification has occurred.

CREATE DEFAULT, CREATE RULE, DROP DEFAULT, DROP RULE, sp_bindefault/sp_bindrule, sp_unbindefault/sp_unbindrule
   The snapshot transaction fails when it tries to use the table after the change has occurred.

CREATE SCHEMA, ALTER SCHEMA, DROP SCHEMA (includes XML schema commands)
   The snapshot transaction fails when it tries to use the table after the change has occurred.

CREATE ASSEMBLY, ALTER ASSEMBLY, DROP ASSEMBLY
   The snapshot transaction fails when it tries to use the assembly after the assembly modification has occurred.

CREATE PARTITION SCHEME, CREATE PARTITION FUNCTION, ALTER PARTITION SCHEME, ALTER PARTITION FUNCTION, DROP PARTITION SCHEME, DROP PARTITION FUNCTION (includes data spaces)
   The snapshot transaction fails when it tries to use the partition function or scheme.

Start full-text crawl on a table
   The snapshot transaction fails when it tries to use the table after the change has occurred.

Change a full-text catalog
   The snapshot transaction fails when it tries DDL on the catalog after the change has occurred.

CREATE EXTENDED TRIGGER ON DB FOR DDL
   The snapshot transaction fails when it runs DDL that must check whether such an extended (DDL) trigger is defined.

CREATE SERVICE, ALTER SERVICE, DROP SERVICE
   The snapshot transaction fails when it tries to use the service after the change has occurred.

Development Best Practices

The database administrator for a system is responsible for evaluating the impact of either row versioning–based transaction isolation scheme on the system and its applications. The developer is responsible for understanding how to exploit the new isolation-level behaviors to build better applications. The scenarios provided earlier in this paper show some configurations for which the new isolation levels make sense. This section provides detailed information about how the developer can use this new functionality.

Read Committed Using Row Versioning

SQL Server 2005 provides a nonblocking read committed transaction isolation level that is based on statement-level views of the data. This option must be enabled by the database administrator and does not require any application-level changes. If the option is enabled, transactions that run under the read committed isolation level use row versioning and do not acquire read locks as they read data. Instead, the version store is used to isolate the transaction from changes that are taking place as the read operation executes. This protection is at the statement level: if the application runs two SELECT statements within the same read committed transaction, the results can differ if data changes have been committed between the two statements, just as they can with the traditional read committed isolation level.

This behavior is a powerful tool for developers because it enables the application to consume more data without leading to increased blocking as readers compete with writers for data. It is common to see statements such as:

SELECT COUNT(Orders) FROM sales.dbo.orders WITH(NOLOCK)

In this query, the NOLOCK hint (the same as READUNCOMMITTED) is used to prevent the read locks that this query would typically take from blocking the taking of new orders. However, it has the side effect of returning uncommitted orders, possibly leading to an inaccurate total (because the uncommitted orders might never be committed to the database). With the new behavior of read committed using row versioning, the query can be run without the lock hint, and an accurate view of committed data can be obtained without blocking online updaters. This is even more advantageous when you are performing a more complex query that involves several joins, because a stable view of the database is provided to the statement. This avoids anomalies in the results that could be caused by the late arrival of parent or child records, which the NOLOCK hint could pick up.

Some applications, especially those that implement queues in tables, might require the former blocking behavior—in this case the locking hint READCOMMITTEDLOCK should be used:

SELECT TOP 1 NextOrderID FROM sales.dbo.orders 
WITH(READCOMMITTEDLOCK) WHERE OrderStatus = N'Unprocessed';

In this query, the SELECT statement will be blocked until UPDATE statements commit or roll back; therefore, the order will not be picked up until it is committed.

As stated earlier in this section, the new read committed using row versioning behavior functions only at the statement level, even when it is used within a multistep transaction. This has another advantage in that the update conflicts that are possible with the snapshot isolation level cannot happen with read committed using row versioning. Therefore, as a developer, you do not have to add additional logic to handle potential conflicts, which allows the new behavior to be activated without application change.

Read committed using row versioning should be the first choice when you are implementing a transaction that includes multiple SELECT statements that do not require (as a group) a uniform, consistent view of the database.

Snapshot Isolation

Sometimes having a uniform, consistent view of a database over multiple SELECT statements within a transaction is important. Examples include read-only applications such as financial and human resources reporting, in which it is important that totals, subtotals, and checksums be consistent, despite the fact that they can be calculated over several SELECT statements and sometimes over several minutes. If the data changes while the transaction is running, spurious data quality issues might arise as minor discrepancies creep into reports. If the system dictates a uniform view for a longer period, the database snapshot (not discussed in this paper) is another tool.

Developers who are building read-only systems that run multiple data aggregation and sorting reports against a database that is constantly changing should consider snapshot isolation if a consistent view of the data is required and if their database administrator has determined that database server capacity allows for the slight increase in database I/O. This class of application can be built either against the primary system or against a replica system that is built by using SQL Server 2005 transactional replication. Snapshot isolation will prevent the data writers (either other users or the replication distributor process) from being blocked by long-running read locks that are taken by the reporting application.

In addition to reports that require consistent data across queries in the same transaction, there are other examples where the developer might want to use snapshot isolation, such as:

  • Filling data-driven dialog elements that are interrelated, again to avoid inconsistencies between pull-down lists and other array controls.
  • Populating database administrator–centered live system status dialogs, where system statistics are correlated from data stored across the database.

Snapshot isolation development becomes interesting when the application must perform updates against the data that is read within the transaction. This is because the transactionally consistent view of the database, as of the start of the transaction, necessarily masks any conflicting updates. These updates are only discovered when the update is sent to the database and a conflict error is raised.

This side effect of optimistic concurrency control (the application was optimistic that no other application users would update the same data) means that the developer must do a little extra work to ensure that the user's data is not lost. Much of this logic will already exist in systems that "disconnect" their data from the database, probably based on a row-level incrementing timestamp (rowversion) value, to provide optimistic concurrency control. If this logic already exists in your application, read committed using row versioning might be a better choice: it eliminates the blocking in the data-population phase and maintains conflict detection without requiring an active transaction in the database. It remains best practice to get in and out of the database as quickly as you can to avoid tying up resources, in this case the version store.
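For reference, that pre-existing pattern looks something like the following sketch, in which the dbo.DialogText table carries a hypothetical rowversion column named RowVer:

-- @OriginalRowVer holds the RowVer value that was read when the data was fetched.
DECLARE @NewText nvarchar(15), @MessageNo smallint, @OriginalRowVer binary(8);
-- (Values would be supplied by the application.)
UPDATE dbo.DialogText
SET MessageText = @NewText
WHERE MessageNo = @MessageNo
AND RowVer = @OriginalRowVer;
IF @@ROWCOUNT = 0
   RAISERROR (N'Row was changed by another user; refresh and retry.', 16, 1);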

Snapshot isolation provides an automatic mechanism for detecting conflicts within a transaction that avoids the requirement to add timestamp columns or to make other schema changes. If a conflict is detected when the update is sent to the database, a SqlException is thrown and the current transaction is canceled.

Consider the following Microsoft Visual C# 2005 code fragments. Be aware that best practice dictates that try/catch logic would ordinarily be wrapped around the (missing) connection Open and around the Fill command, which is where the transaction is initiated:

// (Definition of a SqlConnection object skipped)
// Define a transaction object using the Snapshot Isolation Level.
SqlTransaction dialogTrans = sqlCon.BeginTransaction(IsolationLevel.Snapshot);
// Hook up Select and Update command handlers to the data adapter.
// Both commands use the snapshot transaction "dialogTrans".
SqlCommand selectCMD = new SqlCommand();
selectCMD.Connection = sqlCon;
selectCMD.Transaction = dialogTrans;
selectCMD.CommandText = "select MessageNo, MessageText " +
                        " from dbo.DialogText";
sqlDataAdapter1.SelectCommand = selectCMD;
SqlCommand updateCMD = new SqlCommand();
updateCMD.Connection = sqlCon;
updateCMD.Transaction = dialogTrans;
updateCMD.CommandText = "update dbo.DialogText " +
                        "set MessageText = @MessageText " +
                        "where MessageNo = @MessageNo";
updateCMD.Parameters.Add("@MessageText",
                         SqlDbType.NVarChar,
                         15, "MessageText");
updateCMD.Parameters.Add("@MessageNo",
                         SqlDbType.SmallInt,
                         2, "MessageNo");
sqlDataAdapter1.UpdateCommand = updateCMD;
// Now get the data.
sqlDataAdapter1.Fill(dataSet1, "DialogText");

The previous Visual C# code uses ADO.NET and the SqlClient to populate a dataset with data from a SQL Server 2005 database in which the database administrator has enabled snapshot isolation. A typical Windows Forms application would now commit the transaction, disconnect from the database and present the data to the user. In the previous code, the transaction is left open to allow another transaction to change the data read into the dataset1 dataset.

The following code shows the return of the data to the database:

// Bind the data to the form's grid control
dataGridView1.DataSource = dataSet1;
dataGridView1.DataMember = "DialogText";
dataGridView1.AutoGenerateColumns = true;
// ...Time passes, conflicting changes take place
// User presses "update now" button:
try
{
   sqlDataAdapter1.Update(dataSet1, "DialogText");
   dialogTrans.Commit();
   dataSet1.AcceptChanges();
}
catch (SqlException h)
{
   string errorMessages = "";
   for (int i = 0; i < h.Errors.Count; i++)
   {
      errorMessages += "Index #" + i + "\n" +
      "Message: " + h.Errors[i].Message + "\n" +
      "ErrorNumber: " + h.Errors[i].Number + "\n" +
      "LineNumber: " + h.Errors[i].LineNumber + "\n" +
      "Source: " + h.Errors[i].Source + "\n" +
      "Procedure: " + h.Errors[i].Procedure + "\n";
   }
   if (dialogTrans.Connection != null)
   {
      dialogTrans.Rollback();
   }
   MessageBox.Show(errorMessages, "Conflict Errors");
}
catch (Exception)
{
   // For general exceptions, make sure the transaction is rolled back,
   // but only if it is still active (it still has a connection).
   if (dialogTrans.Connection != null)
   {
      dialogTrans.Rollback();
   }
}

In the second snippet, the dataset object is bound to a grid control on the form, where the user is free to make multiple updates to the data; in parallel, other transactions have made conflicting changes. When users request that their changes be stored to the database, the dataset's changes are sent through the sqlDataAdapter as a series of database update statements. The first update statement that detects a conflict causes an exception to be thrown, and the work is rolled back; if no exception is thrown, the transaction is explicitly committed by the dialogTrans.Commit() statement.

The exception handler above catches the SqlException that is thrown and formats an error message that could be sent to an application log; the conflict can be explicitly tested for by checking SqlException.Errors[i].Number for error number 3960. Be aware that best practice dictates that the whole error collection be checked, in case of other, more severe errors.

Also be aware that the SqlException handler tests the dialogTrans SqlTransaction object to see whether it is still active (that is, whether it still has a connection to the database). If it is active, it is rolled back to ensure transactional consistency. If you try to commit or roll back an inactive transaction object, you will see a SystemException that indicates a COM+ exception code of 0xE0434F49 (-532459699) with the text: "This SqlTransaction has completed; it is no longer usable."

When the conflict is detected, the application should inform the user that their changes have been rejected and offer them the opportunity to resubmit their changes under a new transaction.

Conflict detection, the resultant transaction rollback, and the requirement to resubmit the work present the developer with a decision: if the optimistic concurrency control mechanism is too optimistic and data conflicts occur frequently, pessimistic concurrency control might be a better choice. When you decide which method of transaction isolation to deploy within your application, you must balance the blocking caused by lock contention against the additional work caused by transaction rollback.

Minimizing Update Conflicts

Applications that are designed to work under optimistic concurrency provided by snapshot isolation must invest in conflict avoidance techniques. Although conflicts can be handled, it is best to avoid the cost of transaction rollback and retry if you can.

There are two main methods to reduce the risk of a conflict:

  • Application design. When you are gating access to shared resources, you can "reserve" them for the updating application. The concurrency of the individual data item is reduced (others cannot access it while it is reserved), but the overall system concurrency is not affected. Examples of this technique are seen in online ticket-booking applications.
  • Index design. Ensure that modified rows are uniquely identifiable by making sure that appropriate indexes are used in the access path of update queries.
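
Where a conflict does occur anyway, retry logic can also be written on the server by using Transact-SQL TRY...CATCH and testing for update conflict error 3960. The following is a minimal sketch against the hypothetical dbo.DialogText table used in the earlier code fragments:

DECLARE @retries int;
SET @retries = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
        UPDATE dbo.DialogText
        SET MessageText = N'New text'
        WHERE MessageNo = 1;
        COMMIT TRAN;
        BREAK;  -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRAN;
        IF ERROR_NUMBER() = 3960  -- snapshot update conflict
            SET @retries = @retries - 1;
        ELSE
        BEGIN
            DECLARE @msg nvarchar(2048);
            SELECT @msg = ERROR_MESSAGE();
            RAISERROR (@msg, 16, 1);  -- report any other error
            BREAK;
        END
    END CATCH
END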

Illustrating Concurrency Behavior

The following tables use the simple Customers/Orders/Items schema to show the behavior of the four isolation levels:

  • New optimistic concurrency with snapshot isolation.
  • Default read committed and read committed using row versioning.
  • Read uncommitted.

Two client sessions are required:

  • updater: Sets up the sample schema and inserts and updates data.
  • reader: Runs under the four isolation levels to show their behaviors.

The simple schema is re-created at the start of each run by deleting and re-creating the database. Dropping a database requires that no sessions be active in the database that is being dropped (including the session that issues the drop).

READ COMMITTED

Preparation session:

  1. Connect to an instance of SQL Server 2005 by using SQL Server Management Studio.
  2. Open a query window.
  3. Load the Setup_WithoutRowVersioning.sql script.
  4. Execute the Setup_WithoutRowVersioning.sql script to create the [WithoutRowVersioning] database and populate a small sample schema.

Table 9

Updater session Reader session
 <disconnected>
  • Open a new query window in SQL Server Management Studio and connect to an instance of SQL Server 2005.

    Be aware that the LOCK_TIMEOUT session option is set to 0 to report lock conflicts immediately, without blocking.

  • Load the Reader_WithoutRowVersioning.sql script and make sure that the SET TRANSACTION ISOLATION LEVEL READ COMMITTED statement is not commented out, and that the SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED is commented out.
  • Execute the Reader_WithoutRowVersioning.sql script. The script will connect to the [WithoutRowVersioning] database and execute three queries, each returning data as created in the setup script.
  • Open a new query window in SQL Server Management Studio and connect to an instance of SQL Server 2005.
  • Load the Updater_WithoutRowVersioning.sql script.
  • Execute the "Use [WithoutRowVersioning]" command to change the database context.
  • Execute statement block 1.

    This will lock records in the OrderHeaders table because the explicit transaction is left active.
 
 
  • Execute the Reader_WithoutRowVersioning.sql script.

    This time the first and third SELECT queries will fail because their data is locked. Without the LOCK_TIMEOUT setting of 0, the queries would have hung until either the update finished or the lock wait time-out was reached. You should see a message similar to the following:

    Msg 1222, Level 16, State 51, Line 8
    Lock request time out period exceeded.

    If you select the Results tab, you will see that the second query finished successfully.

    Why did the second query work?

    If you look at the execution plans for the three SELECT statements, you will see that the second query seeks into the OrderHeaders table based on values in the OrderDetail table, thereby avoiding the locked row, because that row has no corresponding OrderDetails row(s).

    To display the execution plans in SQL Server Management Studio, select the three SELECT statements and press CTRL+L.

  • Execute statement block 2.

    This will roll back the changes that were made by the INSERT statement.

  • Execute statement block 3.

    This will lock rows in both the OrderHeaders and OrderDetails tables.

 
 
  • Execute the Reader_WithoutRowVersioning.sql script.

    This time, all three SELECT queries will fail because the data for each query is locked.

  • Execute statement block 4.

    This will roll back the changes made by the previous INSERT statements.

 
 
  • Execute the REPORT 1 SELECT statement in the Reader_WithoutRowVersioning.sql script.

    This will run successfully and return the base data that was entered when the schema was created.

  • Execute statement block 5.

    This inserts and commits a new order.

 
 
  • Execute the REPORT 2 and REPORT 3 SELECT statements in the Reader_WithoutRowVersioning.sql script.

    These will run successfully and will return the base data and also the new order (because it was committed). This behavior can cause issues in report suites that expect constant data across multiple SELECT statements.

    How would you resolve this?

    For an answer, see the following discussion.

  • Close this session.
  • Close this session.

The previous sessions show the following positive behaviors of read committed:

  • Committed updates are seen immediately.
  • Locks can be used to serialize access to data; short lock waits (known as blocking) are usually acceptable to most systems.

The sessions also show the potential negative behaviors:

  • Long lock waits can cause command time-out or lock time-out errors and increase the risk of deadlocking (mutually exclusive lock requests).
  • Data can change while running a suite of queries that expect consistent data.

The following techniques are available for reducing the negative behavior of read committed:

  • Make a read-only replica of the data for reporting purposes. Before SQL Server 2005, this could be done with a one-off backup/restore, with continuous periodic log shipping, or with replication (various techniques).
  • To avoid lock waits, use read uncommitted. See the table in the following section.
  • To avoid data changing between queries, use the repeatable read or serializable transaction isolation levels in a single transaction that spans the report queries.

In SQL Server 2005, the previous techniques apply. SQL Server 2005 also provides new options, such as:

  • Optimistic concurrency with snapshot isolation (across several queries).
  • Read committed using row versioning (for a single query).
  • Database mirroring.
  • Database snapshots.

READ UNCOMMITTED

Preparation session:

  1. Connect to an instance of SQL Server 2005 by using SQL Server Management Studio.
  2. Open a query window.
  3. Load the Setup_WithoutRowVersioning.sql script.
  4. Execute the Setup_WithoutRowVersioning.sql script to create the [WithoutRowVersioning] database and populate a small sample schema.

Table 10

Updater session Reader session
 <disconnected>
  • Open a new query window in SQL Server Management Studio and connect to an instance of SQL Server 2005.
  • Load the Reader_WithoutRowVersioning.sql script and make sure that the SET TRANSACTION ISOLATION LEVEL READ COMMITTED statement is commented out, and that the SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED is not commented.
  • Execute the Reader_WithoutRowVersioning.sql script.

    The results should be the same as when you first executed this script in the previous "READ COMMITTED" section.

  • Load the Updater_WithoutRowVersioning.sql script.
  • Execute the "Use [WithoutRowVersioning]" command to change the database context.
  • Execute statement block 1.

    This will lock records in the OrderHeaders table because the explicit transaction is left active.

 
 
  • Execute the Reader_WithoutRowVersioning.sql script.

    This time, all three queries finish successfully; however, notice the discrepancy between the first and second query that is caused by the first SELECT reading the uncommitted OrderHeader record that is filtered out by the join in the second query. This kind of anomaly can cause problems, especially for systems that checksum totals to guarantee accuracy.

  • Execute statement block 2.

    This will roll back the changes that were made by the INSERT statement.

  • Execute statement block 3.

    This will insert and lock rows in both the OrderHeaders and OrderDetails tables.

 
 
  • Execute the Reader_WithoutRowVersioning.sql script.

    This time no anomaly is obvious; however, if the addition of OrderDetail records requires extensive validation, or a large number of OrderDetail rows are being inserted across multiple user dialogs, the report might contain transient numbers that are caused by partially entered orders. This kind of side effect is problematic for database administrators, because by the time they investigate the anomaly, the transaction is finished and the data is stable.

  • Execute statement block 4.

    This will roll back the changes made by the previous INSERT statements.

 
 
  • Execute the REPORT 1 SELECT statement in the Reader_WithoutRowVersioning.sql script.

    This will run successfully and return the base data that was entered when the schema was created.

  • Execute statement block 5.

    This inserts and commits a new order.

 
 
  • Execute the REPORT 2 and REPORT 3 SELECT statements in the Reader_WithoutRowVersioning.sql script.

    These will run successfully and will return the base data and also the new order (because it was committed). The read uncommitted behavior is the same as read committed.

  • Close this session.
  • Close this session.

Using read uncommitted isolation is an effective way to obtain quick results from the database without blocking or being blocked by other users. We recommend this technique when data accuracy is not paramount and the use of transient data is acceptable.

Where nonblocking access to accurate data within a single query or across several queries is required, and offloading the query to a replica database is not possible or desirable, SQL Server 2005 introduces two transaction isolation schemes whose behavior is explored in the following section.

READ COMMITTED Using Row Versioning

Preparation session:

  1. Connect to an instance of SQL Server 2005 by using SQL Server Management Studio.
  2. Open a query window.
  3. Load the Setup_UsingRowVersioning.sql script.
  4. Execute the Setup_UsingRowVersioning.sql script to create the [UsingRowVersioning] database and populate a small sample schema.

Table 11

Updater Session Reader Session
 <disconnected>
  • Open a new query window in SQL Server Management Studio and connect to an instance of SQL Server 2005.
  • Load the Reader_UsingRowVersioning.sql script and make sure that the SET TRANSACTION ISOLATION LEVEL READ COMMITTED statement is not commented out, and that the SET TRANSACTION ISOLATION LEVEL SNAPSHOT is commented out.
  • Execute the Reader_UsingRowVersioning.sql script.

    The results should be the same as those returned by the Reader_WithoutRowVersioning.sql script when you first executed it in the previous "READ COMMITTED" section.

  • Load the Updater_UsingRowVersioning.sql script.
  • Execute the "Use [UsingRowVersioning]" command to change the database context.
  • Execute statement block 1.

    This will lock records in the OrderHeaders table as the explicit transaction is left active.

 
 
  • Execute the Reader_UsingRowVersioning.sql script again.

    This time the script executes without the lock contention with the updater session that previously caused a lock time-out. The versioned data enables the query to access the original data and to produce a consistent view of committed data.

  • Execute statement block 2 in the Updater_UsingRowVersioning.sql script.

    This will roll back the change made by the INSERT statement.

 
 
  • Execute the REPORT 1 SELECT statement in the Reader_UsingRowVersioning.sql script.

    This will run successfully and return the base data that was entered when the schema was created.

  • Execute statement block 3 and block 4 in the Updater_UsingRowVersioning.sql script.

    These blocks insert and commit a new order, simulating online activity while a report suite is running.

 
 
  • Execute the REPORT 2 and REPORT 3 SELECT statements.

    As with the typical read committed behavior, these queries will pick up the data that is inserted by the updater session and, therefore, be out of synchronization with the first report.

  • Close this session.
  • Close this session.

The previous sessions show how the new read committed using row versioning behavior can help when the system design requires a mix of update and long-running read activity. By obtaining the original data for changes that are uncommitted, the report writer can obtain data that is consistent within statement boundaries without blocking or being blocked by other users. This differs from read uncommitted behavior, which can return data that might never be committed to the database and can give inconsistent views of the database.

The one negative behavior is that data is subject to change while the reader session runs. This can affect a set of related queries that require a transactionally consistent view of the database across statements. In versions of SQL Server earlier than SQL Server 2005, there were two isolation levels that delivered the consistent view:

  • Repeatable read—locks the data read within the transaction.
  • Serializable—locks the sets read within the transaction.

Both of these isolation levels limit application concurrency and, hence, are usually unsuitable for scenarios in which a mix of multiple random data changes must be coupled with long-running read transactions, especially when the impact of lock escalation (the run-time escalation of many granular locks into fewer, less granular locks to conserve lock memory) is taken into consideration.

Before SQL Server 2005, application designers would usually deliver the required consistency and concurrency by taking occasional snapshots of the data for reporting, or by implementing some form of row timestamp/datetime versioning within the application. This design pattern can be avoided by using the new snapshot isolation level that is shown in the following section.

SNAPSHOT

Preparation session:

  1. Connect to an instance of SQL Server 2005 by using SQL Server Management Studio.
  2. Open a query window.
  3. Load the Setup_UsingRowVersioning.sql script.
  4. Execute the Setup_UsingRowVersioning.sql script to create the [UsingRowVersioning] database and populate a small sample schema.

Table 12

Updater Session Reader Session
 <disconnected>
  • Connect to an instance of SQL Server 2005.
  • Load the Reader_UsingRowVersioning.sql script and make sure that the SET TRANSACTION ISOLATION LEVEL SNAPSHOT statement is not commented out, and that the SET TRANSACTION ISOLATION LEVEL READ COMMITTED statement is commented out.
  • Execute the statements up to and including the REPORT 1 SELECT statement.

    Remember that the snapshot transaction does not begin until data is accessed. If the SELECT statement is not executed, the transaction will see any data that is committed between the BEGIN TRANSACTION and the first query that accesses data.

  • Load the Updater_UsingRowVersioning.sql script.
  • Execute the "Use [UsingRowVersioning]" command to change the database context.
  • Execute the INSERT statements in block 3 to insert a new order into the schema.

    This new data should not be seen by the reader session until the reader's transaction finishes.
 
 
  • Execute the REPORT 2 and REPORT 3 SELECT statements in the Reader_UsingRowVersioning.sql script.

    The output should not show the data that was inserted by the updater session. The data is now consistent across the REPORT SELECT statements without blocking the updater session.

  • Execute the COMMIT statement in block 4.
 
 
  • Execute the REPORT 2 and REPORT 3 SELECT statements in the Reader_UsingRowVersioning.sql script.

    Although the updater session transaction has committed, the modified data is still not seen by this snapshot transaction.

  • Execute the ROLLBACK TRANSACTION statement to complete the reporting transaction.
  • Execute the complete Reader_UsingRowVersioning.sql script.

    This time the results should include the data that was entered in the updater session.

  • Close this session.
  • Close this session.

This final session shows the multistatement behavior of a transaction that is running under snapshot isolation. The results were consistent across SELECT statements, although the updater session had successfully committed new data. This consistency is achieved without the potential negative impact of repeatable read and serializable.

Notice that the same script was used to demonstrate the new behavior of the read committed isolation using row versioning, and included the same BEGIN TRANSACTION and ROLLBACK TRANSACTION statements. However, because read committed applies at the statement level, there is no "memory" across statements.

The snapshot isolation level must be explicitly requested by using a SET TRANSACTION ISOLATION LEVEL statement, and then activated by both starting a transaction and accessing data.

Administrative Best Practices

From an administrative perspective, the decision to enable row versioning–based isolation should be made carefully, because the effect on performance might be negative when it is used to solve the wrong problems. If performance problems exist because of a lack of proper indexing and query performance suffers, changing to row versioning probably will not solve the problem. If query performance suffers because of significant blocking caused by a mixed workload of readers and writers, read committed isolation using row versioning may be all that is necessary. If transactional consistency is required for long-running transactions, snapshot isolation might be necessary. However, each of these solutions incrementally puts a heavier load on tempdb.
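
The incremental tempdb load can be watched by tracking the space that the version store reserves in tempdb; for example, with the sys.dm_db_file_space_usage view:

SELECT SUM(version_store_reserved_page_count) AS version_store_pages
    , SUM(version_store_reserved_page_count) * 8 AS version_store_kb
FROM tempdb.sys.dm_db_file_space_usage;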

Database-Level Settings

Because snapshot isolation is configured at the database level, administrators must enable snapshot isolation for each database that requires it. If a cross-database transaction running under snapshot isolation accesses a database that is not enabled for snapshot isolation, the transaction fails unless a locking hint is used to override the default.
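For example, a minimal sketch of enabling both row versioning–based options for a hypothetical database named SalesDB (note that the READ_COMMITTED_SNAPSHOT option requires that there be no other active connections in the database):

ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON

-- Optional: also make read committed isolation use row versioning by default.
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON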

If all databases are configured for snapshot isolation, cross-database transactions see a consistent snapshot across the databases within one server instance. For example, assume that two tables in two snapshot-enabled databases on the same server are always updated by the same transactions. A transaction running under snapshot isolation never sees the change in one table without also seeing it in the other.

Upgrade Issues

Although upgrading to SQL Server 2005 is dynamic and requires only internal changes to support row versioning, an extra 14 bytes per data row is required to store versioning data, regardless of whether snapshot isolation or read committed isolation using row versioning is enabled. This data is added when the row is updated and, therefore, can lead to page splits (for tables that have clustered indexes) or to row forwarding for heaps. In addition, all text/image data must be changed to a format that allows row versioning.

None of these changes are made during upgrade; instead, they are made during later modifications of data rows and text/image data. It is important that database administrators who manage systems that have large object (LOB) data columns are aware that, for upgraded databases, the text/image columns will be modified dynamically to include versioning changes when any part of the LOB data is changed. All the text/image pages that belong to that particular text/image value will be changed. This operation can potentially be very expensive for large values that extend over many pages (due to page allocations, copying, and logging). You pay this overhead only when you modify the text/image column value; there is no overhead if you modify only the parent data row.

Because text/image data modifications can be run in a minimally logged mode, database administrators should determine whether performing this conversion as a separate, manual step during the upgrade to SQL Server 2005 would be beneficial. The change of fragment size can cause significant fragmentation of existing LOB values when there are many random, small updates to only pieces of the text/image values. Although random, small, partial updates to LOBs are not the most common type of text/image manipulation, this overhead could be expensive (both in time and in logging) in a live system. Before going live with SQL Server 2005, database administrators should consider adding a step during upgrade that modifies all text/image data to the new format.

To perform this modification, complete the following steps:

  1. Upgrade the database to SQL Server 2005 by using one of the following methods:

       • In-place upgrade. This is the easiest to perform because all components are updated in one upgrade process.
       • Install SQL Server 2005 on a new server, and then use backup/restore to upgrade. SQL Server 2005 supports restoring SQL Server 2000 databases. For more complete information about how to successfully upgrade from SQL Server 2000, see the topic Upgrading to SQL Server 2005 in SQL Server Books Online.

  2. Verify or change the recovery model to either simple or bulk-logged. We recommend the simple recovery model because a full database backup will be performed after you successfully complete this modification process.

  3. Update every text/image data value. Executing a statement such as UPDATE ... SET a=a will not actually update the text data. Instead, use a statement sequence such as the following:

    -- Rewrite the first character of the text value in place. This forces
    -- the entire text/image value to be rewritten in the new format.
    DECLARE @ptrval binary(16)
    DECLARE @dataval CHAR(1)
    SELECT @ptrval = TEXTPTR(anytab.a),
           @dataval = SUBSTRING(anytab.a, 1, 1)
      FROM dbo.anytable AS anytab
      WHERE anytab.primarykey = 'unique value'
    UPDATETEXT anytab.a @ptrval 0 0 @dataval
    GO
    

    To do this for all records in a table, wrap the previous set of statements in a cursor loop that iterates through all rows in the table, as in the sketch that follows.
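    The following is a minimal sketch of such a loop. The table dbo.anytable, its key column primarykey, and its text column a are the hypothetical names carried over from the previous example, and the key is assumed to be character-based:

    DECLARE @pk varchar(50)
    DECLARE @ptrval binary(16)
    DECLARE @dataval char(1)

    DECLARE anycursor CURSOR FAST_FORWARD FOR
        SELECT anytab.primarykey FROM dbo.anytable AS anytab
        WHERE anytab.a IS NOT NULL
    OPEN anycursor
    FETCH NEXT FROM anycursor INTO @pk
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Rewrite the first character of each value in place, which
        -- forces the whole text/image value into the new format.
        SELECT @ptrval = TEXTPTR(anytab.a),
               @dataval = SUBSTRING(anytab.a, 1, 1)
          FROM dbo.anytable AS anytab
          WHERE anytab.primarykey = @pk
        UPDATETEXT anytab.a @ptrval 0 0 @dataval
        FETCH NEXT FROM anycursor INTO @pk
    END
    CLOSE anycursor
    DEALLOCATE anycursor
    GO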

  4. Change the recovery model back to the recovery model you want, such as the full recovery model, and then perform a full database backup.

Usage of Version Store in tempdb

The version store is maintained in tempdb. Because of this, tempdb is critical to the overall performance of the system and determines whether row versioning will even be possible for some long-running transactions. For example, if tempdb runs low on space, performance degrades while the version store tries to clean up. The regular cleanup function is performed every minute in the background, trying to reclaim all reusable space from the version store. When tempdb runs out of free space, the regular cleanup function is called before auto-growth occurs. When the disk is full and auto-grow cannot increase the file sizes, the version store is first truncated to return space; if space pressure continues, row versioning is stopped. If a query that is using row versions later needs an older version of a record that was never generated (or was removed) because of space constraints, the query fails. Updates and deletes do not fail; only queries that request row versions fail, because after the version store fills, updates and deletes no longer generate row versions.

Note   Versions that are associated with short-running transactions, such as those found in OLTP workloads defined by the TPC-C benchmarks, might be written to memory and never to disk.
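From Transact-SQL, one rough way to watch version store consumption is the sys.dm_tran_version_store dynamic management view (not otherwise described in this paper), which returns one row per version record. The following sketch approximates the space currently held; the column arithmetic is an approximation only:

SELECT COUNT(*) AS version_records,
       SUM(record_length_first_part_in_bytes
           + record_length_second_part_in_bytes) / 1024 AS approximate_kb
FROM sys.dm_tran_version_store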

One alternative is to detect a long-running query or transaction and terminate it. Terminating the query helps reduce the size of the version store. This operation can be automated by associating a script with the version-store error (error number 3958) that is raised in tempdb, as sketched below. For most applications this is the preferred behavior; otherwise, many more transactions might fail because of out-of-space issues in the version store.
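For example, a SQL Server Agent alert can run a job in response to the error. The following is a sketch only: the job name is a hypothetical placeholder for a job that contains your termination script, and the error message must be written to the Windows event log for the alert to fire.

EXEC msdb.dbo.sp_add_alert
    @name = N'Version store - row version unavailable',
    @message_id = 3958,
    @severity = 0,
    @database_name = N'tempdb',
    @job_name = N'Terminate long-running snapshot transactions'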

To ensure smooth running of a production system that is using row versioning-based isolation, the database administrator must allocate enough disk space for tempdb to make sure that there is always approximately 10 percent free space. When free space falls below 10 percent, system throughput will degrade because the version cleanup process will spend more time trying to reclaim space in the version store.

If I/O performance in tempdb becomes an issue, we recommend that the database administrator create more than one file for tempdb on different disks to increase I/O bandwidth. In fact, on multiprocessor computers, increasing the number of files to match the number of processors can often yield even greater gains. For more information, see the Knowledge Base article Q328551: Concurrency Enhancements for the tempdb Database.
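A sketch of adding a second file to tempdb; the logical name, path, and sizes are hypothetical placeholders:

ALTER DATABASE tempdb
ADD FILE
(
    NAME = tempdev2,
    FILENAME = 'E:\tempdb\tempdev2.ndf',   -- Place the file on a separate disk.
    SIZE = 512MB,
    FILEGROWTH = 64MB
)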

If any application on a server creates an unexpectedly large number of version store entries, it can affect other applications by physically filling the shared tempdb database. Many versions, or long-running transactions (not necessarily running under snapshot isolation) that prevent version cleanup, can cause out-of-space problems.

Sizing tempdb

If only read committed isolation using row versioning is required, sizing tempdb is less important because row versions are not likely to be held for long periods of time; even so, excessively long-running transactions (readers as well as writers) can still cause problems. If you are running in snapshot isolation mode, the space requirement in tempdb increases. We recommend that you use the following formula to estimate the amount of space required in tempdb for running snapshot isolation queries.

To estimate how much space you need in tempdb, first consider that an active transaction must keep all its changes in the version store so that a row versioning–based transaction that starts later can get to the old versions. In addition, if there is an active snapshot transaction, all the version store data that was generated by transactions that were active when the snapshot transaction started must also be maintained until the last row versioning–based transaction that is using that data finishes.

Size of version store = 2 * (version store data generated per minute * longest running time, in minutes, of your transactions)

The 2 * multiplier reflects the possibility of two long-running transactions with a slight overlap, which can lead to twice the longest transaction running time elapsing before the versioned records can be released.
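For example (the figures are purely illustrative), if the system generates about 50 MB of version store data per minute and the longest transaction runs for 10 minutes, the estimate is 2 * (50 MB/min * 10 min) = 1,000 MB of version store space in tempdb.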

The version store data generated per minute on behalf of active transactions can be compared to the log rows generated per minute. When you are sizing tempdb, remember that a log record contains only the data changes, whereas a version record contains a snapshot of the whole row. By using Performance Monitor counters, you can see the amount of version store data that is generated per second. In a production system, you should consider monitoring these counters to fine-tune the size of tempdb.

Be aware that SQL Server 2005 online index build transactions are excluded from this calculation. These transactions do not directly affect the overall version store cleanup because their processing is handled differently from user transaction version management.

If you have enough disk space, always allocate more than your estimate to prevent potential space problems. When you are estimating size of tempdb, you must also consider the space requirements of DBCC CHECKDB, DBCC CHECKTABLE, index building, query, and other activities.

Other features that affect the size of the version store are triggers and multiple active result sets (MARS). In SQL Server 2005, triggers use row versions to generate the inserted and deleted tables, instead of scanning the log records.

Monitoring Version Store Activity

There are several ways to monitor version store activity, which include dynamic management views, Performance Monitor counters, and SQL Profiler Events. Each one offers a different perspective on the activity that is currently occurring on the system.

Dynamic Management Views

sys.dm_tran_active_snapshot_database_transactions

This dynamic management view returns a virtual table for all active transactions with a row-version–related sequence number. Only transactions that are running under snapshot isolation will include a sequence number. Read-only transactions in auto-commit mode and system transactions will not appear in this virtual table.

sys.dm_tran_active_snapshot_database_transactions returns the following columns.

Table 13

Column name Data type Description
transaction_id bigint Unique identification number assigned for the transaction. The transaction ID is primarily used to identify the transaction in locking operations.
transaction_sequence_num bigint Transaction sequence number. This is a unique sequence number that is assigned to a transaction when it starts. Transactions that do not generate version records and do not access the version store will not receive a transaction sequence number. For more information, see "Understanding Row Versioning-based Isolation Levels" in SQL Server 2005 Books Online.
commit_sequence_num bigint Sequence number that indicates when the transaction finishes (commits or stops). For active transactions, the value is NULL.
is_snapshot int 0 = Is not a snapshot isolation transaction. 1 = Is a snapshot isolation transaction.
session_id int ID of the session that started the transaction.
first_snapshot_sequence_num bigint Lowest transaction sequence number of the transactions that were active when a snapshot was taken. On execution, a snapshot transaction takes a snapshot of all of the active transactions at that time. For nonsnapshot transactions, this column shows 0.
max_version_chain_traversed int Maximum length of the version chain that is traversed to find the transactionally consistent version.
average_version_chain_traversed real Average number of row versions in the version chains that are traversed.
elapsed_time_seconds bigint Elapsed time since the transaction obtained its transaction sequence number.

The view returns rows in transaction_sequence_num order. Because sequence numbers are assigned when transactions start, this ordering reflects start time, and the elapsed_time_seconds column then helps you identify long-running transactions.

To find the 10 longest-running (that is, earliest) transactions:

SELECT TOP 10 atx.transaction_id, atxs.[name]
FROM sys.dm_tran_active_snapshot_database_transactions AS atx
INNER JOIN sys.dm_tran_active_transactions AS atxs
    ON atx.transaction_id = atxs.transaction_id
ORDER BY atx.elapsed_time_seconds DESC

To determine the transaction that has traversed the longest version chain:

SELECT TOP 1 atx.*
FROM sys.dm_tran_active_snapshot_database_transactions AS atx
ORDER BY atx.max_version_chain_traversed DESC

sys.dm_tran_transactions_snapshot

This dynamic management view returns a virtual table that records, for each snapshot transaction, the sequence numbers of the transactions that were active when the snapshot transaction started.

sys.dm_tran_transactions_snapshot returns the following columns.

Table 14

Column name Type Description
transaction_sequence_num bigint Transaction sequence number (XSN) of a snapshot transaction.
snapshot_id bigint Snapshot ID for each Transact-SQL statement started under read committed using row versioning. This value is used to generate a transactionally consistent view of the database supporting each query that is being run under read committed using row versioning.
snapshot_sequence_num bigint Transaction sequence number of a transaction that was active when the snapshot transaction started.
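For example, the following sketch shows how many transactions were active when each snapshot transaction started, which indicates how much older version data each snapshot may be holding in the version store:

SELECT transaction_sequence_num,
       COUNT(*) AS active_transactions_at_start
FROM sys.dm_tran_transactions_snapshot
GROUP BY transaction_sequence_num
ORDER BY active_transactions_at_start DESC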

Performance Monitor Counters

The Windows Server 2003 Performance tool (System Monitor in Windows 2000) enables a database administrator to monitor a variety of system and SQL Server counters in a graphical interface, log the counters to a performance log, analyze the log, and define actions based on counter events. There is also an API that database administrators can use to develop their own programs that access these counters and take appropriate actions.

The following table lists and describes the various counters.

Table 15

Counter Explanation
(1) Free Space in tempdb (KB) The free space in tempdb in kilobytes.

The version store is in tempdb; therefore, the database administrator must make sure that tempdb has enough free space. This counter is implemented by keeping a running count of free extents in tempdb.

(2) Version Store Size (KB) The size of the version store in kilobytes.

The database administrator can use this to see how much space in tempdb is being used for the version store.

(3) Version Generation rate (KB/s)1 The version generation rate in kilobytes per second.
(4) Version Cleanup rate (KB/s)1 The version cleanup rate in kilobytes per second.
(5) Version Store unit count2 The number of version store units in use in the version store. This counter reflects the currently active version store unit count.
(6) Version Store unit creation2 The number of version store units created since the instance was started.
(7) Version Store unit truncation2 The number of version store units truncated since the instance was started.
(8) Update conflict ratio The ratio of update snapshot transactions that encounter update conflicts to the total number of update snapshot transactions.

The database administrator can use this percentage to judge how appropriate the snapshot isolation level is for the workload. Note that a transaction can contain multiple updates; the measure counts transactions that perform updates, not the individual updates themselves. Counting individual updates would give a deceptively low figure, because a conflicting transaction would add only one to the numerator (its earlier updates are rolled back), while a successful transaction would add the number of updates it contains to the denominator.

Note   This is a rate counter and gives the update conflict ratio for the last second.

(9) Longest Transaction Running Time The longest running time of any transaction in seconds.

The database administrator can examine this counter to determine whether any transaction is running for an unreasonably long time. To obtain more information, the database administrator can query the sys.dm_tran_active_snapshot_database_transactions dynamic management view described earlier, whose output is ordered by transaction start and includes elapsed_time_seconds, to obtain the transaction_id and session_id of the top-n longest-running transactions.

(10) Transactions The total number of active transactions.

The number gives all the transactions that are active in the system. It includes the background internal transactions in SQL Server, but it does not include the system transactions.

(11) Snapshot Transactions3 The total number of active snapshot transactions.
(12) Update Snapshot Transactions3,4 The total number of active snapshot transactions that also include update statements.
(13) NonSnapshot Version Transactions4 The total number of active nonsnapshot transactions that generate version records.

This is from updates that have not requested snapshot isolation.

1 From counters 3 and 4, the database administrator can predict the size requirement of tempdb and make space for it.

2 From counters 5, 6 and 7, the database administrator would know from the active count and creation count when a system has reached steady state. A high truncation rate can indicate that tempdb is or was under space stress from other applications that are also using tempdb, and could be a cue to the database administrator to increase the size of tempdb.

3 From counters 11 and 12, the database administrator can determine the number of snapshot transactions that are read-only.

4 From counters 12 and 13, the database administrator can determine the total number of transactions that cause version generation, because all snapshot transactions that do updates result in version generation.

From these counters, the database administrator can tell to what extent the versioning feature is being used and how it is being used. All the previously described counters are server-wide and are grouped in a new Performance Monitor object called SQLServer:Transactions.
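These counters can also be read from Transact-SQL through the sys.dm_os_performance_counters dynamic management view, as in the following sketch (the stored counter_name strings generally match the display names above, but verify them on your instance):

SELECT RTRIM(counter_name) AS counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Transactions%'
  AND counter_name LIKE 'Version%'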

Conclusion

Because snapshot isolation influences both the administrative and the development aspects of a system, it is important to make sure that all aspects are understood. If database administrators unnecessarily allow snapshot isolation on systems where long-running transactions occur and modifications are constant, users can experience problems when they commit changes if tempdb is not sized appropriately. Additionally, if developers expect read committed isolation using row versioning to be set and it is not, data inconsistencies might go undetected. Make sure to review all of the associated resources and participate in the newsgroups listed below for more information.

For More Information

https://msdn.microsoft.com/sql/

SQL Server Books Online

For more information, see the following topics in SQL Server Books Online:

Knowledge Base Articles

Q328551: Concurrency Enhancements for the tempdb Database

Additional Reading

Generalized Isolation Level Definitions

A Critique of ANSI SQL Isolation Levels

Newsgroups and Forums

Discussion groups for SQL Server 2005 can be found at the following locations: