
Execution Plan Caching and Reuse

Updated: 17 July 2006

SQL Server 2005 has a pool of memory that is used to store both execution plans and data buffers. The percentage of the pool allocated to either execution plans or data buffers fluctuates dynamically, depending on the state of the system. The part of the memory pool that is used to store execution plans is referred to as the procedure cache.

SQL Server 2005 execution plans have the following main components:

  • Query Plan
    The bulk of the execution plan is a re-entrant, read-only data structure that can be used by any number of users. This is referred to as the query plan. No user context is stored in the query plan. There are at most two copies of the query plan in memory: one copy for all serial executions and another for all parallel executions. The parallel copy covers all parallel executions, regardless of their degree of parallelism.
  • Execution Context
    Each user that is currently executing the query has a data structure that holds the data specific to their execution, such as parameter values. This data structure is referred to as the execution context. The execution context data structures are reused. If a user executes a query and one of the structures is not being used, it is reinitialized with the context for the new user.
Figure: Execution context, same query, different literals

When any SQL statement is executed in SQL Server 2005, the relational engine first looks through the procedure cache to determine whether an execution plan for the same SQL statement already exists. SQL Server 2005 reuses any existing plan it finds, saving the overhead of recompiling the SQL statement. If no existing execution plan is found, SQL Server 2005 generates a new execution plan for the query.

SQL Server 2005 has an efficient algorithm to find any existing execution plans for a specific SQL statement. In most systems, the minimal resources used by this scan are less than the resources saved by reusing existing plans instead of compiling every SQL statement.

The algorithms to match new SQL statements to existing, unused execution plans in the cache require that all object references be fully qualified. For example, the first of these SELECT statements is not matched with an existing plan, and the second is matched:

SELECT * FROM Contact

SELECT * FROM Person.Contact
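
Fully qualified names are not the only requirement; the statement text itself must also match. One way to encourage plan reuse across executions that differ only in literal values is to parameterize the statement with sp_executesql. The following is a sketch against the AdventureWorks Person.Contact table shown above:

```sql
-- Each distinct literal value can produce a separate ad-hoc plan:
SELECT * FROM Person.Contact WHERE ContactID = 1;
SELECT * FROM Person.Contact WHERE ContactID = 2;

-- Parameterizing the statement lets both executions share one query plan;
-- only the execution context (the parameter value) differs.
EXEC sp_executesql
    N'SELECT * FROM Person.Contact WHERE ContactID = @id',
    N'@id int',
    @id = 1;

EXEC sp_executesql
    N'SELECT * FROM Person.Contact WHERE ContactID = @id',
    N'@id int',
    @id = 2;
```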

There is a greater probability that individual execution plans will be reused in an instance of SQL Server 2000 or SQL Server 2005 than in SQL Server version 6.5 and earlier.

After an execution plan is generated, it stays in the procedure cache. SQL Server 2005 ages old, unused plans out of the cache only when space is required.

Each query plan and execution context has an associated cost factor that indicates how expensive the structure is to compile. These data structures also have an age field. Every time the object is referenced by a connection, the age field is incremented by the compilation cost factor. For example, if a query plan has a cost factor of 8 and is referenced two times, its age becomes 16. The lazywriter process periodically scans the list of objects in the procedure cache and decrements the age field of each object by 1 on each scan. The age of this sample query plan is decremented to 0 after 16 scans of the procedure cache, unless another user references the plan. The lazywriter process deallocates an object if the following conditions are met:

  • The memory manager requires memory and all available memory is currently being used.
  • The age field for the object is 0.
  • The object is not currently referenced by a connection.

Because the age field is incremented every time an object is referenced, frequently referenced objects do not have their age fields decremented to 0 and are not aged from the cache. Objects infrequently referenced are soon eligible for deallocation, but are not actually deallocated unless memory is required for other objects.
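
In SQL Server 2005, the contents of the procedure cache can be inspected through dynamic management views. The following query is a sketch (not part of the original text) that lists cached plans together with their reuse counts and memory footprint:

```sql
-- usecounts reflects how often a cached plan has been reused;
-- size_in_bytes is the memory the plan occupies in the procedure cache.
SELECT cp.usecounts,
       cp.cacheobjtype,
       cp.objtype,
       cp.size_in_bytes,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;
```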

Certain changes in a database can cause an execution plan to be either inefficient or invalid, based on the new state of the database. SQL Server detects the changes that invalidate an execution plan and marks the plan as not valid. A new plan must then be recompiled for the next connection that executes the query. The conditions that invalidate a plan include the following:

  • Changes made to a table or view referenced by the query (ALTER TABLE and ALTER VIEW).
  • Changes to any indexes used by the execution plan.
  • Updates on statistics used by the execution plan, generated either explicitly from a statement, such as UPDATE STATISTICS, or generated automatically.
  • Dropping an index used by the execution plan.
  • An explicit call to sp_recompile.
  • Large numbers of changes to keys (generated by INSERT or DELETE statements from other users that modify a table referenced by the query).
  • For tables with triggers, if the number of rows in the inserted or deleted tables grows significantly.
  • Executing a stored procedure using the WITH RECOMPILE option.
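
The last two conditions in the list can also be triggered deliberately. As a sketch (dbo.usp_GetContacts is a hypothetical procedure name):

```sql
-- Mark all cached plans that reference Person.Contact for
-- recompilation the next time each one executes:
EXEC sp_recompile N'Person.Contact';

-- Compile a fresh plan for this execution only, without caching it:
EXEC dbo.usp_GetContacts WITH RECOMPILE;
```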

Most recompilations are required either for statement correctness or to obtain potentially faster query execution plans.

In SQL Server 2000, whenever a statement within a batch causes recompilation, the whole batch, whether submitted through a stored procedure, trigger, ad-hoc batch, or prepared statement, is recompiled. In SQL Server 2005, only the statement inside the batch that causes recompilation is recompiled. Because of this difference, recompilation counts in SQL Server 2000 and SQL Server 2005 are not comparable. Also, there are more types of recompilations in SQL Server 2005 because of its expanded feature set.

Statement-level recompilation benefits performance because, in most cases, a small number of statements causes recompilations and their associated penalties, in terms of CPU time and locks. These penalties are therefore avoided for the other statements in the batch that do not have to be recompiled.

The SQL Server Profiler SP:Recompile trace event reports statement-level recompilations in SQL Server 2005. This trace event reports only batch recompilations in SQL Server 2000. Further, in SQL Server 2005, the TextData column of this event is populated. Therefore, the SQL Server 2000 practice of having to trace SP:StmtStarting or SP:StmtCompleted to obtain the Transact-SQL text that caused recompilation is no longer required.

SQL Server 2005 also adds a new trace event called SQL:StmtRecompile that reports statement-level recompilations. This trace event can be used to track and debug recompilations. Whereas SP:Recompile generates only for stored procedures and triggers, SQL:StmtRecompile generates for stored procedures, triggers, ad-hoc batches, batches that are executed by using sp_executesql, prepared queries, and dynamic SQL.
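
Recompilation events captured in a trace file can be examined with fn_trace_gettable. The following is a sketch; the trace file path is hypothetical, and the EventClass values shown (37 for SP:Recompile, 166 for SQL:StmtRecompile) are the event IDs assigned to these events in SQL Server 2005:

```sql
-- Read recompilation events from a saved trace file
-- (C:\traces\recompiles.trc is a hypothetical path).
SELECT EventClass, EventSubClass, TextData, SPID, StartTime
FROM fn_trace_gettable(N'C:\traces\recompiles.trc', DEFAULT)
WHERE EventClass IN (37, 166);  -- SP:Recompile, SQL:StmtRecompile
```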

The EventSubClass column of SP:Recompile and SQL:StmtRecompile contains an integer code that indicates the reason for the recompilation. The following table contains the meaning of each code number.

EventSubClass value   Description
1                     Schema changed.
2                     Statistics changed.
3                     Deferred compile.
4                     SET option changed.
5                     Temporary table changed.
6                     Remote rowset changed.
7                     FOR BROWSE permission changed.
8                     Query notification environment changed.
9                     Partitioned view changed.
10                    Cursor options changed.
11                    OPTION (RECOMPILE) requested.

Note:
When the AUTO_UPDATE_STATISTICS database option is set to ON, queries are recompiled when they target tables or indexed views whose statistics have been updated or whose cardinalities have changed significantly since the last execution. This behavior applies to standard user-defined tables, temporary tables, and the inserted and deleted tables created by DML triggers. If query performance is affected by excessive recompilations, consider changing this setting to OFF. When the AUTO_UPDATE_STATISTICS database option is set to OFF, no recompilations occur based on statistics or cardinality changes, with the exception of the inserted and deleted tables that are created by DML INSTEAD OF triggers. Because these tables are created in tempdb, the recompilation of queries that access them depends on the setting of AUTO_UPDATE_STATISTICS in tempdb. Note that in SQL Server 2000, queries continue to recompile based on cardinality changes to the DML trigger inserted and deleted tables, even when this setting is OFF. For more information about disabling AUTO_UPDATE_STATISTICS, see Index Statistics.
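
For reference, the option described in the Note is set per database; for example (AdventureWorks is used here as a sample database name):

```sql
-- Disable automatic statistics updates, and with them
-- statistics-driven recompilations, for one database:
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS OFF;

-- Re-enable the default behavior:
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON;
```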

Release History

17 July 2006

New content:
  • Added executing a stored procedure using the WITH RECOMPILE option to the list of conditions invalidating a plan.

5 December 2005

New content:
  • Clarified that the Note about query recompilation when the AUTO_UPDATE_STATISTICS database option is set to OFF applies differently to the inserted and deleted tables that are created by DML INSTEAD OF triggers.

© 2014 Microsoft. All rights reserved.