Chapter 17 — Tuning .NET Application Performance
This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
Improving .NET Application Performance and Scalability
J.D. Meier, Srinath Vasireddy, Ashish Babbar, and Alex Mackman
Home Page for Improving .NET Application Performance and Scalability
Send feedback to Scale@microsoft.com
Summary: This chapter provides an iterative process to systematically identify, tune, and eliminate bottlenecks until your application meets its performance objectives. The chapter covers tuning the ASP.NET thread pool for various scenarios, including burst load and COM interop. It also shows you how to tune timeouts, including session timeouts, script timeouts, execution timeouts, and deadlock intervals, and explains how to tune Web gardens, the Enterprise Services object pool, and much more.
In This Chapter

- Categories for Tuning
- Performance Tuning Process
- .NET Framework Tuning
- Enterprise Services Tuning
- Web Services Tuning
- SQL Server Tuning
- Internet Information Services (IIS) Tuning
Objectives

- Use a prescriptive performance tuning process.
- Identify application and system bottlenecks.
- Tune your system, including CPU, memory, disk I/O, and network I/O.
- Tune your platform and application.
- Configure Microsoft® .NET Framework settings.
- Tune ASP.NET applications and Web services.
- Tune Enterprise Services applications.
Overview

Performance tuning refers to the identification and systematic elimination of bottlenecks to improve performance. The focus for tuning in this chapter is on configuration settings, and therefore this chapter is of interest to both administrators and developers. The chapter shows you how to identify bottlenecks, and then how to apply the relevant configuration changes to help eliminate them. Code optimization is the subject of the technology-specific chapters in Part III, "Application Performance and Scalability" of this guide.
The current chapter starts by explaining three performance-tuning categories: system, platform, and application. It then presents an iterative performance-tuning process and explains the significance of starting with a baseline set of metrics, applying a single set of configuration changes at a time, and testing and measuring to determine whether your changes have been successful.
Subsequent sections of this chapter show you how to tune ASP.NET, Web services, Enterprise Services, and ADO.NET, and then how to tune shared system resources including CPU, memory, disk I/O, and network I/O. In each case, you are shown how to identify bottlenecks and how to eliminate them.
How to Use This Chapter

To get the most from this chapter:
- Read Chapter 16, "Testing .NET Application Performance," before reading the current chapter.
- Use Chapter 15, "Measuring .NET Application Performance." Before implementing the steps in the current chapter, make sure you have Chapter 15 available, because it provides details about the measurements that you need to identify bottleneck causes.
- Read Chapter 3, "Design Guidelines for Application Performance," to help ensure that you do not introduce bottlenecks at design time.
- Read Chapter 4, "Architecture and Design Review of a .NET Application for Performance and Scalability," to help identify potential bottlenecks caused by specific architecture or design decisions.
- Read the technology chapters. Read the chapters in Part III, "Application Performance and Scalability," to learn about how to optimize your code and avoid bottlenecks caused by poor coding practices.
Categories for Tuning

Configuration and tuning settings can be divided into the following categories. These are the areas where you focus your tuning efforts:
- Application. Includes the application configuration parameters.
For .NET applications, these parameters are primarily located in configuration files. ASP.NET applications use Web.config configuration files.
- Platform. Includes the host operating system and the .NET Framework, together with Web and database servers such as Internet Information Services (IIS) and Microsoft SQL Server. The .NET Framework configuration is maintained in Machine.config. Settings in this file affect all of the .NET applications on the server.
- System. Includes the hardware resources on a server. These resources include CPU, memory, disk I/O, and network I/O.
Figure 17.1 summarizes the three performance-tuning categories.
Figure 17.1: Performance tuning categories
Tuning involves proactively adjusting defaults across servers and then tuning for specific application scenarios, such as burst load conditions. This process might involve tuning configuration settings on specific servers and not across all servers. If you tune a configuration file specific to one server, and subsequently move that file to another server, you often need to retune. For example, tuning the thread pool in ASP.NET depends on the number of CPUs installed. If the configuration file is moved from a two-CPU server to a four-CPU server, then you must retune.
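As a sketch of why moving a configuration file forces retuning: ASP.NET thread pool settings in the Machine.config `<processModel>` element (such as maxWorkerThreads) are specified per CPU, so the effective pool size scales with the processor count. The numeric values below are illustrative assumptions, not recommendations:

```python
# Sketch: ASP.NET (.NET Framework 1.x) processModel thread settings in
# Machine.config are specified *per CPU*, so the effective pool size changes
# when the same configuration file is moved to a server with a different
# number of processors. maxWorkerThreads is a real Machine.config attribute;
# the value of 20 is purely illustrative.

def effective_pool(per_cpu_setting: int, cpu_count: int) -> int:
    """Effective number of threads for a per-CPU Machine.config setting."""
    return per_cpu_setting * cpu_count

max_worker_threads = 20  # e.g. <processModel maxWorkerThreads="20" ...>

for cpus in (2, 4):
    print(f"{cpus}-CPU server: "
          f"{effective_pool(max_worker_threads, cpus)} worker threads")
```

A setting tuned for the two-CPU server therefore yields a different effective pool on the four-CPU server, which is why retuning is required.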
Performance Tuning Process

Performance tuning is an iterative process that you use to identify and eliminate bottlenecks until your application meets its performance objectives. You start by establishing a baseline. Then you collect data, analyze the results, and make configuration changes based on the analysis. After each set of changes, you retest and measure to verify that your application has moved closer to its performance objectives. The purpose is not to load, stress, or capacity test your application, but to understand how the various tuning options and configuration settings affect your application. Figure 17.2 shows the basic tuning process.
Figure 17.2: The performance tuning process
The tuning process is an iterative process that consists of the following activities:

- Establish a baseline. Ensure that you have a well-defined set of performance objectives, test plans, and baseline metrics.
- Collect data. Simulate load and capture metrics.
- Analyze results. Identify performance issues and bottlenecks.
- Configure. Tune your application setup by applying new system, platform, or application configuration settings.
- Test and measure. Test and measure to verify that your configuration changes have been beneficial.
1. Establish a Baseline
Before you start to tune your application, you need to have an established baseline. You need to ensure that the following are well defined:
- Performance objectives
Your application's performance objectives are usually measured in terms of response times, throughput (requests per second), and resource utilization levels. They should include budgeted times for specific scenarios together with resource utilization levels such as CPU, memory, disk I/O, and network I/O allocated for your application. For more information about setting performance goals, see "Performance Best Practices at a Glance" in this guide.
- Test plans and test scripts
You need a test plan and a set of test scripts that you can use to apply load to your application. For more information about how to approach testing, see Chapter 16, "Testing .NET Application Performance."
- Baseline metrics
Make sure that you capture a baseline set of metrics for your system, platform, and application. Baseline metrics help you evaluate the impact of any changes made to your configuration during the performance-tuning process. For more information about metrics, see "Metrics" in Chapter 16, "Testing .NET Application Performance."
2. Collect Data
Use test tools such as Microsoft Application Center Test (ACT) to simulate load. You can use tools like System Monitor or Microsoft Operations Manager to capture performance counters.
When you run tests for the first time, make sure you use the same version of the application that you used to establish your baseline metrics. For subsequent iterations of the tuning process, you test performance with the same workload and test scripts but with the modified configuration settings applied.
Use a Constant Workload
For all iterations of the tuning process, make sure that you use the same test scripts and a constant workload. Doing so enables you to accurately measure the impact of any configuration changes that you have applied.
If you run short duration tests, make sure you include an appropriate warm-up time in your test scripts to ensure that your results are not skewed due to initial slow response times caused by just-in-time (JIT) compilation, cache population, and so on. Also make sure to run your tests for an adequate and realistic period of time.
For more information about using ACT, see "How To: Use ACT to Test Performance and Scalability" and "How To: Use ACT to Test Web Services Performance" in the "How To" section of this guide.
Format the Results
A typical test generates a vast amount of data in different locations, from different sources, and in different formats. For example, captured data includes system performance counters from all servers, IIS log files from Web and/or application servers, SQL Server metrics on the database server, and so on. You must collect the data together and format it in preparation for the next step, analyzing the results.
You can format the data in such a way that you are able to map the cascading effect of changes in one part of your configuration across your application. Organizing the data into the categories described earlier (system, platform, and application) helps you analyze the application as a whole, rather than analyzing it in parts.
As an example of how configuration changes can have a cascading effect, changing the thread pool settings on the Web server might cause requests to be processed faster, which may in turn cause increased resource utilization (CPU, memory, and disk I/O) on your database server.
3. Analyze Results
In this step, you analyze the captured data to identify performance issues and bottlenecks. To identify the root cause of a problem, start tracing from where you first notice the symptom. Often, the most obvious observation is not the cause of the problem. When you analyze your data, bear in mind the following points:
- The data you collect is usually only an indicator of a problem and is not the source of the problem. Indicators such as performance counters can give you directions to help isolate your debugging or troubleshooting process to target specific areas of functionality.
- Intermittent spikes in your data, as shown by performance counters, may not be a big concern. Ignore such anomalies where it makes sense to do so.
- Make sure that your test results are not skewed by warm-up time. Make sure that your test scripts run for a period of time before you start capturing metrics.
- If the data you collect is not complete, then your analysis is likely to be inaccurate. You sometimes need to retest and collect the missing information or use further analysis tools. For example, if your analysis of Common Language Runtime (CLR) performance counters indicates that a large number of generation 2 garbage collections are occurring, then you should use the CLR Profiler tool to profile the overall memory usage pattern for the application.
- You should be able to identify and isolate the areas that need further tuning. This assumes that you have already optimized your code and design for any changes, and that only the configuration settings need tuning.
- If you are currently in the process of performance tuning, then you need to compare your current set of results with previous results or with your baseline performance metrics.
- If, during your analysis, you identify several bottlenecks or performance issues, prioritize them and address those that are likely to have the biggest impact first. You can also prioritize this list on the basis of which bottleneck you hit first when running a test.
- Document your analysis. Write down your recommendations, including what you observed, where you observed it, and how you applied configuration changes to resolve the issue.
4. Configure

You tune your application setup by applying new system, platform, or application configuration settings. The analysis documentation from the previous step can contain several recommendations, so use the following guidelines when you apply configuration changes:
- Apply one set of changes at a time. Address changes individually. Making multiple configuration changes can distort the results and can make it difficult to identify potential new performance issues. A single change may actually include a set of multiple configuration changes that need to be applied and evaluated as a single unit.
- Fix issues in a prioritized order. Address the issues that are likely to provide maximum payoff. For example, instead of fine-tuning ASP.NET, you might achieve better initial results by creating an index on a database table that you identified was missing.
5. Test and Measure
Performance tuning is an iterative process. Having applied one set of changes, you retest and measure to see whether the configuration changes have been beneficial. Continue the process until your application meets its performance objectives or until you decide on an alternate course of action, such as code optimization or design changes.
You need to identify bottlenecks caused by poor design or implementation early in the application life cycle. Bottlenecks that cannot be optimized or tuned need to be identified as constraints during the design phase of the life cycle, so that they can be factored in the design to minimize the impact on performance.
What Are Bottlenecks?
A bottleneck is the device or resource that constrains throughput. Most of the time, performance bottlenecks in your application relate to resource issues that may include server resources such as CPU, memory, disk I/O, and network I/O or other external resources such as available database connections or network bandwidth.
Bottlenecks vary from layer to layer, based on the server role:
- Web/Application server. Some common causes of bottlenecks include inefficient session and state management, thread contention, long-running calls to a Web service or a database, and chatty interfaces.
- Database server. Some common causes for bottlenecks include poor logical database design such as bad table design, improper normalization of tables, inefficient indexes on tables, and badly partitioned data across tables. Other causes include inefficient queries, inappropriate isolation levels used by transactions in queries, and inefficient stored procedures.
How to Identify Bottlenecks
The first step in identifying bottlenecks is to know the different tests and measurements that you must run to simulate varying user loads and access patterns for the application. The following measurements help you to expose bottlenecks and isolate areas that require tuning:
- Measure response time, throughput, and resource utilization across user loads.
- Measure metrics that help you capture a more granular view of your application.
Measure Response Time, Throughput, and Resource Utilization Across User Loads
These metrics help you identify whether you are moving toward or away from your performance goals with each iteration through the tuning process. These also give you a rough idea as to which resource is the first bottleneck for the application, and on which server the bottleneck occurs.
Analyzing Response Time Across User Loads
When you measure response times with varying number of users, watch for a sharp rise in response time. This rise is the point of poor efficiency, and performance only degrades from this point onward, as shown in Figure 17.3.
Figure 17.3: Response time vs. user load
Measuring Throughput Across User Loads
When you measure throughput across user loads, watch for the peak levels of throughput. At the point where throughput starts to fall, the bottleneck has been hit. Performance continues to degrade from this point onward. An example is shown in Figure 17.4.
Figure 17.4: Throughput vs. user load
Analyzing Resource Utilization Across User Loads
Analyze resource utilization levels across linearly increasing user loads. See whether the resource utilization levels increase at a sharper rate as new users are added and more transactions are performed. Figure 17.5 shows a linear utilization level for increased user load.
Figure 17.5: Utilization vs. user load
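The knee and peak described above can also be located programmatically once you have tabulated test results. The following Python sketch uses invented sample data and a simple growth heuristic (a 50 percent jump between adjacent load levels); real test-tool output and thresholds will differ:

```python
# Sketch: locating the response-time "knee" and the throughput peak in
# load-test results. All data points below are invented for illustration;
# in practice they come from your test tool's report.

response_time = {10: 0.2, 20: 0.25, 30: 0.3, 40: 0.9, 50: 2.5}  # users -> seconds
throughput    = {10: 48, 20: 95, 30: 130, 40: 120, 50: 90}      # users -> req/sec

def response_time_knee(samples, factor=1.5):
    """First load level where response time grows by more than 50%
    over the previous level (an assumed heuristic for a 'sharp rise')."""
    loads = sorted(samples)
    for prev, cur in zip(loads, loads[1:]):
        if samples[cur] > samples[prev] * factor:
            return cur
    return None

def throughput_peak(samples):
    """Load level at which throughput is highest before it starts to fall."""
    return max(samples, key=samples.get)

print("Response-time knee at", response_time_knee(response_time), "users")
print("Throughput peaks at", throughput_peak(throughput), "users")
```

Past the knee and the peak, adding users only degrades performance, which matches the curves in Figures 17.3 and 17.4.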
Measure Metrics that Help You Capture a More Granular View of Your Application
Using input from the coarse-grained monitoring of your performance objectives mentioned earlier, you can add additional counters during subsequent iterations of the tuning process.
Continuing with the example introduced earlier, if you are performing a remote database call, you can measure the time taken by the database call to complete. Monitor the metrics related to indexes, locks, number of tables scanned, resource utilization levels, and so forth on the database to identify the reason why the query is taking a long time to execute.
For more information about measuring throughput, load testing, and generating user load, see Chapter 15, "Measuring .NET Application Performance," and Chapter 16, "Testing .NET Application Performance."
System Tuning

How well your applications, services, and operating system use shared system-level resources has a direct impact on throughput, queuing, and response time. Tools such as System Monitor enable you to monitor resource usage. You should monitor the following shared system components at a minimum:

- CPU
- Memory
- Disk I/O
- Network I/O
CPU

Each application that runs on a server gets a time slice of the CPU. The CPU might be able to efficiently handle all of the processes running on the computer, or it might be overloaded. By examining processor activity and the activity of individual processes including thread creation, thread switching, context switching, and so on, you can gain good insight into processor workload and performance.
The performance counters shown in Table 17.1 help you identify processor bottlenecks.
Table 17.1: Performance Counters Used to Identify CPU Bottlenecks
| Object | Counters |
| --- | --- |
| Processor | % Processor Time, % Privileged Time |
| System | Processor Queue Length, Context Switches/sec |
For more information about how to measure these counters, their thresholds, and their significance, see "Processor" in Chapter 15, "Measuring .NET Application Performance."
You can monitor an individual process or use _Total for all instances. High rates of processor activity might indicate an excessively busy processor. A long, sustained processor queue is a more certain indicator of a processor bottleneck. If a single processor in a multi-processor server is overloaded, this might indicate that you have a single-threaded application using just that single processor.
Note Processor utilization depends on your system and application characteristics. The 75% threshold value given in "Bottlenecks" (following) is based on typical observations. Increase or decrease this threshold based on your system characteristics.
Bottlenecks

You might have a CPU bottleneck if you see the following:

- Processor\% Processor Time often exceeding the 75% threshold.
- A System\Processor Queue Length value of 2 or more sustained for a prolonged period.
- Unusually high values for Processor\% Privileged Time or System\Context Switches/sec.
If the value of % Processor Time is high, then queuing occurs, and in most scenarios the value of System\ Processor Queue Length will also be high. Figure 17.6 shows a sample System Monitor graph that indicates a high percentage of processor time and a high processor queue length.
Figure 17.6: System monitor graph showing high percentage of processor time and high processor queue length
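The thresholds above can be applied mechanically to sampled counter values. The following sketch uses invented samples; the 75 percent and queue-length-of-2 thresholds come from the text and should be adjusted for your own system characteristics:

```python
# Sketch: evaluating the CPU-bottleneck heuristics against sampled counter
# values. The sample data is invented for illustration.

processor_time = [82, 91, 78, 88, 95, 85]  # Processor\% Processor Time samples
queue_length   = [3, 4, 2, 5, 4, 3]        # System\Processor Queue Length samples

def cpu_bottleneck(cpu_samples, queue_samples,
                   cpu_threshold=75, queue_threshold=2):
    """True if average CPU exceeds the threshold AND the processor queue
    stays at or above the queue threshold for the whole sampling period."""
    avg_cpu = sum(cpu_samples) / len(cpu_samples)
    sustained_queue = min(queue_samples) >= queue_threshold
    return avg_cpu > cpu_threshold and sustained_queue

print("Possible CPU bottleneck:", cpu_bottleneck(processor_time, queue_length))
```

A sustained queue is deliberately required in addition to high utilization, because a busy processor with no queue may simply be doing useful work efficiently.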
The next step is to identify which process is causing the spike (or consuming processor time). Use Task Manager to identify which process is consuming high levels of CPU by looking at the CPU column on the Processes tab. You can also determine this by monitoring Process\% Processor Time for the processes you want to monitor. For example, from the System Monitor output shown in Figure 17.7, you can see that the ASP.NET worker process is consuming the majority of the processor time.
Tuning Options

Once you determine that your CPU is a bottleneck, you have several options:
- Add multiple processors if you have multi-threaded applications. Consider upgrading to a more powerful processor if your application is single-threaded.
- If you observe a high rate of context switching, consider reducing the thread count for your process before increasing the number of processors.
- Analyze and tune the application that is causing the high CPU utilization. You can dump the running process by using the ADPLUS utility and analyze the cause by using Windbg. These utilities are part of the Windows debugging toolkit. You can download these tools from http://msdn.microsoft.com/en-us/library/cc267862.aspx.
- Analyze the instrumentation log generated by your application, isolate the subsystem that takes the most time to execute, and check whether it actually needs a code review rather than just tuning of the deployment.
Note Although you can change the process priority level of an application by using Task Manager or from the command prompt, you should generally avoid doing so. For almost all cases, you should follow one of the recommendations in the previous list.
Memory

Memory consists of physical and virtual memory. You need to consider how much memory is allocated to your application. When you evaluate memory-related bottlenecks, consider unnecessary allocations, inefficient clean up, and inappropriate caching and state management mechanisms. To resolve memory-related bottlenecks, optimize your code to eliminate these issues and then tune the amount of memory allocated to your application. If you determine during tuning that memory contention and excessive paging are occurring, you may need to add more physical memory to the server.
Low memory leads to increased paging where pages of your application's virtual address space are written to and from disk. If paging becomes excessive, page thrashing occurs and intensive disk I/O decreases overall system performance.
Memory tuning consists of the following:
- Determine whether your application has a memory bottleneck. If it has, then add more memory.
- Tune the amount of memory allocated if you can control the allocation. For example, you can tune this for ASP.NET and SQL Server.
- Tune the page file size.
The performance counters shown in Table 17.2 help you identify memory bottlenecks. You should log these counter values to log files over a 24-hour period before you form any conclusions.
Table 17.2: Performance Counters Used to Identify Memory Bottlenecks
| Object | Counters |
| --- | --- |
| Memory | Available MBytes, Pages/sec, Pages Input/sec, Page Reads/sec |
| Server | Pool Nonpaged Failures, Pool Nonpaged Peak |
| Cache | MDL Read Hits % |
For more information about how to measure these counters, their thresholds, and their significance, see "Memory" in "CLR and Managed Code" in Chapter 15, "Measuring .NET Application Performance."
A low value of Available MBytes indicates that your system is low on physical memory, caused either by system memory limitations or an application that is not releasing memory. Monitor each process object's working set counter. If a process's working set remains high even when the process is not active, this might indicate that the process is not releasing memory. Use the CLR Profiler tool at this point to identify the source of any memory allocation problems. For more information, see "How To: Use CLR Profiler" in the "How To" section of this guide.
A high value of Pages/sec indicates that your application does not have sufficient memory. The average of Pages Input/sec divided by the average of Page Reads/sec gives the number of pages per disk read. This value should not generally exceed five pages per read. A value greater than five pages per read indicates that the system is spending too much time paging and requires more memory (assuming that the application has been optimized). The System Monitor graph shown in Figure 17.8 is symptomatic of insufficient memory.
Figure 17.8: Insufficient memory
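The pages-per-read check described above is a simple ratio of counter averages. The following sketch uses invented sample values; the threshold of five pages per read is the one stated in the text:

```python
# Sketch: the pages-per-read check. Average Pages Input/sec divided by
# average Page Reads/sec gives pages per disk read; sustained values above
# five suggest excessive paging. Sample values are invented for illustration.

pages_input_per_sec = [120.0, 140.0, 160.0]  # Memory\Pages Input/sec samples
page_reads_per_sec  = [15.0, 18.0, 21.0]     # Memory\Page Reads/sec samples

def pages_per_read(pages_input, page_reads):
    avg_input = sum(pages_input) / len(pages_input)
    avg_reads = sum(page_reads) / len(page_reads)
    return avg_input / avg_reads

ratio = pages_per_read(pages_input_per_sec, page_reads_per_sec)
verdict = "consider adding memory" if ratio > 5 else "within threshold"
print(f"Pages per read: {ratio:.1f} ({verdict})")
```

As with all the thresholds in this chapter, treat the figure of five as a starting point and confirm against your own baseline before acting on it.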
If you determine that your application has memory issues, your options include adding more memory, stopping services that you do not require, and removing unnecessary protocols and drivers. Tuning considerations include:
- Deciding when to add memory
- Page file optimization
Deciding When to Add Memory
To determine the impact of excessive paging on disk activity, multiply the values of the Physical Disk\ Avg. Disk sec/Transfer and Memory\ Pages/sec counters. If the product of these counters exceeds 0.1, paging is taking more than 10 percent of disk access time. If this occurs over a long period, you probably need more memory. After upgrading your system's memory, measure and monitor again.
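The calculation above is a straightforward product of two counter values. The sketch below uses invented samples; the 0.1 (10 percent) threshold is the one stated in the text:

```python
# Sketch: estimating how much of disk access time is consumed by paging.
# Multiply PhysicalDisk\Avg. Disk sec/Transfer by Memory\Pages/sec; a
# product above 0.1 means paging takes more than 10 percent of disk access
# time. The counter values below are invented for illustration.

avg_disk_sec_per_transfer = 0.012  # PhysicalDisk\Avg. Disk sec/Transfer
pages_per_sec = 25.0               # Memory\Pages/sec

paging_share = avg_disk_sec_per_transfer * pages_per_sec
print(f"Paging share of disk access time: {paging_share:.0%}")
if paging_share > 0.1:
    print("Sustained over a long period, this suggests adding more memory.")
```

If the product stays above the threshold over a long period, add memory, then measure and monitor again as described above.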
To save memory:
- Turn off services you do not use. Stopping services that you do not use regularly saves memory and improves system performance.
- Remove unnecessary protocols and drivers. Even idle protocols use space in the paged and nonpaged memory pools. Drivers also consume memory, so you should remove unnecessary ones.
Page File Optimization
You should optimize the page file to improve the virtual memory performance of your server. The combination of physical memory and the page file is called the virtual memory of the system. When the system does not have enough physical memory to execute a process, it uses the page file on disk as an extended memory source. This approach slows performance. To ensure an optimized page file:
- Increase the page file size on the system to 1.5 times the size of physical memory available, but only to a maximum of 4,095 MB. The page file needs to be at least the size of the physical memory to allow the memory to be written to the page file in the event of a system crash.
- Make sure that the page file is not fragmented on a given partition.
- Separate the data files and the page file onto different disks only if the disk is a bottleneck because of heavy I/O. Otherwise, keep the data files and the page file on the same physical drive and the same logical partition. This keeps the data files and the page file physically close to each other and avoids the time spent seeking between two different logical drives.
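The sizing rule in the first bullet above can be sketched as a small function. This illustrates the stated rule (1.5 times physical RAM, capped at 4,095 MB), not general guidance for other systems:

```python
# Sketch: the page-file sizing rule stated above - 1.5 x physical memory,
# but no more than 4,095 MB. For very large physical memory the cap wins,
# even though the page file should otherwise be at least the size of
# physical memory (to allow a memory dump after a system crash).

def recommended_page_file_mb(physical_mb: int, cap_mb: int = 4095) -> int:
    size = int(physical_mb * 1.5)
    size = max(size, physical_mb)  # at least the size of physical memory
    return min(size, cap_mb)       # but never above the stated cap

for ram in (512, 1024, 4096):
    print(f"{ram} MB RAM -> {recommended_page_file_mb(ram)} MB page file")
```

Note that on a server with more than about 2.7 GB of RAM the 4,095 MB cap prevents the page file from reaching 1.5 times physical memory, so the cap takes precedence under this rule.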
To configure the page file size
- Open Control Panel.
- Double-click the System icon.
- Select the Advanced tab.
- Click Performance Options.
- Click Change. The Virtual Memory dialog box appears (see Figure 17.9).
Figure 17.9: Virtual memory settings
- Enter new values for Initial size and Maximum size. Click Set, and then click OK.
For more information about the location and partitioning of the page file, see Knowledge Base article 197379, "Configuring Page Files for Optimization and Recovery," at http://support.microsoft.com/kb/314482.
Disk I/O

Disk I/O refers to the number of read and write operations performed by your application on a physical disk or multiple disks installed in your server. Common activities that can cause disk I/O related bottlenecks include long-running file I/O operations, data encryption and decryption, reading unnecessary data from database tables, and a shortage of physical memory that leads to excessive paging activity. Slow hard disks are another factor to consider.
To resolve disk-related bottlenecks:
- Start by removing any redundant disk I/O operations in your application.
- Identify whether your system has a shortage of physical memory, and, if so, add more memory to avoid excessive paging.
- Identify whether you need to separate your data onto multiple disks.
- Consider upgrading to faster disks if you still have disk I/O bottlenecks after doing all of the above.
Microsoft Windows® 2000 retrieves programs and data from disk. The disk subsystem can be the most important aspect of I/O performance, but problems can be masked by other factors, such as lack of memory. Performance console disk counters are available within both the LogicalDisk and PhysicalDisk objects.
The performance counters shown in Table 17.3 help you identify disk I/O bottlenecks.
Table 17.3: Performance Counters Used to Identify Disk I/O Bottlenecks
| Object | Counters |
| --- | --- |
| PhysicalDisk | Avg. Disk Queue Length, Avg. Disk Read Queue Length, Avg. Disk Write Queue Length, Avg. Disk sec/Read, Avg. Disk sec/Transfer |
For more information about how to measure these counters, their thresholds, and their significance, see "Disk I/O" in Chapter 15, "Measuring .NET Application Performance."
Note When attempting to analyze disk performance bottlenecks, you should always use physical disk counters. In Windows 2000, physical disk counters are enabled by default, but logical disk counters are disabled by default. If you use software RAID, you should enable logical disk counters by using the diskperf -yv command.
If you determine that disk I/O is a bottleneck, you have a number of options:
- Defragment your disks. Use the Disk Defragmenter system tool.
- Use Diskpar.exe on Windows 2000 to reduce performance loss due to misaligned disk tracks and sectors. You can get Diskpar.exe from the Windows 2000 Resource Kit.
- Use stripe sets to process I/O requests concurrently over multiple disks. The type you use depends on your data-integrity requirements. If your applications are read-intensive and require fault tolerance, consider a RAID 5 volume. Use mirrored volumes for fault tolerance and good I/O performance overall. If you do not require fault tolerance, implement stripe sets for fast reading and writing and improved storage capacity. When stripe sets are used, disk utilization per disk should fall due to distribution of work across the volumes, and overall throughput should increase.
If you find that there is no increased throughput when scaling to additional disks in a stripe set, your system might be experiencing a bottleneck due to contention between disks for the disk adapter. You might need to add an adapter to better distribute the load.
- Place multiple drives on separate I/O buses, particularly if a disk has an I/O-intensive workload.
- Distribute workload among multiple drives. Windows Clustering and Distributed File System provide solutions for load balancing on different drives.
- Limit your use of file compression or encryption. File compression and encryption are I/O-intensive operations. You should only use them where absolutely necessary.
- Disable creation of short names. If you are not supporting MS-DOS or Windows 3.x clients, disable short names to improve performance. To disable short names, change the default value of the NtfsDisable8dot3NameCreation registry entry (in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem) to 1.
- Disable last access update. By default, NTFS updates the date and time stamp of the last access on directories whenever it traverses the directory. For a large NTFS volume, this update process can slow performance. To disable automatic updating, create a new REG_DWORD registry entry named NtfsDisableLastAccessUpdate in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem and set its value to 1.
Caution Some applications, such as incremental backup utilities, rely on the NTFS update information and cease to function properly without it.
- Reserve appropriate space for the master file table. Add the NtfsMftZoneReservation entry to the registry as a REG_DWORD in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem. When you add this entry to the registry, the system reserves space on the volume for the master file table. Reserving space in this manner allows the master file table to grow optimally. If your NTFS volumes generally contain relatively few files that are large, set the value of this registry entry to 1 (the default). Typically you can use a value of 2 or 3 for moderate numbers of files, and use a value of 4 (the maximum) if your volumes tend to contain a relatively large number of files. However, make sure to test any settings greater than 2, because these greater values cause the system to reserve a much larger portion of the disk for the master file table.
- Use the most efficient disk systems available, including controller, I/O, cabling, and disk. Use intelligent drivers that support interrupt moderation or interrupt avoidance to reduce the processor interrupt activity caused by disk I/O.
- Check whether you are using the appropriate RAID configuration. Use RAID 10 (striping and mirroring) for best performance and fault tolerance. The tradeoff is that using RAID 10 is expensive. Avoid using RAID 5 (parity) when you have extensive write operations.
- Consider using database partitions. If you have a database bottleneck, consider using database partitions and mapping disks to specific tables and transaction logs. The primary purpose of partitions is to overcome disk bottlenecks for large tables. If you have a table with a large number of rows and you determine that it is the source of a bottleneck, consider using partitions. For SQL Server, you can use file groups to improve I/O performance. You can associate tables with file groups, and then associate the file groups with a specific hard disk. For information about file groups, see Chapter 14, "Improving SQL Server Performance."
- Consider splitting files across hard disks. If you are dealing with extensive file-related operations, consider splitting the files across a number of hard disks to spread the I/O load across multiple disks.
- Check the feasibility of caching in RAM any static data that is being frequently read.
- Consider increasing memory, if you have excessive page faults.
- Consider using a disk with a higher RPM or shifting to a Storage Area Network (SAN) device.
Network I/O relates to the amount of data being sent and received by all of the network interface cards in your server. Common activities that can cause network I/O bottlenecks include excessive numbers of remote calls, large amounts of data sent and received with each call, network bandwidth constraints, and all of the data being routed through a single network interface card (NIC).
To resolve network I/O bottlenecks:
- Reduce the number of remote calls and the amount of data sent across the network. Ensure that you do not exceed your bandwidth constraint levels.
- After you have optimized your code, determine whether you need to divide the traffic on the server among multiple NICs. You can divide traffic based on protocols used, or you can use separate NICs to communicate with separate network segments.
- Consider upgrading your NIC.
Monitor both front and back interfaces for indicators of possible bottlenecks. To monitor network-specific objects in Windows 2000, you need to install the Network Monitor Driver.
To install the Network Monitor Driver
- In Control Panel, double-click Network and Dial-up Connections.
- Select any connection.
- On the File menu, click Properties.
- On the General tab, click Install.
- Click Protocol, and then click Add.
- Click Network Monitor Driver, and then click OK.
- Click Close.
The performance counters shown in Table 17.4 help you identify network I/O bottlenecks.
Table 17.4: Performance Counters Used to Identify Network I/O Bottlenecks
|Object|Counter|
|---|---|
|Network Interface|Bytes Total/sec|
|Processor|% Interrupt Time|
For more information about how to measure these counters, their thresholds, and their significance, see "Network I/O" in Chapter 15, "Measuring .NET Application Performance."
If the rate at which bytes are sent and received exceeds your connection bandwidth or the bandwidth your network adapter can handle, a network bandwidth bottleneck occurs. This rate is measured by Network Interface\Bytes Total/sec.
If you determine that network I/O is a bottleneck, you have the following options:
- Distribute client connections across multiple network adapters. If your system communicates over Token Ring, Fiber Distributed Data Interface (FDDI), or switched Ethernet networks, attempt to balance network traffic by distributing client connections across multiple network adapters. When using multiple network adapters, make sure that the network adapters are distributed among the Peripheral Component Interconnect (PCI) buses. For example, if you have four network adapters with three PCI buses, one 64-bit and two 32-bit, allocate two network adapters to the 64-bit bus and one adapter to each 32-bit bus.
- Use adapters with the highest bandwidth available for best performance. Increasing bandwidth increases the number of transmissions that occur and in turn creates more work for your system, including more interrupts. Remove unused network adapters to reduce overhead.
- Use adapters that support task offloading capabilities, including checksum offloading, IPSec offloading, and large send offloading.
- Use network adapters that batch interrupts by means of interrupt moderation. High rates of interrupts from network adapters can reduce performance. By using network adapters that batch interrupts by means of interrupt moderation, you can alleviate this performance problem, provided that the adapter driver supports this capability. Another option is to bind interrupts arising from network adapters to a particular processor.
- If your network uses multiple protocols, place each protocol on a different adapter. Make sure to use the most efficient protocols, especially ones that minimize broadcasts.
- Divide your network into multiple subnets or segments, attaching the server to each segment with a separate adapter. Doing so reduces congestion at the server by spreading server requests.
To tune the .NET Framework, you need to tune the CLR. Tuning the CLR affects all managed code, regardless of the implementation technology. You then tune the relevant .NET Framework technology, depending on the nature of your application. For example, tuning the relevant technology might include tuning ASP.NET applications or Web services, Enterprise Services, and ADO.NET code.
CLR tuning is mostly achieved by designing and then optimizing your code to enable the CLR to perform its tasks efficiently. Your design needs to enable efficient garbage collection, for example by correctly using the Dispose pattern and considering object lifetime.
The main CLR-related bottlenecks are caused by contention for resources, inefficient resource cleanup, misuse of the thread pool, and resource leaks. For more information about optimizing your code for efficient CLR processing, see Chapter 5, "Improving Managed Code Performance."
Use the performance counters shown in Table 17.5 to help identify CLR bottlenecks.
Table 17.5: Performance Counters Used to Identify CLR Bottlenecks

|Area|Counter|
|---|---|
|Memory|.NET CLR Memory\% Time in GC|
||.NET CLR Memory\# Bytes in all Heaps|
||.NET CLR Memory\# Gen 0 Collections|
||.NET CLR Memory\# Gen 1 Collections|
||.NET CLR Memory\# Gen 2 Collections|
||.NET CLR Memory\# of Pinned Objects|
||.NET CLR Memory\Large Object Heap size|
|Working Set|Process\Working Set|
|Exceptions|.NET CLR Exceptions\# of Exceps Thrown /sec|
|Contention|.NET CLR LocksAndThreads\Contention Rate / sec|
||.NET CLR LocksAndThreads\Current Queue Length|
|Threading|.NET CLR LocksAndThreads\# of current physical Threads|
||Thread\% Processor Time|
|Code Access Security|.NET CLR Security\Total Runtime Checks|
||.NET CLR Security\Stack Walk Depth|
For more information about how to measure these counters, their thresholds, and their significance, see "CLR and Managed Code" in Chapter 15, "Measuring .NET Application Performance."
The following list describes several common bottlenecks that occur in applications written using managed code and explains how you identify them using system counters. For more information about what to measure and how to measure it, see "CLR and Managed Code" in Chapter 15, "Measuring .NET Application Performance."
- Excessive memory consumption: Excessive memory consumption can result from poor managed or unmanaged memory management. To identify this symptom, observe the following performance counters:
- Process\Private Bytes
- .NET CLR Memory\# Bytes in all Heaps
- Process\Working Set
- .NET CLR Memory\Large Object Heap size
An increase in Private Bytes while the # of Bytes in all Heaps counter remains the same indicates unmanaged memory consumption. An increase in both counters indicates managed memory consumption.
- Large working set size. The working set is the set of memory pages currently loaded in RAM. This is measured by Process\Working Set. A high value might indicate that you have loaded a number of assemblies. Unlike other counters, Process\Working Set has no specific threshold value to watch, although a high or fluctuating value can indicate a memory shortage. A high or fluctuating value accompanied by a high rate of page faults clearly indicates that your server does not have enough memory.
- Fragmented large object heap. Objects greater than 83 KB in size are allocated in the large object heap, which is measured by .NET CLR Memory\Large Object Heap size. In many cases, these objects are buffers (large strings, byte arrays, and so on) used for I/O operations (for example, creating a BinaryReader to read an uploaded image). Such large allocations can fragment the large object heap. You should consider recycling those buffers to avoid fragmentation.
- High CPU utilization. High CPU utilization is usually caused by poorly written managed code, such as code that:
- Causes excessive garbage collection. This is measured by % Time in GC.
- Throws a large number of exceptions. This is measured by .NET CLR Exceptions\# of Exceps Thrown /sec.
- Creates a large number of threads. This causes the CPU to spend large amounts of time switching between threads instead of performing real work. This is measured by Thread\Context Switches/sec.
- Thread contention: Thread contention occurs when multiple threads attempt to access a shared resource. To identify this symptom, observe the following performance counters:
- .NET CLR LocksAndThreads\Contention Rate / sec
- .NET CLR LocksAndThreads\Total # of Contentions
An increase in the contention rate or a significant increase in the total number of contentions is a strong indication that your application is encountering thread contention. To resolve the issue, identify code that accesses shared resources or uses synchronization mechanisms.
When you approach tuning ASP.NET, consider the following:
- The client's interaction with ASP.NET. Considerations include queue sizes, timeouts (execution timeout, proxy timeouts, deadlock intervals, and session timeouts), uploading and downloading large files, and request and response sizes.
- The worker process itself. Considerations include the amount of memory allocated, view state and session state sizes, cache sizes, CPU utilization, and thread affinity. If you have unavoidable thread affinity, you need to consider Web gardens.
- Remote or local Web service calls from ASP.NET. Considerations include the number of connections, and thread pool utilization.
Your approach to tuning ASP.NET should be to optimize these discrete areas from the design and code perspective first, and then to tune the individual areas.
Most ASP.NET tuning is performed by modifying configuration parameters in the system-wide Machine.config file and the application-specific Web.config file. Figure 17.10 shows an architectural view of ASP.NET and its relationship to several key configuration elements located in Machine.config.
Figure 17.10: Mapping of the key configuration elements with the request processing cycle
You have a number of options for tuning ASP.NET applications, most of which involve tuning the settings in Machine.config. This configuration file has many sections, but the following sections are most critical to performance.
The attributes on the <processModel> element apply to the ASP.NET worker process (aspnet_wp.exe) and to all applications being hosted in the worker process on an IIS 5 Web server. Many of the settings are tuned by default and do not require further changes. The default settings are as follows.
```xml
<processModel enable="true" timeout="Infinite" idleTimeout="Infinite"
    shutdownTimeout="0:00:05" requestLimit="Infinite"
    requestQueueLimit="5000" restartQueueLimit="10" memoryLimit="60"
    webGarden="false" cpuMask="0xffffffff" userName="machine"
    password="AutoGenerate" logLevel="Errors"
    clientConnectedCheck="0:00:05" comAuthenticationLevel="Connect"
    comImpersonationLevel="Impersonate"
    responseDeadlockInterval="00:03:00"
    maxWorkerThreads="20" maxIoThreads="20"/>
```
For a detailed description of each attribute, see "<processModel> Element" in the .NET Framework documentation at http://msdn.microsoft.com/en-us/library/7w2sway1(VS.71).aspx.
The <httpRuntime> element configures the ASP.NET runtime settings. You can specify these at the machine, site, application, and subdirectory levels. The default settings from Machine.config are as follows.
```xml
<httpRuntime executionTimeout="90" maxRequestLength="4096"
    useFullyQualifiedRedirectUrl="false" minFreeThreads="8"
    minLocalRequestFreeThreads="4" appRequestQueueLimit="100"
    enableVersionHeader="true"/>
```
For a detailed description of each attribute, see "<httpRuntime> Element" in the .NET Framework documentation at http://msdn.microsoft.com/en-us/library/e1f13641(VS.71).aspx.
Thread Pool Attributes
Figure 17.11 shows thread pool configuration options for ASP.NET in context.
Figure 17.11: ASP.NET thread pool configuration options
The following list describes the key attributes (in <processModel> and <httpRuntime>) in the Machine.config file that relate to the ASP.NET thread pool, and the scenarios where each attribute applies:
- maxconnection. If your application makes calls to a remote Web service and the requests are waiting for the call to complete, you can increase the CPU utilization and your application performance by changing the maxconnection attribute on the <connectionManagement> element in Machine.config. The default values are as follows.
```xml
<connectionManagement>
  <add address="*" maxconnection="2"/>
</connectionManagement>
```
Increasing maxconnection enables more calls to be executed concurrently to a remote Web service. This attribute does not affect local Web service calls. An increase in the number of concurrent calls causes an increase in the utilization of threads that are used to make the remote calls.
Increasing maxconnection also can lead to an increase in CPU utilization. This increase in CPU utilization is caused by the fact that more incoming requests can be processed by ASP.NET instead of having the incoming requests wait for their turn to call the Web service. You need to balance the maxconnection with the other attributes discussed in this list and the actual CPU utilization.
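As a sketch of the kind of change involved, the following fragment raises the outbound connection limit to 24 for all remote hosts, the value you would reach with the 12-per-CPU guideline on a hypothetical two-CPU server; the specific number is an assumption for illustration, not a universal recommendation:

```xml
<connectionManagement>
  <!-- Example only: 12 connections per CPU on a two-CPU server.
       Balance this value against maxWorkerThreads, maxIoThreads,
       minFreeThreads, and observed CPU utilization. -->
  <add address="*" maxconnection="24"/>
</connectionManagement>
```

You can also add entries for specific addresses if only certain remote Web services need a higher limit, leaving the wildcard entry at its default.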
- maxWorkerThreads and maxIoThreads. These attributes define the maximum number of worker and I/O threads that the ASP.NET worker process is allowed to create. These values do not reflect the actual number of threads that are created by the worker process. The maximum value for these attributes is 100 per processor.
As described earlier, if you increase maxconnection, you cause increased concurrent processing, which requires a greater number of worker and I/O threads. Consider the following guidelines when tuning these attributes:
- Change these attributes only if your processor utilization is below the threshold limits defined by your application's performance objectives.
- Avoid increasing these attributes if the requests are not waiting on an I/O call but are actually performing CPU-intensive work. Increasing these attributes in this circumstance can negatively affect performance because the already stressed processor now has to handle increased thread context switching.
- If your application makes a short-running I/O call (for example, to a remote Web service), you might not need to increase these values because the calling threads are not blocked for an excessive period.
- If your application makes a long-running I/O call and your system has idle CPU, you can safely consider increasing these attributes along with the other related attributes discussed in this section. If your system does not have idle CPU, then you probably should not increase these attributes.
If you have a Web service on the same server as your Web application, consider the following to decide when to increase the default values:
- Increase these attributes only if your processor utilization is below the threshold limits defined by your application's performance objectives.
- Avoid increasing these attributes if requests are not waiting on an I/O call but are performing CPU-intensive tasks. Increasing these attributes in this situation can negatively affect performance because the already stressed processor now has to handle increased thread switching.
- maxconnection and minFreeThreads. These attributes do not have any effect in scenarios where you make only local Web service calls.
- minFreeThreads. This attribute defines the number of threads that can be used for work other than processing incoming requests to the worker process. This attribute prevents the ASP.NET process from using a thread from the thread pool to handle a new HTTP request if this would mean that the free thread count drops below this limit. The attribute is specified on the <httpRuntime> element and has a default value of 8.
You can use this attribute to help prevent deadlocks by ensuring that a thread is available to handle callbacks from pending asynchronous requests. A deadlock can occur if all of the threads in the thread pool are currently in use handling incoming HTTP requests, and one or more of those requests are waiting for asynchronous callbacks. In this situation, there are no available threads to service the callback. You can set minFreeThreads to ensure that some free threads are available to process the callbacks.
Increasing minFreeThreads means that you reserve more threads to make remote calls. In all cases, you should ensure that maxWorkerThreads * # of CPUs - minFreeThreads >= 12. Twelve is the optimum number of threads that should be made available to the ASP.NET worker process to service requests. This value means that ASP.NET cannot execute more than twelve requests concurrently. This limit is based on a series of performance tests, which demonstrate that normally the worker process uses four of these threads. If the processor is fully utilized (greater than 95 percent utilization) and your application makes long-running calls, the worker process is likely to use all twelve threads.
You might want to increase the attribute from the default values in the following scenarios:
- You have increased maxWorkerThreads and maxIoThreads.
- You have increased maxconnection to service a greater number of back-end calls, and hence require more threads to be made available for this purpose.
- Consider changing this attribute whenever you make long-running calls that block execution. Doing so is most beneficial when the work is not computationally expensive for the server where you make the changes.
- minLocalRequestFreeThreads. For a Web application using the Web service located on the same computer, you should consider decreasing the value of minLocalRequestFreeThreads when you need to give priority to processing local calls. This attribute defines the minimum number of free threads that ASP.NET keeps available so that requests from localhost are not queued. Before processing a local request, the runtime checks to see if at least this minimum number of worker threads are available. If fewer threads are available, the request is queued.
The default setting is four, so if only three worker threads are available the local request is queued. When this value is decreased, ASP.NET starts to use threads more aggressively, resulting in less local queuing.
Note Requests from remote clients start to queue when the free threads in the thread pool fall below the value of minFreeThreads.
If you decrease the value of minLocalRequestFreeThreads without changing the minFreeThreads attribute, you are effectively telling the worker process to give priority to completing calls from the local server.
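For example (a sketch, not a tested recommendation), leaving minFreeThreads at its default of 8 while lowering minLocalRequestFreeThreads from 4 to 2 lets ASP.NET continue servicing local Web service calls even after remote requests have started to queue:

```xml
<!-- Remote requests begin to queue when free threads drop below 8;
     local (localhost) requests continue until free threads drop below 2.
     The value 2 is an illustrative assumption - test for your workload. -->
<httpRuntime minFreeThreads="8" minLocalRequestFreeThreads="2"/>
```

The effect is that the worker process prioritizes completing in-flight local calls, which in turn frees the threads that the queued remote requests are waiting for.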
The performance counters shown in Table 17.6 help you identify ASP.NET bottlenecks.
Table 17.6: Performance Counters Used to Identify ASP.NET Bottlenecks

|Area|Counter|
|---|---|
|Worker Process|ASP.NET\Worker Process Restarts|
||Web Service\ISAPI Extension Requests/sec|
|Requests|ASP.NET\Requests Current|
||ASP.NET Applications\Requests Executing|
||ASP.NET Applications\Requests Timed Out|
|Response Time/Latency|ASP.NET\Request Execution Time|
|Cache|ASP.NET Applications\Cache Total Entries|
||ASP.NET Applications\Cache Total Hit Ratio|
||ASP.NET Applications\Cache Total Turnover Rate|
||ASP.NET Applications\Cache API Hit Ratio|
||ASP.NET Applications\Cache API Turnover Rate|
||ASP.NET Applications\Output Cache Entries|
||ASP.NET Applications\Output Cache Hit Ratio|
||ASP.NET Applications\Output Cache Turnover Rate|
For more information about how to measure these counters, their thresholds, and their significance, see "ASP.NET" in Chapter 15, "Measuring .NET Application Performance."
The following list describes several common bottlenecks that occur in ASP.NET applications and explains how you identify them using the system counters listed in Table 17.6. For more information about what to measure and how to measure it, see "ASP.NET" in Chapter 15, "Measuring .NET Application Performance."
- Thread pool starvation. Thread pool bottlenecks can occur when ASP.NET runs out of worker and I/O threads to process incoming requests or perform I/O work.
To identify this symptom, observe the following performance counters:
- ASP.NET\Requests Queued
- Process\% Processor Time (aspnet_wp.exe or w3wp.exe)
If requests are being queued with low processor utilization levels, this is a strong indication that ASP.NET is performing non-CPU bound work. If your application makes calls to remote or local Web services, you can tune the thread pool to resolve the issue. For detailed information about how to tune the thread pool, see "Threading Explained" in Chapter 6, "Improving ASP.NET Performance."
As an alternative, you can use custom performance counters to monitor the thread pool to investigate more about the available I/O and worker threads. For more information, see "How To: Monitor the ASP.NET Thread Pool Using Custom Counters" in the "How To" section of this guide.
- Thread contention. Thread contention occurs when multiple threads try to gain access to a shared resource, as explained in "Bottlenecks" in the "CLR Tuning" section earlier in this chapter.
- Memory bottlenecks. Memory bottlenecks can take many forms. They can result from memory leaks, fragmentation issues, inefficient resource cleanup, or simply allocating too much or too little memory for the worker process.
To identify this symptom, observe the following performance counters in addition to the system level memory-related counters discussed in the "Memory" section of this chapter.
- Process\Private Bytes (aspnet_wp.exe or w3wp.exe)
- Process\Virtual Bytes (aspnet_wp.exe or w3wp.exe)
- .NET CLR Memory\# Bytes in all Heaps (aspnet_wp.exe or w3wp.exe)
If any of these counters increases consistently and does not level off, your application has a memory leak. If # Bytes in all Heaps increases consistently along with Private Bytes, your application has a managed memory leak. If only the Private Bytes counter increases, your application has a native memory leak.
If there is a growing discrepancy between Private Bytes and Virtual Bytes, memory fragmentation may be the cause. If your application throws OutOfMemoryException, this is also a strong indication that memory is a bottleneck.
You can configure the amount of memory allocated to the ASP.NET worker process by setting the memoryLimit attribute on the <processModel> element in Machine.config. Tuning this attribute can resolve memory bottlenecks in some cases. However, if doing so does not alleviate the bottleneck, you need to troubleshoot the problem further.
- Worker Process Restarts. Restarting the ASP.NET worker process takes time and consumes resources. Set the restart threshold to an appropriate value to prevent unnecessary restarts. The following factors can contribute to recycling:
- Changes to a configuration file. (Note that these changes are not logged to the Application Log.)
- Exceeding memory limits (<processModel memoryLimit= />).
- Request and timeout limits specified in Machine.config.
Consider the following tuning options:
- Tune the thread pool using the formula for reducing contention.
- Configure the memory limit.
- Configure timeouts aggressively.
- Evaluate configuring RequestQueue limit.
- Disable tracing and debugging.
- Disable session state if you do not use it.
- Disable View State if you do not need it.
- If you upload large files, consider maxRequestLength.
- Consider Web gardens for scenarios that benefit from processor affinity.
Tune the Thread Pool Using the Formula for Reducing Contention
The formula for reducing contention can give you a good empirical start for tuning the ASP.NET thread pool. Consider using the Microsoft product group recommended settings shown in Table 17.7 if you have available CPU, your application performs I/O bound operations (such as calling a Web method, accessing the file system, and so forth), and you have queued requests (as indicated by ASP.NET Applications\Requests In Application Queue). For more information about these individual settings, see "Tune the Thread Pool by Using the Formula to Reduce Contention" in Chapter 6, "Improving ASP.NET Performance."
Table 17.7: Recommended Threading Settings for Reducing Contention
|Configuration setting|Default (.NET 1.1)|Recommended value|
|---|---|---|
|maxconnection|2|12 * #CPUs|
|maxIoThreads|20|100|
|maxWorkerThreads|20|100|
|minFreeThreads|8|88 * #CPUs|
|minLocalRequestFreeThreads|4|76 * #CPUs|
To reduce contention, you need to configure the following items in Machine.config. The changes described in the list should be applied together, not in isolation.
- Set maxconnection to 12 * # of CPUs. This setting controls the maximum number of outgoing HTTP connections allowed by the client (in this case, ASP.NET). The recommendation is to set maxconnection to 12 * # of CPUs.
- Set maxIoThreads to 100. This setting controls the maximum number of I/O threads in the CLR thread pool. This number is automatically multiplied by the number of CPUs by the worker process. The recommendation is to set maxIoThreads to 100.
- Set maxWorkerThreads to 100. This setting controls the maximum number of worker threads in the CLR thread pool. This number is automatically multiplied by the number of CPUs by the worker process. The recommendation is to set maxWorkerThreads to 100.
- Set minFreeThreads to 88 * # of CPUs. This setting is used by the worker process to queue all of the incoming requests if the number of available threads in the thread pool falls below the value for this setting. This setting effectively limits the number of concurrently executing requests to maxWorkerThreads * # of CPUs - minFreeThreads. The recommendation is to set minFreeThreads to 88 * # of CPUs, which limits the number of concurrent requests to 12 per CPU (assuming that maxWorkerThreads is set to 100).
- Set minLocalRequestFreeThreads to 76 * # of CPUs. This setting is used by the worker process to queue requests from localhost (for example, your Web application sending requests to Web services on the same computer) if the number of available threads in the thread pool falls below this number. This setting is similar to minFreeThreads, but it only applies to requests originating on the local server. The recommendation is to set minLocalRequestFreeThreads to 76 * # of CPUs.
Note The recommendations given are not inflexible rules; they are a starting point. Appropriate testing should be done to determine the correct settings for your scenario.
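On a hypothetical two-CPU server, for example, the recommendations in Table 17.7 translate into the following Machine.config fragments. This is a starting-point sketch only; as the note above advises, test these values for your own scenario.

```xml
<!-- In the <system.net> section: 12 * 2 CPUs = 24 outbound connections -->
<connectionManagement>
  <add address="*" maxconnection="24"/>
</connectionManagement>

<!-- In the <system.web> section.
     maxWorkerThreads and maxIoThreads are per-CPU values,
     so 100 here means 200 threads on a two-CPU server. -->
<processModel ... maxWorkerThreads="100" maxIoThreads="100" ... />

<!-- minFreeThreads = 88 * 2 CPUs; minLocalRequestFreeThreads = 76 * 2 CPUs -->
<httpRuntime ... minFreeThreads="176" minLocalRequestFreeThreads="152" ... />
```

With these values, concurrent request execution is capped at (100 * 2) - 176 = 24 requests, that is, 12 per CPU.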
Tuning the Thread Pool for Burst Load Scenarios
If your application experiences unusually high loads in short bursts (for example, 1,000 clients all logging on at 9 A.M.), your system may be unable to handle the burst load. Consider setting minWorkerThreads and minIOThreads as specified in Knowledge Base article 810259, "FIX: SetMinThreads and GetMinThreads API Added to Common Language Runtime ThreadPool Class," at http://support.microsoft.com/default.aspx?scid=kb;en-us;810259.
Tuning the Thread Pool When Calling COM Objects
ASP.NET Web pages that call single-threaded apartment (STA) COM objects should use the ASPCOMPAT attribute. The use of this attribute ensures that the call is executed using a thread from the STA thread pool. However, all calls to an individual COM object must be executed on the same thread. As a result, the thread count for the process can increase during periods of high load. You can monitor the number of active threads used in the ASP.NET worker process by viewing the Process\Thread Count (aspnet_wp instance) performance counter.
The thread count value is higher for an application when you use the ASPCOMPAT attribute than when you do not. When tuning the thread pool for scenarios where your application extensively uses STA COM components and the ASPCOMPAT attribute, you should ensure that the total thread count for the worker process does not exceed the following value.
75 + ((maxWorkerThreads + maxIoThreads) * #CPUs * 2)
Evaluating the Change
To determine whether the formula for reducing contention has worked, look for improved throughput. Specifically, look for the following improvements:
- CPU utilization increases.
- Throughput increases according to the ASP.NET Applications\Requests/Sec performance counter.
- Requests in the application queue decrease according to the ASP.NET Applications\Requests In Application Queue performance counter.
If this change does not improve your scenario, you may have a CPU-bound scenario. In a CPU-bound scenario, adding more threads may increase thread context switching, further degrading performance.
When tuning the thread pool, monitor the Process\Thread Count (aspnet_wp) performance counter. This value should not be more than the following.
75 + ((maxWorkerThreads + maxIoThreads) * #CPUs)
If you are using AspCompat, then this value should not be more than the following.
75 + ((maxWorkerThreads + maxIoThreads) * #CPUs * 2)
Values beyond this maximum tend to increase processor context switching. For example, with maxWorkerThreads and maxIoThreads both set to 100 on a two-processor server, the ceiling without AspCompat is 75 + ((100 + 100) * 2) = 475 threads.
For more information, see the following resources:
- "ASP.NET Performance Monitoring, and When to Alert Administrators" by Thomas Marquardt, at http://msdn.microsoft.com/en-us/library/ms972959.aspx
- For more information about Web service scenarios, see Knowledge Base article 821268, "PRB: Contention, Poor Performance, and Deadlocks When You Make Web Service Requests from ASP.NET Applications," at http://support.microsoft.com/default.aspx?scid=kb;en-us;821268
- "At Your Service: Performance Considerations for Making Web Service Calls from ASPX Pages" at http://msdn.microsoft.com/en-us/library/aa480507.aspx
- Microsoft Support WebCast, "Microsoft ASP.NET Threading," at http://support.microsoft.com/default.aspx?scid=%2fservicedesks%2fwebcasts%2fen%2ftranscripts%2fwct060503.asp
- Knowledge Base article 827419, "PRB: Sudden Requirement for a Larger Number of Threads from the ThreadPool Class May Result in Slow Computer Response Time," at http://support.microsoft.com/default.aspx?scid=kb;en-us;827419
- Knowledge Base article 810259, "FIX: SetMinThreads and GetMinThreads API Added to Common Language Runtime ThreadPool Class," at http://support.microsoft.com/default.aspx?scid=kb;en-us;810259
Configure the Memory Limit
The memory threshold for ASP.NET is determined by the memoryLimit attribute on the <processModel> element in Machine.config. For example:
<processModel ... memoryLimit="60" .../>
This value controls the percentage of physical memory that the process is allowed to consume. If the worker process exceeds this value, the worker process is recycled. The default value shown in the code represents 60 percent of the total physical memory installed in your server.
This setting is critical because it influences the cache scavenging mechanism for ASP.NET and virtual memory paging. For more information, see "Configure the Memory Limit" in Chapter 6, "Improving ASP.NET Performance." The default setting is optimized to minimize paging. If you observe high paging activity (by monitoring the Memory\Pages/sec performance counter) you can increase the default limit, provided that your system has sufficient physical memory.
The recommended approach for tuning is to measure the total memory consumed by the ASP.NET worker process by measuring the Process\Private Bytes (aspnet_wp) performance counter along with paging activity in System Monitor. If the counter indicates that the memory consumption is nearing the default limit set for the process, it might indicate inefficient cleanup in your application. If you have ensured that the memory is efficiently cleaned but you still need to increase the limit, you should do so only if you have sufficient physical memory.
This limit is important to adjust when your server has 4 GB or more of RAM. The 60 percent default memory limit means that the worker process is allocated 2.4 GB of RAM, which is larger than the default virtual address space for a process (2 GB). This disparity increases the likelihood of causing an OutOfMemoryException.
To avoid this situation on an IIS 5 Web server, you should set the limit to the smaller of 800 MB or 60 percent of physical RAM for .NET Framework 1.0.
.NET Framework 1.1 supports a virtual space of 3 GB. If you put a /3GB switch in boot.ini, you can safely use 1,800 MB as an upper bound for the memory limit.
You should use the /3GB switch with only the following operating systems:
- Microsoft Windows Server 2003
- Microsoft Windows 2000 Advanced Server
- Microsoft Windows 2000 Datacenter Server
- Microsoft Windows NT 4.0 Enterprise Server
You should not use the /3GB switch with the following operating systems:
- Microsoft Windows 2000 Server
- Microsoft Windows NT 4.0 Server
Windows 2000 Server and Windows NT 4.0 Server can only allocate 2 GB to user mode programs. If you use the /3GB switch with Windows 2000 Server or Windows NT 4.0 Server, you have 1 GB for kernel and 2 GB for user mode programs, so you lose 1 GB of address space.
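The memory-limit guidance above reduces to a small calculation: cap the worker process at the smaller of 60 percent of physical RAM and the address-space ceiling (800 MB for .NET Framework 1.0, or 1,800 MB for 1.1 with the /3GB switch), then express that cap as the percentage that memoryLimit expects. A sketch with an invented helper name:

```python
def memory_limit_percent(physical_ram_mb, ceiling_mb=800):
    """Return a memoryLimit percentage that keeps the ASP.NET worker
    process under both 60% of physical RAM and the per-process
    ceiling (800 MB default; pass 1800 for 1.1 with /3GB)."""
    cap_mb = min(ceiling_mb, 0.60 * physical_ram_mb)
    return int(cap_mb * 100 / physical_ram_mb)

# 4 GB server, .NET 1.0: cap at 800 MB, about 19% of RAM.
print(memory_limit_percent(4096))        # 19
# 4 GB server, .NET 1.1 with /3GB: cap at 1,800 MB, about 43%.
print(memory_limit_percent(4096, 1800))  # 43
```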
For IIS 6.0, use the Maximum used memory (in megabytes) setting on the Recycling tab of the application pool's properties in IIS Manager to configure the maximum memory that the worker process is allowed to use. As Figure 17.12 shows, the value is specified in megabytes, not as a percentage of physical RAM.
Figure 17.12: Memory recycling settings in the IIS 6 manager
For more information, see the following resources:
- For more information about running ASP.NET 1.1 with IIS 6, see the Microsoft ASP.NET FAQ page at http://www.asp.net/learn/whitepapers/aspnet-and-iis6/.
- For more information about virtual address space and virtual memory paging activities, see Chapter 6, "Evaluating Memory and Cache Usage," in Part 2 of the Windows 2000 Operations Guide at http://www.microsoft.com/downloads/details.aspx?FamilyID=f0b7b4ee-201a-4b40-a0d2-cdd9775aeff8&DisplayLang=en.
Configure Timeouts Aggressively
The following list explains the configuration settings that you can tune with respect to timeouts:
- executionTimeout. This is an attribute of the <httpRuntime> element, with a default value of 90 seconds. You can also set the timeout programmatically through the HttpServerUtility.ScriptTimeout property. The declarative setting is as follows:
<httpRuntime executionTimeout="90" />
You need to change the default values if your application performs long-running operations like transferring large files, making long-running calls, or performing any other operation that may take longer than 90 seconds. Consider the following guidelines when deciding to change this attribute value:
- Clients of an average Web application expect a response in 7 to 10 seconds. Consider reducing the value of executionTimeout to deliberately time out excessively long-running requests and display an error message to users.
- Carefully evaluate any decision to increase the value of executionTimeout. Some scenarios, such as transferring extremely large files, may warrant increasing the value of this attribute. However, keep in mind the issue of user interface responsiveness and consider whether the user is willing to wait for a response for such a long time.
Longer timeouts result in server resources being retained for longer periods, which might cause stress and instability on the server under high load conditions.
In scenarios where you do need to increase executionTimeout to more than 180 seconds, you need to increase the value of the responseDeadlockInterval attribute to a value greater than 180 seconds (its default value).
You should also make sure that executionTimeout is set to a value greater than the timeout value specified by the client on the Web service proxy. If the request for a Web page times out before the call to the Web service that was made from the page, you can end up with open socket connections, causing a resource leak.
ASP.NET pages that call Web services can set a Timeout property on the proxy object. This timeout value should be less than the executionTimeout value, which in turn should be less than the responseDeadlockInterval value. The executionTimeout value on the server hosting the ASP.NET pages should be greater than the value on the server hosting your Web service. This ordering handles the condition where a long-running Web service call completes successfully but the ASP.NET page that made the call would otherwise be terminated with a ThreadAbortException.
Note The executionTimeout value is specified in seconds. If you have debug="true" in the Web.config file, the executionTimeout value is ignored.
For more information about tuning the timeouts in relation to Web service proxies, see "Timeouts" in Chapter 10, "Improving Web Services Performance."
- responseDeadlockInterval. This attribute is used by ASP.NET to detect deadlock or thread contention on the server. For example:
<processModel responseDeadlockInterval="00:03:00" />
Avoid increasing this timeout limit, because doing so may have an adverse effect on the overall performance of your application. For alternative solutions, see "Avoid Blocking on Long-Running Tasks" in Chapter 6, "Improving ASP.NET Performance."
- Worker process timeouts. The <processModel> element contains timeout, idleTimeout, shutdownTimeout, and responseDeadlockInterval attributes. The timeout and idleTimeout attributes are set to infinite by default. Avoid changing these to lower values. Doing so causes the process to recycle (when the set threshold is reached), which is expensive and time-consuming and causes application downtime.
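The timeout guidance above can be summarized in a single configuration sketch. The values shown are illustrative starting points, not recommendations; the <processModel> element can be set only in Machine.config, while <httpRuntime> can also appear in Web.config.

```xml
<configuration>
  <system.web>
    <!-- Seconds. Should be greater than the Web service proxy
         Timeout and less than responseDeadlockInterval. -->
    <httpRuntime executionTimeout="120" />
    <!-- Leave timeout and idleTimeout at Infinite; raise
         responseDeadlockInterval above its 00:03:00 default only
         if executionTimeout must exceed 180 seconds. -->
    <processModel timeout="Infinite"
                  idleTimeout="Infinite"
                  responseDeadlockInterval="00:03:00" />
  </system.web>
</configuration>
```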
Evaluate Configuring RequestQueueLimit
There is a named pipe between IIS and the ASP.NET worker process over which requests for ASP.NET are queued. There is also a queue for each virtual directory.
The default limit for the queue at the process level is 5000 and is specified by the requestQueueLimit attribute on the <processModel> element as follows.
<processModel enable="true" requestQueueLimit="5000" />
The number of current requests is measured by ASP.NET\Requests Current. If the number of current requests (which includes the requests that are queued, executing, or waiting to be written to the client) exceeds the value set for requestQueueLimit, the worker process rejects the requests with a 503 status code and the message "Server Too Busy."
The default limit for the queue for each virtual directory (or application) is 100. This limit is specified by the appRequestQueueLimit attribute on the <httpRuntime> element in Machine.config.
When the number of current requests in an application queue (measured by ASP.NET Applications\Requests In Application Queue) exceeds the appRequestQueueLimit value for that virtual directory, users receive a 503 status code and a "Server Too Busy" error message.
The default value for the process queue is suitable for most servers; as a guideline, it should be roughly the appRequestQueueLimit value multiplied by the total number of virtual directories.
Note A bug in .NET Framework version 1.1 allowed ASP.NET to handle an infinite number of requests when running in IIS 6.0. The fix, available in "821156 INFO: ASP.NET 1.1 June 2003 Hotfix Rollup Package" at http://support.microsoft.com/?id=821156, causes ASP.NET to reject requests when the value of Requests Current exceeds the value of requestQueueLimit.
The default value of 100 for appRequestQueueLimit (per virtual directory) is low for servers with only a few applications (or a single application). You should consider increasing the value incrementally and measure the application performance (throughput, response time, and so on) at the point where requests start being rejected. The response time for the queued requests should not exceed the levels defined by your application's performance objectives. If it does, it probably means that the requests are waiting too long in the queue to be processed, so increasing the queue limit will not serve any purpose.
The requestQueueLimit value should be set higher than the sum of the appRequestQueueLimit values for all virtual directories because the requestQueueLimit value is actually the limit for total number of requests queued, executing, and waiting to be written to the clients for all virtual directories combined.
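The relationship between the two limits can be expressed as a quick check (the helper below is hypothetical; the limits are example values):

```python
def queue_limits_consistent(request_queue_limit, app_queue_limits):
    """requestQueueLimit (processModel) should exceed the sum of the
    appRequestQueueLimit values (httpRuntime) across all virtual
    directories, because it caps queued, executing, and pending
    requests for all of them combined."""
    return request_queue_limit > sum(app_queue_limits)

# 20 virtual directories at the default appRequestQueueLimit of 100:
print(queue_limits_consistent(5000, [100] * 20))  # True
```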
Disable Tracing and Debugging
Disable tracing and debugging in the Machine.config and Web.config as follows.
<configuration>
  <system.web>
    <trace enabled="false" pageOutput="false" />
    <compilation debug="false" />
  </system.web>
</configuration>
For more information, see Knowledge Base article 815157, "HOW TO: Disable Debugging for ASP.NET Applications" at http://support.microsoft.com/default.aspx?scid=kb;en-us;815157.
Note You may also want to check that tracing and debugging are not enabled on individual pages, because page-level settings override the settings in Web.config.
Disable Session State If You Do Not Use It
If you are not using sessions, you should turn off session state. Doing so prevents the ASP.NET process from creating and maintaining session objects on a per-user basis. You can turn off session state for an individual application by setting the mode attribute to "Off" on the <sessionState> element in Web.config as follows.
<sessionState mode="Off" />
For more information, see "Session State" in Chapter 6, "Improving ASP.NET Performance."
If You Use Session State, Then Reduce Timeouts
User information is stored in session state until the session expires (the default is 20 minutes). If the client disconnects, the session remains active, and session state continues to consume resources until it times out. Reducing the session timeout helps clean up unused sessions faster. A shorter timeout is also good practice from a security standpoint.
You can change the timeout value in your application's Web.config file by modifying the timeout attribute on the <sessionState> element. The default setting of 20 minutes is shown in the following code.
<sessionState timeout="20" />
Evaluating the Change
If you set the session state timeout too low, your users' sessions may expire too frequently, which increases redundant hits to the site for the same requests and causes user dissatisfaction. If you set the timeout too high, server resources are consumed for long periods, which affects your application's scalability.
You need to optimize session timeout values. For information about how to identify the optimum value for session timeouts, see "Sessions" in "ASP.NET" in Chapter 15, "Measuring .NET Application Performance."
Disable View State If You Do Not Need It
There are a number of ways to disable view state at various levels:
- To disable view state for all applications on a Web server, configure the <pages> element in Machine.config as follows.
<pages enabledViewState="false" />
This approach allows you to selectively enable view state just for those pages that need it using the EnableViewState attribute of the @ Page directive.
- To disable view state for a single page, use the @ Page directive as follows.
<%@ Page EnableViewState="false" %>
- To disable view state for a single control on a page, set the control's EnableViewState property to false as follows.
// Programmatically:
yourControl.EnableViewState = false;

<%-- Or declaratively: --%>
<asp:datagrid EnableViewState="false" runat="server" />
For more information, see "View State" in Chapter 6, "Improving ASP.NET Performance."
If You Upload Large Files, Consider maxRequestLength
If your application needs to upload large files, be aware that the default setting does not allow anything greater than 4 MB. To change this default, configure the maxRequestLength attribute on the <httpRuntime> element in Machine.config to an appropriate value.
For more information, see Knowledge Base article 295626, "PRB: Cannot Upload Large Files When You Use the HtmlInputFile Server Control," at http://support.microsoft.com/default.aspx?scid=kb;en-us;295626.
Consider Web Gardens for Scenarios that Benefit from Processor Affinity
You might need to consider Web gardens if your application uses STA components extensively or uses any other technique that benefits from processor affinity. Affinity to a particular processor helps the process take advantage of more frequent CPU cache (L1 or L2) hits.
You should always test and evaluate whether Web gardens improve performance for your scenario. They are not generally advocated as an option for most applications.
By default, on an IIS 5 Web server there is only one ASP.NET worker process. You can set the webGarden attribute on the <processModel> element in Machine.config to true to create multiple worker processes; one per eligible processor on a multiple processor server.
You can specify the eligible processors by setting the cpuMask attribute. This attribute specifies a bit pattern that indicates the CPUs eligible to run ASP.NET threads. For example, the cpuMask hexadecimal value 0x0d represents the bit pattern 1101. On a server with four processors, this bit pattern indicates that ASP.NET processes can be scheduled for CPUs 0, 2, and 3, but not CPU 1. If webGarden is set to true, by default, all CPUs are enabled and ASP.NET starts one process for each CPU. If webGarden is set to false, the cpuMask attribute is ignored and only one worker process runs on the server.
The default values for these attributes are shown below.
<processModel webGarden="false" cpuMask="0xffffffff" />
Requests are distributed among the multiple processes on a round-robin basis. Each process has an affinity to a particular processor in this case.
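The cpuMask bit pattern can be decoded mechanically. The sketch below (with an invented helper name) reproduces the 0x0d example from above:

```python
def eligible_cpus(cpu_mask, cpu_count):
    """Return the CPUs on which ASP.NET worker processes may run.

    Bit n of cpuMask set means CPU n is eligible: 0x0d is binary
    1101, so on a four-processor server CPUs 0, 2, and 3 qualify.
    """
    return [n for n in range(cpu_count) if cpu_mask & (1 << n)]

print(eligible_cpus(0x0d, 4))  # [0, 2, 3]
```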
On an IIS 6.0 Web server, you can create multiple worker processes per application pool, which can have affinity to particular processors.
If you are considering using Web gardens, you should be aware of the following pitfalls:
- Cache, session, and application state are not shared among the different processes. Hence, each process needs its own copy of cache and application state. You can store the session state out of process in a state service or SQL Server database to share it across processes. Out-of-process session state incurs the additional overhead of serialization and cross-process or cross-server communication.
- Memory requirements for your server increase when you use Web gardens because multiple processes are used.
For more information about Web gardens, the scenarios where they might result in performance gains, and IIS 6.0 deployment considerations, see "Deployment Considerations" in Chapter 6, "Improving ASP.NET Performance" and "Web and Application Server Infrastructure — Performance and Scalability" at http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/659f2e2c-a58b-4770-833b-df96cabe569e.mspx?mfr=true.
In addition to the primary tuning options for ASP.NET applications discussed earlier, you also have the following additional considerations:
- Tuning for pages and compilation. ASP.NET compiles all of the pages in a directory into a single assembly when the first page is requested. This feature, referred to as batch compilation, is beneficial because in general fewer assemblies are preferable. If you have several hundred pages in a directory, the first request for a page may take a long time to execute due to the amount of compilation work needed.
The timeout for batch compilation is specified by the batchTimeout attribute on the <compilation> element. If this value is exceeded, batch compilation continues on a background thread and the requested page is compiled individually.
You are not advised to modify the default batchTimeout setting. Instead, you should design your application to avoid using an excessive number of pages. For example, consider using fewer dynamic pages that vary their behavior by using query strings, rather than using large numbers of static pages.
- Short circuiting the HTTP pipeline. User requests travel through various modules specified in the HTTP pipeline. These are defined in Web.config and Machine.config. By removing modules that your application does not use, you avoid any unnecessary overhead introduced by the modules. Measure to see if this performance gain is significant for your application.
For example, if your Web application does not use forms authentication, you can remove the FormsAuthentication module by adding the following entry to Web.config.
<httpModules>
  <remove name="FormsAuthentication" />
</httpModules>
When you tune Enterprise Services performance, your main focus is optimizing the lifetime of components, including their creation, destruction, and pooling.
Most Enterprise Services tuning involves using the Component Services administration tool to fine-tune the settings maintained in the COM+ catalog for your Enterprise Services applications and serviced components.
To monitor performance-related statistics for Enterprise Services applications, you can use the Component Services tool. This enables you to monitor the following:
- Number of objects activated
- Number of objects in use
- Duration for which each object is in a call
- Cost of a transaction in terms of resources and the time taken to complete it
- Processor and memory cost of method calls on objects
- Optimum size for the object pool
For more information about how to measure these values, their thresholds, and their significance, see "Enterprise Services" in Chapter 15, "Measuring .NET Application Performance."
Some of these statistics are shown in Figure 17.13.
Figure 17.13: The Component Services tool showing component statistics
To tune Enterprise Services applications, you have the following options:
- Tune the application pool size.
- Tune object pool size to preallocate and set thresholds.
- Optimize idle time management for server applications.
- Use packet privacy only if you need encryption.
- Set DisableAsyncFinalization only when clients do not call Dispose.
Tune the Application Pool Size
Application pooling enables single-threaded processes to scale, and it also improves resilience. If a single process fails, other running processes are able to handle client requests. This feature is available in COM+ 1.5.
The number of concurrent processes that are allowed to run is determined by the application pool size. The default pool size of 1 means that only a single process is used, which effectively disables pooling. You can increase this value on the Pooling & Recycling page of the application's Properties dialog box, as shown in Figure 17.14.
Note This option is available only for server applications and not for library applications.
Figure 17.14: Application Pooling configuration
You do not generally need to alter the application recycling settings. For more information about these settings, see the following resources:
- "Preload Applications That Have Large Minimum Pool Sizes" in Chapter 8, "Improving Enterprise Services Performance"
- "Configuring COM+ Application Recycling Values" at http://msdn.microsoft.com/en-us/library/ms684192(VS.85).aspx
Tune Object Pool Size to Preallocate and Set Thresholds
When your application starts, the Dllhost.exe surrogate process is initialized and the object pool is populated with the number of objects specified by the minimum pool size setting. Consider using object pooling if:
- Your application contains objects that are expensive to create and initialize. For example, your objects need to acquire data from remote resources that are expensive to connect to. Examples include connecting to a mainframe database or performing a multi-table join across several thousand records.
- You can reuse objects across multiple client calls. Objects that you reuse must contain no request or user-specific state.
- Your application has bursts of activity where several clients connect at the same time.
If your application satisfies one of the conditions in the previous list, consider the following guidelines:
- Preallocate objects by setting an appropriate minimum pool size. Doing so ensures that objects are ready to use for incoming client requests.
- Set the maximum pool size if you need to set thresholds on resource utilization. Doing so also provides an opportunity (if applicable) for you to satisfy licensing rules. For example, you might have rules that permit only a certain number of client connections for a particular license.
You configure object pooling on the Activation tab of your application's Properties dialog box, as shown in Figure 17.15.
Figure 17.15: Object pooling settings in Component Services
Note You must test and measure the performance for your specific scenario to determine the optimal value for these settings.
For more information, see "Improving Performance with Object Pooling" at http://msdn.microsoft.com/en-us/library/ms682822(VS.85).aspx.
Optimize Idle Time Management for Server Applications
The COM+ host process (Dllhost.exe) shuts down after a configured period of inactivity from any client. By default, this period is three minutes. If clients access your applications in bursts at sporadic intervals, then you might need to increase the default time period. To configure the idle time, use the Advanced page of the application's Properties dialog box in Component Services. Values in the range of 1 to 1440 minutes are supported.
Use Packet Privacy Only if You Need Encryption
If you only need to authenticate callers and ensure that packets have not been tampered with in transit between the client and the serviced component, you do not need encryption; packet integrity authentication is sufficient. If you also need to ensure the privacy of data sent to and from your serviced components, consider using packet privacy authentication.
However, you can avoid using packet privacy authentication if your application is located in a secure network that uses IPSec encryption to protect the communication channels between servers. You can configure the packet privacy authentication level on the Security page of the application's Properties dialog box, as shown in Figure 17.16.
Figure 17.16: Authentication level configuration
For more information, see "Use Packet Privacy Authentication Only if You Need Encryption" in Chapter 8, "Improving Enterprise Services Performance."
Note This option is available only for server applications and not for library applications.
Set DisableAsyncFinalization Only When Clients Do Not Call Dispose
If client code does not call Dispose to release serviced components, as a last resort you can consider using the DisableAsyncFinalization registry key. To enable this feature, create the following registry key.
HKLM\Software\Microsoft\COM3\System.EnterpriseServices
    DisableAsyncFinalization = DWORD(0x1)
For more information, see "DisableAsyncFinalization Registry Setting" in Chapter 8, "Improving Enterprise Services Performance."
Note Administrators should check with the development team, if it owns the client implementation, to ensure that clients call the Dispose method to release resources on the server. Changing this registry key should be considered a last resort.
ASP.NET Web services use the same ASP.NET runtime as ASP.NET applications. As a result, the tuning guidelines discussed earlier also apply to Web services. For Web services, there are a number of additional considerations.
Consider the following tuning options:
- Tune the thread pool using the formula for reducing contention.
- Configure maxconnection.
- Prioritize and allocate connections across discrete Web services.
- Consider the responseDeadlockInterval attribute.
- If you upload large files, configure maxRequestLength.
Tune the Thread Pool Using the Formula for Reducing Contention
You should tune the thread pool and connection settings on any ASP.NET server that calls other Web services. For a detailed explanation of how to configure maxconnection in relation to other settings such as maxWorkerThreads and maxIoThreads, see "Threading" in Chapter 10, "Improving Web Services Performance," and "Thread Pool Attributes" in "ASP.NET Tuning" in this chapter.
The maxconnection attribute in Machine.config limits the number of concurrent outbound calls.
Note This setting does not apply to local requests — requests that originate from ASP.NET applications on the same server as the Web service. The setting applies to outbound connections from the current computer, for example to ASP.NET applications and Web services calling other remote Web services.
The default setting for maxconnection is 2 per connection group. For desktop applications that call Web services, two connections may be sufficient. For ASP.NET applications that call Web services, two is generally not enough. As a starting point, change the maxconnection attribute from the default of 2 to 12 times the number of CPUs.
<connectionManagement>
  <add address="*" maxconnection="12" />
</connectionManagement>
Note that 12 connections x #CPUs is an arbitrary number, but empirical evidence has shown that it is optimal for a variety of scenarios, when you also limit ASP.NET to 12 concurrent requests. You need to validate the appropriate number of connections for your scenario.
Increasing the maxconnection attribute results in increased thread pool and processor utilization. With the increase in the maxconnection value, a higher number of I/O threads will be available to make outbound concurrent calls to the Web service. As a result, incoming HTTP requests are processed at a faster pace.
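For example, on a four-processor server, the 12-per-CPU starting point works out to 12 x 4 = 48. The value below is illustrative and should be validated under load, not treated as a recommendation:

```xml
<connectionManagement>
  <!-- 12 connections per CPU x 4 CPUs -->
  <add address="*" maxconnection="48" />
</connectionManagement>
```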
Before Making the Change
You should consider increasing the number of connections only if you have spare CPU capacity. Always check processor utilization before increasing this attribute, because more concurrent outbound calls mean more work for the processor. Increasing this attribute therefore makes sense only when processor utilization is below your threshold limits (for example, 75 percent).
For more information, see "ASP.NET Tuning" in this chapter.
Evaluating the Change
Changing the attribute may require multiple tuning iterations and involves tradeoffs with respect to thread pool utilization. Changes to the maxconnection attribute may therefore require changes to other thread pool-related configuration attributes, such as maxWorkerThreads and maxIoThreads.
When you load test your application after making the configuration changes, you should monitor CPU utilization and watch the ASP.NET Applications\Requests/sec and ASP.NET Applications\Requests in Application Queue performance counters. Requests in Application Queue should decrease, while Requests/sec and CPU utilization should increase.
Prioritize and Allocate Connections Across Discrete Web Services
Enumerate and prioritize the Web services you call. Allocate more connections to your critical Web services. You specify each Web service by using the address attribute as follows.
<connectionManagement>
  <add address="WebServiceA" maxconnection="8" />
  <add address="WebServiceB" maxconnection="4" />
</connectionManagement>
For example, if your application typically makes more requests to WebServiceA than to WebServiceB, you can dedicate more connections to it, as shown in the example.
Consider the responseDeadlockInterval Attribute
When making Web service calls from an ASP.NET application, if you increase both the proxy timeout and the executionTimeout to greater than 180 seconds, you should also consider increasing the responseDeadlockInterval attribute on the <processModel> element in Machine.config. The default value of this attribute is 180 seconds; if an executing request produces no response for 180 seconds, the ASP.NET worker process is recycled.
You should reconsider your design if it requires increasing these attributes to significantly higher values.
If You Upload Large Files, Configure maxRequestLength
The ASP.NET runtime settings prevent you from uploading files larger than 4 MB. To change this default, you need to modify the maxRequestLength parameter in the <httpRuntime> section to the value that you require.
For more information, see the following resources:
- "Bulk Data Transfer" in Chapter 10, "Improving Web Services Performance"
- Knowledge Base article 295626, "PRB: Cannot Upload Large Files When You Use the HtmlInputFile Server Control," at http://support.microsoft.com/default.aspx?scid=kb;en-us;295626
The main configuration setting you tune in remoting is the lease time. To determine appropriate lifetime timeouts for your application, you need to strike a balance between resource utilization on the server and the performance implications of frequently destroying and recreating objects. Increasing an object's lifetime increases your server's memory and resource utilization, whereas decreasing the lifetime can lead to objects being destroyed too frequently and prematurely.
The general guidelines are as follows:
- Consider using a longer lease time for objects that are expensive to create.
- Consider shorter lease times for objects that consume lots of shared or important resources.
Consider Using a Longer Lease Time for Objects that Are Expensive to Create
If you use objects that are expensive to create, consider modifying the lease timeouts to allow the object to remain longer than the default 5 minute timeout. For example, if you use a singleton object that incurs an expensive startup process, consider changing the timeout to a longer, more appropriate period of time, or change the timeout to infinite.
Consider Shorter Lease Times for Objects that Consume Lots of Shared or Important Resources
If you create objects that consume shared or important resources, consider using a shorter lease timeout. Setting a timeout of less than 5 minutes will force the cleanup of resources to take place sooner, which can help avoid stranded resources and resource pressure.
Tuning the Lease Time
You can tune both the lease timeout and the "renew on call" time either programmatically or through configuration. The following configuration file shows the use of these settings.
<configuration>
  <system.runtime.remoting>
    <application>
      <lifetime leaseTimeout="1M"
                renewOnCallTime="30S"
                leaseManagerPollTime="2M" />
    </application>
  </system.runtime.remoting>
</configuration>
Note that this approach changes all remote objects published by the server.
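To see how leaseTimeout and renewOnCallTime interact: each call on the remote object renews the remaining lease to at least renewOnCallTime, but never shortens a lease that already has more time left. A simplified sketch of that rule (ignoring sponsors and the lease manager's polling granularity):

```python
def lease_after_call(remaining_seconds, renew_on_call_seconds):
    """Remaining lease time after a call arrives on the object."""
    return max(remaining_seconds, renew_on_call_seconds)

# With renewOnCallTime="30S": a lease down to 10s is topped up to 30s,
# but a lease with 45s left is untouched.
print(lease_after_call(10, 30))  # 30
print(lease_after_call(45, 30))  # 45
```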
The primary configurable option in ADO.NET is the connection string. You need to consider the identity used to connect, the number of connections that your application uses, the number of connection pools, and the timeout for attempts to connect to the database.
Configuring the connection string depends on where the connection string is stored. For security reasons, the connection string should be stored in encrypted format when possible.
The following performance counters help you identify ADO.NET bottlenecks.
Table 17.8: Performance Counters Used to Identify ADO.NET Bottlenecks
| Area | Counter |
| ---- | ------- |
| Connection Pooling | .NET CLR Data\SqlClient: Current # pooled connections |
| | .NET CLR Data\SqlClient: Peak # pooled connections |
| | .NET CLR Data\SqlClient: Total # failed connections |
For more information about how to measure these counters, their thresholds, and their significance, see "ADO.NET/Data Access" in Chapter 15, "Measuring .NET Application Performance."
The following list describes several common bottlenecks that occur in applications using ADO.NET and explains how to identify them by using performance counters.
Too Many Connections
Too many connections can be a result of poor coding practices or improper connection string settings. To identify this symptom, observe the following performance counters:
- In the SQLServer:General Statistics object, the Logins/sec and Logouts/sec counters should be close to zero and stay there; values that are consistently higher than zero indicate that the connection pool is not being used.
- In the SQLServer:General Statistics object, a User Connections value that keeps increasing without stabilizing over a period of time probably indicates a connection leak.
For more information, see "Monitoring Pooling" in Chapter 12, "Improving ADO.NET Performance."
Tuning ADO.NET data access mostly involves tuning the connection string used to connect to the database. You can use the connection string to set the database connection pool size. It is important to ensure that a consistent connection string is used for all connections. Any slight variation in the connection string causes a new pool to be used.
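To see why consistency matters, consider the two connection strings below. They are logically equivalent, but because the provider compares strings rather than parsed settings, each would get its own pool (the server and database names are illustrative):

```text
Data Source=MySqlServer;Initial Catalog=Northwind;Integrated Security=SSPI;
Initial Catalog=Northwind;Data Source=MySqlServer; Integrated Security=SSPI;
```

Defining the connection string once as a single constant or configuration value, and reusing it everywhere, avoids this accidental pool fragmentation.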
Within the same connection string is a set of attributes that may have an effect on performance. These attributes are optimized for most common scenarios with tradeoffs regarding performance, security, and network utilization. If you are considering changing the defaults, see "SqlConnection.ConnectionString Property" in the .NET Framework documentation at http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.connectionstring(VS.71).aspx.
Consider Tuning Your Pool Size If Needed
You can tune the connection pool size. For example, in the case of the .NET Framework Data Provider for SQL Server, the default minimum pool size is zero and the maximum size is 100. You might need to increase the minimum size to reduce warm-up time. You might need to increase the maximum size if your application needs more than 100 connections, though this scenario is rare.
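Pool sizes are set with the Min Pool Size and Max Pool Size connection string keywords. For example (the server, database, and pool values are illustrative):

```text
Data Source=MySqlServer;Initial Catalog=Northwind;Integrated Security=SSPI;Min Pool Size=5;Max Pool Size=100;
```

With Min Pool Size=5, the provider keeps at least five connections warm in the pool, which reduces the connection cost for the first requests after startup.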
For more information about measuring the health of the database connection pool, see Chapter 15, "Measuring .NET Application Performance."
In most cases SQL Server is self-tuning, so you do not need to change the default configuration settings. If you do make changes, test to ensure that the changes resolve the issue and help you meet your performance objectives.
The following performance counters help you identify SQL Server bottlenecks.
Table 17.9: Performance Counters Used to Identify SQL Server Bottlenecks
| Area | Counter |
| ---- | ------- |
| Indexes | SQLServer: Access Methods\Index Searches/sec |
| | SQLServer: Access Methods\Full Scans/sec |
| Cache | SQL Server: Cache Manager\Cache Hit Ratio |
| | SQL Server: Cache Manager\Cache Use Counts/sec |
| | SQL Server: Memory Manager\SQL Cache Memory (KB) |
| | Memory\Cache Faults/sec |
| Transactions | SQL Server: Databases\Transactions/sec |
| | SQL Server: Databases\Active Transactions |
| Locks | SQL Server: Locks\Lock Requests/sec |
| | SQL Server: Locks\Lock Timeouts/sec |
| | SQL Server: Locks\Lock Waits/sec |
| | SQL Server: Locks\Number of Deadlocks/sec |
| | SQL Server: Locks\Average Wait Time (ms) |
| | SQL Server: Latches\Average Latch Wait Time (ms) |
For more information about how to measure these counters, their thresholds, and their significance, see "ADO.NET/Data Access" in Chapter 15, "Measuring .NET Application Performance."
Out-of-date statistics can be a common bottleneck in SQL Server. If statistics used by the SQL query optimizer are out of date, this can lead to poor performance. To monitor these statistics, you can either use SET STATISTICS PROFILE ON in Query Analyzer or you can use the SQL Profiler and monitor the profiler event named Performance:Showplan Statistics (Event Class 98). A big difference between the estimated row count and the actual row count can indicate that the optimizer had outdated or skewed statistics. For more information, see "Compare Actual vs. Estimated Rows and Executions" in Chapter 14, "Improving SQL Server Performance."
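For example, in Query Analyzer you can wrap a suspect query as follows and then compare the EstimateRows and Rows columns in the resulting plan output (the table and column names are illustrative):

```sql
SET STATISTICS PROFILE ON;

SELECT OrderID, CustomerID
FROM Orders
WHERE OrderDate > '2003-01-01';

SET STATISTICS PROFILE OFF;
```

A large gap between the estimated and actual row counts for an operator is the signal, described above, that the optimizer is working from outdated or skewed statistics.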
In general, do not tune SQL Server configuration settings unless that is your only remaining option (meaning that you have already tuned your application and database design). Your tuning can work against the SQL Server self-tuning and can degrade performance. Make sure that you test to verify that you have fixed the problem.
To tune SQL Server, consider the following options:
- If there are other applications on the system, set SQL Server memory to a fixed amount.
- Update statistics.
- Choose hardware-level RAID rather than software RAID when you can.
If There Are Other Applications on the System, Set SQL Server Memory to a Fixed Amount
Assign SQL Server a fixed amount of memory only if your system is running applications other than SQL Server. Otherwise, you can assign SQL Server as much memory as possible. To assign SQL Server a fixed amount of memory, set the server's memory option to "Use a fixed memory size." The right value depends on your application and SQL Server load, so consider changing the value and testing the server under combined application and SQL Server load to find an optimal setting.
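You can make the same change from Transact-SQL with sp_configure. The sketch below fixes both memory bounds at 512 MB; the figure is illustrative, and 'max server memory' is an advanced option, so it must be made visible first:

```sql
-- Expose advanced options such as 'max server memory'.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Fix both bounds so SQL Server neither grows nor shrinks its memory use.
EXEC sp_configure 'min server memory', 512;  -- MB
EXEC sp_configure 'max server memory', 512;  -- MB
RECONFIGURE;
```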
Update Statistics
If statistics are out of date, the query optimizer may use your indexes inefficiently. You can update out-of-date statistics by using UPDATE STATISTICS WITH FULLSCAN.
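For example, to rebuild the statistics for a single table by scanning every row rather than sampling (the table name is illustrative):

```sql
-- Full scan gives the most accurate statistics, at the cost of more I/O.
UPDATE STATISTICS Orders WITH FULLSCAN;

-- Or refresh statistics for every table in the current database:
EXEC sp_updatestats;
```

Note that sp_updatestats uses the default sampling rather than a full scan, so reserve WITH FULLSCAN for the tables where estimate accuracy matters most.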
Choose Hardware-Level RAID Rather Than Software RAID When You Can
Choose hardware-level RAID rather than software RAID when you can. Software RAID takes CPU cycles away from SQL Server.
Choose RAID 0+1 (Striped Mirror) Where You Can
Use RAID 0+1 (also known as RAID 01 or striped mirror) where possible. RAID 5 (striping with parity) can be used in some circumstances, but its parity calculation imposes a write penalty, which makes it a poor fit for write-intensive SQL Server workloads.
Tuning IIS is the subject of entire books, and complete coverage is well beyond the scope of this chapter. However, this section does highlight several common problem areas and identifies several of the more important configuration settings. A number of resources at the end of this section provide additional detailed information.
- Know the tools. The IIS metabase and Windows registry contain settings that control how your IIS server works. You can modify these settings by using Adsutil and MetaEdit. For more information, see Knowledge Base article 240225, "Description of Adsutil and MetaEdit Utilities Used to Modify the Metabase," at http://support.microsoft.com/default.aspx?scid=kb;en-us;240225.
You can also use the IIS management console (Internet Services Manager) to configure session state and debugging settings (both of which should be disabled if not required). These settings are shown in Figure 17.17.
Figure 17.17: IIS console showing session state and debugging settings
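Adsutil is a VBScript utility (typically installed in the AdminScripts folder) that you run with cscript. The commands below are an illustrative sketch of querying and setting a metabase property; the property and value shown are examples, not recommendations from this guide:

```bat
cd %SystemDrive%\Inetpub\AdminScripts

REM Read the current ASP request queue limit for the Web service.
cscript adsutil.vbs GET w3svc/AspRequestQueueMax

REM Set a new value (illustrative only; test under load before deploying).
cscript adsutil.vbs SET w3svc/AspRequestQueueMax 1000
```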
- Know how to monitor. The System Monitor comes with a rich set of counters to monitor Web server performance. Figure 17.18 shows some of the Web Service counters.
**Note** Do not confuse Web Service with Web services. Web Service in this case refers to IIS.
Figure 17.18: System Counter Web Service counters
The following articles provide detailed information about various aspects of Web server tuning:
- Knowledge Base article 308186, "HOW TO: Optimize Web Server Performance in Windows 2000," at http://support.microsoft.com/default.aspx?scid=kb;en-us;308186
- Knowledge Base article 305313, "Optimizing Internet Information Services 5.0," at http://support.microsoft.com/default.aspx?scid=kb;en-us;305313
- "Tuning Internet Information Server Performance" at http://technet.microsoft.com/en-us/library/bb742459.aspx
- "Windows 2000 Performance Tuning" at http://technet.microsoft.com/en-us/library/bb742460.aspx
- "Deploying Windows 2000 with IIS 5.0 for Dot Coms: Best Practices" at http://technet.microsoft.com/en-us/library/bb742415.aspx
- "Setting Up a Reliable Web Server by Using Windows 2000" at http://technet.microsoft.com/en-us/library/bb742397.aspx
- "The Art and Science of Web Server Tuning with Internet Information Services 5.0" at http://technet.microsoft.com/en-us/library/bb742402.aspx
- "Performance Tuning IIS 6" in "Performance Tuning Guidelines for Windows Server 2003" at http://www.microsoft.com/windowsserver2003/evaluation/performance/tuning.mspx
Performance tuning refers to code optimization and application configuration. This chapter has focused on configuration and has shown you how to tune your system-, platform-, and application-level configuration parameters to improve performance. For each of the major technology areas addressed in this chapter, you have seen how to identify and then address bottlenecks.
Use the performance-tuning process in this chapter to help you to systematically identify and fix bottlenecks. Start by obtaining a set of baseline metrics. Then run tests to collect data, analyze that data to identify bottlenecks, and tune your configuration settings. Remember to apply only one set of configuration changes at a time. Be sure to retest and measure performance to validate the impact of your latest changes.
Performance tuning is an iterative process that continues until your application meets its performance objectives or you determine that your code or application design needs further optimization.
For more information about tuning performance, see the following resources in this guide:
- Chapter 15, "Measuring .NET Application Performance"
- Chapter 16, "Testing .NET Application Performance"
- "How To: Optimize SQL Indexes"
- "How To: Optimize SQL Queries"
For further reading, see the following resources:
- "ASP.NET Performance Monitoring, and When to Alert Administrators" by Thomas Marquardt, at http://msdn.microsoft.com/en-us/library/ms972959.aspx.
- Performance Testing Microsoft .NET Web Applications, Microsoft Press®.
- For a systematic, quantitative approach to performance tuning that helps you quickly find problems, identify potential solutions, and prioritize your efforts, see "Five Steps to Solving Software Performance Problems," by Lloyd G. Williams, Ph.D., and Connie U. Smith, Ph.D., at http://www.perfeng.com/papers/step5.pdf.
- For techniques and strategies for building a collaborative relationship between test and development around performance tuning, see "Part 11: Collaborative Tuning" in "Beyond Performance Testing" by Scott Barber, at http://www.perftestplus.com/resources/BPT11.pdf.
- For insight into bottleneck identification and analysis, see "Part 7: Identify the Critical Failure or Bottleneck" from "Beyond Performance Testing" by Scott Barber, at http://www.perftestplus.com/resources/BPT7.pdf.