Creating and Editing Load Tests

You can create a Web performance and load test project to add load tests to your solution. Load tests can contain both unit tests and Web performance tests. The main purpose of a load test is to simulate many users accessing a server at the same time, which gives you access to application stress and performance data. A load test can be configured to emulate various load conditions, such as different user loads and network types. You create a new load test by using the New Load Test Wizard, in which you specify the initial settings for your load test. Initial settings include a scenario, counter sets, and a run setting.

Requirements

  • Visual Studio Ultimate

Associated Topics

Create a new load test: You can use the New Load Test Wizard in Visual Studio Ultimate to create load tests for testing stress and performance on your application.

Edit an existing load test: After a load test has been created by using the New Load Test Wizard, you can modify and configure various settings and properties using the Load Test Editor.

Load testing with coded UI tests: You can create load tests that include coded UI tests as performance tests. This is useful under very specific circumstances, because coded UI tests enable you to capture performance at the UI layer.

Specify 64-bit processes for load testing: You can configure the test setting that you are using with your load test to specify that you want to use 64-bit processes.

Configuring Load Test Run Settings

Run settings are a set of properties that influence the way a load test runs. Run settings are organized by categories in the Properties window.

There are three types of load patterns: constant, step, and goal-based. To choose the load pattern that is appropriate for your load test, you must understand the advantages of each type. For more information, see Editing Load Patterns to Model Virtual User Activities.


A constant load pattern is useful when you want to run your load test with the same user load for a long period of time. If you specify a high user load with a constant load pattern, it is recommended that you also specify a warm-up time for the load test. When you specify a warm-up time, you avoid over stressing your site by having hundreds of new user sessions hitting the site at the same time.


A step load pattern is one of the most common and useful load patterns, because it lets you monitor the performance of your system as the user load increases. Monitoring your system as the user load increases lets you determine the number of users who can be supported with acceptable response times. Conversely, it also lets you determine the number of users at which performance becomes unacceptable.

If each step adds a large number of users, for example, more than 50 users, consider using the Step Ramp Time property to stagger the start of the users in the step. For more information, see How to: Specify the Step Ramp Time Property for a Step Load Pattern.
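As an illustration of how a step pattern with a Step Ramp Time staggers new users, the sketch below computes the simulated user count at a point in time. The function name, parameters, and default values are hypothetical and chosen only to show the idea; this is not the actual load test engine implementation.

```python
def step_load_users(t, initial=10, step_users=50, step_dur=60,
                    ramp=20, max_users=200):
    """Illustrative user count at t seconds into a step load pattern
    whose new users per step are ramped in over `ramp` seconds."""
    step = int(t // step_dur)                   # step boundaries passed so far
    # Users fully added by completed steps; the current step is still ramping.
    users = initial + max(step - 1, 0) * step_users
    if step >= 1:
        into = t - step * step_dur              # seconds into the current step
        frac = min(into / ramp, 1.0) if ramp > 0 else 1.0
        users += step_users * frac              # stagger the new users' start
    return min(round(users), max_users)
```

With these hypothetical defaults, the load holds at 10 users for the first minute, then each minute ramps in 50 more users over 20 seconds until the 200-user ceiling is reached.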


A goal-based load pattern is similar to a step load pattern in that the user load is typically increasing over time. However, it lets you specify that the load should stop increasing when some performance counter reaches a certain level. For example, you can use a goal-based load pattern to continue increasing the load until one of your target servers is 75% busy and then keep the load steady.
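The feedback loop behind a goal-based pattern can be sketched as a single control step: keep adding users while the monitored counter is below the target, then hold steady. The names and increments here are illustrative assumptions, not the engine's actual algorithm.

```python
def adjust_user_load(current_users, cpu_percent, target=75.0,
                     increment=10, max_users=1000):
    """One control step of a simplified goal-based pattern: increase the
    user load until the monitored counter (here, server CPU %) reaches
    the target, then keep the load steady."""
    if cpu_percent < target and current_users + increment <= max_users:
        return current_users + increment
    return current_users
```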

If no predefined load pattern meets your needs, it is also possible to implement a custom load test plug-in that controls the user load as the load test runs. For more information, see Creating and Using Custom Plug-ins for Load and Web Performance Tests.

Load Test Run Settings support different options for modeling user connections to the Web server by using the Web Test Connection Model property. There are three types of connection model: connection per user, connection pool, and connection per test iteration. To choose the connection model that is appropriate for your load test, you must understand the advantages of each type.

Connection Per User

The connection per user model most closely simulates the behavior of a real browser. Each virtual user who is running a Web performance test uses up to six connections to each Web server; those connections are kept open and dedicated to that virtual user. The first connection is established when the first request in the Web performance test is issued. Additional connections may be used when a page contains more than one dependent request, and these requests may be issued in parallel over the additional connections. Older browsers use up to two connections per Web server, whereas Firefox 3 and Internet Explorer 8 use up to six. The same connections are reused for the virtual user throughout the load test.

The disadvantage of the connection per user model is resource consumption on the agent computer: the number of connections held open can be as high as six times the user load, or even higher if multiple Web servers are targeted. The resources required to support this high connection count might limit the user load that can be driven from a single load test agent.

Connection Pool

The connection pool model conserves the resources on the load test agent by sharing connections to the Web server among multiple virtual Web performance test users. In the connection pool model, the connection pool size specifies the maximum number of connections to make between the load test agent and the Web server. If the user load is larger than the connection pool size, then Web performance tests that are running on behalf of different virtual users will share a connection. This is the best model to use to drive the most load to the application tier.

Sharing a connection means that one Web performance test might have to wait before issuing a request when another Web performance test is using the connection. The average time that a Web performance test waits before submitting a request is tracked by the load test performance counter Avg. Connection Wait Time. This number should be less than the average response time for a page. If it is not, the connection pool size is probably too small.
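The sizing rule described above can be stated as a simple check. The function name is hypothetical; the inputs correspond to the Avg. Connection Wait Time load test counter and the average page response time.

```python
def pool_diagnosis(avg_connection_wait_s, avg_page_response_s):
    """Rule of thumb from the text: Avg. Connection Wait Time should stay
    below the average page response time; otherwise the connection pool
    size is probably too small."""
    if avg_connection_wait_s >= avg_page_response_s:
        return "increase connection pool size"
    return "pool size looks adequate"
```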

Connection Per Test Iteration

Connection per test iteration closes the connection after each test iteration, and opens a new connection on the next iteration.

This setting puts the most stress on network connection setup, because every test iteration must establish a new connection. Unless this behavior is required, it is recommended that you use one of the previous two models.

Choose an appropriate sample rate based on the length of your load test. A small sample rate, for example five seconds, collects more data for each performance counter than a large sample rate. Collecting a large amount of data over a long period of time can consume significant disk space and may even cause disk space errors. For long load tests, increase the sample rate to reduce the amount of data that you collect. The number of performance counters also affects how much data is collected: for the computers under test, reducing the number of counters reduces the amount of data that you collect.

To determine what sample rate will work best for your particular load test, you must experiment. The following table provides recommended sample rates that you can use to get started.

Load Test Duration    Recommended Sample Rate
< 1 Hour              5 seconds
1 - 8 Hours           15 seconds
8 - 24 Hours          30 seconds
> 24 Hours            60 seconds
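To see how the sample rate and counter count drive data volume, the sketch below encodes the recommendations from the table and estimates the number of counter samples collected (one value per counter per sample interval). The function names are illustrative, not part of any product API.

```python
def recommended_sample_rate(duration_hours):
    """Recommended sample rate in seconds, from the table above."""
    if duration_hours < 1:
        return 5
    if duration_hours <= 8:
        return 15
    if duration_hours <= 24:
        return 30
    return 60

def estimated_samples(duration_hours, n_counters, sample_rate_s):
    """Rough count of counter values collected during the run."""
    return int(duration_hours * 3600 / sample_rate_s) * n_counters
```

For example, a one-hour run sampling 100 counters every 5 seconds collects about 72,000 counter values; the same run sampled every 30 seconds collects only 12,000.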

The think time for Web performance test requests has a significant effect on the number of users who can be supported with reasonable response times. Increasing think times from 2 seconds to 10 seconds usually enables you to simulate roughly five times as many users. However, if your goal is to simulate real users, you should set think times based on how you expect users to behave on your Web site. Increasing the think time and the number of users together will not necessarily put additional stress on your Web server. If the Web site requires authentication, the authentication scheme used will also affect performance.

If you disable think times for a Web performance test, the load test can generate higher throughput in requests per second. If you disable think times, you should also reduce the number of users to a much smaller number than when think times are enabled. For example, if you disable think times and try to run 1,000 users, you are likely to overwhelm either the target server or the load test agent.
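The roughly five-fold scaling mentioned above follows from Little's Law: each virtual user issues about one request every (think time + response time) seconds, so the user count a server can sustain at a fixed requests-per-second capacity scales with that cycle time. The sketch below is a back-of-the-envelope estimate with hypothetical names, not a prediction of real server behavior.

```python
def users_supported(server_rps_capacity, think_time_s, avg_response_s):
    """Little's Law estimate of the virtual user count a server with the
    given requests/sec capacity can sustain at a given think time."""
    return int(server_rps_capacity * (think_time_s + avg_response_s))
```

With a capacity of 100 requests per second and negligible response time, 2-second think times support about 200 users while 10-second think times support about 1,000, the five-fold difference noted above.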

For more information, see Editing Think Times to Simulate Web Site Human Interaction Delays in Load Tests Scenarios.

One of the properties of a Web test request is response time goal. If you define response time goals for your Web performance test requests, when the Web performance test is run in a load test, the Load Test Analyzer will report the percentage of the Web performance tests for which the response time did not meet the goal. By default, there are no response time goals defined for Web requests.

In addition, if you use the Response Time Goal validation rule, pages that do not meet the response time goal will result in an error in the load test. If you enable logging on error, you can see what the virtual user was doing when the slow page occurred.

For more information, see How to: Set Page Response Time Goals in a Web Performance Test.

The run settings include a property named Timing Details Storage. If this property is enabled, the time that it takes to execute each individual test, transaction, and page during the load test will be stored in the load test results repository. This enables the Virtual Users Activity Chart in the Load Test Analyzer. It also allows 90th, 95th and 99th percentiles and standard deviation to be shown in the Load Test Analyzer in the Tests, Transactions, and Pages tables.

By default, the Timing Details Storage property is enabled to support the Virtual User Activity chart in the Details view of the load test result in the Load Test Analyzer.

You should consider disabling the Timing Details Storage property for large tests. There are two important reasons for doing this.

  • The amount of space that is required in the load test results repository to store the timing details data may be very large, especially for long load tests.

  • The time to store this data in the load test results repository at the end of the load test is long because this data is stored on the load test agents until the load test has finished executing.

If sufficient disk space is available in the load test results repository, you can enable Timing Details Storage to obtain the percentile data. You have two choices for enabling Timing Details Storage: StatisticsOnly and AllIndividualDetails. With either option, all the individual tests, pages, and transactions are timed, and percentile data is calculated from the individual timing data. If you choose StatisticsOnly, the individual timing data is deleted from the repository after the percentile data has been calculated, which reduces the amount of space required in the repository. However, if you want to process the timing detail data directly by using SQL tools, or to view virtual user details in the Virtual User Activity chart, choose AllIndividualDetails so that the timing detail data is saved in the repository.
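The percentile statistics described above are derived from the individual timing records. A minimal sketch of that computation, using the nearest-rank method (the actual analyzer may use a different interpolation), might look like this:

```python
import math

def percentiles(timings, levels=(90, 95, 99)):
    """Compute percentile response times from a list of individual
    timings, using the nearest-rank method."""
    ordered = sorted(timings)
    n = len(ordered)
    return {p: ordered[math.ceil(p / 100 * n) - 1] for p in levels}
```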

For more information, see Analyzing Load Test Virtual User Activity in the Details View of the Load Test Analyzer and How to: Configure Load Tests to Collect Full Details to Enable Virtual User Activity in Test Results.

Each scenario in a load test has a property named Percentage of New Users. This property affects the way the load test runtime engine simulates the caching that would be performed by a Web browser. The default value for Percentage of New Users is 0, which means that each virtual user keeps a virtual cache of dependent requests and a list of cookies between test iterations. The cache works like a browser cache, so subsequent requests for URLs that are already cached are not issued. This closely resembles the behavior of real Web browsers.

If Percentage of New Users is set to 100%, each user is effectively a "one time user" and never returns to the site. In this case, each Web performance test iteration that is run in a load test is treated like a first time user to the Web site, who has no content from the Web site in their browser cache from previous visits. Therefore, all requests in the Web performance test are downloaded. This includes all dependent requests, such as images.

Note: An exception is the case in which the same cacheable resource is requested multiple times in a Web performance test.

Use the default value of 0 percent new users to drive the most load to the application tier of your Web site. This value closely resembles real users and drives more load to your application tier, where most performance problems occur. For more information, see How to: Specify the Percentage of Virtual Users that Use Web Cache Data.
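The effect of Percentage of New Users on download behavior can be sketched as follows. This is a conceptual illustration with hypothetical names, not the engine's actual cache logic (for example, it ignores cache-control headers).

```python
import random

def dependent_requests_issued(percentage_new_users, n_dependents,
                              user_has_cache):
    """Sketch of how Percentage of New Users shapes the download pattern:
    a 'new' user downloads every dependent request, like a first-time
    visitor; a returning user with a warm virtual cache skips them."""
    is_new_user = random.random() * 100 < percentage_new_users
    if is_new_user or not user_has_cache:
        return n_dependents          # full download, including images
    return 0                         # cached dependents are not re-requested
```

At the default of 0 percent, every iteration after the first reuses the virtual cache; at 100 percent, every iteration downloads all dependent requests.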

The ASP.NET profiler diagnostic data adapter lets you collect ASP.NET profiler data from the application tier while you run a load test. You should not run the profiler for long load tests, for example load tests running longer than one hour, because the profiler file can become very large (hundreds of megabytes). Instead, run shorter load tests with the ASP.NET profiler, which still gives you the benefit of deep diagnosis of performance problems.

For more information, see How to: Configure ASP.NET Profiler for Load Tests Using Test Settings in Visual Studio.

You can collect full test logs for failed tests, or at a specified frequency for completed tests. Logging is controlled by the Save Log on Test Failure, Save Log Frequency for Completed Tests, and Maximum Test Logs properties. The number of logs collected is controlled by the Maximum Test Logs and Save Log Frequency for Completed Tests settings, and the default settings prevent a large number of logs from being collected. For long-running tests that will generate millions of requests, do not use the Save Log Frequency for Completed Tests setting, because the number of logs would become too large. Also, keep the Maximum Test Logs property at a reasonable number; because it controls the maximum number of logs per error type, this prevents collecting tens of thousands of logs. Collecting too many logs increases the time needed at the end of the test run to gather them and takes up storage space in the load test database.

For more information, see Modifying Load Test Logging Settings.

The run settings include a property named SQL Tracing Enabled. This property lets you enable the tracing feature of Microsoft SQL Server for the duration of a load test. This is an alternative to starting a separate SQL Profiler session while the load test is running to diagnose SQL performance problems. If the property is enabled, SQL trace data is displayed in the Load Test Analyzer. You can view it on the Tables page in the SQL Trace table.

To enable this feature, the user who is running the load test must have the SQL privileges required to perform SQL tracing. When a load test is running on a remote machine, using a test agent and test controller, the controller user must have the SQL privileges. You must also specify a directory where the trace data file will be written. This directory is usually a network share. At the completion of the load test, the trace data file is imported into the load test repository and associated with the load test. The trace data file can be viewed later using the Load Test Analyzer.

For more information, see Configuring Load Test Run Settings and Collecting SQL Trace Data to Monitor and Improve Performance in Load Tests.

If an agent computer has more than 75% CPU utilization, or has less than 10% of physical memory available, it is overloaded. Add more agents to your test controller to ensure that the agent computer does not become the bottleneck in your load test.
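The overload thresholds above can be expressed as a simple check (the function name is illustrative; in practice you would read these values from the agent's CPU and memory performance counters):

```python
def agent_overloaded(cpu_percent, available_memory_fraction):
    """Thresholds from the text: an agent above 75% CPU utilization or
    below 10% available physical memory is considered overloaded."""
    return cpu_percent > 75 or available_memory_fraction < 0.10
```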

For more information, see Distributing Load Test Runs Across Multiple Test Machines Using Test Controllers and Test Agents and How to: Specify Test Agents to Use in Load Test Scenarios.

© 2015 Microsoft