Testing for Performance

Visual Studio .NET 2003

After you have identified specific performance requirements, you can begin testing to determine whether the application meets those requirements. Performance testing presumes that the application is functioning, stable, and robust. As such, it is important to eliminate as many variables as possible from the tests. For example, bugs in the code can create the appearance of a performance problem or even mask one. To compare the results of different performance test passes accurately, the application must be working correctly. It is especially important to retest application functionality if the tuning process has modified the implementation of a component; the application must pass its functional tests before you test its performance. Beyond changes to the application itself, unexpected changes can occur in hardware, network traffic, software configuration, system services, and so on. It is important to control changes to both the application and its test environment.

This topic contains the following sections:

  • Measuring Performance
  • Defining Performance Tests
  • Determining Baseline Performance
  • Stress Testing
  • Solving Performance Problems

Measuring Performance

To correctly tune performance, you must maintain accurate and complete records of each test pass. Records should include:

  • The exact system configuration, especially changes from previous test passes
  • Both the raw data and the calculated results from performance monitoring tools

These records not only indicate whether the application meets performance goals, but they also help identify potential causes of future performance problems.
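
One way to keep such records consistent is to have the harness write them out at the end of every pass. The following is a minimal sketch in Python; the field names, sample values, and file name are illustrative, not part of this topic:

    import datetime
    import json
    import platform

    def record_test_pass(config, raw_samples, results, path):
        """Persist everything needed to reproduce and compare a test pass."""
        record = {
            "timestamp": datetime.datetime.now().isoformat(),
            "system": {                      # exact system configuration
                "machine": platform.node(),
                "os": platform.platform(),
                "processor": platform.processor(),
            },
            "config": config,                # note changes from previous passes here
            "raw_samples": raw_samples,      # unprocessed monitoring data
            "results": results,              # calculated summary values
        }
        with open(path, "w") as f:
            json.dump(record, f, indent=2)

    record_test_pass(
        config={"clients": 50, "think_time_s": 2.0},
        raw_samples=[112, 98, 105],          # e.g., response times in milliseconds
        results={"mean_ms": 105.0},
        path="testpass_001.json",
    )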

During each test pass, run exactly the same set of performance tests; otherwise, it is not possible to discern whether different results are due to changes in the tests rather than to changes in the application. Automating as much of the performance test set as possible helps eliminate operator differences.

Other seemingly benign factors impact the results of performance tests, such as how long the application runs before the test begins. Just as a cold automobile engine performs differently than a warm one, a long-running application may perform differently from a newly launched one due to factors such as memory fragmentation.
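
One way to control for this is to drive the application through a warm-up period and discard those measurements before recording results. A minimal sketch, in which operation stands in for one request against the application under test and the durations are arbitrary examples:

    import time

    def measure(operation, warmup_s=30.0, measure_s=60.0):
        """Call operation repeatedly; discard samples taken during warm-up."""
        samples = []
        start = time.monotonic()
        while time.monotonic() - start < warmup_s + measure_s:
            t0 = time.monotonic()
            operation()                      # one request against the application
            elapsed = time.monotonic() - t0
            if time.monotonic() - start >= warmup_s:
                samples.append(elapsed)      # keep only post-warm-up samples
        return samples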

Defining Performance Tests

During performance testing, measure and record values for the metrics specified in the performance goals, while honoring the specified test conditions, such as think time, transaction mix, and so on. Within these constraints, testing should be as realistic as possible. For example, test the application to determine how it performs when many clients are accessing it simultaneously. A multi-threaded test application can simulate multiple clients in a reproducible manner; each thread represents one client. If the application accesses a database, the database should contain a realistic number of records, and the test should use random (but valid) values for data entry. If the test database is too small, the effects of caching in the database server will yield unrealistic test results. The results might also be unrealistic if data is entered or accessed in unrealistic ways. For example, it is unlikely that new data would be created in alphabetical order on the primary key.
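
A minimal sketch of such a harness follows: one thread per simulated client, each submitting random but valid values. The submit_order callable and the value ranges are placeholders for whatever transaction the application under test actually exposes:

    import random
    import threading

    def client(client_id, transactions, submit_order, results, lock):
        """One simulated client; each thread runs this function once."""
        for _ in range(transactions):
            # Random but valid input drawn from realistic ranges, so that
            # database caching behaves as it would in production.
            customer_id = random.randint(1, 1_000_000)
            amount = round(random.uniform(1.00, 500.00), 2)
            latency = submit_order(customer_id, amount)
            with lock:
                results.append((client_id, latency))

    def run_clients(submit_order, num_clients=50, transactions=100):
        results, lock = [], threading.Lock()
        threads = [
            threading.Thread(target=client,
                             args=(i, transactions, submit_order, results, lock))
            for i in range(num_clients)
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results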

Usually, test harnesses must accept user-specified input parameters, such as the transaction mix, think time, number of clients, and so on. However, the test harness itself may dictate the rules for creating realistic random data.
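
For example, a command-line harness might accept those parameters as follows; the option names and defaults here are illustrative:

    import argparse

    parser = argparse.ArgumentParser(description="performance test harness")
    parser.add_argument("--clients", type=int, default=50,
                        help="number of simulated clients (one thread each)")
    parser.add_argument("--think-time", type=float, default=2.0,
                        help="seconds each client pauses between transactions")
    parser.add_argument("--mix", default="read=80,write=20",
                        help="transaction mix as name=percent pairs")
    args = parser.parse_args()

    # Parse the mix into {"read": 80, "write": 20}; the harness itself still
    # owns the rules for generating realistic random data per transaction.
    mix = {name: int(pct) for name, pct in
           (item.split("=") for item in args.mix.split(","))}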

After creating a test harness to drive the application, you should document all invariant conditions for running the tests. At the very least, these conditions should include the input parameters required to run the test harness. In addition, you should document how to set up a database for running the test. The instructions should specify that the database must not contain changes made by a previous test pass, and should also specify the computer configurations used for the test. Run the test harness on a separate computer from the application, because this setup more closely approximates a production environment.
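
One way to keep these conditions from drifting is to record them in a small, version-controlled file that the harness checks at startup. A sketch, in which the machine names and backup file name are placeholders:

    import platform

    # Invariant conditions for this test suite; version alongside the harness.
    INVARIANTS = {
        "harness_parameters": {"clients": 50, "think_time_s": 2.0},
        "database_restore": "perftest_baseline.bak",  # restore before every pass
        "harness_machine": "TESTCLIENT01",            # separate from the app server
        "application_machine": "TESTSRV01",
    }

    def check_invariants():
        """Refuse to run if the documented setup conditions are not met."""
        if platform.node().upper() != INVARIANTS["harness_machine"]:
            raise RuntimeError("run the harness on its own machine, not the server")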

Determining Baseline Performance

After defining performance goals and developing performance tests, run the tests once to establish a baseline. The more closely the certification environment resembles the production environment, the greater the likelihood that the application will perform acceptably after deployment. Therefore, it is important to have a realistic certification environment at the outset.

With luck, the baseline performance will meet performance goals, and the application will not need any tuning. More likely, the baseline performance will not be satisfactory. However, documenting the initial test environment and the baseline results provides a solid foundation for tuning efforts.
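
Once the baseline is on disk, every later test pass can be compared against it mechanically. A sketch that reuses the illustrative record format from the Measuring Performance section above:

    import json

    def compare_to_baseline(baseline_path, current_path):
        """Report the change in mean response time against the recorded baseline."""
        with open(baseline_path) as f:
            baseline = json.load(f)
        with open(current_path) as f:
            current = json.load(f)
        base = baseline["results"]["mean_ms"]
        cur = current["results"]["mean_ms"]
        change = (cur - base) / base * 100
        print(f"mean response time: {base:.1f} ms -> {cur:.1f} ms ({change:+.1f}%)")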

Stress Testing

Stress testing, a specialized form of performance testing, is similar to destructive testing in other fields of engineering. The goal of stress testing is to crash the application: the processing load is pushed past the point of performance degradation until the application begins to fail through resource saturation or errors. Stress testing helps to reveal subtle bugs that would otherwise go undetected until the application was deployed. Because such bugs are typically the result of design flaws, stress testing should begin early in the development phase on each area of the application. Fix these subtle bugs at their source rather than treating the symptomatic failures that would otherwise appear elsewhere in the application.
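
A stress run can reuse the load harness and simply ramp the client count until failures appear. In this sketch, run_pass is a placeholder for a function that drives one load level and returns the fraction of failed transactions; the step size and failure threshold are arbitrary examples:

    def stress(run_pass, start_clients=10, step=10, max_error_rate=0.05):
        """Raise the load step by step until the application starts to fail."""
        clients = start_clients
        while True:
            error_rate = run_pass(clients)
            print(f"{clients} clients -> {error_rate:.1%} errors")
            if error_rate > max_error_rate:
                return clients           # load level at which failures began
            clients += step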

Solving Performance Problems

Performance problems can often be attributed to more than one factor, so finding a solution for poor performance is much like conducting a scientific experiment. Scientific experimentation traditionally follows a six-step process: observation, preliminary hypothesis, prediction, tests, controls, and a theory. The theory consists of a hypothesis supported by the best collection of evidence accumulated during the process. You can solve performance problems by following the same process.

Suppose you observe less than desirable performance in an ASP application and hypothesize that the ASPProcessorThreadMax metabase property is set too low. This can be the case when the ASP Requests Queued performance counter fluctuates up and down while the processor(s) are running below 50 percent utilization. You predict that increasing the value of the ASPProcessorThreadMax metabase property will improve performance.

The thread setting has now become the experimental control. Make only one change at a time, and retest after each change, until you observe an acceptable change in performance. If more satisfactory performance is achieved after several adjustments to the ASPProcessorThreadMax metabase property, the resulting theory is that a particular property setting provides the best server performance in combination with all current variables (amount of total required memory, number of applications being run, upgraded software, and so on). Any change in those variables then constitutes further experimentation.
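
The same one-change-at-a-time discipline can be scripted. In this sketch, apply_setting and run_test_pass are placeholders for tooling that changes the property and reruns the identical test set:

    def tune_one_variable(values, apply_setting, run_test_pass):
        """Vary a single setting while everything else stays fixed."""
        results = {}
        for value in values:
            apply_setting(value)              # the only change between passes
            results[value] = run_test_pass()  # e.g., requests per second
            print(f"setting={value}: {results[value]}")
        return max(results, key=results.get)  # setting with the best measurement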

See Also

Performance | Logging Application, Server, and Security Events | Monitoring Performance Thresholds | Building, Debugging, and Testing | Performance and Scalability Testing | Understanding Performance Testing
