
Performance Testing for Application Blocks

Retired Content

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

 

patterns & practices Developer Center

Microsoft Corporation

January 2005

Summary: Covers planning steps for performance testing, including defining metrics to collect, scenarios to cover, and data analysis.

Contents

Objectives

Overview

Performance Objectives

Load Testing

Stress Testing

Tools

Summary

Objectives

  • Learn performance testing fundamentals.
  • Learn about load testing an application block.
  • Learn about stress testing an application block.
  • Learn about the tools that are used for performance testing.

Overview

Performance testing an application block involves subjecting it to various load levels. The goals of performance testing can be summarized as follows:

  • To verify that the application block (or a prototype) meets the performance objectives within the budgeted constraints of resource utilization. The performance objectives can include several different parameters such as the time it takes to complete a particular usage scenario (known as response time) or the number of concurrent or simultaneous requests that can be supported for a particular operation at a given response time. The resource constraints can be set with respect to server resources such as processor utilization, memory, disk I/O, and network I/O.
  • To analyze the behavior of the application block at various load levels. The behavior is measured in metrics related to performance objectives and other metrics that help to identify the bottlenecks in the application block.
  • To identify the bottlenecks in the application block. The bottlenecks can be caused by several issues such as memory leaks, slow response times, or contention under load.

Performance testing for an application block can be broadly categorized into two types:

  • Load testing. Load testing helps you to monitor and analyze behavior of an application block under normal and peak load conditions. Load testing enables you to verify that the application block meets the desired performance objectives.
  • Stress testing. Stress testing helps to analyze behavior of an application that integrates the application block when it is pushed beyond the peak load conditions. The goal of stress testing is to identify problems that occur only under high load conditions.

You can conduct performance testing during various phases of the development life cycle:

  • Design phase. During this phase of the life cycle, you can conduct performance testing on a prototype to evaluate whether a particular design would meet the targeted performance objectives.
  • Implementation/construction phase. During this phase of the life cycle, you can conduct performance testing to validate that the implementation of the modules meets the performance objectives.
  • Integration testing phase. During this phase of the life cycle, you can conduct performance testing to ensure that the application that is integrating the application block can meet its own performance objectives.

You can either directly load the API for the application block by using a load generator or develop a prototype application that integrates the application block. Either approach is valid if the overhead of a prototype application is minimal. If you decide on the prototype application approach, you should make sure that the prototype application does not perform any expensive rendering operations or other actions that are not relevant to testing the application blocks.

The Configuration Management Application Block (CMAB) is used to illustrate concepts in this chapter. The requirements for the CMAB are the following:

  • It provides the functionality to read and store configuration information transparently in a persistent storage medium. The storage media are the Microsoft® SQL Server™ database system, the registry, and XML files.
  • It provides a configurable option to store the information either in encrypted form or in plain text, using XML notation.
  • It can be used with desktop applications and with Web applications that are deployed in a Web farm.
  • It caches configuration information in memory to reduce cross-process communication such as reading from any persistent medium. This caching reduces the response time of the request for configuration information. The expiration and scavenging mechanism for the data that is cached in memory is similar to the CRON algorithm in UNIX.

The CMAB can store and return data from various locales or cultures without losing any data integrity.

In the case of the CMAB, the performance objectives are as follows (please note that these objectives are fictitious and are for illustration purposes only):

  • The CPU overhead should not be more than 7 to 10 percent.
  • The application block should be able to support a minimum of 200 concurrent users for the reading of data from SQL Server.
  • The application block should be able to support a minimum of 150 concurrent users for the writing of data to SQL Server.
  • The response time for a client is not more than 2 seconds for the given concurrent load. (The client is firing the request from a 100 megabits per second [Mbps] VLAN in the test lab.)

For more information about performance testing fundamentals, see "Chapter 16—Testing .NET Application Performance" of Improving .NET Application Performance and Scalability on MSDN at:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt16.asp.

Performance Objectives

Performance objectives are captured in the requirements phase and early design phase of the application life cycle. All performance objectives, resource budget data, key usage scenarios, and so on, are captured as a part of the performance modeling process. The performance modeling artifact serves as important input to the performance testing process. In fact, performance testing is a part of the performance modeling process; you may update the model depending on the application life cycle phase in which you are executing the performance tests.

The performance objectives may include some or all of the following:

  • Workload. If the application block is to be integrated with a server-based application, it will be subject to a certain load of concurrent and simultaneous users. The requirements may explicitly specify the number of concurrent users that should be supported by the application block for a particular operation. For example, the requirements for an application block may be 200 concurrent users for one usage scenario and 300 concurrent users for another usage scenario.
  • Response time. If the application block is to be integrated with a server-based application, the response time objective is the time it takes to respond to a request for the peak targeted workload on the server. The response time can be measured in terms of Time to First Byte (TTFB) and Time to Last Byte (TTLB). The response time depends on the load that is on the server and the network bandwidth over which the client makes a request to the server. The response time is specified for different usage scenarios of the application block. For example, a write feature may have a response time of less than 4 seconds; whereas a read scenario may have a response time of less than 2 seconds for the peak load scenario.
  • Throughput. Throughput is the number of requests that can be served by the application per unit time. A simple application that integrates the application block is supposed to process requests for the targeted workload within the response time goal. This goal can be translated as the number of requests that should be processed per unit time. For an ASP.NET Web application, you can measure this value by monitoring the ASP.NET\Request/sec performance counter. You can measure the throughput in other units that help you to effectively monitor the performance of the application block; for example, you can measure read operations per second and write operations per second.
  • Resource utilization budget. The resource utilization cost is measured in terms of server resources, such as CPU, memory, disk I/O, and network I/O. The resource utilization budget is the amount of resources consumed by the application block at peak load levels. For example, the processor overhead of the application block should not be more than 10 percent.
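As an illustration, the objectives above can be captured in a small data structure and checked mechanically after a test run. The following Python sketch uses invented field names and the fictitious CMAB numbers from this chapter; it is not part of any shipped tooling.

```python
from dataclasses import dataclass

@dataclass
class PerformanceObjectives:
    """Illustrative container for the four kinds of objectives above."""
    concurrent_users: int        # workload: peak concurrent users to support
    max_response_time_s: float   # response time at the peak targeted workload
    min_requests_per_sec: float  # throughput goal
    max_cpu_percent: float       # resource utilization budget

def meets_objectives(obj, users, response_time_s, requests_per_sec, cpu_percent):
    """Return True when every measured value is within its objective."""
    return (users >= obj.concurrent_users
            and response_time_s <= obj.max_response_time_s
            and requests_per_sec >= obj.min_requests_per_sec
            and cpu_percent <= obj.max_cpu_percent)

# The fictitious CMAB read-from-SQL objectives from this chapter:
cmab_read = PerformanceObjectives(200, 2.0, 200.0, 10.0)
print(meets_objectives(cmab_read, 200, 1.8, 210.0, 8.5))  # True
```

A run that misses any single objective, for example a 2.5-second response time, would fail the same check.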

For more information about performance modeling, see "Chapter 2—Performance Modeling" of Improving .NET Application Performance and Scalability on MSDN at:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt02.asp.

Load Testing

Load testing analyzes the behavior of the application block with workload varying from normal to peak load conditions. This allows you to verify that the application block is meeting the desired performance objectives.

Input

The following input is required to load test an application block:

  • Performance model (workload characteristics, performance objectives, and resource budget allocations)
  • Test plans

Load Testing Steps

Load testing involves six steps:

  1. Identify key scenarios. Identify performance-critical scenarios for the application block.
  2. Identify workload. Distribute the total load among the various usage scenarios identified in Step 1.
  3. Identify metrics. Identify the metrics to be collected when executing load tests.
  4. Create test cases. Create the test cases for load testing of the scenarios identified in Step 1.
  5. Simulate load. Use the load-generating tools to simulate the load for each test case, and use the performance monitoring tools (and in some cases, the profilers) to capture the metrics.
  6. Analyze the results. Analyze the captured data, using the performance objectives as the benchmark. The analysis also identifies potential bottlenecks.

The next sections describe each of these steps.

Step 1: Identify Key Scenarios

Generally, you should start by identifying scenarios that can have a significant performance impact or that have explicit performance goals. In the case of application blocks, you should prepare a prioritized list of usage scenarios, and all of these scenarios should be tested.

In the case of the CMAB, the two major functionalities are reading and writing configuration data. These functionalities expand into many more scenarios based on the configuration options, such as whether caching is enabled or disabled, which data store is used, and which encryption provider is used. Therefore, the load-testing scenarios for the CMAB are the combinations of all the configuration options. The following are some of the scenarios for the CMAB:

  • Read a declared configuration section from a file store with caching disabled and data encryption enabled.
  • Write configuration data to a file store with encryption enabled.
  • Read configuration data from a SQL store with caching and data encryption enabled.
  • Write configuration data to a SQL store with data encryption enabled.
  • Initialize the Configuration Manager for the first time when the Configuration Manager is performing user operations.

For the CMAB, the probability of performance degradation is highest when data must be written to a file store, because concurrent write operations are not supported on a file and the response time is expected to be greater in this case.

Step 2: Identify Workload

In this step, you identify the workload for each scenario or distribute the total workload among the scenarios. Workload allocation involves specifying the number of concurrent users that are involved in a particular scenario, the rate of requests, and the pattern of requests. You may have a workload defined for each usage scenario in terms of concurrent users (that is, all users firing requests at a given instant without any sleep time between requests). For example, the CMAB has a targeted workload of 200 concurrent users for a read operation on the SQL store with caching disabled and encryption enabled.

In most real-world scenarios, the application block may be performing parallel execution of multiple operations from different scenarios. You may therefore want to analyze how the application block performs with a particular workload profile that is a mix of various scenarios for a given load of simultaneous users (that is, all users have active connections, and all of them may not be firing requests at same time), with two consecutive requests separated by specific think time (that is, the time spent by the user between the two consecutive requests).

Workload characteristics are determined by using workload modeling techniques. For more information about workload modeling, see "Chapter 16—Testing .NET Application Performance" of Improving .NET Application Performance and Scalability on MSDN at:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt16.asp.

The following steps help you to identify the workload profile:

  • Identify the maximum number of simultaneous users accessing each of the usage scenarios in isolation. This number is based on the performance objectives identified during performance modeling. For example, in the case of the CMAB, the expected load of users is 1,200 simultaneous users.
  • Identify the expected mix of usage scenarios. In most real-world server–based applications, a mix of application block usage scenarios might be accessed by various users. You should identify each mix by naming each one as a unique profile. Identify the number of simultaneous users for each scenario and the pattern in which the users have to be distributed across test scenarios. Distribute the workload based on the requirements that the application blocks have been designed for. For example, the CMAB is optimized for situations where read operations outnumber write operations; it is not meant for online transaction processing (OLTP)–style applications. Therefore, the workload has to be distributed so only a small part of the workload is allocated for write operations. Group users together into user profiles based on the key scenarios they participate in.

    For example, in the case of the CMAB, for any given database store, there will be a read profile and a write profile. Each profile has its respective use case as the dominant one. A sample composition of a read profile for a SQL store is shown in Table 8.1. The table assumes that out of a total workload of 1,000 simultaneous users, 600 users are using the SQL store.

    Table 8.1: CMAB Read Profile for SQL Server Database Store

    Read profile for SQL Server    Percentage of the workload for SQL store    Simultaneous users
    Reading from a SQL store       90                                          540
    Writing to a SQL store         10                                          60
    Total                          100                                         600

In this way, you will have the read profiles for the registry and XML file data stores. Assuming that each of these gets a share of 200 users out of the total workload, the workload profile for the CMAB is as shown in Table 8.2.

Table 8.2: Sample Workload Profile for the CMAB


User profile               Percentage of the total workload    Simultaneous users
Read profile—SQL Server    60                                  600
Read profile—registry      20                                  200
Read profile—XML file      20                                  200
Total                      100                                 1000

In addition to helping you simulate real-world scenarios, testing for combinations like those shown in the tables helps you to identify any possible negative impact on performance because of any contention. In the CMAB example, when a configuration is being read and written into a data store at the same time, it is locked for writing, and the read operation has to wait for the locks to be released.

  • Identify the average think time. Think time is the time spent by the user between two consecutive requests. This value may be as low as zero, a fixed value, or a random value in a range of numbers. In the case of the CMAB, the think time is a random think time of zero to 3 seconds.
  • Identify the duration of test for each of the profiles identified above. You need to run load tests for the user profiles and each isolated scenario identified earlier. The duration of test depends on the end goal of running the performance tests and can vary from 30 minutes to more than 100 hours. If you are interested in learning whether the usage scenario meets the performance objectives, a quick test of 20 minutes will suffice. However, if you are interested in analyzing the behavior of a write scenario over a sustained load for long hours when the database log file size tends to grow, you might want to run a long test lasting 4 to 5 days.
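As a sketch, the percentage-based split shown in Table 8.2 and the random think time can be computed as follows. The scenario names and helper functions are invented for the example; a load tool such as ACT would normally handle both.

```python
import random

def distribute_workload(total_users, profile_percentages):
    """Split a total simultaneous-user load across user profiles by percentage."""
    if sum(profile_percentages.values()) != 100:
        raise ValueError("profile percentages must sum to 100")
    return {name: total_users * pct // 100
            for name, pct in profile_percentages.items()}

def think_time():
    """Random think time of zero to 3 seconds, as used for the CMAB tests."""
    return random.uniform(0.0, 3.0)

# The sample CMAB workload profile from Table 8.2:
profile = distribute_workload(1000, {"read profile - SQL Server": 60,
                                     "read profile - registry": 20,
                                     "read profile - XML file": 20})
print(profile["read profile - SQL Server"])  # 600
```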

Step 3: Identify Metrics

Identify metrics that are relevant to your performance objectives and those that help you to identify bottlenecks. As the number of test iterations increases, more metrics can be added based on the analysis in previous iterations to identify potential bottlenecks.

Regardless of the application block, there are metrics that should be captured during load testing. These metrics are listed in Table 8.3.

Table 8.3: Metrics to Be Measured for All Test Cases

Object                  Counter                   Instance
Processor               % Processor Time          _Total
Process                 Private Bytes             <Process>
Memory                  Available MBytes          Not applicable
ASP.NET                 Request Execution Time    Not applicable
ASP.NET                 Requests Rejected         Not applicable
ASP.NET Applications    Requests/Sec              Your virtual directory

The metrics in Table 8.3 give a coarse-grained view of memory and CPU utilization and performance for any application block during the early iteration cycles of the load test. In subsequent iterations, you can add more metrics for a fine-grained picture of resource utilization and performance. For example, suppose that during the first iteration of load tests, the ASP.NET worker process shows a marked increase in the Process\Private Bytes counter, indicating a possible memory leak. In the subsequent iterations, additional memory counters related to generations can be captured to study the memory allocation pattern for the application. These counters are listed in Table 8.4.

Table 8.4: Performance Counters for Analyzing the Managed Memory

Object             Counter                 Instance
.NET CLR Memory    # Bytes in all Heaps    Not applicable
.NET CLR Memory    # Gen 0 Collections     Not applicable
.NET CLR Memory    # Gen 1 Collections     Not applicable
.NET CLR Memory    # Gen 2 Collections     Not applicable
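The memory-leak suspicion described above, a steady rise in Process\Private Bytes across the test run, can be checked mechanically on logged samples. The following is a crude illustrative heuristic for such a trend check, not a substitute for analyzing the generation counters:

```python
def leak_suspected(samples, min_growth_bytes=1024):
    """Crude heuristic: fit a least-squares line through periodic
    Process\\Private Bytes samples and flag a sustained upward slope
    (more than min_growth_bytes per sampling interval)."""
    n = len(samples)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope > min_growth_bytes

# Flat samples with jitter vs. samples growing roughly 500 KB per interval:
steady = [100_000_000 + d for d in (0, 5, -3, 2, -1, 4)]
leaking = [100_000_000 + i * 500_000 for i in range(6)]
print(leak_suspected(steady), leak_suspected(leaking))  # False True
```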

In addition to the metrics in Table 8.3, you should capture metrics specific to each application block's performance objectives and test scenario goals. This may require additional instrumentation of the code by adding custom performance counters. For example, in the case of the CMAB, you can monitor the counters listed in Table 8.5 while running tests for performing read or write operations in isolation on a data store.

Table 8.5: Performance Counters for a Read or Write Operation in Isolation

Objective         Counter                           Instance
Throughput        ASP.NET\Requests/Sec              Your virtual directory
Execution time    ASP.NET\Request Execution Time    Not applicable

However, if you run a mix of usage scenarios simultaneously, the out-of-the-box counters give you only the average values across all requests processed by the server. If you are interested in the number of read operations per second and write operations per second separately, you may need to add custom performance counters to the application block source code.
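A custom counter of this kind can be sketched as a thread-safe rate counter. In a real application block you would publish a Windows custom performance counter; the class below is only an illustrative stand-in:

```python
import threading
import time

class RateCounter:
    """Illustrative stand-in for a custom 'operations per second' counter."""
    def __init__(self):
        self._count = 0
        self._lock = threading.Lock()
        self._start = time.monotonic()

    def increment(self):
        with self._lock:  # safe when read and write scenarios run concurrently
            self._count += 1

    def total(self):
        return self._count

    def rate(self):
        """Operations per second since the counter was created."""
        elapsed = time.monotonic() - self._start
        return self._count / elapsed if elapsed > 0 else 0.0

# One counter per operation type gives separate read and write rates:
reads = RateCounter()
for _ in range(100):
    reads.increment()
print(reads.total())  # 100
```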

For a detailed list of performance counters and the scenarios, see "Chapter 15—Measuring .NET Application Performance" of Improving .NET Application Performance and Scalability on MSDN at:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt15.asp.

Step 4: Create Test Cases

The test cases are created based on the scenarios and the profile mix identified in the previous steps. In general, the inputs for creating the test cases are performance objectives, workload characteristics, and the identified metrics. Each test case should mention the expected results in such a way that each test case can be marked as a pass or fail after execution.

Test Case for the CMAB Sample Application Block

In the case of the CMAB, the test case would be the following:

  • Scenario: Reading configuration data from a SQL store with data caching and data protection options enabled
  • Number of users: 200 concurrent users
  • Test duration: 40 minutes
  • Think time: Random think time of 0 to 3 seconds
  • Expected results: The expected results are the following:
    • Throughput: 200 requests per second (ASP.NET\Requests/sec performance counter)
    • Processor\%Processor Time: 75 percent
    • Memory\Available MBytes: 25 percent of total RAM
    • Request execution time: 2 seconds (on 100 Mbps LAN)

Step 5: Simulate Load

The load is generated using load generator tools, such as Microsoft Application Center Test (ACT), that simulate the number of users as specified in the workload characteristics. For each test cycle, incrementally increase the load. You should continue to increase the load and record the behavior until the threshold crosses the limit for the resources identified in the performance objectives. The number of users can also be increased slightly beyond the peak operating levels. The metrics are captured using performance monitoring tools, such as ACT or System Monitor.
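The incremental load increase can be pictured as a simple stepping loop. In the sketch below, `measure` is a placeholder for one full load-generator run at a given user count, and the linear CPU model is entirely made up for the example:

```python
def step_load(start_users, step, cpu_limit_percent, measure):
    """Increase the simulated user count step by step until the measured
    CPU utilization crosses the budgeted limit; return the last load level
    that stayed within the limit, plus the (users, cpu) history."""
    users, history = start_users, []
    while True:
        cpu = measure(users)            # stands in for one full test run
        history.append((users, cpu))
        if cpu > cpu_limit_percent:
            return users - step, history
        users += step

# A made-up response surface: CPU grows roughly linearly with load.
fake_measure = lambda users: 0.3 * users
peak, runs = step_load(50, 50, 75.0, fake_measure)
print(peak)  # 250
```

In practice each `measure` call is a complete test cycle in which the metrics from Step 3 are captured with the monitoring tools.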

For more information about using ACT for performance testing, see "How To: Use ACT to Test Performance and Scalability" of Improving .NET Application Performance and Scalability on MSDN at:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/ScaleNetHowTo10.asp.

Step 6: Analyze the Results

The metrics captured at various load levels should be analyzed to determine whether the performance of the application block being tested shows a trend toward or away from the performance objectives. The measured metrics should also be analyzed to diagnose potential bottlenecks. Based on the analysis, you can capture additional metrics in the subsequent test cycles.

For a general template for creating the load testing report, see the "Reporting" section in "Chapter 16—Testing .NET Application Performance" of Improving .NET Application Performance and Scalability on MSDN at:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt16.asp.

Primarily, load testing is done from a black box testing perspective. However, load testing results (such as information about potential deadlocks, contention, and memory leaks) can be used as input for the white box testing phase. You can analyze the code more by profiling using specialized tools such as WinDbg from the Windows Resource Kit.

The load testing process produces the following output:

  • Updated test plans
  • Potential bottlenecks that need to be analyzed in the white-box testing phase
  • Peak operating capacity
  • Behavior of the application block at various load levels

Stress Testing

Stress testing an application block means subjecting it to load beyond the peak operating capacity and at the same time denying resources that are required to process the load. An example of stress testing is hosting the application that uses the application block on a server that already has a processor utilization of more than 75 percent because of existing applications, and subjecting the application block to a concurrent load above the peak operating capacity.

The goal of stress testing is to evaluate how the application block responds under such extreme conditions. Stress testing helps to identify problems that occur only under high load conditions. Stress testing application blocks identifies problems such as memory leaks, resource contentions, and synchronization issues.

Stress testing uses the analysis from load testing. The test scenarios and the maximum operating capacities are obtained from load testing.

The stress testing approach can be broadly classified into two types: sustained testing and maximal testing. The difference is usually the scheduled duration, because a sustained stress test runs longer than a maximal stress test. Stress testing can accomplish its goals through intensity or through quantity. A maximal stress test concentrates on intensity: it sets up much more intense situations than would otherwise be encountered, but attempts to do so in a relatively short period of time. For example, a maximal stress test may have 500 users concurrently initiating a very data-intensive search query; the intensity is much greater than in a typical scenario. Conversely, a sustained stress test concentrates on quantity, because the goal is to run much more, in terms of the number of users or functionality or both, than would usually be encountered. For example, a sustained stress test might have 2,000 users run an application designed for 1,000 users.

Input

The following input is required for stress testing an application block:

  • Performance model (workload characteristics, performance objectives, key usage scenarios, resource budget allocations)
  • Potential problematic scenarios from the performance model and load testing
  • Peak load capacity from load testing

Stress Testing Steps

Stress testing includes the following steps:

  1. Identify key scenarios. Identify test scenarios that are suspected to have potential bottlenecks or performance problems, using the results of the load-testing process.
  2. Identify workload. Identify the workload to be applied to the scenarios identified earlier using the workload characteristics from the performance model, the results of the load testing, and the workload profile used in load testing.
  3. Identify metrics. Identify the metrics to be collected when stress testing application blocks. The metrics are now identified to focus on potential performance problems that may be encountered during the testing process.
  4. Create test cases. Create the test cases for the key scenarios identified in Step 1.
  5. Simulate load. Use load-generating tools to simulate the load to stress test the application block as specified in the test case, and use the performance monitoring and measuring tools and the profilers to capture the metrics.
  6. Analyze the results. Analyze the results from the perspective of diagnosing the potential bottlenecks and problems that occur only under continuous extreme load condition and report them in a proper format.

The next sections describe each of these steps.

Step 1: Identify Key Scenarios

Identify scenarios from the test cases used for load testing that may have a performance problem under high load conditions.

To stress test the application block, identify the test scenarios that are critical from the performance perspective. Such scenarios are usually resource-intensive or frequently used. These scenarios may include functionalities such as the following:

  • Synchronizing access to particular code that can lead to resource contention and possible deadlocks
  • Frequent object allocation in various scenarios, such as developing a custom caching solution, and creating unmanaged objects

For example, in the case of the CMAB, the test scenarios that include caching data and writing to a data store such as file are the potential scenarios that need to be stress tested for memory leaks and synchronization issues, respectively.

Step 2: Identify Workload

Identify the workload for each of the performance-critical scenarios. Choose a workload that stresses the application block sufficiently beyond the peak operating capacity.

You can take the peak operating capacity for a particular profile from the load testing process, then incrementally increase the load and observe the behavior at various load conditions. For example, in the case of the CMAB, if the peak operating capacity for the writing-to-a-file scenario is 150 concurrent users, you can start the stress testing by incrementing the load with a delta of 50 or 100 users and analyze the application block's behavior.

Step 3: Identify Metrics

Identify the metrics that help you to analyze the bottlenecks and the metrics related to your performance objectives. When load testing, you may add a wide range of metrics (during the first or subsequent iterations) to detect any possible performance problems, but when stress testing, the metrics monitored are focused on a single problem. For example, to capture the contentions in the application block code when stress testing the "writing to a file" scenario for the CMAB, you need to monitor the counters listed in Table 8.6.

Table 8.6: Metrics to Measure When Stress Testing the "Writing to a File" Scenario

Base set of metrics:

Object                  Counter                   Instance
Processor               % Processor Time          _Total
Process                 Private Bytes             aspnet_wp
Memory                  Available MBytes          Not applicable
ASP.NET                 Requests Rejected         Not applicable
ASP.NET                 Request Execution Time    Not applicable
ASP.NET Applications    Requests/Sec              Your virtual directory

Contention-related metrics:

Object                      Counter                 Instance
.NET CLR LocksAndThreads    Contention Rate/sec     aspnet_wp
.NET CLR LocksAndThreads    Current Queue Length    aspnet_wp

Step 4: Create Test Cases

The next step is to create test cases. When load testing, you have a list of prioritized scenarios, but when stress testing you identify a particular scenario that needs to be stress tested. There may be more than one scenario or there may be a combination of scenarios that you can stress test during a particular test run to reproduce a potential problem.

Document your test cases for a list of scenarios identified in Step 1.

Test Case for the CMAB Sample Application Block

In the case of the CMAB, a sample test case would be the following:

  • Scenario: Write to a file
  • Number of users: 350 concurrent users
  • Test duration: 10 hours
  • Think time: 0
  • Expected results:
    • The ASP.NET worker process should not be recycled.
    • Throughput should not fall below 30 requests per second (ASP.NET\Requests/sec performance counter).
    • Response time should not exceed 10 seconds (on 100 Mbps LAN).
    • Server busy errors should not be more than 25 percent of the total response because of contention-related issues.
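These expected results can be turned into a mechanical pass/fail check after the run. The thresholds below are the fictitious CMAB numbers from the list above; the function name is invented for the example:

```python
def stress_run_passes(throughput_rps, response_time_s,
                      server_busy_errors, total_responses, recycled):
    """Evaluate one stress run against the expected results listed above."""
    error_pct = 100.0 * server_busy_errors / total_responses
    return (not recycled                 # worker process was not recycled
            and throughput_rps >= 30     # throughput did not fall below 30 req/sec
            and response_time_s <= 10    # response time stayed within 10 seconds
            and error_pct <= 25)         # server busy errors within 25 percent

print(stress_run_passes(42.0, 7.5, 100, 1000, recycled=False))  # True
```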

Step 5: Simulate Load

The load is simulated using load generator tools such as ACT. The metrics are captured using performance monitoring tools such as System Monitor. You should make sure that the client systems that are used to generate loads do not cross the resource utilization thresholds.

Step 6: Analyze the Results

The captured metrics should be analyzed for diagnosing the bottleneck. If the metrics are below the accepted level of performance, you may need to do one of the following:

  • Debug the code during white-box testing to identify any possible contention issues.
  • Examine the stack dumps of the worker process to diagnose the exact cause of deadlocks.
  • Perform a design review of the module to identify whether you need to consider a design change to satisfy the performance goal.

For example, to reduce contention when writing to a file, you can update the changes in shadow files, which are exact replicas of the original files, and later merge the changes to the original file.
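The shadow-file idea can be sketched as writing to a temporary copy and then atomically swapping it in. This Python sketch only illustrates the concept described above; it is not the CMAB's actual implementation:

```python
import os
import tempfile

def write_via_shadow(path, data):
    """Write data to a shadow file in the same directory, then atomically
    replace the original, so readers never see a partially written file
    and the original is locked only for the duration of the swap."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, shadow = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w") as f:
        f.write(data)
    os.replace(shadow, path)  # atomic within the same filesystem

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "config.xml")
    write_via_shadow(p, "<configuration/>")
    print(open(p).read())  # <configuration/>
```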

For more information about stress testing, see "Stress Testing Process" in "Chapter 16—Testing .NET Application Performance" of Improving .NET Application Performance and Scalability on MSDN at:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt16.asp.

Tools

Various tools are used during the performance testing process. These tools can be broadly classified into two categories:

  • Load simulators and generators
  • Performance monitoring tools

The next sections describe each of these categories.

Load Simulators and Generators

These tools simulate or generate the specified load in terms of users, active connections, and so on. In addition to generating load, these tools can also help you to gather related metrics, such as response time and requests per second. They can also generate reports that help you to analyze the captured metrics.

One tool that falls into this category is Microsoft Application Center Test (ACT). ACT is designed to stress test Web applications and Web services. ACT is a processor-intensive tool that can quickly stress the client computer; you should distribute the load among several client computers running ACT if any client shows high processor utilization. ACT is included with the Enterprise editions of the Microsoft Visual Studio® .NET 2003 development system.

For more information about how to use ACT, see "How To: Use ACT to Test Performance and Scalability" in Improving .NET Application Performance and Scalability on MSDN at:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/ScaleNetHowTo10.asp.

For more general information about ACT, see "Microsoft Application Center Test 1.0, Visual Studio .NET Edition" on MSDN at:
http://msdn.microsoft.com/library/en-us/act/htm/actml_main.asp.

Performance Monitoring Tools

Performance monitoring tools capture the metrics during load and stress testing. The following tools can be used to monitor resource utilization and other performance metrics during testing:

  • System Monitor. System Monitor is a standard component of the Microsoft Windows® operating system. It can be used to monitor performance objects and counters. It can also be used to monitor instances of various hardware and software components. System Monitor is useful when you need to log the metrics for a particular test duration ranging from a few minutes to a few days.
  • Microsoft Operations Manager (MOM). MOM provides event-driven operations monitoring and performance tracking capability. MOM is suitable for collecting large amounts of data over a long period of time. MOM agents on individual servers collect data and send it to a centralized server, where the data is stored in a MOM database. For more information, see "Microsoft Operations Manager" on MSDN at:
    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/momsdk/htm/momstart_1nad.asp.
  • Network Monitor (NetMon). NetMon is used to monitor network traffic. It provides statistics such as packet size, network utilization, routing, timing, and other statistics that can be used to analyze system performance when testing the performance of application blocks that have to access resources over the network. For more information, see "Network Monitor" on MSDN at:
    http://msdn.microsoft.com/library/en-us/netmon/netmon/network_monitor.asp.

Summary

This chapter explained the processes for the two types of performance testing, load testing and stress testing, from the perspective of testing of the application blocks. The processes laid down in the chapter are generic and can be customized for your own needs; for example, you may choose to have your profiles for load testing organized differently than suggested in the chapter.

© 2014 Microsoft. All rights reserved.