
Black Box and White Box Testing for Application Blocks

Retired Content

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

 

patterns & practices Developer Center

Microsoft Corporation

January 2005

Summary: Detailed overview of white box testing and black box testing an application block, covering subjects such as code path profiling, instrumentation, and testing external interfaces.

Contents

Objectives

Overview

Black Box Testing

White Box Testing

Tools

Summary

Objectives

  • Learn about black box testing an application block.
  • Learn about white box testing an application block.

Overview

After you complete the design and code review of the application block, you need to test it to make sure that it meets the functional requirements and correctly implements the functionality for the usage scenarios for which it was designed.

The testing effort can be divided into two categories that complement each other:

  • Black box testing. This approach tests all possible combinations of end-user actions. Black box testing assumes no knowledge of code and is intended to simulate the end-user experience. You can use sample applications to integrate and test the application block for black box testing. You can begin planning for black box testing immediately after the requirements and the functional specifications are available.
  • White box testing. (This is also known as glass box, clear box, and open box testing.) In white box testing, you create test cases by looking at the code to detect any potential failure scenarios. You determine the suitable input data for testing various APIs and the special code paths that need to be tested by analyzing the source code for the application block. Therefore, the test plans can be completed only after a stable build of the code is available, and they need to be updated before white box testing starts.

    A failure of a white box test may result in a change that requires all black box testing to be repeated and white box testing paths to be reviewed and possibly changed.

The goals of testing can be summarized as follows:

  • Verify that the application block is able to meet all requirements in accordance with the functional specifications document.
  • Make sure that the application block has consistent and expected output for all usage scenarios for both valid and invalid inputs. For example, make sure the error messages are meaningful and help the user in diagnosing the actual problem.

You may need to develop one or more of the following to test the functionality of the application blocks:

  • Test harnesses, such as NUnit test cases, to test the API of the application block for various inputs
  • Prototype Windows Forms and Web Forms applications that integrate the application blocks and are deployed in simulated target deployments
  • Automated scripts that test the API of the application blocks for various inputs

This chapter examines the process of black box testing and white box testing. It includes code examples and sample test cases to demonstrate the approach for black box testing and white box testing application blocks. For the purpose of the examples illustrated in this chapter, it is assumed that functionality testing is being done for the Configuration Management Application Block (CMAB). The CMAB has already been through design and code review. The requirements for the CMAB are the following:

  • It provides the functionality to read and store configuration information transparently in a persistent storage medium. The supported storage media are SQL Server, the registry, and an XML file.
  • It provides a configurable option to store the information either in encrypted form or as plain text using XML notation.
  • It can be used with desktop applications and Web applications that are deployed in a Web farm.
  • It caches configuration information in memory to reduce cross-process communication, such as reading from any persistent medium. This reduces the response time of the request for any configuration information. The expiration and scavenging mechanism for the data that is cached in memory is similar to the cron algorithm in UNIX.
  • It can store and return data from various locales and cultures without any loss of data integrity.

Black Box Testing

Black box testing assumes the code to be a black box that responds to input stimuli. The testing focuses on the output to various types of stimuli in the targeted deployment environments. It focuses on validation tests, boundary conditions, destructive testing, reproducibility tests, performance tests, globalization, and security-related testing.

Risk analysis should be done to estimate the amount and level of testing that is needed; it also gives you the criteria for deciding when to stop testing. Risk analysis prioritizes the test cases by taking into account the impact of each error and the probability of its occurrence. By concentrating on the test cases that can expose high-impact, high-probability errors, you can reduce the testing effort while still ensuring that the application block is good enough to be used by various applications.

Preferably, black box testing should be conducted in a test environment close to the target environment. There can be one or more deployment scenarios for the application block that is being tested, and the requirements and behavior of the application block can vary with the deployment scenario. Testing the application block in a simulated environment that closely resembles the deployment environment ensures that it satisfies all requirements of the targeted real-life conditions, which minimizes surprises in the production environment. The test cases being executed ensure robustness of the application block for the targeted deployment scenarios.

For example, the CMAB can be deployed on the desktop with Windows Forms applications or in a Web farm when integrated with Web applications. The CMAB requirements, such as performance objectives, vary from the desktop environment to the Web environment. The test cases and the test environment have to vary according to the target environments. Other application blocks may have more restricted and specialized target environments. An example of an application block that requires a specialized test environment is an application block that is deployed on mobile devices and is used for synchronizing data with a central server.

As mentioned earlier, you will need to develop custom test harnesses for functionality testing purposes.

Input

The following input is required for black box testing:

  • Requirements
  • Functional specifications
  • High-level design documents
  • Application block source code

The black box testing process for an application block is shown in Figure 6.1.

Figure 6.1. Black box testing process

Black Box Testing Steps

Black box testing involves testing external interfaces to ensure that the code meets functional and nonfunctional requirements. The various steps involved in black box testing are the following:

  1. Create test plans. Create prioritized test plans for black box testing.
  2. Test the external interfaces. Test the external interfaces for various types of input using automated test suites, such as NUnit suites and custom prototype applications.
  3. Perform load testing. Load test the application block to analyze the behavior at various load levels. This ensures that it meets all performance objectives that are stated as requirements.
  4. Perform stress testing. Stress test the application block to analyze various bottlenecks and to identify any issues visible only under extreme load conditions, such as race conditions and contentions.
  5. Perform security testing. Test for possible threats in deployment scenarios. Deploy the application block in a simulated target environment and try to hack the application by exploiting any possible weakness of the application block.
  6. Perform globalization testing. Execute test cases to ensure that the application block can be integrated with applications targeted toward locales other than the default locale used for development.

The next sections describe each of these steps.

Step 1: Create Test Plans

The first step in the process of black box testing is to create prioritized test plans. You can prepare the test cases for black box testing even before you implement the application block. The test cases are based on the requirements and the functional specification documents.

The requirements and functional specification documents help you extract various usage scenarios and the expected output in each scenario.

The detailed test plan document includes test cases for the following:

  • Testing the external interfaces with various types of input
  • Load testing and stress testing
  • Security testing
  • Globalization testing

For more information about creating test cases, see Chapter 3, "Testing Process for Application Blocks."

Step 2: Test the External Interfaces

You need to test the external interfaces of the application block using the following strategies:

  • Ensure that the application block exposes interfaces that address all functional specifications and requirements. To perform this validation testing, do the following:
    1. Prepare a checklist of all requirements and features that are expected from the application block.
    2. Create test harnesses, such as NUnit tests, and small "hello world" applications that use all exposed APIs of the application block under test.
    3. Run the test harnesses.

    Using NUnit, you can validate that the intended feature works for the expected inputs.

    The sample applications can indicate whether the application block can be integrated and deployed in the target environment. The sample applications are used to test for the possible user actions for the usage scenarios; these include both the expected process flows and random inputs. For example, a Web application deployed in a Web farm that integrates the CMAB can be used to test reading and writing information from a persistent store, such as the registry, SQL Server, or an XML file. You need to test the functionality by using various configuration options in the configuration file.

  • Test for various types of inputs. After ensuring that the application block exposes the interfaces that address all of the functional specifications, you need to test the robustness of these interfaces. You need to test for the following input types:
    • Randomly generated input within a specified range
    • Boundary cases for the specified range of input
    • The number zero, if the input is numeric
    • The null input
    • Invalid input or input that is out of the expected range

This testing ensures that the application block provides expected output for data within the specified range and gracefully handles all invalid data. Meaningful error messages should be displayed for invalid input. Boundary testing ensures that the highest and lowest permitted inputs produce expected output.

You can use NUnit for this type of input testing. Separate sets of NUnit tests can be generated for each range of input types. Executing these NUnit tests on each new build of the application block ensures that the API is able to successfully process the given input.

For example, consider the following API that is part of an application block and takes an integer argument.

using System;

//User-defined exception thrown when the input fails validation.
public class DataException : ApplicationException {}

public class SampleClass
{
    public int SampleAPI(int testArg)
    {
        //API-specific logic: testArg must have a value greater than 0 and
        //less than 65536. If this condition is not satisfied, an exception
        //of the user-defined type DataException is thrown.
        if (testArg <= 0 || testArg >= 65536) throw new DataException();
        return 3000; //placeholder result used by the tests that follow
    }
}

The test harness code that uses the NUnit framework to test various inputs to the preceding API looks like the following.

//NUnit tests that provide various types of input for testing SampleClass
using NUnit.Framework;

namespace TestHarnessSample
{
    [TestFixture]
    public class SampleTest
    {
        [Test]
        [ExpectedException(typeof(DataException))]
        public void ZeroInputTest()
        {
            SampleClass testSample = new SampleClass();
            //test for an input of 0
            int result = testSample.SampleAPI(0);
        }

        [Test]
        [ExpectedException(typeof(DataException))]
        public void NegativeInputTest()
        {
            SampleClass testSample = new SampleClass();
            //test for input that is less than the expected range
            int result = testSample.SampleAPI(-1);
        }

        //Note: a null-input test applies to APIs that accept reference
        //types; SampleAPI takes a value type (int), which cannot be null.

        [Test]
        [ExpectedException(typeof(DataException))]
        public void LargeNumberTest()
        {
            SampleClass testSample = new SampleClass();
            //test for input that is greater than the expected range
            int result = testSample.SampleAPI(65537);
        }

        [Test]
        public void ValidInputTest()
        {
            SampleClass testSample = new SampleClass();
            //test for input that is within the expected range
            int result = testSample.SampleAPI(65);
            Assert.AreEqual(3000, result);
        }
    }
}

Using the preceding test harness code, various combinations of inputs can be tested for SampleClass. The expected behavior for all test methods except ValidInputTest is that a user-defined exception of type DataException is thrown. If DataException is not thrown, the API is missing the intended input validation and the test fails. For ValidInputTest, a result of 3000 is expected; if that result is not obtained, the test fails.

For a more specific example, consider the CMAB. The test harness code that feeds various combinations of inputs to the ConfigurationManager class is shown in the following code.

using System;
using System.Collections;
using NUnit.Framework;

namespace CMABTestSample
{
    [TestFixture]
    public class CMABTest
    {
        private string sectionName;

        [Test]
        [ExpectedException(typeof(ArgumentNullException))]
        public void Test1()
        {
            //test by passing an empty string as the section name for
            //reading configuration data
            sectionName = "";
            Hashtable sampleTable = (Hashtable)ConfigurationManager.Read(sectionName);
        }

        [Test]
        [ExpectedException(typeof(ConfigurationException))]
        public void Test2()
        {
            //test by passing a string with a space character as the
            //section name for reading configuration data
            sectionName = " ";
            Hashtable sampleTable = (Hashtable)ConfigurationManager.Read(sectionName);
        }

        [Test]
        [ExpectedException(typeof(ArgumentNullException))]
        public void Test3()
        {
            //test by passing null as the section name for reading
            //configuration data
            Hashtable sampleTable = (Hashtable)ConfigurationManager.Read(null);
        }

        [Test]
        public void Test4()
        {
            //test by passing a valid section name for reading
            //configuration data
            sectionName = "TestConfigSection";
            Hashtable sampleTable = (Hashtable)ConfigurationManager.Read(sectionName);
            Assert.AreEqual("XXXXX", sampleTable["Item1"]);
        }
    }
}

The preceding test harness code gives various combinations of inputs to the CMAB ConfigurationManager class for reading data. The first three test methods force exceptions and verify that the expected exceptions are thrown. The last method, Test4(), passes a valid input parameter and expects a valid response of configuration data. The exception that each test method should raise is named in the ExpectedException attribute on top of that method; if the indicated exception is not thrown by the ConfigurationManager, the test is considered a failure. The CMAB is expected to throw ArgumentNullException if an empty string or null value is passed to the ConfigurationManager, and ConfigurationException for any other invalid input. Test4() passes a valid input to the ConfigurationManager.Read method, and the output is compared with the expected result by using the Assert.AreEqual() method of the NUnit framework. If the expected result does not match the output returned by the ConfigurationManager, the test fails.

Step 3: Perform Load Testing

Use load testing to analyze the application block behavior under normal and peak load conditions. Load testing allows you to verify that the application block can meet the desired performance objectives and does not overshoot the allocated budget for resource utilization such as memory, processor, and network I/O. The requirements document usually lists the resource utilization budget for the application block and the workload it should be able to support.

For example, the CMAB had the following performance objectives on a Web server (note that these objectives are fictitious and are provided for illustration only):

  • The CPU overhead should not be more than 7–10 percent.
  • The application block should be able to support a minimum of 200 concurrent users for reading data from SQL Server.
  • The application block should be able to support a minimum of 150 concurrent users for writing data to SQL Server.
  • The response time for a client (the client is firing requests from a 100 Mbps VLAN in the test lab) is not more than 2 seconds for the given concurrent load.

You can measure metrics related to response times, throughput rates, and so on, for the load test. In addition, you can measure other metrics that help you identify any potential bottlenecks.

To load test an application block, you need to develop a sample application that is an accurate prototype of the applications that will be used in the target environment. In the case of the CMAB, because one of the deployment scenarios is the Web environment, a simple Web application can be developed that uses the application block for reading and writing configuration information. Preferably, the application block should be tested in both clustered and nonclustered environments, because deployment in a Web farm is one of the target scenarios.
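
If a load testing tool such as ACT is not yet available for an early smoke test, a simple multithreaded driver can approximate concurrent load against the block's API. The following sketch is illustrative only: it assumes the CMAB ConfigurationManager.Read method used in the earlier examples, a hypothetical "TestConfigSection" section, and arbitrary user and iteration counts. A real load test would use the tool-based approach described in Chapter 8.

//Illustrative multithreaded load driver (not a substitute for a load testing tool)
using System;
using System.Collections;
using System.Threading;

public class LoadDriver
{
    private const int Users = 200;          //simulated concurrent users
    private const int RequestsPerUser = 50; //requests fired by each user
    private static long totalTicks = 0;     //accumulated response time
    private static long requests = 0;       //completed request count

    public static void Main()
    {
        Thread[] workers = new Thread[Users];
        for (int i = 0; i < Users; i++)
        {
            workers[i] = new Thread(new ThreadStart(Worker));
            workers[i].Start();
        }
        foreach (Thread worker in workers)
        {
            worker.Join();
        }
        double averageMs = ((double)totalTicks / requests) / TimeSpan.TicksPerMillisecond;
        Console.WriteLine("Requests: {0}, average response time: {1:F1} ms",
            requests, averageMs);
    }

    private static void Worker()
    {
        for (int i = 0; i < RequestsPerUser; i++)
        {
            long start = DateTime.UtcNow.Ticks;
            //Exercise the API under test; "TestConfigSection" is a sample section name.
            Hashtable data = (Hashtable)ConfigurationManager.Read("TestConfigSection");
            Interlocked.Add(ref totalTicks, DateTime.UtcNow.Ticks - start);
            Interlocked.Increment(ref requests);
        }
    }
}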

For a detailed process on load testing application blocks, see Chapter 8, "Performance Testing for Application Blocks."

Table 6.1 shows a sample test case for load testing the CMAB.

Table 6.1: Sample Test Case Document for Load Testing the CMAB When Reading Data from a SQL Store

Scenario 1.1: Reading configuration data from a SQL store with data caching and data protection options enabled.
Priority: High
Execution details:
  • Create a sample Web application that integrates the CMAB.
  • Add counters to test the performance of the CMAB.
  • Configure ACT to set the following attributes for the test:
    • Number of users: 200 concurrent users
    • Test duration: 20 minutes
    • Think time: 0
  • Run the ACT tool for the specified test duration.
Tools required: Sample application integrating the CMAB.
Expected results:
  • Throughput > 150 requests per second
  • Processor\% Processor Time < 75 percent
  • Request execution time <= 2 seconds (on a 100 megabits per second [Mbps] LAN)

Step 4: Perform Stress Testing

Use stress testing to evaluate the application block's behavior when it is pushed beyond normal or peak load conditions. Beyond those conditions, the system is expected either to return the expected output or to return meaningful error messages to the user, without corrupting the integrity of any data. The goal of stress testing is to discover bugs that surface only under high load conditions, such as synchronization issues, race conditions, and memory leaks.

The scenarios that are exercised in stress testing are based on input from load testing and from the code review. The code review identifies the potential areas in code that may lead to the preceding issues. The metrics collected in load testing also provide input for identifying the scenarios that need to be stress tested. For example, if during load testing you observe increased response times when writing to SQL Server under increased load, you should check for potential concurrency issues.

For a detailed process on stress testing application blocks, see Chapter 8, "Performance Testing for Application Blocks."

Table 6.2 shows a sample test case for stress testing the CMAB.

Table 6.2: Sample Test Case Document for Stress Testing the CMAB When Reading Data from a SQL Store

Scenario 1.2: Reading configuration data from a SQL store with data caching and data protection options enabled.
Priority: High
Execution details:
  • Use the sample application created for load testing.
  • Add counters to identify potential bottlenecks.
  • Configure ACT to set the following attributes for the test:
    • Number of users: 500 concurrent users
    • Test duration: 60 minutes
    • Think time: 0
  • Run the ACT tool for the specified test duration.
Tools required: Sample application integrating the CMAB.
Expected results:
  • The ASP.NET worker process should not be recycled.
  • Response time should not exceed 7 seconds (on a 100 megabits per second [Mbps] LAN).
  • Server busy errors caused by contention-related issues should not account for more than 20 percent of the total responses.

Step 5: Perform Security Testing

Black box security testing identifies security vulnerabilities in the application block by treating it as an independent unit. The testing is done at run time. The purpose is to forcefully break the interfaces of the application block, intercept sensitive data within the block, and so on. Sample test harnesses can be used to create a deployment scenario for the application block.

Depending on the functionality the application block provides, test cases can be identified. Examples of test cases and tests can be the following:

  • If the application block accepts data from a user, make sure that it validates the input data. Create test cases that pass different types of data, including unsafe data, through the application block's interfaces, and confirm that the application block can stop and handle the unsafe data by providing appropriate error messages (see the sketch after this list).
  • If the application block accesses any secure resources, such as the registry or file system, identify test cases that can test for threats resulting from elevated privileges.
  • If the application block handles secure data and uses cryptography, scenarios can be developed for simulating various types of attacks to access the data. This tests and ensures that the appropriate algorithms and methods are used to secure data.
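
For example, the input validation threats in the first bullet can be probed with the same NUnit style used earlier in this chapter. The following sketch assumes the CMAB ConfigurationManager from the earlier examples and assumes that invalid input is rejected with ConfigurationException, as described in Step 2; the hostile strings are arbitrary illustrations, not an exhaustive threat list.

//Illustrative security-focused input tests for the CMAB
using System;
using System.Collections;
using NUnit.Framework;

[TestFixture]
public class SecurityInputTest
{
    //Representative hostile inputs; a real test plan derives these from a threat model.
    private static readonly string[] hostileInputs = new string[]
    {
        "'; DROP TABLE ConfigData; --",  //SQL injection attempt
        "<script>alert('x')</script>",   //markup injection attempt
        new string('A', 100000)          //oversized input
    };

    [Test]
    public void HostileSectionNamesAreRejected()
    {
        foreach (string input in hostileInputs)
        {
            try
            {
                ConfigurationManager.Read(input);
                Assert.Fail("Hostile input was accepted: " + input);
            }
            catch (ConfigurationException)
            {
                //Expected: the block rejects the input with a meaningful exception.
            }
        }
    }
}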

Step 6: Perform Globalization Testing

The goal of globalization testing is to detect potential problems in the application block that could inhibit its successful integration with an application that uses culture resources different from the default culture resources used for development. Globalization testing involves passing culture-specific input to a sample application that integrates the application block. It makes sure that the code supports any culture or locale setting without breaking functionality or losing data.

To perform globalization testing, you must install multiple language groups and set the culture or locale to one that differs from the development culture or locale, such as Japanese or German. Executing test cases in both Japanese and German environments, and in a combination of both, can cover most globalization issues.
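
A minimal round-trip test for such a scenario is sketched below. It assumes the CMAB ConfigurationManager.Read method from the earlier examples; the Write method signature shown here is an assumption for illustration, because the chapter does not define the exact write API.

//Illustrative globalization round-trip test
using System.Collections;
using System.Globalization;
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class GlobalizationTest
{
    [Test]
    public void JapaneseDataRoundTrips()
    {
        //Run the scenario under a non-default culture.
        Thread.CurrentThread.CurrentCulture = new CultureInfo("ja-JP");
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("ja-JP");

        Hashtable data = new Hashtable();
        data["Greeting"] = "こんにちは"; //culture-specific sample value

        //Write and read back through the block; the Write signature is assumed.
        ConfigurationManager.Write("TestConfigSection", data);
        Hashtable result = (Hashtable)ConfigurationManager.Read("TestConfigSection");

        //The value must survive the round trip without data loss.
        Assert.AreEqual("こんにちは", result["Greeting"]);
    }
}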

For a detailed process on globalization testing application blocks, see Chapter 7, "Globalization Testing for Application Blocks."

White Box Testing

White box testing assumes that the tester can examine the code for the application block and create test cases that look for potential failure scenarios. During white box testing, you analyze the code of the application block and prepare test cases that verify each class behaves in accordance with the specifications and that test for robustness.

Input

The following input is required for white box testing:

  • Requirements
  • Functional specifications
  • High-level design documents
  • Detailed design documents
  • Application block source code

White Box Testing Steps

The white box testing process for an application block is shown in Figure 6.2.


Figure 6.2. White box testing process

White box testing involves the following steps:

  1. Create test plans. Identify all white box test scenarios and prioritize them.
  2. Profile the application block. This step involves studying the code at run time to understand the resource utilization, time spent by various methods and operations, areas in code that are not accessed, and so on.
  3. Test the internal subroutines. This step ensures that the subroutines or the nonpublic interfaces can handle all types of data appropriately.
  4. Test loops and conditional statements. This step focuses on testing the loops and conditional statements for accuracy and efficiency for different data inputs.
  5. Perform security testing. White box security testing helps you understand possible security loopholes by looking at the way the code handles security.

The next sections describe each of these steps.

Step 1: Create Test Plans

The test plans for white box testing can be created only after a reasonably stable build of the application block is available. The creation of test plans involves extensive code review and input from design review and black box testing. The test plans for white box testing include the following:

  • Profiling, including code coverage, resource utilization, and resource leaks
  • Testing internal subroutines for integrity and consistency in data processing
  • Loop testing; test simple, concatenated, nested, and unstructured loops
  • Conditional statement testing, such as simple expressions, compound expressions, and expressions that evaluate to a Boolean value

For more information about creating test cases, see Chapter 3, "Testing Process for Application Blocks."

Step 2: Profile the Application Block

Profiling allows you to monitor the behavior of a particular code path at run time when the code is being executed. Profiling includes the following tests:

  • Code coverage. Code coverage testing ensures that every line of code is executed at least once during testing. You must develop test cases in a way that ensures the entire execution tree is tested at least once. To ensure that each statement is executed once, test cases should be based on the control structure in the code and the sequence diagrams from the design documents. The control structures in the code consist of various conditions as follows:
    • Various conditional statements that branch into different code paths. For example, a Boolean variable that evaluates to "false" or "true" can execute different code paths. There can be other compound conditions with multiple conditions, Boolean operators, and bit-wise comparisons.
    • Various types of loops, such as simple loops, concatenated loops, and nested loops.

    There are various tools available for code coverage testing, but you still need to execute the test cases; the tools identify the code that was executed during testing. In this way, you can identify code that never gets executed. Such code may be left over from a previous version of the functionality, may signify partially implemented functionality, or may be dead code that never gets called.

    Tables 6.3 and 6.4 list sample test cases for testing the code coverage of the ConfigurationManager class of the CMAB.

    Table 6.3: The CMAB Test Case Document for Testing the Code Coverage for the InitAllProviders Method and All Invoked Methods

Scenario 1.3: Test the code coverage for the InitAllProviders() method in the ConfigurationManager class.
Priority: High
Execution details:
  • Create a sample application for reading configuration data from a data store through the CMAB.
  • Run the application under the following conditions:
    • With a default section present
    • Without a default section
  • Trace the code coverage using an automated tool.
  • Report any code not being called in InitAllProviders().
Tools required: Custom test harness integrating the application block for reading configuration data.
Expected results: The entire code for the InitAllProviders() method and all the invoked methods should be covered under the preceding conditions.

Table 6.4: The CMAB Test Case Document for Testing the Code Coverage for Read Method and All Invoked Methods

Scenario 1.4: Test the code coverage for the Read(sectionName) method in the ConfigurationManager class.
Priority: High
Execution details:
  • Create a sample application for reading configuration data from a SQL database through the CMAB.
  • Run the application under the following conditions:
    • Pass a null section name or a zero-length section name to the Read method.
    • Read a section whose name is not mentioned in the App.config or Web.config file.
    • Read a configuration section that has caching enabled.
    • Read a configuration section that has caching disabled.
    • Read a configuration section successfully with caching disabled, and then disconnect the database and read the section again.
    • Read a configuration section that has no configuration data in the database.
    • Read a configuration section whose provider information is not mentioned in the App.config or Web.config file.
  • Trace the code coverage.
  • Report any code not covered in the Read(sectionName) method.
Tools required: Custom test harness integrating the application block for reading configuration data.
Expected results: The entire code for the Read(sectionName) method and all the invoked methods should be covered under the preceding conditions.

  • Memory allocation pattern. You can profile the memory allocation pattern of the application block by using code profiling tools. You need to check for the following in the allocation pattern:
    • The percentage of allocations in Gen 0, Gen 1, and Gen 2. If the percentage of objects in Gen 2 is high, resource cleanup in the application block is inefficient and there may be memory leaks; the objects are probably being held longer than required (this may be expected in some scenarios). Profiling the application block gives you an idea of the types of objects that are being promoted to Gen 2 of the heap. You can then focus on analyzing the culprit code snippet and rectify the problem.

      An efficient allocation pattern should have most of the allocations in Gen 0 and Gen 1 over a period of time.

      There might be certain objects, such as a pinned pool of reusable buffers used for I/O work, that are promoted to Gen 2 when the application starts. The faster this pool of buffers gets promoted to Gen 2, the better.

    • The fragmentation of the heap. The heap fragmentation happens most often in scenarios where the objects are pinned and cannot be moved. The memory cannot be efficiently compacted around these objects. The longer these objects are pinned, the greater the chances of heap fragmentation. As mentioned earlier, there might be a pool of buffers that needs to be used for I/O calls. If these objects are initialized when the application starts, they quickly move the Gen 2, where the overhead of heap allocation is largely removed.
    • "Side effect" allocations. Large number of side effect allocations take place because of some calls in a loop or recursive functions, such as the calls to string-related functions String.ToLower()or concatenation using the + operator happening in a loop. This causes the original string to be discarded and a new string to be allocated for each such operation. These operations in a loop may cause significant increase in memory consumption.

    You can also analyze memory leaks by using debugging tools, such as WinDbg from the Debugging Tools for Windows package. Using these tools, you can analyze the heap allocations for the process.

    For more information about how to use WinDbg for memory leaks, see "Debugging Memory Problems" in "Production Debugging for .NET Framework Applications" on MSDN at:
    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/DBGch02.asp.

    For more information about best practices related to garbage collection, see "Chapter 5—Improving Managed Code Performance" of Improving .NET Application Performance and Scalability on MSDN at:
    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt05.asp.

  • Cost of serialization. There may be certain scenarios in which the application block needs to serialize and transmit data across processes or computers. Serializing data involves memory overhead that can be quite significant, depending on the amount of data and the type of serializer or formatter used. You need to instrument your code to take snapshots of the memory utilized by the garbage collector before and after serialization; a sketch of such instrumentation appears after this list.

    For more information about how to calculate the cost of serialization, see "Chapter 15—Measuring .NET Application Performance" of Improving .NET Application Performance and Scalability on MSDN at:
    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt15.asp.

  • Contention and deadlock issues. Contention and deadlock issues mostly surface under high load conditions. The input from load testing (during black box testing) gives you information about the potential execution paths where contention and deadlock issues are suspected. For example, in the case of the CMAB, you may suspect a deadlock if you see requests timing out when trying to update a particular piece of information in the persistent medium.

    You need to analyze these issues with invasive profiling techniques, such as using the WinDbg tool in the production environment on a live process or analyzing stack dumps of the process.

    For more information about production debugging of contention and deadlock issues, see "Debugging Contention Problems" in "Production Debugging for .NET Framework Applications" on MSDN at:
    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/DBGch03.asp.

  • Time taken for executing a code path. For scenarios where performance is critical, you can profile the time a code path takes. Timing a code path may require custom instrumentation of the appropriate code. There are also tools available that measure the time a particular scenario takes to execute by automatically creating instrumented assemblies of the application block. The profiling may cover the complete execution of a usage scenario, an internal function, or even a particular loop within a function.

    For more information about how to time managed code and a working sample, see "How To: Time Managed Code Using QueryPerformanceCounter and QueryPerformanceFrequency" in Improving .NET Application Performance and Scalability on MSDN at:
    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/ScaleNetHowTo09.asp.

  • Profiling for excessive resource utilization. The input from a performance test may show excessive resource utilization, such as CPU, memory, disk I/O, or network I/O, for a particular usage scenario, but you may need to profile the code to track down the piece of code that is consuming resources disproportionately. In some circumstances this is expected behavior. For example, an empty while loop can drive processor utilization up significantly and should be tracked down and rectified, whereas computational logic that involves complex calculations may genuinely warrant high processor utilization.

For more information about how to use CLR Profiler, see "How To: Use CLR Profiler" in Improving .NET Application Performance and Scalability on MSDN at:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenethowto13.asp.
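
The serialization snapshot mentioned in the list above can be as simple as the following sketch. SampleData is a hypothetical serializable payload, and GC.GetTotalMemory(false) is used without forcing a collection, so the delta is only an approximation of the memory consumed by serialization.

//Illustrative instrumentation for measuring the cost of serialization
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class SampleData
{
    //Hypothetical payload representing configuration data.
    public string[] Items = new string[1000];
}

public class SerializationProbe
{
    public static void Main()
    {
        SampleData data = new SampleData();
        for (int i = 0; i < data.Items.Length; i++)
        {
            data.Items[i] = "configuration value " + i;
        }

        BinaryFormatter formatter = new BinaryFormatter();
        MemoryStream stream = new MemoryStream();

        //Snapshot GC-visible memory before and after serializing.
        long before = GC.GetTotalMemory(false);
        formatter.Serialize(stream, data);
        long after = GC.GetTotalMemory(false);

        Console.WriteLine("Serialized size: {0} bytes", stream.Length);
        Console.WriteLine("Approximate memory overhead: {0} bytes", after - before);
    }
}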

Step 3: Test the Internal Subroutines

Thoroughly test all internal subroutines for every type of input. The subroutines that are internally called by the public API may work as expected for the expected input types; however, a thorough code review may reveal expressions that can fail for certain types of input. This warrants developing NUnit tests for the internal methods and subroutines after the code review (a sketch of invoking a nonpublic routine follows this list). The following are some examples of potential pitfalls:

  • The code analysis reveals that the function may fail for a certain input value. For example, a function expecting numeric input may fail for an input value of 0.
  • In the case of the CMAB, the function reads information from the cache. The function returns the information appropriately if the cache is not empty. However, if during the process of reading, the cache is flushed or refreshed, the function may fail.
  • The function may be reading values in a buffer before returning them to the client. Certain input values might result in a buffer overflow and loss of data.
  • The subroutine does not handle an exception when a remote call to a database fails. For example, in the CMAB, if the function is trying to update information in SQL Server but the SQL Server database is not available, the function does not log the failure to the appropriate event sink.
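
Nonpublic routines can be exercised from a test harness through reflection, so these pitfalls can be tested without making the subroutines public. In the following sketch, ParseTimeout is a hypothetical private helper of the SampleClass shown earlier, and the suspect input value of 0 comes from the first pitfall above.

//Illustrative NUnit test that invokes a private method through reflection
using System;
using System.Reflection;
using NUnit.Framework;

[TestFixture]
public class InternalRoutineTest
{
    [Test]
    public void PrivateHelperRejectsZero()
    {
        //ParseTimeout is a hypothetical private method identified during code review.
        MethodInfo method = typeof(SampleClass).GetMethod(
            "ParseTimeout", BindingFlags.NonPublic | BindingFlags.Instance);

        SampleClass target = new SampleClass();
        bool threwExpected = false;
        try
        {
            method.Invoke(target, new object[] { 0 });
        }
        catch (TargetInvocationException e)
        {
            //Reflection wraps the real exception; inspect the inner exception.
            threwExpected = e.InnerException is DataException;
        }
        Assert.IsTrue(threwExpected, "ParseTimeout accepted 0 without throwing DataException");
    }
}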

Step 4: Test Loops and Conditional Statements

The application block may contain various types of loops, such as simple, nested, concatenated, and unstructured loops. Although unstructured loops require redesigning, the other types of loops require extensive testing for various inputs. Loops are critical to the application block performance because they magnify seemingly trivial problems by iterating through the loop multiple times.

Some common errors can cause a loop to execute an infinite number of times. This can result in excessive CPU or memory utilization, causing the application to fail. Therefore, all loops in the application block should be tested for the following conditions (a sketch of such tests follows this list):

  • Provide input that results in executing the loop zero times. This happens when the loop's lower bound value is greater than its upper bound value.
  • Provide input that results in executing the loop one time. This happens when the lower bound and upper bound values are the same.
  • Provide input that results in executing the loop a specified number of times within a specific range.
  • Provide input that causes the loop to iterate n, n-1, and n+1 times. The out-of-bound iterations (n-1 and n+1) are very difficult to detect with a simple code review; therefore, special test cases are needed to simulate them.
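
These boundary conditions translate directly into NUnit tests. The following sketch assumes a hypothetical SumRange method whose internal loop runs from a lower bound to an upper bound over an array.

//Illustrative loop boundary tests
using NUnit.Framework;

public class LoopSample
{
    //Hypothetical method containing a simple loop over [lower, upper].
    public int SumRange(int[] values, int lower, int upper)
    {
        int sum = 0;
        for (int i = lower; i <= upper; i++)
        {
            sum += values[i];
        }
        return sum;
    }
}

[TestFixture]
public class LoopBoundaryTest
{
    private int[] data = new int[] { 1, 2, 3, 4, 5 };

    [Test]
    public void ZeroIterations()
    {
        //Lower bound greater than upper bound: the loop body never runs.
        Assert.AreEqual(0, new LoopSample().SumRange(data, 3, 2));
    }

    [Test]
    public void OneIteration()
    {
        //Lower bound equals upper bound: the loop body runs exactly once.
        Assert.AreEqual(4, new LoopSample().SumRange(data, 3, 3));
    }

    [Test]
    public void FullRange()
    {
        //n iterations across the whole array.
        Assert.AreEqual(15, new LoopSample().SumRange(data, 0, 4));
    }
}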

When testing nested loops, you can start by testing the innermost loop, with all other loops set to iterate a minimum number of times. After the innermost loop is tested, you can set it to iterate a minimum number of times, and then test the outermost loop as if it were a simple loop.

Also, all conditional statements should be completely tested. Conditional testing ensures that the controlling expressions have been exercised by presenting each evaluating expression with a set of input values that covers all possible outcomes. A conditional statement can be a relational expression, a simple condition, a compound condition, or a Boolean expression.
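
As a sketch of conditional testing, the following hypothetical IsEligible method contains a compound controlling expression; the four tests present input values that exercise every outcome of each simple condition.

//Illustrative tests for a compound conditional expression
using NUnit.Framework;

public class ConditionSample
{
    //Hypothetical routine with a compound controlling expression.
    public bool IsEligible(int age, bool hasConsent)
    {
        return age >= 18 || (age >= 13 && hasConsent);
    }
}

[TestFixture]
public class ConditionalTest
{
    private ConditionSample sample = new ConditionSample();

    [Test]
    public void AdultIsEligible()
    {
        Assert.IsTrue(sample.IsEligible(18, false));
    }

    [Test]
    public void MinorWithConsentIsEligible()
    {
        Assert.IsTrue(sample.IsEligible(13, true));
    }

    [Test]
    public void MinorWithoutConsentIsNotEligible()
    {
        Assert.IsFalse(sample.IsEligible(13, false));
    }

    [Test]
    public void ChildIsNotEligible()
    {
        Assert.IsFalse(sample.IsEligible(12, true));
    }
}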

Step 5: Perform Security Testing

White box security testing focuses on identifying test scenarios and testing based on knowledge of implementation details. During code reviews, you can identify areas in code that validate data, handle data, access resources, or perform privileged operations. Test cases can be developed to test all such areas. Following are some examples:

  • Validation techniques can be tested by passing negative values, null values, and so on, to make sure that the proper error messages are displayed.
  • If the application block handles sensitive data and uses cryptography, then based on knowledge from code reviews, test cases can be developed to validate the encryption technique or cryptography methods.

For more information about application areas that can be validated, see the various checklists that are part of Improving Web Application Security: Threats and Countermeasures on MSDN at:
http://msdn.microsoft.com/library/en-us/dnnetsec/html/Cl_Index_Of.asp.

Tools

This section describes and points to some of the tools, technologies, and methods that can be used during the profiling process.

Profiling

Profiling tools analyze an application block's resource utilization and the time it takes to complete particular operations, and they can diagnose potential bottlenecks for an operation. One such tool for creating a memory utilization profile is CLR Profiler. It enables users to understand the interaction between the application and the managed heap, including how and where memory allocation takes place and how efficient garbage collection is for the application block. CLR Profiler helps identify and isolate problematic code and track down memory leaks.

For more information about how to use CLR Profiler, see "How To: Use CLR Profiler" in Improving .NET Application Performance and Scalability on MSDN at:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenethowto13.asp.

There are various tools that you can use to profile .NET applications including Intel VTune and Xtremesoft AppMetrics. These tools help you identify and tune your application bottlenecks.

Instrumentation

Instrumentation involves adding code to the application block. The added code generates events that can be logged in various event sinks and that can be used to capture application-specific metrics and to profile and trace the code. The technologies that can be used to instrument application blocks are as follows:

  • Enterprise Instrumentation Framework (EIF). EIF is a flexible and configurable instrumentation framework that encapsulates the functionality of Event Tracing for Windows (ETW), WMI (Windows Management Instrumentation), and event log service. It allows you to publish information such as errors, warnings, audits, and even business-specific events with the help of an extensible event schema. EIF also provides tracing of business processes and application blocks' execution paths.

    For more information, see "Enterprise Instrumentation Framework (EIF)" in the Microsoft® Visual Studio Developer Center on MSDN at:
    http://msdn.microsoft.com/vstudio/teamsystem/eif/.

    For more information about how to use EIF, see "How To: Use EIF" in Improving .NET Application Performance and Scalability on MSDN at:
    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenethowto14.asp.

  • Event Tracing for Windows (ETW). ETW is suitable for logging high frequency events such as warnings and audits. For more information, see "Event Tracing" in "Platform SDK: Performance Monitoring" on MSDN at:
    http://msdn.microsoft.com/library/en-us/perfmon/base/event_tracing.asp.
  • Windows Management Instrumentation (WMI). WMI is a management information and control technology built into the Microsoft Windows® operating system. WMI collects and analyzes performance-related data for application blocks through event-based monitoring. But logging to a WMI sink is an expensive operation; therefore, it should be used only to log infrequent and critical events and information.

    For more information, see "Windows Management Instrumentation" in "Windows Platform SDK" on MSDN at:
    http://msdn.microsoft.com/library/en-us/wmisdk/wmi/wmi_start_page.asp.

  • Custom performance counters. Custom performance counters can be used to capture application-specific information. For example, in the CMAB, a custom counter might capture the time taken to read information from a data store. (A sketch of creating such a counter appears after this list.)

    For more information, see the following How To articles in Improving .NET Application Performance and Scalability on MSDN at:
    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenet.asp

    • "How To: Monitor the ASP.NET Thread Pool Using Custom Counters"
    • "How To: Time Managed Code Using QueryPerformanceCounter and QueryPerformanceFrequency"
    • "How To: Use Custom Performance Counters from ASP.NET"

Summary

This chapter presented the fundamentals of functionality testing and explained its two categories, black box testing and white box testing. The processes for black box testing and white box testing have been laid out step by step. These processes, along with the detailed processes presented in other chapters, help ensure that you deliver robust application blocks and can customize application blocks for your applications.
