Quality in the Test Automation Review Process and Design Review Template

About this document

  • Prerequisite knowledge/experience: Software Testing, Test Automation
  • Applicable Microsoft products: Visual Studio Team System, .NET
  • Intended audience: Software Testers

Definitions and Terms

Test automation – test code written to carry out the execution of a test case in an automated or at least semi-automated fashion

Data-driven testing (DDT) – test cases that are executed multiple times, once for each input in a given set of data

Positive testing – testing normal situations that should not result in any errors or exceptions thrown

Negative testing – testing failure conditions and/or edge cases that are expected to produce an error or exception

Test harness – also known as an automated test framework; it consists of a test case repository and a test execution engine

Article Summary

This article discusses the need for writing good test automation and presents guidance documents that can help facilitate an automation review process.

Full Article

Are you automating test cases? If so, then your team should have some sort of process that ensures test automation is written well. Many people argue that since test automation is not shipping code, the code quality level is unimportant.

I would argue strongly against that point of view. Here’s why: sure, the code doesn’t ship to customers, so customers won’t be discovering bugs in it. However, this is the code used to test the code that does ship to the customer. If the test code has a bug in it, how do we know it didn’t miss a bug in the shipping code? The bottom line is we don’t. The quality of test automation is critical to validating the quality of shipping code. Furthermore, just like the code that is shipped, test code has a maintenance life of its own. Good design, use of design patterns, and refactoring are just as valuable for test code as they are for shipping code, since someone is going to be modifying or enhancing it somewhere down the line.

For these reasons, the Microsoft.com team (as well as other teams across the company) has a virtual team of engineers focused on test automation with the goal of “increasing test automation efficacy without introducing too much process overhead”. “Increasing test automation efficacy” is extremely broad, so we’ve translated that vision into more specific objectives:

  • Introduce a more planned approach to developing test automation
  • Increase the Return on Investment of test automation
  • Increase the quality of test automation design and code
  • Promote sharing of test automation best practices
  • Increase awareness of test automation code that is available and reusable
  • Decrease maintenance costs of test automation code
  • Ensure test automation plans are comprehensive and cover more than just functional testing (e.g., performance, security)

In order to achieve these objectives, the team’s first priority was to develop a test automation review process that would facilitate the way we create test automation code. What we came up with is a process that includes two major milestones: a test automation design review and a test automation code review.

The section below is a Test Automation Design Review Template that is filled out by the tester designing the automation and then submitted to a review team. It helps the tester cover all the bases of a good design and document their intent, so that the reviewer can easily understand the automation design and provide feedback accordingly. I hope these documents will be helpful in your own automation review process!

Test Automation Design Review Template

Project Name:
Design Author(s):

<<section guidance>>
The purpose of this template is two-fold: first, to get you thinking strategically about your automation design for a new project or component and second, to standardize the documentation approach so it is easier for others to review. It is meant to be filled out before you start writing your automation and prior to asking others to review your design. It contains a list of template sections that will help you structure your automation design and address different aspects of it. Please note that this is only meant to spur high-level thinking about the automation design and in no way should replace the rigorous level of detail that goes into identifying specific test cases and execution scenarios. Since the purpose of filling out this document is primarily for the design review, it is not necessarily expected to be a living document that is kept up-to-date at all times.

The text enclosed in the “section guidance” tags is meant to give you guidance around understanding the purpose of each template section and also assist you in filling it out. The section guidance snippets can be deleted once the section has been completed.
<<end of section guidance>>

1. Test Projects

Questions to answer:

What projects will be created?
What is the intent of each project?

<<section guidance>>
Definition: A standard method of categorizing test code into different projects, which gives the code structure and makes each project's purpose clear.

Required Projects:
(More detail is provided in Test Code Layout below)
1. [ProjectName]Tests - contains test methods (no shared or common libraries)
2. [ProjectName]TestLibrary - contains test library code (no test methods)

Optional Projects:

This is a list of projects that have been created for current or past releases. It is used to organize similar pieces of code that need to be separate from the two required projects above.

1. Console app
Purpose - to allow someone to quickly re-run the same test over and over (call the test from the Main.cs file's Main() routine, then simply hit F5). This also creates a sandbox where the user can experiment and modify code temporarily without having to check out any test files or libraries. (A minimal sketch of such a Main.cs appears at the end of this guidance.)

Creation - A console app is added to the solution, with one simple Main.cs file. This file is checked into source control as "[Main.cs]". Each new enlistment copies it locally to a new file called Main.cs, which is kept local and never checked in. The console app project is set as the "Default Startup Project".

2. Web proxy library
Purpose - to abstract out complex code and code otherwise unrelated to the other projects. This allows for better organization and clear boundaries around which code does what.

3. WebPage Library or API
Purpose - to create a programmatic interface for testing a set of webpages that encapsulates common operations frequently used in a number of test cases. Because the WebPage library wraps this functionality, a breaking change only needs to be fixed in one place instead of in every test case that executes the same sequence of steps.
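
As an illustration of the console app described above, here is a minimal sketch of a local Main.cs (the test class and method names are hypothetical examples, not part of the template):

// Main.cs - local copy only; never checked into source control.
using System;

namespace ProjectNameTestRunner
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            // Hypothetical example: call one test method from the
            // [ProjectName]Tests project directly, then hit F5 to
            // re-run it over and over while experimenting.
            var tests = new SearchPageTests();
            tests.VerifySearchReturnsResults();

            Console.WriteLine("Done. Press any key to exit.");
            Console.ReadKey();
        }
    }
}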
<<end of section guidance>> 

2. Test Code Layout

Questions to answer:

How will projects be structured in source?

<<section guidance>>
Definition: A uniform organization scheme that allows a user to quickly identify what code belongs where and how to find it. This applies to test code (code automating test cases), test libraries (common code shared by test cases), and other forms of code.

NOTE: The level of detail here will obviously vary according to what is known up front, but the more detail one can include here the better.

General guidelines:
1. Reusable test code is in a separate project from test cases
2. A folder should only have similar items in it
3. Files should be granular enough that multiple people working on the project will not need to check out the same file at the same time.
4. This scheme should be used by all projects so that users have a common framework and do not have to learn a new structure with each new project.

Folder structure:
Main directory just for this solution, preferably off $[Project Name]\Main\Test\
Folders: One per project (See below)
Files: One or more solution files, and a ReadMe.Txt or ReadMe.Docx (containing any special instructions for layout, setting up, or using the automation).
Sample Project Layout:
[ProjectName]Tests.csproj
- [Feature1Folder]
- - [Feature1][MethodGroupName1]Tests.cs
- - [Feature1][MethodGroupName2]Tests.cs
- - [Feature1]DataDrivenTests.xls
[ProjectName]TestLibrary.csproj
- Settings.cs or App.config
- [Feature1Folder]
- - [Feature1][MethodGroupName1]Lib.cs
- - [Feature1][MethodGroupName2]Lib.cs
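
Where Settings.cs appears in the layout above, a minimal sketch of what it might contain follows (the key names are hypothetical examples); the idea is one central place for configurable values, which also serves the performance best practices later in this template:

// Settings.cs - central access point for configurable values.
using System.Configuration; // requires a reference to System.Configuration

public static class Settings
{
    // Reads from the <appSettings> section of App.config;
    // the key names are hypothetical examples.
    public static string ConnectionString
    {
        get { return ConfigurationManager.AppSettings["ConnectionString"]; }
    }

    public static int SqlCommandTimeoutSeconds
    {
        get { return int.Parse(ConfigurationManager.AppSettings["SqlCommandTimeoutSeconds"]); }
    }
}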
<<end of section guidance>>

3. Automation Architecture

<<section guidance>>
Include an architecture diagram of your automation here.
Note: required for all major releases.
<<end of section guidance>>

4. Designing for Code Reuse

Questions to answer:

What existing code do you plan to leverage in your design?
What reusable functionality do you plan to contribute and how will this be shared?
What reusable methods do you plan to add to higher-level test libraries (refer to section guidance)?
How do you plan to structure your Project Test Library (one layer, two layers etc.)?

<<section guidance>>
There are four main levels of test code abstraction that facilitate reuse:

  1. Test Methods
  2. Project Test Library
  3. Customer/Adopter Test Libraries
  4. Shared Test Libraries

Test Methods
These are the individually implemented test cases. Any duplicate or copied code should instead be refactored and moved up into the test library.

Project Test Library
The test library is used for our internal testing. There can be multiple levels of abstraction within the test library itself, especially in UI automation, where there is a logical layer and a physical layer. Functionality that would be useful outside of the internal testing process (e.g., a customer could use it to run tests) should be moved up to the next level of code abstraction, Customer/Adopter Test Libraries.

Customer/Adopter Test Libraries
The Customer/Adopter Test Libraries can be used by other people to quickly and easily access functionality in the product from a test automation perspective. These libraries should be scrutinized in a somewhat more rigorous manner since the intent is to give them away externally. It is also beneficial to have some accompanying documentation.

Shared Test Libraries
Shared Test Libraries are automation libraries that are system- and project-agnostic. There should be no dependencies in these DLLs other than the .NET Framework. Any library code that is generic enough to apply to multiple projects/scenarios should be added here; common functionality such as SQL helper objects and Event Log checking is a prime candidate for this level of abstraction. (A sketch of how the layers relate follows below.)
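
As a minimal sketch of how these layers relate (all class, method, and event source names below are hypothetical examples):

// Shared Test Library - system- and project-agnostic; depends only on
// the .NET Framework.
using System.Diagnostics;

public static class EventLogHelper
{
    public static bool ContainsError(string source)
    {
        using (var log = new EventLog("Application"))
        {
            foreach (EventLogEntry entry in log.Entries)
            {
                if (entry.Source == source &&
                    entry.EntryType == EventLogEntryType.Error)
                {
                    return true;
                }
            }
        }
        return false;
    }
}

// Project Test Library - product-specific operations shared by test cases.
public class SearchPage
{
    public void SubmitQuery(string query) { /* drive the page here */ }
}

// Test Method - an individual test case living in the [ProjectName]Tests
// project (requires Microsoft.VisualStudio.TestTools.UnitTesting); any
// duplicated code gets refactored upward into the libraries above.
[TestClass]
public class SearchTests
{
    [TestMethod]
    public void SearchDoesNotLogErrors()
    {
        new SearchPage().SubmitQuery("contoso");
        Assert.IsFalse(EventLogHelper.ContainsError("MyProductEventSource"));
    }
}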
<<end of section guidance>>

5. Security Design Considerations

Questions to answer:

How will your design accommodate the security testing necessary for this project? (If none, please explain why…)

<<section guidance>>
The purpose of this section is to document any special design considerations in your automation that are related to testing the security of the product. This section also contains tips about security-related precautions we need to take in order to keep our testing environment secure.

Secure Testing Environment
Personally Identifiable Information (PII) - there are times when production data must be used for testing purposes; for example, a test might require a realistic, production-like data distribution. In cases such as this, we need to sanitize the data before importing it into our test environment (see the sketch below).
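
As a hedged sketch of what such a sanitization step might look like (the table, column, and class names are hypothetical examples; adapt to your own schema):

// Overwrite identifying columns with generated values before the data
// is imported into the test environment.
using System.Data.SqlClient;

public static class PiiSanitizer
{
    public static void SanitizeCustomerEmails(string connectionString)
    {
        const string sql =
            "UPDATE Customers " +
            "SET Email = 'user' + CAST(CustomerId AS varchar(10)) + '@example.com'";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}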
<<end of section guidance>>

6. Performance Design Considerations

Questions to answer:

How will your design support performance testing?

<<section guidance>>
Reusing Functional Tests for Performance Testing

  • Displaying Exception Information

    1. We typically do functional testing from the VSTS IDE, which automatically displays information about any exceptions that occur during execution. However, we often do performance testing from the VS Command Prompt, where this information is not automatically displayed. So that the performance tester can see this information without having to break into a debugger, create a new test method specifically for performance testing (preferably prefixed with a naming convention that identifies it as such) that wraps the existing functional test method. The body of the new method should contain nothing but a call to the functional test method inside a ‘try’ block; the ‘catch’ block should write the exception to the console (so the perf tester knows what went wrong) and then re-throw it. (A sketch appears at the end of this list.)
  • Performance Measurement & Logging

    1. In performance testing we are usually interested only in the actual execution time of the unit under test. Overhead such as test method setup, creation of test data, and verification of expected versus actual data/state/behavior should not be included in the measurement, since that would corrupt the results. If a particular measurement must instead include some of these steps, the code should be written so that the performance tester can easily change where timing begins and ends. (See the second sketch at the end of this list.)

    2. Extensive performance logging requirements might call for a tool other than VSTS for customized logging. One suggestion is log4net, which supports multiple outputs such as database, file, and console. We may implement a wrapper for it in the future to enable asynchronous logging. Whatever tool is used, there should be a configuration switch to turn logging on or off.

  • SQL Server

    1. Tables used for testing, whether functional or performance, should have the appropriate indexes. Although the performance measurement results shouldn’t be affected by, for instance, the time it takes to get test data out of a table, indexing can speed up overall execution time and therefore allow us to run tests with more load if necessary.
  • Other Best Practices (Append to the list whenever new information is known)

    1. This should be self-evident, so take it as a reminder: document your code with comments that include troubleshooting information. This helps the performance tester with known ‘gotchas’ and other things that may come up.

    2. Have one central place to edit configurable values like connection strings, SqlCommandTimeOut, and I/O paths.

    3. Be aware that test methods that use data source attributes are usually so inefficient that they cannot be reused as load tests; in other words, they are too slow to generate enough load/RPS.

    4. For data-driven tests, consider buffering data at the client prior to the test run. This avoids the overhead of going back and forth between the client and the data source while performance testing. (See the second sketch at the end of this list.)

    5. If the functional tests call ASPX pages, instrument the pages to have QueryString parameters that can be fed test data for specific scenarios during performance runs.
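
As referenced under ‘Displaying Exception Information’ above, here is a minimal sketch of a performance wrapper method (the ‘Perf_’ prefix and the method names are hypothetical examples of such a naming convention):

// Lives in the same test class as the functional test method; requires
// using System; and Microsoft.VisualStudio.TestTools.UnitTesting.
[TestMethod]
public void Perf_VerifySearchReturnsResults()
{
    try
    {
        // Reuse the existing functional test as-is.
        VerifySearchReturnsResults();
    }
    catch (Exception ex)
    {
        // Surface the failure on the VS Command Prompt, then re-throw
        // so the test still fails.
        Console.WriteLine(ex);
        throw;
    }
}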
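
And, per ‘Performance Measurement & Logging’ and the buffering tip above, a sketch of keeping timing boundaries explicit and buffering data before measurement (the Order, TestDataRepository, and OrderProcessor types are hypothetical):

// Requires using System.Collections.Generic; for List<T>.
[TestMethod]
public void Perf_ProcessOrders()
{
    // Buffer test data up front so data access is not measured.
    List<Order> orders = TestDataRepository.LoadOrders();

    // Time only the unit under test; moving these two lines is all a
    // performance tester needs to do to change where timing begins and ends.
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    OrderProcessor.ProcessAll(orders);
    stopwatch.Stop();

    Console.WriteLine("Elapsed: {0} ms", stopwatch.ElapsedMilliseconds);
}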

<<end of section guidance>>

7. Process Checklist

Steps (mark each upon completion)

[  ] Prepare automation design document using template
[  ] Automation design reviewed by manager
[  ] Automation design reviewed by review team
[  ] All action items from design reviews completed

8. Tracking Info

Date | Name | Document Action (Drafted, Updated, Reviewed, etc.)

About the Authors:

Devin A. Rychetnik is currently working as a Software Development Engineer in Test II for the Windows Marketplace for Mobile team. In addition to testing, his nine years of experience in software include development, project management, and security. He is currently finishing a master's degree in software development from the Open University of England and is a certified Six Sigma Green Belt and Project Management Professional (PMP).

Other notable contributors:

Venkat B. Iyer; Francois Burianek; Syed Sohail; Ajay Jha; Jim Lakshmipathy; Mansoor Mannan; Viet Pham; Vijaykumar Ramanujam; Sachin Joshi