Analyzing Test Results Using the Test Perspective in the Analysis Services Database for Visual Studio ALM

By using the Test perspective in the SQL Server Analysis Services cube for Visual Studio Team Foundation Server, you can view just the measures, dimensions, and attributes that pertain to reporting on test results and test runs. For example, you can use these measures to determine the overall quality of each build, the tests that a particular build affected, and the number of test cases that were run. You can also answer questions about changes in result outcomes.

The Test measure group is based on the Test Results relational table, which enables reporting on test results as either a property of the tests or an independent outcome. For more information, see Test Result Tables.

(Illustration: Test measure group)

By using the Test perspective, you can create reports that answer the following questions:

Status reports:

  • What is the status of testing of specific user stories or product areas?

  • What is the quality of builds based on the number of failed and passed tests?

  • How many test cases have never been run?

  • Which test cases have never been run?

Trend reports:

  • How many tests are blocked, passing, or failing over time?

  • How many tests are regressing?

  • How consistent is the manual test activity over time?

Note
If your data warehouse for Visual Studio Application Lifecycle Management (ALM) is using SQL Server Enterprise Edition, the list of cubes will include Team System and a set of perspectives. The perspectives provide a focused view of the data so that you do not have to scroll through all of the dimensions and measure groups in the whole Team System cube.

To use many Test measures and dimension attributes, the test team must publish the test results to the data store for Team Foundation Server. For more information, see Required Activities for Managing Tests and Builds later in this topic.

In this topic

  • Example: Progress Report for Testing User Stories

  • Test Measures

  • Dimensions and Attributes in the Test Perspective that Support Filtering and Categorization

    • Build, Build Flavor, and Build Platform Dimensions

    • Test Case, Test Configuration, Test Plan, and Test Suite Dimensions

    • Test Result Dimension

    • Test Run Dimension

    • Work Item and Work Item Linked Dimensions

  • Required Activities for Managing Tests and Builds

Example: Progress Report for Testing User Stories

By using PivotTable and PivotChart reports in Excel, you can create a status report that shows the test progress on user stories, similar to the report in the following illustration.

(Illustration: User Story Test Status Excel report)

The process templates for Microsoft Solutions Framework (MSF) v5.0 include the User Story Test Status report and the Requirement Test Status report in Excel. For more information, see User Story Test Status Excel Report (Agile) and Requirement Test Status Excel Report (CMMI).


Specifying and Filtering Pivot Fields

(Illustration: Pivot fields for the user stories test progress report)

By performing the following steps, you can create a progress report for testing user stories:

  1. In Excel, connect to the Analysis Services cube for Team Foundation Server, and then insert a PivotChart report.

    For more information, see Create a Report in Microsoft Excel for Visual Studio ALM.

  2. Right-click the chart, click Change Chart Type, click Area, and then click Stacked Bar.

  3. For each report filter, right-click each of the following fields, specify the hierarchies or elements of interest, and then drag the field to the Report Filter area.

    • Team Project Hierarchy from the Team Project dimension

    • Area Path from the Team Project dimension

    • Iteration Path from the Test Case dimension

    • Work Item Type from the Work Item Linked dimension

      Specify user story, requirement, or another work item type that has linked test cases on which you want to report.

  4. Drag the Point Count Trend field from under the Test measure group to the Values area.

  5. Drag the Outcome field from under the Test Result dimension to the Column Labels area.
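
If you prefer to script this query instead of building it interactively in Excel, the following Python sketch asks roughly the same question by sending an MDX statement to the cube over XMLA (HTTP). This is a minimal sketch, not a supported sample: the endpoint URL, the catalog name, the authentication, and the hierarchy unique names inside the MDX statement (including the assumed Title attribute of the Work Item Linked dimension) are placeholders that you must replace with the names that the cube browser shows for your deployment. The Point Count Trend measure, the Outcome attribute, the Work Item Type attribute, and the Team System cube are the ones described in this topic.

```python
import xml.etree.ElementTree as ET

import requests

XMLA_ENDPOINT = "http://your-analysis-server/olap/msmdpump.dll"  # placeholder URL

# Roughly the same question as the Excel pivot: Point Count Trend by Outcome,
# broken down by the linked work items and filtered to user stories.
MDX = """
SELECT
  { [Measures].[Point Count Trend] } ON COLUMNS,
  NON EMPTY
    [Work Item Linked].[Title].[Title].MEMBERS *
    [Test Result].[Outcome].[Outcome].MEMBERS ON ROWS
FROM [Team System]
WHERE ( [Work Item Linked].[Work Item Type].&[User Story] )
"""

# XMLA Execute request (SOAP). Built by concatenation so that the braces in the
# MDX statement are not interpreted as Python format fields.
SOAP_BODY = (
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap:Body>"
    '<Execute xmlns="urn:schemas-microsoft-com:xml-analysis">'
    "<Command><Statement><![CDATA[" + MDX + "]]></Statement></Command>"
    "<Properties><PropertyList>"
    "<Catalog>Tfs_Analysis</Catalog>"  # placeholder catalog name
    "<Format>Multidimensional</Format>"
    "</PropertyList></Properties>"
    "</Execute></soap:Body></soap:Envelope>"
)


def main() -> None:
    response = requests.post(
        XMLA_ENDPOINT,
        data=SOAP_BODY.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
        auth=("DOMAIN\\user", "password"),  # replace with the auth your endpoint uses
        timeout=60,
    )
    response.raise_for_status()

    # Print only the cell values from the returned CellSet; mapping each cell back
    # to its row and column members is left out to keep the sketch short.
    ns = "{urn:schemas-microsoft-com:xml-analysis:mddataset}"
    root = ET.fromstring(response.content)
    for cell in root.iter(ns + "Cell"):
        value = cell.find(ns + "Value")
        print(value.text if value is not None else None)


if __name__ == "__main__":
    main()
```

The same statement can be adapted to other measures that this topic describes, such as Result Count or Result Transition Count.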


Test Measures

The Test measure group includes the following measures. You can analyze test results by the aggregate of test results and their outcomes for a particular build, or by the changed outcome for a test result.

  • Build Result Count Trend: Counts the most recent version of each result in a particular build.

    For an example of a report that uses this measure, see Build Quality Excel Report.

  • Point Count Trend: Counts the most recent version of each test result in a particular build. If a test is run multiple times against a build, the Point Count Trend counts the most recent result for that test using that build. If a test case is not included in the build, the test case is counted as "Never Run."

    Use this measure to determine which tests or how many tests are failing in the current build.

  • Result Count: Counts the most recent version of each test result. Use this measure when you want to determine the overall volume of testing.

    For an example of a report that uses this measure, see Build Quality Indicators Report.

  • Result Transition Count: Counts all the results whose outcome changed in a particular build. Use this measure when you want to determine which tests were affected by a particular build.

  • Test Case Count: Number of test cases. Use this measure when you want to determine how many test cases were run for a particular test run or build.
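
The difference between these measures is easiest to see on a handful of raw result rows. The following Python sketch uses pandas to approximate how Point Count Trend and Result Transition Count aggregate results for one build. The DataFrame and its column names are made up for illustration only; they are not the warehouse schema, and the cube computes the measures itself.

```python
import pandas as pd

results = pd.DataFrame(
    [
        # test_case_id, build_name, attempt, outcome
        (101, "Nightly_20110705.1", 1, "Failed"),
        (101, "Nightly_20110705.1", 2, "Passed"),  # test 101 was re-run against this build
        (102, "Nightly_20110705.1", 1, "Passed"),
        (101, "Nightly_20110706.1", 1, "Passed"),
        (102, "Nightly_20110706.1", 1, "Failed"),  # outcome changed from the previous result
    ],
    columns=["test_case_id", "build_name", "attempt", "outcome"],
)
all_test_cases = {101, 102, 103}  # test case 103 has never been run

build = "Nightly_20110706.1"

# Point Count Trend: the most recent result of each test case against one build;
# test cases with no result for that build show up as "Never Run".
latest_per_case = (
    results[results["build_name"] == build]
    .sort_values("attempt")
    .groupby("test_case_id")
    .tail(1)
)
point_count_trend = latest_per_case["outcome"].value_counts().to_dict()
point_count_trend["Never Run"] = len(all_test_cases - set(latest_per_case["test_case_id"]))
print("Point Count Trend:", point_count_trend)  # e.g. {'Passed': 1, 'Failed': 1, 'Never Run': 1}

# Result Transition Count: results in this build whose outcome differs from the
# previous result that was recorded for the same test case.
ordered = results.sort_values(["test_case_id", "build_name", "attempt"])
ordered["previous_outcome"] = ordered.groupby("test_case_id")["outcome"].shift()
transitions = ordered[
    (ordered["build_name"] == build) & (ordered["outcome"] != ordered["previous_outcome"])
]
print("Result Transition Count:", len(transitions))  # 1: test 102 went from Passed to Failed
```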

Dimensions and Attributes in the Test Perspective that Support Filtering and Categorization

By using the attributes that this section describes, you can aggregate a measure, filter a report, or specify a report axis. These attributes are in addition to the Team Project and Date shared dimensions that Working with Shared Dimensions describes.

In this section

  • Build, Build Flavor, and Build Platform Dimensions

  • Test Case, Test Configuration, Test Plan, and Test Suite Dimensions

  • Test Result Dimension

  • Test Run Dimension

  • Work Item and Work Item Linked Dimensions


Build, Build Flavor, and Build Platform Dimensions

You can filter test reports based on build definition, build flavor, or build platform by using the following attributes.

Build dimension:

  • Build Definition Name: The name that is assigned to the build definition for which a build was executed.

    For an example of a report that uses this attribute, see Build Quality Excel Report.

  • Build ID: The number that is assigned to the build. Each time that a particular build definition is run, the Build ID is incremented by 1.

  • Build Name: The name or expression that uniquely identifies a build. For more information, see Work with Build Numbers.

  • Build Start Time: The date and time when the build started.

  • Build Type: The reason why the build was run. Build types are associated with the trigger that was defined for the build. Team Foundation Server supports the following types of build: manual, continuous (triggered by every check-in), rolling (accumulate check-ins until the previous build finishes), gated check-in, and scheduled. For more information, see Specify Build Triggers and Reasons.

  • Drop Location: The drop folder that is defined for the build, specified as a Uniform Resource Locator (URL) that identifies the protocol, the server on which the resource resides, and optionally the path to the resource.

    For more information, see Set Up Drop Folders.

Build Flavor dimension:

  • Build Flavor: (Published test results only) A name that designates the category of a set of completed builds that were published as part of a test run. For example, a build flavor can be used to designate a beta release or a final release. For more information, see Command-Line Options for Publishing Test Results.

Build Platform dimension:

  • Build Platform: The name of the machine platform for which an end-to-end (not desktop) build was made (for example, x86 or Any CPU). For more information, see Define a Build Using the Default Template.
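
You can compose these attributes into an MDX statement in the same way as the user story query earlier in this topic. The following sketch only builds and prints a build-quality query (Outcome across the columns, Build Name down the rows, Build Result Count Trend as the cell value); the hierarchy unique names are assumptions that you should verify in the cube browser, and the statement can be executed with the XMLA sketch shown after the Excel procedure.

```python
# Sketch only: compose a build-quality MDX statement. The hierarchy unique names
# below are illustrative; check the actual names in the cube browser.
MDX_BUILD_QUALITY = """
SELECT
  NON EMPTY [Test Result].[Outcome].[Outcome].MEMBERS ON COLUMNS,
  NON EMPTY [Build].[Build Name].[Build Name].MEMBERS ON ROWS
FROM [Team System]
WHERE ( [Measures].[Build Result Count Trend] )
"""

if __name__ == "__main__":
    print(MDX_BUILD_QUALITY)
```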


Test Case, Test Configuration, Test Plan, and Test Suite Dimensions

The Test Case, Test Configuration, Test Plan, and Test Suite dimensions correspond to how you can organize, configure, automate, and run tests by using Microsoft Test Manager from Visual Studio 2010 Ultimate or Visual Studio Test Professional.

The test case corresponds to a type of work item that the test team uses to define both manual and automated tests that your team can run and manage by using Microsoft Test Manager. A test plan consists of test configurations and test suites. A test configuration defines the software or hardware on which you want to run your tests. A test suite defines a hierarchy within the plan so that you can group test cases together.

The following attributes are defined for these dimensions:

Test Case dimension:

  • Area Hierarchy and more: The Work Item and Test Case dimensions contain all attributes that relate to work items, such as State, Work Item Type, and Work Item ID. For information about the structure of the Test Case dimension, see Analyzing Work Item and Test Case Data Using the Work Item Perspective.

    For a description of each attribute, see Using System Fields and Fields Defined by the MSF Process Templates.

    For information about how to work with date, area, and iteration hierarchies, see Working with Shared Dimensions in the Analysis Services Cube.

    This dimension contains additional attributes when custom fields in the definition for a type of work item specify Dimension as the reportable attribute. For more information about how to use the optional reportable attribute and its values, see Adding and Modifying Work Item Fields to Support Reporting.

Test Configuration dimension:

  • Configuration ID and Configuration Name: The number that the system assigns and the name that is assigned to a test configuration.

Test Plan dimension:

  • Area Hierarchy, Area Path, Iteration Hierarchy, and Iteration Path: The product area and milestone that are assigned to the test plan.

    For more information, see Analyzing Work Item and Test Case Data Using the Work Item Perspective.

  • End Date Hierarchy By Month or By Week and Start Date Hierarchy By Month or By Week: Optional values that a test plan owner can assign to the test plan. They represent the dates on which the test plan should start and finish.

    For more information about how to work with date hierarchies, see Working with Shared Dimensions in the Analysis Services Cube.

  • Test Plan Id and Test Plan Name: The number that the system assigns and the name that the test plan owner assigns.

  • Test Plan Owner: The user name of the test team member who created the test plan or who is currently assigned as its owner.

  • Test Plan ID and State: The system-assigned number and the name of the state of the test plan. For example, Inactive indicates that the test plan is being defined, and Active indicates that the test plan is ready to be reviewed and run.

Test Suite dimension:

  • Test Suite Hierarchy: Provides a mechanism to specify multiple filters based on project collection, team project, and test suite.

  • Suite Path: Corresponds to the hierarchy of test suites that are configured for all team projects in all team project collections.


Test Result Dimension

The Test Result dimension and the following attributes are specific to the test measures in the cube. Before you can report on Failure Type or Resolution State, the test team must populate this information as part of its test activities.

  • Failure Type and Failure Type Id: Correspond to one of the following reasons why a test failed: None, Known Issue, New Issue, or Regression.

    Microsoft Test Manager automatically assigns a number, or ID, to each reason. The test team can, but is not required to, assign a failure type to each failed test.

    Note
    You cannot add to or change the set of failure types.

    For an example of a trend report that shows the outcome of test results based on failure type, see Failure Analysis Excel Report.

  • Outcome and Outcome Id: The outcome of the test (for example, Passed, Failed, or Inconclusive).

    For an example of a trend report that shows the outcome of test plans and test configurations, see Test Plan Progress Report.

  • Readiness State and Readiness State Id: The state of a particular test within a test run. Valid values are Completed, InProgress, None, NotReady, and Ready.

  • Resolution State: (Optional) The name of the resolution with which a tester identified the cause of a failed test. By default, all MSF process templates define the following resolution states: Needs investigation, Test issue, Product issue, and Configuration issue. The test team can, but is not required to, assign a resolution state to each failed test.

    Note
    You cannot change these states or add states after the team project is created. For more information, see Defining Resolution States for Test.

  • Test Result Executed By: The name of the user or other account under which the test was run.

    For an example of a report that uses this attribute, see Test Team Productivity Excel Report.

  • Test Result Owner: The name of the user or other account that is assigned as the owner of the test result. The assignment corresponds to the value that is set by using the tcm /resultowner switch.

  • Test Result Priority: The priority of a particular test within a test run.
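
For a trend such as the Failure Analysis report mentioned above, the breakdown can be approximated on flat result rows. The following pandas sketch assumes a hypothetical extract of failed results with made-up column names (completed_date, failure_type); in the cube itself you would get the same breakdown from the Outcome and Failure Type attributes together with a date hierarchy.

```python
import pandas as pd

# Hypothetical extract of failed results; the column names are made up.
failed = pd.DataFrame(
    [
        ("2011-07-04", "New Issue"),
        ("2011-07-05", "Known Issue"),
        ("2011-07-06", "Regression"),
        ("2011-07-12", "Regression"),
        ("2011-07-13", "New Issue"),
    ],
    columns=["completed_date", "failure_type"],
)
failed["completed_date"] = pd.to_datetime(failed["completed_date"])

# One row per week, one column per failure type, cell = number of failed results.
trend = (
    failed.groupby([pd.Grouper(key="completed_date", freq="W"), "failure_type"])
    .size()
    .unstack(fill_value=0)
)
print(trend)
```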


Test Run Dimension

The following attributes are defined for the Test Run dimension. Many of these attributes correspond to parameters that the test team specifies when it runs and publishes tests. For more information, see tcm: Running Tests from a Test Plan Using the Command Line Utility.

  • Complete Date, Creation Date, and Start Date Hierarchy By Month or By Week: Dates when the test run was completed, created, or started. You can use these attributes to filter or structure a report. For more information, see Working with Shared Dimensions in the Analysis Services Cube.

  • Is Automated: Flag that indicates whether the test run contains one or more automated tests.

    For an example of a report that uses this attribute, see Build Quality Excel Report.

  • Is Build Verification Run: Flag that indicates whether the test run contains build verification tests that check the basic functionality of the build. This flag corresponds to the tcm /buildverification switch.

    For an example of a report that uses this attribute, see Build Quality Excel Report.

  • Test Run Id: The number that the system assigned to the test run.

  • Test Run Owner: Corresponds to the owner who is assigned to the test run that the test team created or published. Corresponds to the tcm /owner switch.

  • Test Run State and Id: Name or number that is assigned to the state of a test run (for example, Aborted, Completed, In Progress, Not Started, or Unknown).

  • Test Run Title: Corresponds to the title that is assigned to the test run that the test team created or published. Corresponds to the tcm /title switch.


Work Item and Work Item Linked Dimensions

You can link test cases to other work items, such as user stories, requirements, and bugs. By using the Work Item Linked dimension, you can create a report that provides test results that relate to the linked work items. The progress report for testing user stories, described earlier in this topic, provides an example of using linked work items.

For a description of each attribute, see Using System Fields and Fields Defined by the MSF Process Templates.

Required Activities for Managing Tests and Builds

To create test reports that contain useful data, team members must perform the following activities to manage builds and tests:

  • Build Activities

    • Configure a build system. To use Team Foundation Build, the team must set up a build system.

      For more information, see Configure Your Build System.

    • Create build definitions. The team must create at least one build definition. The team can create multiple build definitions, each of which can be run to produce code for a different platform. Also, the team can run each build for a different configuration.

      For more information, see Create a Basic Build Definition.

    • (Recommended) Run builds regularly. The team can run builds automatically after every check-in or at intervals that it specifies. By using the schedule trigger, the team can run builds automatically at specified times on specified days.

      For more information, see Specify Build Triggers and Reasons and Run and Monitor Builds.

    For more information, see Team Foundation Build Activities.

  • Test Management Activities

    • Define test cases, test plans, and test configurations. To report on test cases and test plans, the test team must define these items. The test team can also define test suites and assign test cases to test plans.

    • (Optional) Assign product areas and milestones to tests, and track status. The test team can specify the Area and Iteration paths for each test case and test plan, and can track the State of each test case and the Test Plan State of each test plan.

    • (Optional) Link test cases to work items. For example, the test team can monitor the testing progress on each story by using the Tested By link type to link test cases to user stories.

    • (Optional) Mark the results of tests. For manual tests, the test team can mark the results of each validation step in the test case as passed or failed.

      Important

      Testers must mark each validation test step with a status. The overall result for a test reflects the status of all the test steps that were marked. Therefore, the test will have a status of failed if a tester marks any test step as failed or does not mark all steps. (A short code sketch of this rollup rule appears after this list of activities.)

      Each automated test is automatically marked as passed or failed.

    • (Optional) Configure tests to gather code coverage data. For code coverage data to appear in the report, team members must instrument tests to gather that data.

      Important

      To collect data for code coverage, you must install Visual Studio Premium or Visual Studio Ultimate on the build agent machine. For more information, see Create and Work with Build Agents.

      For more information, see How to: Configure Code Coverage Using Test Settings for Automated Tests and How to: Gather Code-Coverage Data with Generic Tests.

    • Define tests to run automatically as part of the build. As part of the build definition, you can define automated tests to run as part of the build and analyze the impact of code changes on your tests.

      For more information, see Define a Build Using the Default Template.

    • Publish tests. As part of the build and test activities, the test team must publish test results to the data store for Team Foundation Server.

      For more information, see Command-Line Options for Publishing Test Results.
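
The following Python sketch restates the rollup rule from the Important note in the list above. The list-of-statuses representation is made up for illustration, and only validation steps are modeled because only those require a status.

```python
# Minimal sketch of the manual-test rollup rule described in the note above:
# a test passes only if every validation step was marked Passed; a Failed or
# unmarked validation step makes the overall result Failed.
from typing import Optional


def overall_outcome(validation_step_statuses: list[Optional[str]]) -> str:
    """Each entry is 'Passed', 'Failed', or None (the tester never marked it)."""
    if all(status == "Passed" for status in validation_step_statuses):
        return "Passed"
    return "Failed"


print(overall_outcome(["Passed", "Passed"]))  # Passed
print(overall_outcome(["Passed", None]))      # Failed: one step was never marked
print(overall_outcome(["Passed", "Failed"]))  # Failed
```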


See Also

Concepts

Analyzing Build Details and Build Coverage Using the Build Perspective

Perspectives and Measure Groups Provided in the Analysis Services Cube for Team System

Other Resources

Use Your Build System to Work with Tests

Change History

  • July 2011: Rewritten for clarity and completeness. Reason: Information enhancement.