Troubleshooting in Test Edition
When you are testing your code, certain conditions can generate errors or warnings, or even cause testing to fail. This topic describes some of those conditions and steps that you can take to resolve them.
Conditions that can prevent tests from running usually can be traced to a failure to deploy the test file or other files that are required for the test to run.
Remote tests. For remote tests, communication problems with the remote computer can also be at fault. These and other errors at the test level and run level are described in Troubleshooting Test Execution.
ASP.NET unit tests. If your ASP.NET unit test is running in the IIS process, you can choose to have the ASP.NET process run as a non-default user, that is, with a different process identity, for security purposes. In this case, test execution can fail. For more information, see Unit Tests for ASP.NET Web Services.
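As a rough illustration, an ASP.NET unit test that runs in the server process is typically marked with hosting attributes like the following. This is a hedged sketch, not taken from this topic: the URL, site path, and member names are placeholders.

```csharp
using System.Web.UI;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class WebPageTests
{
    // Supplied by the test framework at run time.
    public TestContext TestContext { get; set; }

    [TestMethod]
    [HostType("ASP.NET")]                                   // run inside the ASP.NET process
    [UrlToTest("http://localhost/MyWebSite/Default.aspx")]  // placeholder URL
    public void PageLoads()
    {
        // RequestedPage is available only when the test is ASP.NET-hosted.
        Page page = TestContext.RequestedPage;
        Assert.IsNotNull(page);
    }
}
```

If the worker process runs under a non-default identity, that identity must have permission to load the test assembly and reach the site, or test execution can fail as described above.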
Deploying files together with your tests. Deployment errors are frequently displayed on the Test Run Details page and not on the Test Results Details page of the individual test that failed. Therefore, it might not be obvious why an individual test has failed. For more information, see the section "Troubleshooting Test Deployment" in Test Deployment Overview.
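For example, a test that depends on an external file typically declares it with the DeploymentItem attribute; if the attribute is missing or the path is wrong, the failure surfaces as a deployment error. The file and class names in this sketch are placeholders.

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DeploymentTests
{
    [TestMethod]
    [DeploymentItem("TestData.xml")]  // placeholder file, copied to the deployment directory
    public void ReadsDeployedFile()
    {
        // If deployment failed, the error appears on the Test Run Details page,
        // and this assertion fails because the file was never copied.
        Assert.IsTrue(File.Exists("TestData.xml"));
    }
}
```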
Troubleshooting Web tests. Various errors can occur when you run Web tests. They might be caused by a missing data binding attribute, problems with security settings, or an attempt to access a Web site outside your firewall. For more information, see Troubleshooting Web Tests.
Troubleshooting load tests. Various errors can occur when you run load tests. They might be caused by problems with the load test database, the counters set on your load test, an incorrectly configured rig, or one of the tests that is contained in the load test. For more information, see Troubleshooting Load Tests.
Troubleshooting data-driven unit tests. You might encounter connection, authentication, deployment, or other problems when you run data-driven unit tests. Use the information in Troubleshooting Data-Driven Unit Tests to solve those problems.
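As one sketch of where such problems originate, a CSV-backed data-driven unit test wires its data through the DataSource attribute; connection and deployment failures usually trace back to this attribute or to the data file itself. The file and column names here are placeholders.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DataDrivenTests
{
    public TestContext TestContext { get; set; }

    [TestMethod]
    [DeploymentItem("Data.csv")]  // the data file must also be deployed
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                "|DataDirectory|\\Data.csv", "Data#csv",
                DataAccessMethod.Sequential)]
    public void SumColumnIsCorrect()
    {
        // The test runs once per data row.
        int a = Convert.ToInt32(TestContext.DataRow["a"]);
        int b = Convert.ToInt32(TestContext.DataRow["b"]);
        Assert.AreEqual(Convert.ToInt32(TestContext.DataRow["sum"]), a + b);
    }
}
```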
When you run unit tests, you are testing code in a binary. You can gather code coverage information when these tests run by instrumenting that binary; see How to: Obtain Code Coverage Data. Instrumentation adds code to the binary that generates code coverage information.
If the binary you are testing is a strong-named assembly, the code modification caused by instrumentation invalidates its signature. Therefore, Visual Studio automatically tries to re-sign the assembly immediately after the instrumentation step. For more information about strong-named assemblies, see Strong-Named Assemblies.
Various conditions can cause this re-signing to fail. For information about how to work around these conditions, see Instrumenting and Re-Signing Assemblies.
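As a sketch of the manual equivalent, instrumenting and then re-signing an assembly from a Visual Studio command prompt looks roughly like the following. The assembly and key-file names are placeholders, and this assumes you have access to the original private key pair.

```
rem Instrument the assembly for code coverage
vsinstr /coverage MyAssembly.dll
rem Re-sign the instrumented assembly with the original key pair
sn -R MyAssembly.dll MyKeyPair.snk
```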
If you are running VSPerfMon.exe while simultaneously running tests for which you are collecting code-coverage data, the following events will occur:
If you are running VSPerfMon.exe with the TRACE or SAMPLE option, the test run that is executing at the same time will fail, and an error is reported on the Test Run Details page.
If you are running VSPerfMon.exe with the COVERAGE option, the VSPerfMon.exe process is stopped.
In both cases, the workaround is to avoid running VSPerfMon.exe at the same time that you run tests for which you are collecting code-coverage data. For more information about the VSPerfMon.exe tool, see VSPerfMon.
When might this happen?
The most common cases in which VSPerfMon.exe is running are the following:
You have started a profiling session, possibly in an instance of Visual Studio other than the instance in which you are running tests.
You are collecting code-coverage or profiling data either by running VSPerfMon.exe directly or, as is more common, by using the wrapper VSPerfCmd.exe.
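For reference, a typical VSPerfCmd.exe coverage session, which itself starts the VSPerfMon.exe monitor, looks roughly like the following sketch; the output file name is a placeholder.

```
rem Start the monitor in coverage mode (this launches VSPerfMon.exe)
vsperfcmd /start:coverage /output:MyRun.coverage
rem ... run the instrumented application here ...
rem Shut the monitor down and write the .coverage file
vsperfcmd /shutdown
```

While a session like this is open, a test run that also collects code-coverage data will conflict with it as described above.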
If you have requested that code coverage data be gathered for your tests but it does not appear, or it displays differently than you expect, one of the situations described here might apply:
No code coverage data appears. During test execution, certain binaries, such as COM DLLs, are loaded from their original location and not from the test deployment directory. Such binaries must be instrumented in place; otherwise, although test execution succeeds and no run-level warning is generated, code-coverage data is not collected. For more information, see Choosing the Instrumentation Folder.
Code coverage highlighting does not appear. When you run tests, collect code coverage data, and then view test results, Visual Studio indicates code that was tested in the test run by highlighting the code in its source-code file. You can choose the colors that indicate which code was covered, not covered, and partially covered. If some or all of this highlighting does not appear, make sure that the chosen colors differ from the background color of your source-code file. For more information about choosing colors for highlighting, see the section "Changing the Display of Code Coverage Data" in How to: Obtain Code Coverage Data.
Code coverage data does not merge correctly. You can merge results that include one or more ASP.NET test runs, but the Code Coverage Results window displays ASP.NET data under Merged Results in distinct nodes, instead of in a single, merged node. For more information, see Working with Merged Code Coverage Data.
Not all merged code coverage data is displayed. After you have merged code coverage data, you can export it to disk as an XML file. If you re-import this file and then merge it with additional data, not all statistics are displayed. For more information, see Working with Merged Code Coverage Data.
Code coverage data does not import. Visual Studio must be able to locate certain files on disk in order to import code coverage data. For more information, see Working with Merged Code Coverage Data.
Instrumented binaries are overwritten. You are trying to collect code coverage data from a program that you are running during a manual test. If you use CTRL+F5 to start that program, the CTRL+F5 action causes the program's binary to be rebuilt. This overwrites the instrumented binary, which means that no code coverage data can be gathered.
For general information about collecting code coverage data, see How to: Obtain Code Coverage Data.
When you add a new test method, background processing adds it to the Test View window and the Test List Editor so that you can see it immediately. If you have many test methods in a single test class, or in the entire project, this automatic processing can cause a performance issue each time that you add a new test method to the test class.
If you are experiencing this performance issue, there are three possible solutions:
You can split your test class into partial classes and divide your test methods between the partial classes. This reduces the number of methods in a single test class and improves performance when you add a test method.
You can create a new test project, move some of the test classes into the new test project, and then remove them from the original test project. This reduces the number of test methods in one assembly and improves performance.
You can turn off the background processing that adds test methods to the Test View window and the Test List Editor. This improves performance when a test method is added. However, when this option is set, the test method does not appear in the Test View window or the Test List Editor until you compile the class that contains the test method and then click Refresh in the Test View window or Test List Editor.
When this option is set, the new test methods will be discovered at compile time. This will increase the total time to compile the solution.
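The first option above, splitting a test class into partial classes, can be sketched as follows; the class and method names are placeholders.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// File: MathTests.Part1.cs
// The [TestClass] attribute is needed on only one part of the class.
[TestClass]
public partial class MathTests
{
    [TestMethod]
    public void AdditionTest()
    {
        Assert.AreEqual(4, 2 + 2);
    }
}

// File: MathTests.Part2.cs
public partial class MathTests
{
    [TestMethod]
    public void SubtractionTest()
    {
        Assert.AreEqual(0, 2 - 2);
    }
}
```

The compiler merges the parts into a single test class, so the tests run exactly as before; only the per-file method count that the background processing handles is reduced.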
To disable automatic discovery of test methods
On the Tools menu, click Options.
The Options dialog box is displayed.
Expand Test Tools in the left pane and then click Test Project.
To disable automatic discovery of test methods, select Disable Background Discovery of Test Methods.
When this option is set, any test method that you add to a test class does not appear in the Test View window or the Test List Editor until you compile the class that contains the test method.