Larry Brader and Alan Cameron Wills
Web applications are typically updated or extended every few weeks (or even days) in reaction to shifting business needs and customer feedback. To support continuous development, every aspect of the development cycle must be more efficient and lightweight than in traditional development processes. For example, it’s in the nature of software that even the smallest change requires everything to be retested, but repeating a full suite of manual tests every few days is impossible. This means that tests eventually must be automated, even if they begin as manual explorations. In this article we show how Microsoft Visual Studio 2012 supports testing in a continuous development environment.
Few software projects come to an end when the software is deployed and running. Feedback is obtained from the stakeholders, improvements or extensions are planned, and eventually a new version is released, whereupon the cycle starts again. This process, known as the DevOps cycle, is illustrated in Figure 1.
Figure 1 The Continuous Development Cycle
In a traditional software project, each cycle can take years. The first release typically is rich in features, delivered on a DVD and installed on users’ local machines. By contrast, in a modern Web application, the first release might be minimal, but extensions and improvements are released every few weeks (or even days).
For example, the managers of a social-networking site might try a new feature for a week, during which they monitor how customers use it. At the end of the trial period, they adjust the details of how the feature works.
In this much more rapid cycle, it’s important for development teams to think about the DevOps cycle as a whole process. In particular, they need to make each activity in the cycle more efficient, removing any roadblocks that hinder progress around the loop.
The improvements in the testing process we discuss here are all aimed at reducing the time it takes to cycle through the DevOps loop. In particular, these new tools and techniques aim to reduce the bottleneck that testing has traditionally caused in this loop.
The essential role of testing is to demonstrate that user stories and other requirements have been implemented. The most effective way to do this is to run the application manually and exercise each story just as the end users will. An experienced tester applies a variety of strategies to expose bugs and explores all the variants and edge cases of the application’s behavior.
When application code changes, it’s prudent to retest everything that might depend on it. Dependencies in software typically form a complex weave, and bugs notoriously turn up in features that seem unrelated to the focus of the update. For this reason, traditional development teams don’t like changing any component that has been written and tested. As noted, the slightest change demands a full retest, so if you test everything manually, this requires a great deal of effort and resources.
In contrast, the short cycles inherent in continuous development require that each part of the software be frequently revisited as its functionality is improved and extended. It would be impossible to manually retest every feature every few days. Short cycles require automated tests, which replace the manual tests with program code. You can run coded tests quickly and as often as you like.
Should continuous development teams code all their tests and abandon manual testing altogether? That approach has sometimes been suggested, but in practice, there’s an efficient compromise, which works as follows.
Test new and substantially changed stories manually. When each test passes consistently, create automated versions of the tests for that story. In consequence, although the total number of tests gradually increases as the product expands, the load of manual tests remains constant because manual testing is something you do only for relatively new features.
In fact, the bulk of coded tests in a typical continuous development project are unit tests, which are written along with the application code and test individual components inside the software, rather than testing the behavior of the whole application. Unit testing is a powerful tool for maintaining stability as the code base is updated.
Figure 2 shows a gradual transition from manual to automated tests over time, together with the expansion of unit tests along with the application code. The diagram is an ideal picture; in practice, most teams automate only some proportion of their manual tests. Visual Studio 2012 (and other versions) also provides for partial automation, which can be used to speed up tests without writing code.
Figure 2 Ideal Transition from Manual to Automated Tests over Time
Let’s see how testing is supported by Visual Studio 2012 and its associated products, Visual Studio Team Foundation Server (TFS) and Microsoft Test Manager (MTM).
If your specialty is testing the whole application, you’ll be more interested in the support provided by MTM. If you’re a developer, you might take more interest in the support for automated testing in Visual Studio 2012. However, continuous development demands a closer relationship between these two roles, and some teams dispense with the distinction altogether. Therefore, the Visual Studio 2012 tools are designed to integrate the different styles of testing, and they support a broad spectrum of testing practices, from the more traditional approaches through continuous development.
Automated testing includes all types of tests that are defined by writing or generating program code. You create automated tests in Visual Studio 2012, where you initially run them for debugging.
When the test—and the application code that it tests—are correct, you check in the test, along with the application code. From the source code repository, it’s picked up by the build service and run regularly according to your team’s build definitions.
Unit and Integration Tests
Unit testing is one of the most effective ways of maintaining a bug-free codebase through successive changes in an application.
A unit test is a method that tests a method, class or larger component of your application in isolation from other parts, external systems and resources. In practice, developers often write integration tests—that is, tests written in a similar way to unit tests but which might depend on external databases, Web sites or other resources. Either way, these tests use the same tools and infrastructure.
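As an illustration, here’s the shape of a unit test written with the default VSTest (MSTest) framework; the ShoppingCart class is a made-up unit under test, not part of any real application:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// A made-up unit under test.
public class ShoppingCart
{
    private readonly List<decimal> prices = new List<decimal>();
    public void Add(decimal price) { prices.Add(price); }
    public decimal Total { get { return prices.Sum(); } }
}

[TestClass]
public class ShoppingCartTests
{
    [TestMethod]
    public void Total_SumsThePricesOfAllItems()
    {
        // Arrange: a cart with two items.
        var cart = new ShoppingCart();
        cart.Add(2.50m);
        cart.Add(1.25m);

        // Assert: the total is the sum of the item prices.
        Assert.AreEqual(3.75m, cart.Total);
    }
}
```

Tests written with NUnit or xUnit look much the same, with those frameworks’ own attribute and assertion names.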
In Visual Studio 2012, you can write tests that use any of several test frameworks, such as NUnit, xUnit and the default VSTest. When you have coded tests in any of these frameworks, you simply open the Test Explorer window and choose Run All. The test results are summarized in the window.
Background testing is an option that efficiently runs your tests in the background every time you build your solution. The tests affected by your changes are performed first. This means that as you work, you can constantly see which tests are passing or failing.
Isolate Units by Using Fakes
True unit testing means disconnecting the unit under test from the code on which it’s dependent. This has a number of advantages. If your unit is being developed or updated at the same time as other units on which it’s dependent, you can test it without waiting for the others to be complete. If you restructure the application to use this unit in a different way, or in a different application, the tests go with it and don’t need to change.
Visual Studio 2012 provides two mechanisms, collectively called fakes, for disconnecting a unit from its dependencies. Calls from your unit to methods outside its boundary can be handled by small pieces of code that you provide. For example, you can define a shim that intercepts calls to any external method such as DateTime.Now. Because it always receives the same response from the shim, your unit will demonstrate the same behavior every time it’s invoked. You can also define stubs, which provide placeholder implementations of the interfaces and virtual methods that your unit consumes.
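As a sketch of the shim mechanism: assuming you’ve generated a Fakes assembly for System.dll (which produces the System.Fakes namespace), a test can pin DateTime.Now to a fixed value inside a ShimsContext. The AgeCalculator class here is hypothetical:

```csharp
using System;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical unit under test: its result depends on the current date.
public class AgeCalculator
{
    public int AgeInYears(DateTime birthDate)
    {
        return DateTime.Now.Year - birthDate.Year;
    }
}

[TestClass]
public class AgeCalculatorTests
{
    [TestMethod]
    public void AgeInYears_IsStableUnderAShimmedClock()
    {
        using (ShimsContext.Create())
        {
            // The shim intercepts every call to DateTime.Now, so the
            // test gives the same answer on whatever day it runs.
            System.Fakes.ShimDateTime.NowGet = () => new DateTime(2012, 6, 1);

            var calculator = new AgeCalculator();
            Assert.AreEqual(32, calculator.AgeInYears(new DateTime(1980, 1, 1)));
        }
    }
}
```

Without the shim, this test would pass or fail depending on the date it happened to run; with it, the unit is isolated from the system clock.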
Performance and Load Tests
Visual Studio 2012 Ultimate provides specific test facilities for performance and stress testing. An application can be instrumented and driven so as to measure its performance under specified loads. Web applications can be driven with multiple requests, simulating many users.
Coded UI Tests
Coded UI tests let you run your application and generate code that drives its UI. Visual Studio 2012 includes specialized tools for creating and editing coded UI tests, and you can also edit and add to the code yourself. For example, you might create a simple procedure to buy something at a Web site and then edit the code to add a loop that buys many items.
Coded UI tests are particularly useful where there’s validation or other logic in the UI—in a Web page, for example. You can use them either as unit tests for the UI or as integration tests for the whole application.
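The code the recorder generates is ordinary C# that you can edit. This sketch shows the general shape of such a test; the UIMap class and its actions stand in for whatever your own recording produces, and the URL is a placeholder:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class BuyItemTests
{
    [TestMethod]
    public void BuySeveralFlavors()
    {
        // Launch the site under test; the URL is a placeholder.
        BrowserWindow browser = BrowserWindow.Launch(
            new Uri("http://localhost/IceCreamShop"));

        // The recorded steps live in a generated UIMap class; this
        // hand-edited loop repeats the purchase for several flavors.
        var uiMap = new UIMap();
        foreach (string flavor in new[] { "Vanilla", "Pistachio", "Mango" })
        {
            uiMap.AddFlavorToCartParams.FlavorName = flavor;
            uiMap.AddFlavorToCart();
        }
        uiMap.CheckOut();
        uiMap.AssertOrderConfirmed();
    }
}
```

The loop is the kind of hand edit mentioned above: the recorder produces one pass through the scenario, and a few lines of code turn it into many.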
Manual testing can be planned or exploratory. You perform manual tests with the help of MTM. Tests are normally performed on versions of your application that have been built from checked-in code.
Typically, manual tests are linked to user stories (or product backlog items or other requirements), and the results of the tests are displayed in reports on the project dashboard. This means that everyone can quickly see which stories have been successfully implemented.
Exploratory Testing
Exploratory testing simply means running the application to try it out. However, why do you need MTM to help you run it?
MTM can record your actions, comments and screenshots while you work. If you decide to create a bug report, all this information is automatically added to it, making it unnecessary for you to add a precise description of how to reproduce the bug. Figure 3 shows an example of the MTM exploratory testing window alongside a Web application that’s being tested.
Figure 3 Recording a Screenshot and Making Notes in the Exploratory Testing Window
MTM can also instrument the application itself, both in the client and the server, and record event data that can be used to debug the application. This data is automatically attached to your bug report.
When the bug is fixed, you’ll want to repeat the steps you took in the exploration to verify the fix. To help with this, you can generate a test case from the exploratory session, in which you include the relevant steps.
Planned Testing with Test Cases
Test cases are manual tests that you define as a series of steps the tester should perform. Figure 4 shows the steps defined in a test case.
Figure 4 Defining Steps and Expected Results in a Test Case
Test cases provide a great way to clarify what the users need. At the start of the sprint, when you’re discussing stories or requirements with the users and other stakeholders, you can use the steps as a precise example of what the users will be able to do by the end of the sprint. Each test case is just one instance of the requirement, and so each requirement is usually associated with more than one test case. For example, if the requirement is to be able to buy ice cream, one test case will detail the steps to buy a particular flavor. You would create another test case to describe buying a mixture of flavors. The guiding principle of the discussion with stakeholders should be: “When you can successfully perform these test cases, then we’ll consider the story to be implemented.”
In TFS, both the stories or requirements and the test cases are represented by work items. You can link them together so the progress of a requirement can be tracked by the results of the tests.
When you run a test case, the steps are displayed at the side of the screen. You check off each step while you run the application. At the end, you check off whether the test has passed or failed.
Just as with exploratory testing, your actions, comments, screenshots and application data are recorded so you can create a detailed bug report very quickly.
A great advantage of using steps is that they help anyone repeat the test reliably, even if they aren’t familiar with the application. When a test is repeated, you can be confident that whether it passes or fails, it isn’t simply because the test was run differently from the last time.
You can also generate a planned test case from an exploratory session. Doing this helps ensure that you always run the test using the same actions.
Automating a substantial portion of your manual tests is essential to minimize the time taken by testing in the DevOps cycle. Visual Studio 2012 supports this automation in several ways.
Record/Playback
You can rerun a test case semiautomatically. On the second and subsequent times you run a test, MTM replays the keystrokes and gestures that you used on the first run. All you have to do is verify that the results you see conform to the expectations detailed in the steps.
Playback makes manual testing quicker and more reliable. It also makes it possible to distribute the testing load among colleagues who might not be completely familiar with the application.
Even if you don’t fully automate a test, rapid and reliable playback helps reduce the DevOps cycle time. This feature does not require Visual Studio 2012 to be installed and does not involve writing code.
Generating Coded UI Tests
You can generate a fully automated coded UI test from a recorded manual test case run. The generated code performs the same actions as the manual test. By using a special editor in Visual Studio 2012, you can also extend the test to verify the results and generalize it to repeat the test for different input data. Figure 5 shows the special editor.
Figure 5 Editing UI Actions in Visual Studio 2012
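One way to generalize a generated test for different input data is to bind the test method to a data source. In this sketch, Flavors.csv and its Flavor column are assumptions, and the UIMap actions again stand in for whatever a recording produces:

```csharp
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class DataDrivenBuyTests
{
    // The test framework sets this property and exposes the current data row.
    public TestContext TestContext { get; set; }

    [TestMethod]
    [DeploymentItem("Flavors.csv")]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
        "|DataDirectory|\\Flavors.csv", "Flavors#csv",
        DataAccessMethod.Sequential)]
    public void BuyEachFlavorInTheDataFile()
    {
        // The test method runs once per row; each row supplies one flavor.
        string flavor = TestContext.DataRow["Flavor"].ToString();

        var uiMap = new UIMap();
        uiMap.AddFlavorToCartParams.FlavorName = flavor;
        uiMap.AddFlavorToCart();
        uiMap.CheckOut();
    }
}
```

Adding a row to the CSV file then adds a test iteration without touching the code.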
Linking Test Cases to Test Methods
You can link a test case to any test method, even if it hasn’t been generated from your test run. The result of running the test will be reported as if you had run the manual steps. Typically you would link the test case to an integration test that performs the same actions as the manual test but drives the business logic directly, rather than using the UI.
This approach has the benefit that changes in the UI layout don’t invalidate the test. It’s also useful when the development team has already created a suitable integration test.
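Here is a sketch of what such a linked integration test might look like. OrderService and Order are hypothetical business-layer types standing in for your application’s logic:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical business-layer types that the Web UI would normally call.
public class Order
{
    public bool IsConfirmed { get; set; }
    public string Flavor { get; set; }
}

public class OrderService
{
    public Order PlaceOrder(int customerId, string flavor)
    {
        // Real logic would validate the customer and record the order.
        return new Order { IsConfirmed = true, Flavor = flavor };
    }
}

[TestClass]
public class BuyIceCreamIntegrationTests
{
    // In MTM, associate this method with the "buy one flavor" test case;
    // its pass/fail result then reports against the linked user story.
    [TestMethod]
    public void BuyingOneFlavorCreatesAConfirmedOrder()
    {
        var service = new OrderService();
        Order order = service.PlaceOrder(customerId: 42, flavor: "Vanilla");

        Assert.IsTrue(order.IsConfirmed);
        Assert.AreEqual("Vanilla", order.Flavor);
    }
}
```

Because the test exercises the business logic directly, a cosmetic change to the Web page layout can’t break it.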
When you test an application, the first thing you need is a machine to run it on. In fact, most applications today need several machines. For a realistic test environment, you might, for example, need to install a Web server, a database server and a client browser all on separate computers. Figure 6 illustrates such an environment.
Figure 6 A Sample Lab Environment for Testing a Sales Web Site
In addition to the basic installation, you’ll also want to install agents that can collect the event data that was mentioned earlier in the “Exploratory Testing” section.
In MTM a feature named Lab Center makes all of this straightforward. Lab Center lets you define lab environments. A lab environment is a set of machines that will be used as a group for test purposes.
In addition to handling the assignment of the machines (so you don’t accidentally use a machine that’s running someone else’s tests), Lab Center also installs the necessary test agents. Lab Center provides a console where you can quickly log in to any of the machines in an environment.
Lab Center is also good at creating and managing virtual machines (VMs). You can create a virtual environment, install the relevant platform software and then store a library copy of it that you can use whenever you want to test your application. All you have to do is reinstantiate a clean copy of the environment and install the new versions of the application’s components. You can also automate this deployment process.
Using Lab Center—and, in particular, taking advantage of its facility with VMs—can significantly decrease lab setup time compared to more traditional approaches to maintaining a lab. Lab Center is a substantial contributor to reducing the time spent in each DevOps cycle.
Automated tests are initially performed in Visual Studio 2012 on a developer’s computer. After the code has been checked in to the source repository, there are a number of ways in which tests on the integrated code can be run by the build service.
Periodic Builds
The build service compiles the code and runs the tests. You can create build definitions to specify which tests should be run, and you can specify when they should run. For example, you might run a core set of tests on a continuous basis and run a more extensive set every night.
Build results can be viewed in Visual Studio 2012 and are also available from your project’s TFS Web service. E-mails can notify you of failures.
Lab Deployment
As we previously described, you can assign a group of lab machines to a test by using Lab Center. By defining a lab build, you can automate this process. When your build is triggered—for example, when code is checked in, or at a particular time of day—the build starts by compiling all the application and test code. If this is successful, a lab environment is assigned, and if it’s a virtual environment, it can be set back to a fresh state. Your application components are then deployed to the correct machines, and the tests are installed on the designated client machine from where they drive the application.
The tests can be automated tests of any kind, but typically you use this type of build to perform large integration tests or tests of the whole application.
If your tests are linked to test cases, the results will be recorded against the related user stories or requirements and displayed in the project progress reports.
TFS provides a number of charts and tables that show the progress of a project. You can view them either individually or in project dashboards. Among the reports are several related to testing.
For example, the User Story Test Status report, illustrated in Figure 7, shows the list of stories you’re working on in the current sprint. In addition to the development work performed for each story, the chart shows the success or failure of its associated tests. If failures were discovered while running the tests, the resulting bug reports are also linked to the requirements.
Figure 7 User Story Test Status Report
The results on the chart come from both the most recently run manual tests and the automated test runs.
The Test Plan Progress report, illustrated in Figure 8, shows how many test cases were created for the current sprint and how many have been run.
Figure 8 Test Plan Progress Report for a Sprint
Some teams like to create test cases at the start of each sprint, as a target for the team. All the tests should be green at the end of the sprint.
You now have some familiarity with the range of the Visual Studio 2012 testing features, from unit tests to whole-application manual tests.
The DevOps cycle views development as just one half of a process that also incorporates feedback from operations. Depending on the context, the DevOps cycle will be short or long. If you’re developing a nuclear power station, hopefully you’ll go around the loop very slowly. If you’re running a Web application, you might go around it every few days. Slow and fast cycles are equally valid and appropriate for different types of systems. Both incorporate the need for testing, to different degrees. If the fast cycle of continuous development is appropriate for your project, it’s important to reduce the time it takes to perform every action in the loop. You’ll probably also make less distinction between the roles of developer and tester than in projects that work at a more measured pace.
The tools available in Visual Studio 2012 can substantially reduce the amount of time it takes to test your application.
Figure 9 shows the progression from exploratory tests to automated tests.
Figure 9 Test Progression
We’ve given you an overview of how Visual Studio 2012 fits in with the DevOps cycle. You should now understand why you need a streamlined approach to testing and what tools Visual Studio 2012 offers to help you fulfill this goal. If you want more information, read “Testing for Continuous Delivery with Visual Studio 2012 RC” in the MSDN Library at bit.ly/KHdOq4. It’s an in-depth guide that covers every aspect of the testing infrastructure provided by Visual Studio 2012. Related articles include “Verifying Code by Using Unit Tests” at bit.ly/dz5U3m and “Testing the Application” at bit.ly/NbJ01v.
Larry Brader has been a senior tester on the Microsoft patterns & practices team for the past several years. Before that he worked as a developer and tester on military and medical technologies.
Alan Cameron Wills is a programming writer in the Microsoft Developer Division. In previous lives he has been a developer, a software architect and a consultant in development methods.
Thanks to the following technical experts for reviewing this article: Howie Hilliker, Katrina Lyon-Smith, Peter Provost and Rohit Sharma