by Munjal Budhabhatti
Summary: This article demonstrates how test-driven development and continuous integration address the unique challenges encountered when creating Windows Mobile applications.
Current State of Mobile Development: Issues and Challenges
Globally, the number of mobile phone subscribers is approximately 2.5 billion and is expected to grow to 4 billion by 2010. Thanks to this exponential growth and widespread usage, the mobile device is now a richer platform for application delivery. The critical factor, as always, is the end-user experience: application usability, reliability, and performance.
Complicating matters, the software development world is moving from weekly and monthly deployment cycles to continuous deployment. So how can one ensure that a user always has the best experience?
Many who have explored the agile space will be familiar with two of the core extreme programming practices: driving development with automated tests, a style introduced by Kent Beck called test-driven development; and frequently integrating builds, a practice called continuous integration, as articulated by Matthew Foemmel and Martin Fowler.
These practices are not new to the software world. However, mobile application development has lagged in taking advantage of the test-driven development and continuous integration practices embraced by the enterprise software community. This is partly a result of limited or unavailable mobile platform support in existing toolsets such as NUnit/MSTest or CruiseControl.NET/Team Foundation Server.
A few mobile testing tools allow recording user interactions through a graphical representation of the client device but do not provide granular control over the tests. Other tools either demand scripting on the mobile device itself or expect tests to be executed manually on the device. As a result, mobile application testing is inefficient and complex, hindering productivity.
Test-driven development (TDD) is an evolutionary approach in which development is driven by first writing an automated test case, then writing the code to fulfill the test, and finally refactoring.
Red/Green/Refactor—the TDD mantra—prescribes the order of the tasks: first write a failing test (red), then write just enough code to make it pass (green), and finally improve the design without changing the behavior (refactor).
Figure 1. Sample test cases
Figure 2. Sample code
Figure 3. Sample refactoring
This technique is thus the reverse of traditional programming, in which code is developed first and a test is written afterward, to be executed either manually or automatically. Why embrace such a change, especially when it might seem like extra work? In reality, test-driven development is risk-averse programming: it invests work in the near term to avoid failures (and even more work) in the long term. Kent Beck has called it "a way of managing fear during programming."
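Since the article's own C# listings (Figures 1 through 3) are not reproduced here, the red/green/refactor cycle can be sketched in miniature. This is an illustrative Python example, not the article's code; the `discounted` function and its test are invented for the sketch:

```python
import unittest

# Red: specify the behavior with a test written before any code exists.
# Running the suite at this point fails, because `discounted` is missing.
class DiscountTest(unittest.TestCase):
    def test_quarter_discount(self):
        self.assertEqual(discounted(200, 0.25), 150)

# Green: write just enough code to make the test pass.
def discounted(price, rate):
    """Return the price after applying a fractional discount rate."""
    return price * (1 - rate)

# Refactor: with the test green, names and structure can be improved
# freely; the test guards against accidentally changing the behavior.
```

The test doubles as a specification: it states, before the code exists, exactly what the feature must do.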
“Test-driven development is a way of managing fear during programming—fear not in a bad way, but fear in the legitimate sense. If pain is nature’s way of saying ‘Stop!’ then fear is nature’s way of saying ‘Be careful.’”—Kent Beck
Automated unit test execution is one of the vital requirements of TDD. However, because mobile testing tools are still evolving, automated execution is not currently viable in mobile application development. Implementing TDD in this environment is therefore quite challenging, if not impossible.
Testing is customarily thought of as a methodical process of proving the existence or absence of faults in a system. When a test case is written before the code, the test case becomes a specification of the feature, instead of a mere verification.
Tests are also a way of documenting found defects. Let's assume a defect was discovered in quality assurance while testing newly deployed bits. Even if the defect is trivial to fix, TDD demands a test case: first write a test that reproduces the failing behavior, and then write the code to make it pass. This practice ensures that defects, no matter how petty, do not creep back into the system, and regression testing becomes part of the test suite. Executing the automated tests locally, before committing changes to the source control repository (SCR), further reduces the broken-builds phenomenon.
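For instance, suppose QA reports that line totals show floating-point noise. This is a hypothetical defect invented for illustration, and the sketch is in Python rather than the article's C#; under TDD the fix starts with a test that reproduces the report:

```python
# Fix for a (hypothetical) defect reported by QA: line totals showed
# floating-point noise, e.g. 0.30000000000000004 for 3 items at 0.10.
def line_total(unit_price, quantity):
    return round(unit_price * quantity, 2)  # the fix: round to cents

# Written first, per TDD: a test that reproduces the reported failure.
# It stays in the suite, so the defect cannot resurface unnoticed in a
# later build.
def test_line_total_has_no_floating_point_noise():
    assert line_total(0.10, 3) == 0.30

test_line_total_has_no_floating_point_noise()
```

Because the reproducing test remains in the suite, any later change that reintroduces the defect breaks the build immediately instead of reaching QA again.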
It is important to prepare the test environment on an emulator as close to the target hardware as possible. Developing and testing Windows Mobile applications on an x86 emulator makes little sense when targeting hardware built exclusively on the ARM architecture, an architecture dominant in low-power electronics. Furthermore, in the real world, components often have dependencies on other objects, databases, or network connections. It is very easy to fall into the trap of assuming that these dependencies work flawlessly. Hence, if tests are written without taking dependencies into consideration, they can give incorrect feedback, failing because of dependency problems rather than defects in the code under test.
One way to safeguard against dependencies is to build the object graph or set up the database in a required state before executing the test case. This would solve the issue but would increase test execution time and build time.
A more elegant approach is to instantiate test objects and replace object dependencies with mocks or stubs—objects that imitate the behavior of real objects. This ensures isolated test execution and hence reliable test results. Caution should be taken when faking real objects with mocks or stubs: it is quite possible for the entire unit test suite to execute faultlessly while the product still fails in quality assurance testing. I have found in my own experience that complementing mocks and stubs with integration tests provides a true sense of confidence.
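The idea can be illustrated with the mocking facilities of a unit testing library. This Python sketch uses the standard library's unittest.mock; the `SubscriberReport` class and its repository dependency are hypothetical stand-ins for a component that would normally hit a database:

```python
from unittest.mock import Mock

class SubscriberReport:
    """Unit under test: depends on a repository (database access)."""
    def __init__(self, repository):
        self.repository = repository

    def active_count(self):
        return sum(1 for s in self.repository.all_subscribers()
                   if s["active"])

# Replace the real repository (database, network) with a mock so the
# test runs in isolation: fast, repeatable, and unaffected by
# dependency failures.
repo = Mock()
repo.all_subscribers.return_value = [
    {"name": "a", "active": True},
    {"name": "b", "active": False},
    {"name": "c", "active": True},
]

report = SubscriberReport(repo)
assert report.active_count() == 2
repo.all_subscribers.assert_called_once()
```

A failing assertion here points at the report logic itself, never at a flaky database, which is exactly the isolation the paragraph above calls for.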
Martin Fowler has described continuous integration (CI) as a software development practice of frequently integrating builds, often multiple times a day. A typical CI workflow, as shown in Figure 4, would be:
Figure 4. Continuous integration in .Net Windows Mobile application
All the essential files required to build a product reside in the source code repository (SCR), which plays an important role in the software development life cycle and in CI. SCR tools such as Subversion and Visual Studio Team Foundation source control enable teams to work collaboratively on the same or different artifacts simultaneously, track code changes effortlessly, and work on different versions of files concurrently.
The CI server obtains the latest source code from the SCR, locates all required dependencies, and builds the product independently of the previous build output. The SCR also makes the team more productive: a new team member does not need to reconfigure third-party libraries, project structures, or IDE settings for the project. Moreover, it reduces debugging time by allowing the team to back out the current changes, which are small and incremental when following TDD practices, and safely revert the system to a previous version of the code.
It is vital to include all dependencies in the SCR: the Windows Mobile SDK, the .Net Compact Framework installer, Virtual Machine Network Services drivers, and other third-party components and utilities.
“Continuous integration is a software development practice where members of a team integrate their work frequently—usually each person integrates at least daily—leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.”—Martin Fowler
Contrary to some misconceptions, the automated build of a mobile application is much more than simple code compilation. It assembles source code from the SCR, compiles the code to create binaries and dependencies (such as configuration objects, resource assemblies, and so forth), inspects and deploys the compiled binaries to a mobile device or an emulator, loads database schemas, and executes tests remotely on the mobile device.
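The staged build described above can be modeled as a small pipeline that runs each step in order and stops at the first failure, which is roughly what a CI server does. This Python sketch uses invented stage names; real stages would shell out to MSBuild, the deployment tool, and the remote test runner:

```python
def run_pipeline(stages):
    """Run named build stages in order; stop at the first failure.

    `stages` is a list of (name, callable) pairs where each callable
    returns True on success. Returns (succeeded, log_of_stages_run).
    """
    log = []
    for name, step in stages:
        log.append(name)
        if not step():
            return False, log
    return True, log

# Illustrative stages mirroring the article's build cycle. A failing
# test breaks the build, so later stages never run.
ok, log = run_pipeline([
    ("compile", lambda: True),
    ("deploy",  lambda: True),
    ("test",    lambda: False),  # simulated test failure
    ("inspect", lambda: True),   # never reached
])
assert (ok, log) == (False, ["compile", "deploy", "test"])
```

Stopping at the first failing stage is what gives the team fast, unambiguous feedback about where the build broke.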
Build tools such as NAnt and MSBuild do not support mobile application deployment, partly because mobile device support in these tools is still immature. Similarly, unit testing tools such as NUnit and MSTest cannot execute tests remotely on a mobile device. To overcome these limitations, a new tool is needed.
To address these problems, I created wMobinium.net, a tool that assists TDD and CI implementation in .Net mobile applications. wMobinium.net is a unit testing tool that supports automated deployment and automated remote unit test execution, and it includes a Visual Studio add-in to support TDD in .Net mobile applications. The tool is freely available on the CodePlex open source Web site (see Resources).
Unlike desktop application development, mobile application development faces unique deployment challenges: first, a deployment is required to unit test features on the device; and second, a deployment is necessary after a successful build to deliver the working solution to a staging environment for quality assurance testing.
For every build, the newly compiled binaries and dependencies must be copied to a program folder on the device. When an application requires testing on multiple devices, the deployment process becomes even more complicated. To circumvent these problems, wMobinium.net offers a deployment tool that uses Windows CE's Remote Application Programming Interface (RAPI) to facilitate file and folder deployments, relieving the pain of manual deployment.
One of the keys to a successful CI implementation is frequent commits and hence frequent builds. With longer commit intervals, team members tend to work in isolation, and the build is more prone to integration issues. A team member who spends a long time on a feature and then encounters integration problems while committing the code tends to be reluctant to discard the changes and revert to a previous version, and this reluctance can increase the time and resources spent on debugging. Ideally, team members check in code every 30-60 minutes or less. The interval can stretch to a few hours, but should always be less than a day.
Once a developer commits code to the SCR, waiting for feedback from CI slows down development; longer waits mean decreased productivity. Furthermore, some build subprocesses, such as deployment and testing, are executed on a device or emulator, making them inherently slower than they would be on a desktop.
To reduce build times, concentrate on the weakest link: the component that takes the longest to execute. More often than not, the cause is an external dependency, such as a database or other objects. Accessing a database and setting up test data for each test case is a resource-intensive operation. As mentioned earlier, mocks or stubs should do the trick. Where that is impractical, move the test cases to secondary or nightly builds—scheduled builds that execute at night when most resources are idle. Test cases targeting scenarios on multiple devices should be moved to secondary or nightly builds as well.
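One way to realize this split is to tag slow, dependency-heavy test cases and select them only for the nightly build. A hedged Python sketch; the category names and test names are invented for illustration:

```python
def select_tests(tests, build):
    """Choose which tagged tests run in a given build.

    `tests` maps test name -> category ("fast", "database",
    "multidevice"). Commit builds run only fast tests for quick
    feedback; nightly builds run everything.
    """
    slow = {"database", "multidevice"}
    if build == "nightly":
        return sorted(tests)
    return sorted(name for name, cat in tests.items()
                  if cat not in slow)

suite = {
    "test_login": "fast",
    "test_sync_orders": "database",      # hits a real database
    "test_ui_all_models": "multidevice", # needs several emulators
}
assert select_tests(suite, "commit") == ["test_login"]
assert select_tests(suite, "nightly") == [
    "test_login", "test_sync_orders", "test_ui_all_models"]
```

The commit build stays fast enough to give feedback within minutes, while the expensive scenarios still run every night.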
Failing builds cause the most frustration; it is as if the entire chain of software churning has stopped. The code in the SCR is no longer reliable, and the team is blocked from getting the latest source code, so the issue must be resolved quickly by fixing the build. If a failing test case is the cause and fixing the underlying problem will take a long time, it is safe to ignore the test case temporarily so that the build succeeds. However, it is vital to track these ignored test cases on an easily accessible project wall, whether a physical whiteboard or a virtual board in collaborative software, and to fix them later and add them back to the test suite.
It is quite common to see the same defects resurface after a few builds; we often hear a quality assurance analyst say, "but this defect was already fixed in a previous build." Such boomerang defects reveal the importance of writing a test for each encountered defect before modifying the code base. Once the defect is fixed and its test passes, the entire test suite must be executed and must pass before the modified code is checked in to the SCR.
I have been on a few projects where the team executes the test suite manually on a mobile device. Imagine a mobile application developer who spends a few minutes changing functionality but twice that time testing it manually. This not only discourages the developer but also hurts productivity.
wMobinium.net resolves this annoyance by automating the entire unit testing workflow. Unlike traditional unit testing tools, wMobinium.net presents the test case selection on the desktop, executes the tests on the device, and displays the results on the desktop. It takes care of complications such as the following:
Remote execution of test cases
To execute test cases remotely on a mobile device or emulator, the tool serializes the metadata of the selected test cases, starts a conduit process, and executes the tests on the device. To report correctly to the CI server, the remote process must be started synchronously and monitored continuously, which is quite a challenge.
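The first part of that handshake, packaging the selected test cases so the on-device runner knows what to execute, can be sketched as a simple serialized message. This Python sketch is illustrative only; the message fields and names are hypothetical, not wMobinium.net's actual wire format:

```python
import json

def serialize_selection(assembly, test_names):
    """Desktop side: pack the selected tests into one message."""
    return json.dumps({"assembly": assembly,
                       "tests": sorted(test_names)})

def parse_selection(payload):
    """Device side: recover which assembly and tests to execute."""
    msg = json.loads(payload)
    return msg["assembly"], msg["tests"]

payload = serialize_selection("Orders.Tests.dll",
                              ["TestCheckout", "TestLogin"])
assembly, tests = parse_selection(payload)
assert assembly == "Orders.Tests.dll"
assert tests == ["TestCheckout", "TestLogin"]
```

The conduit process on the device would receive such a payload, load the named assembly, and run exactly the selected tests.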
Serializing test results to desktop
Because the .Net Compact Framework version 2.0 does not support remoting, the device must communicate with the desktop using sockets. Events must be serialized, sent to the desktop through a socket, deserialized, and propagated to the appropriate event listeners.
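Without remoting, each test event has to be framed and pushed through a raw socket by hand. The following Python sketch shows one common approach, length-prefixed JSON framing, over a local socket pair standing in for the device-to-desktop link; the event fields are invented, and the real tool works against .Net sockets rather than Python's:

```python
import json
import socket
import struct

def send_event(sock, event):
    """Serialize a test event and send it with a 4-byte length prefix."""
    body = json.dumps(event).encode("utf-8")
    sock.sendall(struct.pack(">I", len(body)) + body)

def recv_event(sock):
    """Read one length-prefixed event and deserialize it."""
    size = struct.unpack(">I", _recv_exact(sock, 4))[0]
    return json.loads(_recv_exact(sock, size).decode("utf-8"))

def _recv_exact(sock, n):
    # TCP recv may return partial data; loop until n bytes arrive.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        data += chunk
    return data

# Device and desktop ends, simulated with a connected socket pair.
device, desktop = socket.socketpair()
send_event(device, {"test": "TestLogin", "result": "passed"})
assert recv_event(desktop) == {"test": "TestLogin", "result": "passed"}
```

On the desktop, each deserialized event would then be dispatched to the registered listeners, which update the test-result display and the CI report.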
The tools described here enable the CI server to continuously build, deploy, and test a .Net mobile application. It would be convenient if unit testing were also available as an integrated tool within Visual Studio.
The wMobinium.net add-in, a Visual Studio add-in (Figure 5), is part of the wMobinium.net toolset. Once the add-in is activated, all the available tests in the solution are displayed in the tool window. A typical workflow would be:
Figure 5. wMobinium.net add-in
Stakeholders and project sponsors always favor reliable outcomes, clear communication, project visibility, and superior software quality. Software development, however, seldom delivers these qualities without the right processes and practices.
When everything seems to be going well, a single defect can suddenly jeopardize the development schedule. During "big-bang" integrations especially, even small issues, such as missing configuration entries, an out-of-sync database, or missing dependencies, can be extremely detrimental when encountered together.
Continuous integration enables faster feedback. At every change—adding new features or modifying existing ones, no matter how big or small—the CI server integrates the new parts, which pass through the entire automated build cycle: compilation, testing, inspection, and deployment. This provides visibility into the progress of the project, enhances the quality of the software, and builds team morale.
CI does not provide these benefits out of the box. It is entirely possible to implement CI without including automated tests or inspection in the build process; however, such a setup is the least beneficial. Many, including me, consider that CI without testing is not CI at all.
From the developer's perspective, TDD and CI work the same in a traditional desktop application as in a mobile application: create a new failing automated test case, write the code to pass the test, and refactor the code without changing its intent. The CI server polls for the latest source code, creates new binaries, executes the tests, and generates feedback. In a mobile application, however, the implementation differs in the remote execution of test cases, the notification of test results, and build deployments. These complexities are handled by the wMobinium.net tool.
At one of the biggest microfinance organizations in Africa, my team and I developed a .Net mobile application. In the absence of supporting tools, implementing TDD and CI was arduous, but it improved the overall application design, reliability, and performance.
With the release of .Net Compact Framework version 2.0, the performance of .Net mobile applications has improved radically. The newer version provides improved developer productivity, greater compatibility with the full .Net Framework, and better support for device debugging. Combining the .Net Compact Framework with TDD and CI (using wMobinium.net) brings greater benefits to an organization and takes the mobile application platform to the next level.
With the proliferation of new mobile devices today, the mobile application is becoming a crucial part of a broader enterprise product offering. It is pragmatic, more than ever, to bring mobile application development out from isolation and include it in enterprise-wide test-driven development and continuous integration efforts.
Munjal Budhabhatti is a senior solution developer at ThoughtWorks. He possesses over 10 years of experience in designing large-scale enterprise applications and has implemented innovative solutions for some of the largest microfinance, insurance and financial organizations in Africa, Asia, Europe, and North America. He spends most of his time writing well-designed enterprise applications using agile processes.
This article was published in the Architecture Journal, a print and online publication produced by Microsoft. For more articles from this publication, please visit the Architecture Journal Web site.