Code download available from the MSDN Code Gallery
I don’t have to remind everyone that we’re in the middle of a world-wide economic downturn. When the economy is good, it is hard enough to convince your client to re-build an application from scratch. When the economy is bad, it is close to impossible. As developers, we’re going to see more push from our clients for evolutionary development of applications rather than wholesale replacement. We will be called upon to improve existing codebases, implement new features, and take these projects in initially unforeseen directions.
In this nine-part article series, I will take you on a journey to improve an existing "classic" ASP.NET Web application. By classic, I mean a Web application written in ASP.NET 1.X or 2.0, before the widespread use of AJAX techniques or ASP.NET MVC. In other words, the kind of Web application that you wouldn't be surprised to find running in your corporate data center or somewhere on the Internet: a workhorse of an application that gets the job done, but doesn't necessarily do it in a manner that is maintainable or that improves the productivity of its users.
In choosing a suitable Web application, I had a few requirements:
- The codebase should feel familiar to anyone who has written an ASP.NET Web application.
- The business domain should be easily recognizable and understandable to nearly everyone.
- It should be a real application with publicly available source code, not a contrived example.
Based on these requirements, I chose ScrewTurn Wiki, which is available from http://www.screwturn.eu/.
Let me explain why I chose ScrewTurn Wiki.
I wanted the code to feel familiar so that techniques discussed can be easily transferred to your own codebases. The ScrewTurn Wiki codebase should be familiar to anyone who has ever written an ASP.NET 2.0 Web application—ASPX pages, ASCX controls, master pages, dynamic content stored in a database (or files).
I wanted the code to look like code that any of us could have written last year, last month, or last week. I want to show ways to improve the codebase, not ridicule its inadequacies. I did not select ScrewTurn Wiki because it is desperately in need of improvement. On the contrary, ScrewTurn Wiki is a good codebase overall; it can simply benefit from improvement the way any quality codebase can.
I wanted to choose a business domain that was easily recognizable and understandable to nearly everyone. Shopping carts and eCommerce sites have been used as examples far too frequently. A stock trading application is not a good choice in the current economy. Many other business domains, such as shipping, accounting, and others, simply require too much explanation of core concepts.
With the success of wikis such as Wikipedia.org, every developer should have at least a passing familiarity with the business domain concepts in a wiki—pages, edits, history, diffs, and similar. I hope that this familiarity makes the code easily approachable for a wide developer audience.
I wanted to improve a real Web application, not some contrived example. Additionally, I wanted to find an application with the source code available so that you could follow along as I refactored the codebase (a code download is available from the MSDN Code Gallery). ScrewTurn Wiki has a public Subversion repository and you can get the latest version of the source code from here:
You'll need a Subversion client such as TortoiseSVN, available from http://tortoisesvn.net, to download the latest source.
Note that I will be starting from the ScrewTurn Wiki 3.0 Beta codebase, "The Trunk" (noted above). Most ScrewTurn Wiki sites are currently running ScrewTurn Wiki 2.0. I felt that it would be more valuable and relevant to work with the latest source code rather than a previous version.
Choosing ScrewTurn Wiki for this series has a number of other advantages: it is a popular, actively developed open source project with an established user community. As such, many of you have probably heard of ScrewTurn Wiki even if you haven't looked at the source code.
Another big advantage is that when I contacted ScrewTurn Wiki's author/maintainer, Dario Solera, about using it as the Web application for this article series, he was excited and supportive. When finished with the article series, I will donate all source created back to the ScrewTurn Wiki project.
So what will I be doing to ScrewTurn Wiki in this nine-part series? I will apply a combination of good software engineering practices and integrate new technologies into the existing ScrewTurn Wiki 3.0 codebase. Each article will cover a different type of improvement and can be read alone or as part of the larger series. Topics will include everything from build automation/scripting, testing, and refactoring to HTTP handlers/modules, AJAX, jQuery, and ASP.NET MVC. This article will focus on “Brownfield Basics” or how to get started with improving an existing application.
Most developers have probably heard the term "greenfield development," meaning a brand new project without any existing source code. The project is a clean slate on which you can make your mark as an architect, developer, or designer. It is your own personal playground.
So what is brownfield? Wikipedia defines brownfield as:
...land previously used for industrial purposes, or certain commercial uses, and that may be contaminated by low concentrations of hazardous waste or pollution and has the potential to be reused once it is cleaned up.
In software development terms, a brownfield application is an existing application which is developmentally hindered by poorly implemented practices, but has the potential for improvement. Poor practices might sound overly harsh, but almost every project can improve its software development practices in one way or another. When starting (or continuing) work on an existing application, we should consider the current practices implemented on the project and how they can be improved. I will examine ScrewTurn Wiki in light of some brownfield basics. (For a deeper look at brownfield development, I recommend Brownfield Application Development in .NET by Kyle Baley and Donald Belcham.)
There are some fundamental software engineering principles that every project should adopt, regardless of the particular methodology to which you might subscribe. They are:
- Version control
- Issue tracking
- Automated builds
- Automated testing
In the rest of this article, we will discuss the first three. Automated testing will be the topic of the next article.
All software development projects should use a version control system (VCS) for tracking and coordinating changes to code files and other project artifacts. The size of the project team doesn't matter as even a single developer benefits from using version control. If you are still using ZIP files to track changes to your source, you should reconsider.
A VCS helps a team coordinate changes to source code and other artifacts during development. If a developer tries to commit changes without updating to the latest version in the repository, the VCS will require the developer to update before committing. This will give the developer a chance to resolve any conflicts that may be present. The VCS also prevents one developer from accidentally overwriting the changes made by another developer.
A VCS also acts as a safety net for developers. If a developer makes a change that breaks the application, the change can easily be reverted by rolling back to the last known good state in the repository. Even if the change has been committed to the repository, it is possible to revert the change.
Modern version control systems support branching, merging, and diffing. (For definitions of these and other terms, please take a look at our Glossary.) They can answer questions such as who last changed this code, what changes were made, and which other files were modified at the same time. They prevent developers from accidentally overwriting other changes either via edit/merge or a pessimistic locking (also known as checkout) mechanism. Although it sounds dangerous at first glance, edit/merge causes less friction on a development team and works better in practice than pessimistic locking. In most cases, merges happen automatically because developers are working on different parts of a code file. If developers happen to be working on the same part of the codebase, they should probably be coordinating their efforts somewhere other than the VCS.
There are many modern version control systems available today. For open source projects, the most popular—and free—choice is Subversion (SVN), which is supported by most open source project hosting sites, such as SourceForge, Google Code, and CodePlex. (CodePlex actually uses Microsoft Team Foundation Server for source control, but hosts SvnBridge, which allows Subversion clients to talk to Team Foundation Server.) In corporate environments, teams often run these same systems, such as Subversion or Team Foundation Server, behind the firewall.
The ScrewTurn Wiki project manages its source code using Subversion. If you want to get started with Subversion, I would recommend downloading TortoiseSVN (http://tortoisesvn.net), which is a Windows Explorer shell extension that provides context menus for source control operations. TortoiseSVN will also optionally install the command-line tools, which allow you to use Subversion from cmd.exe or PowerShell. Let's take a quick tour of Subversion and TortoiseSVN (see Figures 2 and 3). First, here is a short glossary of version control terms:
branch: A temporary development line in a VCS repository. Often used while creating a patch, stabilizing a release (while development on the trunk continues), or experimenting with a new (and disruptive) feature.
changeset: Combined set of changes that should be atomically committed to a repository.
commit: Send a changeset to the VCS repository.
diff: The set of differences between two versions of a file.
merge: Integrating multiple changes into a working copy. The changes can be from a different developer working on the trunk or from another branch.
patch: Changeset expressed as a single file that can be electronically sent to another developer.
tag: An important milestone in a VCS repository, such as a publicly released version (alpha, beta, full release, or patch). Similar in concept to labels in VSS.
trunk: The main development line in a VCS repository.
update: Retrieve any changes made to the trunk (or branch) since last update.
working copy: Local copy of a project checked out from a VCS repository.
Figure 2 TortoiseSVN File Manager View Showing Overlay Icons
You will notice the icon overlays in Figure 2, which TortoiseSVN uses to indicate the current status of each file or directory (green check = no pending changes; red exclamation = pending changes; question mark = not versioned; grey dash = ignored file). Right-click a file or directory and the TortoiseSVN context menu appears, as shown in Figure 3.
Figure 3 Context Menus for Source Control Operations
Whether we call them bugs, issues, or defects, every piece of software has them. If you think that your software doesn't have any bugs, in reality it just doesn't have any known bugs. Put that software in front of real users and they'll surely find bugs that the developers and testers missed. You need a way to track and report on bugs. Questions that a good issue tracking system will help you answer include: Who reported the bug? What are the steps to reproduce it? Who is working on it? What are its current status and priority? In which release was it fixed?
It can then help you perform a variety of necessary tasks, including prioritizing bugs, assigning them to developers, and tracking each bug from initial report to verified fix.
At a high level, an issue tracking system can provide metrics to give you greater insight into your software: how many bugs are being found, how quickly they are being resolved, and which areas of the application are the most problematic.
Tracking bugs using emails, spreadsheets, or user forums is not efficient or recommended. (Currently ScrewTurn Wiki uses the last of these methods: a user forum called "Bugs, Issues and Patches.") These methods make it too easy to misplace a bug and too difficult to manage its status, find all data related to a bug, and report on metrics. There are entire products—many free—whose sole purpose is to track and report on bugs. Team Foundation Server includes issue tracking along with its version control system. Trac is an example of a popular open source issue tracking system that can integrate with Subversion and other version control systems. There are more issue tracking systems than version control systems! You should investigate and choose an appropriate issue tracking system for your project. It is beyond the scope of this article to set up an issue tracking system for ScrewTurn Wiki, but I would encourage the ScrewTurn Wiki team to evaluate and select one for their project.
Let me be blunt: F5 is not a build process and neither is CTRL-SHIFT-B. Both are handy shortcuts to use while developing software with Visual Studio, but neither has a reputation for creating reproducible builds. For example, Visual Studio will use cached copies of assemblies sitting in the bin folder if it cannot find the referenced one on the file system. Your build appears to be fine until another developer tries to build or you delete your bin folder. Suddenly the build stops working.
Every project needs a reliable, consistent, and reproducible method for building its source code with preferably no manual steps. Manual steps in your build process leave room for human error and/or differences between developers. You want consistent build results regardless of whose machine the project is built on. We've all been in the situation of "works on my machine" and that's a bad place to be. You spend a lot of time troubleshooting differences between different developer workstations.
The troubleshooting problem is magnified when you are asked to make changes to an existing application that hasn't been worked on for a year or two. You check the code out of the VCS and it won't build. Myriad questions race through your mind, such as: Which version of the compiler and framework was this built against? Which third-party components does it depend on, and where did they come from? Did someone forget to commit a file?
Sometimes it can take hours or days to get the source building again as you track down old versions of third-party components and wade through compiler errors.
There are two simple things that you can do to improve the reliability, consistency, and reproducibility of your builds: version your external dependencies alongside your own source code, and create an automated build script.
A project's dependencies include third-party components and libraries. Without a certain grid control or logging library, your project's source code won't build. You should be just as careful versioning these external dependencies as you are with versioning your own code. Your code is built and tested against a specific version of each third-party component. Upgrading to a newer version should be an explicit decision on the part of the developer and not implicitly based on whichever version happens to be installed on the developer's workstation when the project is built.
ScrewTurn Wiki versions some, but not all, of its dependencies in the $\References directory. ($ represents the solution root.) When I try to compile the project, the build fails with a missing reference because NUnit 2.4.8 is not installed. Now I could just install NUnit to solve the issue, but I still have the problem that every developer needs to install NUnit to the same location. The problem is magnified when you consider the dozen or more third-party components that a typical application uses. What if another developer needs to install the library in another location? What if different projects (or different versions of the same project) need different—and incompatible—versions of the third-party component? There are many ways to solve these problems including NTFS junctions, customized installation directories, developer-specific reference override paths in Visual Studio, and more. All of these solutions tend to be fragile and require manual setup.
What if we just versioned all third-party components with our project's source code and used relative references? This removes a lot of guesswork when setting up a new developer workstation as all the required components and libraries for a project are fetched with the source code. It also means that if one developer updates a library to a later version to implement a feature or fix a bug, all the other developers immediately get the updated library the next time that they update their source tree. You eliminate compatibility problems between different projects (or different versions of the same project) because each project has its own private copy of its external dependencies. It is a simple and effective solution to the problem.
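In a Visual Studio project file, such a relative reference is expressed with a HintPath. As a sketch (the Reference, SpecificVersion, and HintPath elements are standard MSBuild; the NUnit path assumes the $\thirdparty layout described in this article and the number of ..\ segments depends on where the .csproj sits in the tree):

```xml
<ItemGroup>
  <!-- Reference the privately versioned copy rather than a machine-wide install. -->
  <Reference Include="nunit.framework">
    <SpecificVersion>False</SpecificVersion>
    <HintPath>..\thirdparty\tools\NUnit\nunit.framework.dll</HintPath>
  </Reference>
</ItemGroup>
```

Because the path contains no version number, upgrading NUnit is just a matter of replacing the files; the reference itself never changes.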
With ScrewTurn Wiki, I set up a directory called $\thirdparty with two subdirectories: libs and tools. Libs contains all the external dependencies that will get deployed with my application. Tools contains any external dependencies that are used in building, testing, or deploying my application. For example, a grid control suite would go into libs, whereas a testing framework, like NUnit, would go into tools. Each component has its own subdirectory without a version number. So NUnit gets placed in $\thirdparty\tools\NUnit. The reason for omitting the version number is ease of upgrade. If a new version of NUnit is released, simply copy the new files over the existing ones and build/test the application. There is no need to modify project references to point to a new directory. If the upgrade succeeds, you can commit the changed files in $\thirdparty\tools\NUnit to source control. If it fails, you can easily undo your changes or troubleshoot the issue. Note: You can, of course, include version numbers for libs and tools, but this makes upgrades more complex since you would need to update your build script/csproj files to account for the changed version number.
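As a quick sketch, this layout can be created from the solution root with a couple of commands (shown with forward slashes for a POSIX shell; on Windows cmd.exe, use md with backslashes):

```shell
# Create the third-party layout described above, starting from the solution root.
mkdir -p thirdparty/libs          # dependencies deployed with the application
mkdir -p thirdparty/tools/NUnit   # build/test-time tools; no version number in the path
```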
Separating dependencies into their own subdirectories improves organization and eliminates guesswork on which files belong to which external dependency. To visualize this point, let's look at ScrewTurn Wiki before separation, with all dependencies in the $\References directory (see Figure 4).
Figure 4 File Structure without Separated External Dependencies
After we re-arrange the dependencies into their own separate directories, our structure looks more like the screen in Figure 5.
Figure 5 File Structure with Separated External Dependencies
In creating a build script, we want to automate common tasks for building the project source code as much as possible. Whether you know it or not, you are already using build scripts. When you create a new project, Visual Studio creates a build script for that project in the form of a .csproj or .vbproj file. (MSBuild was introduced with .NET Framework 2.0, so project files created by Visual Studio 2005 and above use the MSBuild format under the covers.) If you open a project file in a text editor such as Notepad rather than Visual Studio, you will see that the project file is itself an MSBuild script.
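Here is a trimmed-down sketch of what a project file contains (the element names are standard MSBuild; the property values and file names are illustrative, not ScrewTurn Wiki's actual contents):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="3.5" DefaultTargets="Build"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <OutputType>Library</OutputType>
    <AssemblyName>MyWebApplication</AssemblyName>
  </PropertyGroup>
  <ItemGroup>
    <!-- Illustrative file names: Visual Studio lists every source file here -->
    <Compile Include="Default.aspx.cs" />
    <Content Include="Default.aspx" />
  </ItemGroup>
  <!-- The standard Build, Clean, and Rebuild targets come from this import -->
  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
</Project>
```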
To run a build from the command line, simply launch a Visual Studio 2008 Command Prompt, navigate to the solution root, and execute:

msbuild ScrewTurnWiki.sln
Running the Visual Studio 2008 Command Prompt instead of the standard command prompt ensures that the Visual Studio and the .NET Framework directories are added to the PATH environment variable, enabling you to execute commands like msbuild.exe without specifying the full path to the executable. In addition to compiling the entire solution, you can compile individual projects by passing a project file instead of the solution:

msbuild <ProjectDirectory>\<ProjectName>.csproj
To automate the build, you could modify the project files directly. However, doing so will cause Visual Studio to present you with a warning dialog every time you open the solution, as shown in Figure 6.
Figure 6 Security Warning Dialog in Visual Studio
Rather than editing the project files directly, I will create a separate build file for building the solution from the command line. If you are creating a custom build file, the two most commonly used build engines on the .NET platform are MSBuild and NAnt. (Other options include rake, Bake, and psake, among others.) I will use MSBuild since it is installed by default with .NET Framework 2.0 and above.
Let me give you a very brief introduction to MSBuild. In MSBuild terminology, build steps are called "targets." We will define three common targets initially: Clean, Init, and Compile. You can create other targets to automate other common tasks such as running tests, creating installer packages, deploying to a staging server, or generating documentation. MSBuild uses tasks to define operations within a target. There are many default tasks, such as RemoveDir, MakeDir, MSBuild, and Exec, among others. MSBuild also has the notions of items and properties. Items are files, directories, and references. Properties are name-value pairs representing configuration (Debug/Release), architecture (x86/x64/ia64/AnyCPU), or anything else.
Let's start by defining the Clean target, as shown in the following code:

<Target Name="Clean">
  <RemoveDir Directories="$(MSBuildProjectDirectory)\buildartifacts\" />
</Target>

We use the <RemoveDir> task to remove the $\build\buildartifacts\ directory. (The build script lives in $\build, so $(MSBuildProjectDirectory) resolves to that directory.) Nothing surprising here.
Next we want to initialize our build environment, which means recreating the $\build\buildartifacts\ directory:

<Target Name="Init" DependsOnTargets="Clean">
  <MakeDir Directories="$(MSBuildProjectDirectory)\buildartifacts\" />
</Target>
Note that the Init target depends on the Clean target so the $\build\buildartifacts\ directory will be removed and recreated if it already exists. This ensures a clean build.
Moving on to the Compile target itself, I don't want to re-specify the project references, C# files, and configuration options that are already present in the project file. (Some developers do like to have this level of control with their builds and use the project files solely for organizing files within Visual Studio.) I will simply use the <MSBuild> task to compile the solution defined by ScrewTurnWiki.sln, which will in turn compile the associated project files. I override OutDir to place the build output in $\build\buildartifacts\ rather than in various <Project>\bin\Debug directories. I pass through the build configuration (Debug or Release) and set it to Debug if it is left unspecified, as shown in Figure 7. You can override additional options, such as CPU architecture, in a similar manner:
Figure 7 Compile Using MSBuild
<PropertyGroup>
  <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
</PropertyGroup>

<Target Name="Compile" DependsOnTargets="Init">
  <MSBuild Projects="$(MSBuildProjectDirectory)\..\ScrewTurnWiki.sln"
           Properties="OutDir=$(MSBuildProjectDirectory)\buildartifacts\;Configuration=$(Configuration)" />
</Target>
The full build script is shown in Figure 8.

Figure 8 Full Build Script

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="3.5" DefaultTargets="Compile"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
  </PropertyGroup>

  <Target Name="Clean">
    <RemoveDir Directories="$(MSBuildProjectDirectory)\buildartifacts\" />
  </Target>

  <Target Name="Init" DependsOnTargets="Clean">
    <MakeDir Directories="$(MSBuildProjectDirectory)\buildartifacts\" />
  </Target>

  <Target Name="Compile" DependsOnTargets="Init">
    <MSBuild Projects="$(MSBuildProjectDirectory)\..\ScrewTurnWiki.sln"
             Properties="OutDir=$(MSBuildProjectDirectory)\buildartifacts\;Configuration=$(Configuration)" />
  </Target>

</Project>
Notice the DefaultTargets="Compile" attribute on the <Project> element, which specifies that the Compile target should be run if none is specified. You can have msbuild run a specific target using the /t switch. For example, to run only the Clean target:

msbuild build/ScrewTurnWiki.build /t:Clean

Similarly, properties such as the build configuration can be overridden from the command line with the /p switch (for example, /p:Configuration=Release).
For more information on creating MSBuild scripts, I encourage you to read Sayed Ibrahim Hashimi's Best Practices For Creating Reliable Builds Part 1 and Part 2. I would also like to extend my thanks to Sayed for his assistance in troubleshooting an MSBuild issue that I encountered while writing this article.
This article is just the beginning of our journey as we set out to improve ScrewTurn Wiki. It is never too late to implement good development practices on an existing codebase, such as version control, issue tracking, automated self-contained build scripts, and automated testing. These practices provide your team with a safety net that allows you to more confidently make changes to improve an existing codebase. The next article will focus on automated testing, including acceptance testing with WatiN, a Web testing framework, and unit/integration testing with NUnit.
James Kovacs is an independent architect, developer, trainer, and jack-of-all-trades living in Calgary, Alberta, specializing in agile development using the .NET Framework. He is a Microsoft MVP for Solutions Architecture and received his master's degree from Harvard University. James can be reached at email@example.com or www.jameskovacs.com.