In the December 2009 issue of MSDN Magazine I gave advice for identifying and building a case to tackle technical debt. In summary, I believe it’s important to identify the debt that’s likely to harm you in the near future. Introducing technical excellence to seldom touched parts of your codebase won’t help you realize productivity gains tomorrow.
I also hope you understand the importance of obtaining license and buy-in from management to pay back debt, and that you now have some basic tools for building a rock-solid case to do so.
Now let’s turn our attention to tactics that might help you pay back high-interest technical debt. There are many proven tactics for dealing with technical debt. A full catalog of the patterns, tools and techniques for wrangling difficult code is well beyond the scope of this article. Instead, I’ll supply some of the more applicable tricks I’ve added to my repertoire over the years.
If you know you have issues, but you’re not sure how to fix them, it might be time to acquire new knowledge and skills that will help you raise your code out of the muck. Learning, as they say, is fundamental.
Learning can take many forms. You might need outside help in the form of consultants or classroom training. You might be able to get by with books.
Try to involve your team in the learning process. Perhaps you could start a book club within your team. Maybe you can bring back the benefits of a course or conference in the form of an instructive presentation.
A collaborative and hands-on technique for involving the whole team is the Coding Dojo. A basic Coding Dojo involves picking a programming challenge and tackling that as a group. I’ve experimented with a rotating pair watched by a peanut gallery. In this method, two members of the team work together on a programming task, with “tag” intervals where other members of the team enter the dojo as another person leaves.
If you learn best at your own pace or want to start a book club, there are a couple of good texts I can recommend on the subject of improving the maintainability of legacy code.
Michael Feathers’ aptly titled volume, Working Effectively with Legacy Code (Prentice Hall, 2004), provides a patterns-based approach to teasing apart legacy code. Feathers defines legacy code as untested code: it’s difficult to change, and you can’t be certain your changes aren’t introducing regression defects. In this book you’ll find a number of focused strategies and tactics for reducing coupling in your code and making it more testable.
Kyle Baley and Donald Belcham have one of the newest books on the scene, Brownfield Application Development in .NET (Manning Publications, 2010). They take a systemic approach toward improving so-called brownfield (versus new, or greenfield) codebases. One benefit of this book is that, while the approaches they recommend are broadly applicable, their code examples are designed around the Microsoft .NET Framework, a likely benefit to readers of this article. I quite like how they take a team-practices approach as well. That is, while you’re making changes in a wild codebase, the confidence you’ll get from implementing some basics such as continuous integration and version control is worth its weight in gold.
There’s a high probability the messy code you have to deal with was written by someone currently on your team. It’s important that you take this into account when reasoning about the code in its current state. Hurt feelings lead to defensiveness that, in turn, leads to slow going on the improvement train.
Try defusing the situation with anecdotes of mistakes you’ve made in the past. Stay professional, avoid personal attacks and encourage the author of the original code to make suggestions about how you might go about improving it.
Then again, it’s quite possible you’re one of the developers who contributed to the mess. I want you to repeat after me: “I am not my code. I am learning every day and am dedicated to finding a better way moving forward. I will not let my colleagues’ critiques or my own ego stand in the way of my team’s effort to improve.”
In truth, it takes time to get over these issues. I find the best way to reason and talk about improvements is to focus on the present and near future rather than the past. What could this code be? What do you want to see it evolve into?
A little diplomacy and consideration for other people’s emotional investment in work that’s already been committed will go a long, long way toward moving forward.
Some code is so horrendous it’s hard to understand what’s going on at all. Perhaps all classes are in a single namespace. Perhaps the codebase is such a tangled web of dependencies that following a call stack exceeds your short-term memory’s ability to keep your place.
Symptoms like these often imply a diagnosis of debt at the architectural and design levels rather than at an implementation level. This, as far as I’m concerned, is the most insidious kind of debt and usually leads to the greatest costs of change.
Brian Foote and Joseph Yoder call architectures with no discernible shape, where everything depends on everything else, the “big ball of mud” (laputan.org/mud):
“A big ball of mud is a casually, even haphazardly, structured system. Its organization, if one can call it that, is dictated more by expediency than design. Yet, its enduring popularity cannot merely be indicative of a general disregard for architecture.”
I’d bet my last dollar that most software applications in production today are big balls of mud. This isn’t necessarily a value judgement. There are billions of lines of terrible code out there in the world making people lots and lots of money. It stands to reason that big balls of mud are fulfilling the champagne dreams and caviar wishes of many a business owner and shareholder.
The problem is that ball-of-mud applications become increasingly costly to change. While the business environment remains dynamic, the software becomes inflexible. The typical strategy for dealing with this is the software equivalent of a nuclear bomb: the big rewrite. There are many risks associated with big rewrites and it’s often better to try to improve the design of the incumbent system.
Before you can start to employ some of the lower-level techniques, it’s often valuable to introduce a shape to your system. The typical example is that of a layered architecture. Classically this means the UI talks to services and services talk to some kind of model and the model, in turn, talks to your persistence layer.
Shaping up your code into layers can be a very low-fidelity activity. Start by organizing code into namespaces named after the layers of your architecture.
Now you have your marching orders: enforce the rule that each layer may depend only on the layer directly beneath it (the user interface layer depends on the services layer, for example). A simple way of enforcing the rule is to move your layers into separate projects in Visual Studio. The solution won’t compile if you violate the rule.
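To make the dependency rule concrete, here’s a minimal sketch, written in Python for brevity rather than in .NET, of checking whether one layer is allowed to depend on another. The layer names and the allowed-dependency table are hypothetical, not from the article.

```python
# Hypothetical layering rule: each layer may depend only on the
# layer directly beneath it. A check like this could back a build
# script or an architectural fitness test.

ALLOWED = {
    "ui": {"services"},
    "services": {"model"},
    "model": {"persistence"},
    "persistence": set(),
}

def check_dependency(source_layer: str, target_layer: str) -> bool:
    """Return True if source_layer is allowed to depend on target_layer."""
    return target_layer == source_layer or target_layer in ALLOWED[source_layer]

# The UI may call into services...
assert check_dependency("ui", "services")
# ...but skipping layers (UI talking straight to persistence) breaks the rule.
assert not check_dependency("ui", "persistence")
```

In .NET, separate Visual Studio projects with one-way references give you this check for free at compile time; the sketch simply makes the rule explicit.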
By enforcing the rule, you’ve decreased coupling: the model is no longer coupled to your application’s views. By introducing a shape, you’ve increased cohesion: classes inside a layer all work toward the same purpose, whether that’s displaying data to an end user or encapsulating business behavior.
Introduce facades between layers and make higher-level layers such as your UI depend on facades provided by lower-level layers rather than granular classes inside the layers. You can apply this technique incrementally and opportunistically.
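A facade of the kind described above might look like the following sketch, again in Python for brevity. The order, tax and discount classes are invented for illustration; the point is that the UI references one coarse-grained entry point instead of the granular classes behind it.

```python
# Hypothetical granular classes inside the services layer.
class TaxCalculator:
    def tax_for(self, amount):
        return round(amount * 0.07, 2)

class DiscountPolicy:
    def discount_for(self, amount):
        return 5.0 if amount > 100 else 0.0

class OrderServiceFacade:
    """The only type the UI layer is allowed to reference."""
    def __init__(self):
        self._tax = TaxCalculator()
        self._discounts = DiscountPolicy()

    def total_for(self, amount):
        return amount + self._tax.tax_for(amount) - self._discounts.discount_for(amount)

# The UI calls the facade; the classes behind it can now be
# refactored freely without rippling changes into the UI.
print(OrderServiceFacade().total_for(200.0))  # → 209.0
```

Because the facade is the only seam the UI knows about, you can restructure everything behind it, one class at a time, without touching the view code.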
The power of imposing a shape on a monolithic big ball of mud is that you can now start to identify more targeted opportunities for paying back technical debt. That is, if you’re doing lots of work in, say, CompanyX.ProductY.Model, you might drill down with a static analysis tool to find the most coupled or complicated classes.
The process of making changes without changing system behavior is called refactoring. There are entire refactoring pattern languages dedicated to both object-oriented (refactoring.com) and relational-database (agiledata.org/essays/databaseRefactoringCatalog.html) code: Extract Method, Split Table and so on. The fact of the matter is, it’s difficult to apply these granular and safe methods when you don’t fully understand the codebase.
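As a quick illustration of one of the patterns named above, here’s a before-and-after sketch of Extract Method, shown in Python for brevity. The invoice structure is invented; what matters is that behavior is unchanged while the computation gains a name of its own.

```python
def print_invoice_before(items):
    # Before: the totaling logic is tangled into the printing method.
    total = 0.0
    for price, qty in items:
        total += price * qty
    return f"Invoice total: {total:.2f}"

def invoice_total(items):
    """Extracted method: the computation can now be tested and reused alone."""
    return sum(price * qty for price, qty in items)

def print_invoice_after(items):
    # After: printing delegates to the extracted method.
    return f"Invoice total: {invoice_total(items):.2f}"

items = [(9.99, 2), (5.00, 1)]
# Refactoring means behavior is preserved:
assert print_invoice_before(items) == print_invoice_after(items)
```

The same move in C# would pull the loop into a private method or a small domain class; the discipline of preserving observable behavior is identical.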
So how do you start making changes in a legacy project? The first thing to notice is that, given a choice, it is always safer to have tests around the changes that you make. When you change code, you can introduce errors. But when you cover your code with tests before you change the code, you’re more likely to catch any mistakes.
The practice of shotgun surgery, plunging headlong into code with no real confidence that your changes aren’t introducing dangerous defects, isn’t your only option.
Before you start changing code, determine whether there’s a hard interface in the system against which you can write tests. These tests are of the black-box variety. That is, you’re feeding a system inputs and inspecting the outputs. When you make your change, continually run the tests to verify your changes haven’t broken existing behavior.
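A black-box test of the kind described above is sometimes called a characterization test: it pins down what the system does today, before you touch it. Here’s a hedged sketch using Python’s unittest; the legacy discount function stands in for a hard interface in your own system.

```python
import unittest

def legacy_discount(order_total):
    # Stand-in for tangled legacy code you intend to refactor.
    if order_total > 100:
        return order_total * 0.9
    return order_total

class CharacterizationTests(unittest.TestCase):
    """Record current inputs and outputs before changing anything."""

    def test_small_orders_pass_through(self):
        self.assertEqual(legacy_discount(50), 50)

    def test_large_orders_get_ten_percent_off(self):
        self.assertAlmostEqual(legacy_discount(200), 180.0)

# Run these continually while refactoring; a failure means
# observable behavior changed, which is exactly what you must avoid.
```

Note that these tests make no claim about what the code should do, only what it currently does. That distinction is what makes them safe scaffolding for refactoring.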
Application of this tactic can be challenging when you’re tackling parts of your system that are tightly coupled. The cost of testing may very well exceed the benefit of removing debt. This constant cost-benefit analysis permeates the process of turning a codebase around and, sometimes, it’s more cost-effective to straight up rewrite an application or large section of an application’s codebase.
Build measurements around the area of code you’re improving. For the sake of argument, let’s say that you’re trying to better organize the core business logic of your application. There are lots of paths through the members in the types of this namespace: switch statements, nested if statements and the like. A measurement such as cyclomatic complexity can give you a rough sense of whether improvement efforts are simplifying your code.
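One common way to drive a complexity number down is to replace the switch statements and nested ifs mentioned above with a dispatch table. This sketch, in Python with invented shipping rules, shows the shape of that move: the branchy version has one path per region, while the table-driven version has a cyclomatic complexity near one.

```python
def shipping_cost_branchy(region):
    # Before: every new region adds another branch (and another path to test).
    if region == "us":
        return 5.0
    elif region == "eu":
        return 8.0
    elif region == "apac":
        return 12.0
    else:
        raise ValueError(region)

# After: the variation lives in data, not control flow.
SHIPPING = {"us": 5.0, "eu": 8.0, "apac": 12.0}

def shipping_cost_table(region):
    try:
        return SHIPPING[region]
    except KeyError:
        raise ValueError(region)

# Behavior is preserved while the measured complexity drops.
assert shipping_cost_branchy("eu") == shipping_cost_table("eu") == 8.0
```

In C# the equivalent is a Dictionary lookup or a polymorphic strategy; either way, a metric such as cyclomatic complexity gives you before-and-after evidence that the change helped.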
You can obtain extremely specific measurements of specific parts of your codebase with the NDepend code-analysis tool (ndepend.com). NDepend provides a powerful Code Query Language (CQL) over namespaces, types and members in your .NET assemblies.
Consider the CQL statements in Figure 1. Note that I’m probing measurements like coupling and complexity (just a few of the many metrics NDepend makes available) inside a particular namespace. This implies that I’ve already introduced a shape so I can focus efforts in definable areas of my code. If I’m successful in introducing positive changes, I should see measurements like coupling and complexity decrease over time.
Figure 1 NDepend CQL
-- Efferent coupling outside a namespace
SELECT TYPES WHERE TypeCe > 0
AND (FullNameLike "MyCompany.MyProduct.Web")
-- Afferent coupling inside a namespace
SELECT TYPES WHERE TypeCa > 0
AND (FullNameLike "MyCompany.MyProduct.Web")
-- Top 20 most complicated methods
SELECT TOP 20 METHODS
WHERE CyclomaticComplexity > 4
AND FullNameLike "MyCompany.MyProduct.Web"
A nice side effect of this tactic is that the measurements can help you hold the line and maintain discipline once debt has been removed. They give you an early-warning system against the reintroduction of debt into an already improved area.
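One way to turn that early-warning system into something enforceable is a budget check in your build. The sketch below is a crude stand-in, not true cyclomatic complexity and not NDepend: it just counts branching nodes in Python source with the standard ast module, then fails an assertion if an agreed budget is exceeded.

```python
import ast

def branch_count(source: str) -> int:
    """Count branching nodes as a rough proxy for cyclomatic complexity."""
    tree = ast.parse(source)
    branches = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return sum(isinstance(node, branches) for node in ast.walk(tree))

# Hypothetical module under guard.
SOURCE = """
def pay(amount):
    if amount > 100:
        return amount * 0.9
    return amount
"""

BUDGET = 5
# Wire a check like this into CI: once an area is cleaned up,
# the build fails if someone sneaks the complexity back in.
assert branch_count(SOURCE) <= BUDGET
```

In a .NET shop, NDepend’s CQL queries can play the same role, failing the build when a constraint such as the ones in Figure 1 returns unexpected results.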
You don’t live in a vacuum. Chances are, during your improvement efforts, you’ll be asked to continue to deliver new features and modifications to existing features. Delivery pressure causes feelings of being under the gun. But maintenance is a fact of life you should embrace rather than trying to ignore.
One way to deal with this is to secure approval from the business to dedicate resources—an individual, a pair or an entire team—to improving debt items concurrently with delivering new features.
This can be a highly effective strategy but is best when the entire team (all the developers and testers who make modifications to the codebase) takes a part in the improvements being made. Try regularly rotating individuals as pairs. The developer who’s been in the improvement stream the longest rotates out, leaving the other to brief the incoming developer on what’s happening.
By spreading the knowledge you get closer to collective ownership, thereby reducing risk and improving designs. Sometimes you’ll find opportunities for improvement that lie directly in the way of new functionality you’re trying to deliver. Whenever you start work on a new or modified feature, it’s good practice to review the list to determine whether the team has already identified an area for improvement that intersects with the work you’re about to do.
Opportunities for improvement occur all the time, often identified on-the-fly and achieved with a few simple refactorings that make a difference the next time a teammate encounters the code.
There’s a constant cost-benefit analysis that goes on when you’re improving the existing code base while delivering new features. If the improvement seems too costly, add it back to your list and discuss it in your improvement planning.
You’ve paid back some debt. Time to go back to step one and identify, prioritize and build consensus on the next item that needs fixing, right?
Yes, but there’s a bit more to it than mindlessly plowing through your list. You have to be sure you’re not incurring more debt than you’re fixing. You should also regularly incorporate what you’ve learned into future work, new development and improvement efforts alike.
Opportunities to improve a codebase change regularly: they emerge, and their importance ebbs and flows. The reasons a particular piece of debt carries high interest can change from release to release.
What’s worked well for me is scheduling a short, weekly meeting with developers to review new debt items and prioritize the backlog of existing debt items. This keeps the consensus you’ve built alive and the list fresh. Again, I’d give priority to fixing the debt that’s likely to slow down your current release or project.
Begin the meeting by reviewing new items. Have the person who identified each item pitch the case, then put it to a vote: does it merit inclusion in the backlog or not? Once you’ve gone through the new items, review the old ones. Is there work that no longer applies? Will completing an item deliver immediate value, that is, will it remove day-to-day impediments? Last, re-rank your list, prioritizing each opportunity against the others. The top item on the list should be the very next improvement you make.
While you and your team are merrily paying down high-interest technical debt, you’ll likely also be delivering new software. As you learn about solid programming techniques and introduce new patterns into your code, apply this knowledge going forward. It’s possible that additive work will pile on existing technical debt creating an inescapable inertia.
It’s important that you set expectations with your business stakeholders for new work. Higher quality takes more time to attain than rushed, get-it-done-style code. This brings me back to the systems-thinking concept introduced in my December 2009 article. For me this is a cultural attribute. That is, organizations can either think sustainably for the long term or continue with a buy-now, pay-later mentality, the oh-so-fertile breeding ground of technical debt. Never forget the central question: how did we end up here in the first place?
While you’re learning about how to improve a codebase, you will very likely develop some team norms that apply to new code. I suggest capturing these in a tool like a wiki and holding small, informal learning sessions where you share your findings with your team. You will also develop techniques for dealing with similar improvement items. When you notice you’ve done the same thing to correct a flawed design or clean up implementation three or four times, codify it in your team’s doctrine. That is, write it down in a well-known place and, very simply, tell people it’s there.
Technical debt is a people problem. People, through lack of knowledge or unrealistic expectations, created the mess and are now dealing with the consequences. And it’ll take people working as a group to fix it.
Giving advice like this is all well and good, and I’d be surprised if you, a software professional and likely one who’s passionate about their craft, weren’t in complete agreement.
A successful turnaround requires fundamental changes in the value system of everyone involved—the entire team. The economics of quality are proven to pay back in the end, but you’ll have to take that step of faith in the near term. You have to win hearts and minds in order to change a culture, and that can be a tough job indeed. The most useful suggestion I can make is: don’t go it alone. Get the team behind the effort and make sure everyone has a stake in the results.
Setting a goal like “we want 90 percent coverage” or “we want to do Test-Driven Development (TDD) all the time” is relatively meaningless. Tackle the problem areas that are slowing you down at the moment and in the near future. That might mean introducing TDD and living by the coverage report—or it might not. It might be something more primitive like making sure your team knows the basics of object-oriented analysis and design.
While I hope I’ve given you some tools and techniques for tackling debt or, at the very least, made some of the implicit ideas and experiences you’ve had explicit, it’s important to realize that dealing with technical debt is very much a product-to-product issue. You may, for example, be in an environment where there’s not a lot of trust between the development and business parties and find you have to pitch your case with the preparation of a trial attorney.
There’s no out-of-the-box process that’ll tell you how to drive down debt, but as for the when and the where, today is a fine day to start making a difference. The march toward technical excellence can be slow and rough going in the beginning. It’s only through sustained effort, constant learning and, above all, an earnest attitude that you’ll pull through the tough times, bringing debt-crippled code back into the black. I encourage you to stick with the program. Not only will you increase value for your customers, you’ll greatly expand your craftsman’s toolbox.
Dave Laribee coaches the product development team at VersionOne Inc. He’s a frequent speaker at local and national developer events and was awarded a Microsoft Architecture MVP for 2007 and 2008. He writes on the CodeBetter blog network at thebeelog.com.