August 2009

Volume 24 Number 08

Patterns in Practice - Incremental Delivery Through Continuous Design

By Jeremy Miller | August 2009


Contents

Predictive versus Reactive Design
Incremental Delivery of Features
Continuous Design
The Importance of Feedback
The Last Responsible Moment
Reversibility
YAGNI and the Simplest Thing that Could Possibly Work
How Much Modeling Before Coding?
What's Ahead

In earlier Patterns in Practice columns, I've focused mainly on technical "patterns," but in this article I'll discuss the softer "practice" side of software design. The end goal of software projects is to deliver value to the customer, and my experience is that software design is a major factor in how successfully a team can deliver that value. Overdesign, underdesign, or just flat-out wrong design impedes a project. Good design enables a team to be more successful in its efforts.

My experience is also that the best designs are a product of continuous design (also known as emergent or evolutionary design) rather than the result of an effort that tries to get the entire design right up front. In continuous design, you might start with a modicum of up-front design, but you delay committing to technical directions as long as you can. This approach lets you strive to apply lessons learned from the project to continuously improve the design, instead of becoming locked into an erroneous design developed too early in the project.

In addition, I firmly believe that the best way to create business value is through incremental delivery of working features rather than focusing first on building infrastructure. In this article, I'll explore how incremental delivery of working features enables a project team to better deliver business value, and how using continuous design can enable incremental delivery to be more efficient and help you create better software designs.

Predictive versus Reactive Design

First of all, what is software design? For many people, software design means "creating a design specification before coding starts" or the "Planning/Elaboration Phase." I'd like to step away from formal processes and intermediate documentation and define software design more generally as "the act of determining how the code should be structured." With that definition in hand, we can think of software design as happening in two different modes: predictive or reactive (or reflective, if you prefer).

Predictive design is the design work you do before coding. Predictive design is creating UML or CRC models, performing design sessions with the development team at the beginning of an iteration, and writing design specifications. Reactive design is the adjustments you make based on feedback during or after coding. Refactoring is reactive design. Every team, and even individuals within a team, will have different preferences for predictive versus reactive design. Continuous design simply puts more importance on reactive design than do traditional software development processes.

Incremental Delivery of Features

In 2002, my then-employer was experimenting with the newly minted Microsoft .NET Framework and had launched a trial project using ASP.NET 1.0. I, along with many others, eagerly watched the project, hoping for success so that we could start using this exciting new framework on projects of our own. Six months later the project was canceled. The team had certainly been busy, and by all accounts it had written a lot of code, but none of that code was suitable for production.

The experience of that project team yields some important lessons. The team first wrote a design specification that was apparently fairly complete and conformed to our organization's standards. With this document in hand, the team started the project by attempting to build the entire data access layer, then the business logic layer, and finally the user interface. When they started to code the user interface screens, the developers quickly realized that the existing data access code wasn't exactly what they needed to build the user interface, even though that code conformed to the design documents. At that point, IT management and the business partners didn't see any value being delivered by the project and threw in the towel in favor of other initiatives.

Ironically, the parent company was, and still is, one of the world's leading examples of lean, just-in-time manufacturing. Most of our competitors used "push" manufacturing, in which large quantities of parts are ordered for factory lines based on the forecasted demand over some period of time. The downfalls of push manufacturing are that you lose money any time you order more parts than you can use; you have to pay extra to store the stocks of surplus parts before you're ready to use them; and you are vulnerable to part shortages on the factory floor any time the forecasts are wrong—and forecasts are rarely accurate.

In contrast, my then-employer used "pull" manufacturing. Several times a day the factory systems scheduled the customer orders they needed to build over the next couple of hours, determined the quantities of parts they needed to complete those orders, and then ordered for immediate delivery exactly the number and type of parts needed. The advantages of pull manufacturing are that by buying only what is needed, you waste much less money on parts that can't be used; factories have far fewer on-hand part stocks to contend with, making manufacturing somewhat more efficient; you can quickly adapt to new circumstances and market forces when you aren't bound by forecasts made months ago; and forecasts and estimates are more accurate when made over the short term rather than a longer term.

So how does the pull versus push issue apply to software development? The failed project I described earlier used push design by trying to first determine all the infrastructural needs of the system and then trying to build out the data access infrastructure before writing other types of code. The team wasted a lot of effort designing, documenting, and building code that was never used in production.

Instead, what if the team had settled for quickly writing a high-level specification with minimal details, then proceeded to develop the highest-priority feature to production-ready quality, then the next highest-priority feature, and so on? In this scenario, the team would build out only infrastructure code, like data access code, that was pulled in by the requirements of the particular feature.

Think about this. What is a better outcome for a project at the end of its scheduled timeline?

  1. Only 50 percent of the proposed features are complete, but the most important features of the initial project proposal are ready to deploy to production.
  2. Most of the coding infrastructure is complete, but no features are completely usable and nothing can be deployed to production.

In both cases the team is only roughly half done and neither outcome is truly a success compared to the initial plan and schedule. But which "failure" would you rather explain to your boss? I know my boss and our sales team would definitely prefer the first outcome based on incremental delivery.

The key advantages of incremental delivery are the following:

  1. Working in order of business priority. Building incrementally by feature gives you a better chance to complete the features most important to the business. After all, why should you spend any time whatsoever designing, building, and testing a "nice to have" feature before the "must have" features are complete?
  2. Risk mitigation. Frankly, the biggest risk in most projects isn't technical. The biggest risk is that you don't deliver business value or you deliver the wrong system. Also, the harsh reality is that the requirements and project analysis given to you are just as likely to be wrong as your design or code. By demonstrating working features to the business partners early in the project, you can get valuable feedback on your project's requirements. For my team, early demonstrations to our product manager and sales team have been invaluable for fine-tuning our application's usability.
  3. Early delivery. Completed features can be put into production before the rest of the system to start earning some return on value.
  4. Flexible delivery. Believe it or not, the business partners and your product manager sometimes change their priorities. Instead of gnashing your teeth at the injustice of it all, you can work in such a way that assumes that priorities will change. By tying infrastructure code to the features in play, you reduce the likelihood of wasted effort due to changing priorities.

Now, for the downside of incremental delivery: it's hard to do. In the lean manufacturing example, pull manufacturing worked only because the company's supply chain was ultraefficient and was able to stock factories with parts almost on demand. The same holds true for incremental delivery. You must be able to quickly design the elements of the new features and keep the quality of the code structure high enough that you don't make building future features more difficult. What you don't have time to do is spend weeks or even months at a time working strictly on architectural concerns—but those architectural concerns still exist. You need to change the way you design software systems to fit the incremental delivery model. This is where continuous design comes into the picture.

Continuous Design

Proponents of traditional development often believe that projects are most successful when the design can be completely specified up front to reduce wasted effort in coding and rework. The rise of Agile and Lean programming has challenged traditional notions of the timing of software design by introducing a process of continuous design that happens throughout the project life cycle. Continuous design purposely delays commitments to particular designs, spreads more design work over the life cycle of the project, and encourages a team to evolve a design as the project unfolds by applying lessons learned from the code.

Think of it this way. I simply won't develop the detailed design for a feature until it's time to build that feature. I could try to design it now, but that design work wouldn't provide any benefits until much later—and by the time my team gets to that feature, I'm likely to understand much more about our architecture and system and be able to come up with a better design than I could have at the beginning of the project.

Before I go any further, I'd like to say that continuous design does not imply that no design work takes place up front. I like this quote from Robert C. (Uncle Bob) Martin: "The goal is to create a small but capable initial design, and then maintain and evolve that design over the life of the system."

Before you write off continuous design as risky and prone to error, let's discuss how to make continuous design succeed (in other words, I'm going to try to convince you that this isn't crazy).

The Importance of Feedback

Many projects are truly straightforward, with well-understood requirements, and use only well-known technologies. Up-front design might work fairly well for those projects, but my experience has been different. Almost every project I've worked on has had some degree of novelty, either in the technology used, the development techniques employed, or in the requirements. In those cases, I believe that the best way to be successful is to adopt an attitude of humility and doubt. You should never assume that what you're doing and thinking works until you have some sort of feedback that verifies the code or design.

Because continuous design involves the evolution of the code structure, it's even more important when using that approach to create rapid feedback cycles that catch errors caused by code changes early. Let's take the Extreme Programming (XP) model of development as an example. XP calls for a highly iterative approach to development that remains controversial. Almost as controversial is the fact that XP specifies a series of practices that many developers and shops find difficult to accept. Specifically, XP's practices are largely meant to compensate for its rapid rate of iteration by providing rapid, comprehensive feedback cycles:

  • Collective ownership through pair programming. Love it or hate it, pair programming requires that at least two pairs of eyes review each and every line of production code. Pair programming provides feedback from a design or code review mere seconds after the code is written.
  • Test-driven development (TDD), behavior-driven development (BDD), and acceptance tests. All these activities create very rapid feedback. TDD and BDD help drive out defects in the code when initially written, but just as important, the high level of unit-test coverage makes later design changes and additions to the code much safer by detecting regression failures in a fine-grained way.
  • Continuous integration. When combined with a high level of automated test coverage and possibly static code analysis tools, continuous integration can quickly find problems in the code base each and every time code is checked in.
  • Retrospectives. A retrospective requires the development team to stop and discuss how the software design is helping or hurting the development effort. I've seen numerous design improvements come out of iteration and release retrospectives.

The quality and quantity of your feedback mechanisms greatly affect how you do design. For example, high automated test coverage with well-written unit tests makes refactoring much easier and more effective. Refactoring with low or no automated test coverage is probably too risky. Poorly written unit tests can be almost as unhelpful as having no tests whatsoever.
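As a minimal sketch of what that fine-grained safety net can look like (the PriceCalculator class below is hypothetical, invented purely for illustration), consider an NUnit test along these lines:

using NUnit.Framework;

// Hypothetical class used only for illustration.
public class PriceCalculator
{
    public decimal TotalFor(decimal unitPrice, int quantity, decimal discountRate)
    {
        decimal subtotal = unitPrice * quantity;
        return subtotal - (subtotal * discountRate);
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    // A fine-grained test pins down today's behavior. If a later
    // redesign changes that behavior, this test fails immediately.
    [Test]
    public void AppliesTheDiscountToTheSubtotal()
    {
        var calculator = new PriceCalculator();
        Assert.AreEqual(90m, calculator.TotalFor(10m, 10, 0.10m));
    }
}

A suite of small, behavior-focused tests like this one detects regression failures at the moment a design change breaks existing behavior, rather than weeks later in manual testing.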

The reversibility of your code is greatly enhanced by solid feedback mechanisms.

The Last Responsible Moment

If not up front, when do you make design decisions? One of the most important lessons you learn through continuous design is to be cognizant of the decisions you make about your design and to consciously decide when to make those decisions. Lean programming teaches us to make decisions at the "last responsible moment." According to Mary Poppendieck (in her book Lean Software Development), following this principle means to "delay commitment until … the moment at which failing to make a decision eliminates an important alternative."

The point is to make decisions as late as possible because that's when you have the most information with which to make the decision. Think back to the failed project I described at the beginning of this article. That team developed and committed to a detailed design for the data access code far too early. If the developers had let the user interface and business logic needs drive the shape of the data access code as they built the user interface features, they could have prevented quite a bit of wasted effort. (This is an example of "client-driven design," where you build out the consumer of an API first in order to define the shape and signature of the API itself.)
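To sketch what client-driven design can look like in code (all the names below are hypothetical), imagine writing the screen's logic first, against an interface the screen wishes existed. The consumer then dictates the shape of the data access API rather than the other way around:

// Written first: the consumer defines the data access API it needs.
public interface IOrderRepository
{
    Order FindById(int orderId);
    void Save(Order order);
}

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }

    public void ApplyDiscount(decimal rate)
    {
        Total -= Total * rate;
    }
}

public class OrderScreenPresenter
{
    private readonly IOrderRepository _repository;

    public OrderScreenPresenter(IOrderRepository repository)
    {
        _repository = repository;
    }

    public void ApplyDiscount(int orderId, decimal discountRate)
    {
        // Writing this workflow first reveals that the repository needs
        // exactly two methods, FindById and Save, and nothing else yet.
        Order order = _repository.FindById(orderId);
        order.ApplyDiscount(discountRate);
        _repository.Save(order);
    }
}

The concrete data access class comes later and implements only what the interface demands, so no speculative persistence code gets written.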

One of the key ideas here is that you should think ahead and continuously propose design changes, but you shouldn't commit irrevocably to a design direction until you have to. We don't want to act based on speculative design. Committing early to a design precludes the possibility of using a simpler or better alternative that might present itself later in the project. To quote a former colleague, Mike Two of NUnit 2 fame, "Think ahead yes, do ahead no."

Reversibility

Martin Fowler says, "If you can easily change your decisions, this means it's less important to get them right—which makes your life much simpler." Closely related to the last responsible moment is the concept of reversibility, which I would describe as the ability or inability to change a decision. Being cognizant of the inherent reversibility of your decisions is essential to following the principle of the last responsible moment. The first decision my team made for a recent project was whether to develop with Ruby on Rails or stay with a .NET architecture. Choosing a platform and programming language is not an easily reversible decision, and we knew we needed to make that decision early. On other projects, I've had to coordinate with external groups that needed to define and schedule their time months in advance. In cases like those, my team absolutely had to make decisions up front to engage with the external teams.

A classic decision involving reversibility is whether to build caching into an application. Think about cases where you don't know for sure whether you really need to cache some piece of data. If you're afraid that caching will be impossible to retrofit later, you invariably have to build that caching at the start—even though that may be a waste of time. On the other hand, what if you've structured the code to isolate the access to this data in such a way that you could easily retrofit caching into the existing code with little risk? In the second case, you can responsibly forgo the caching support for the moment and deliver the functionality faster.
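Here is one way to keep that decision reversible (a hedged sketch; IRateProvider and both implementations are hypothetical). Because callers depend only on a small interface, caching can be retrofitted later as a decorator without touching the original class or anything that uses it:

using System.Collections.Generic;

public interface IRateProvider
{
    decimal RateFor(string region);
}

// Ships first, with no caching at all.
public class DatabaseRateProvider : IRateProvider
{
    public decimal RateFor(string region)
    {
        // Imagine a real database query here.
        return QueryDatabase(region);
    }

    private decimal QueryDatabase(string region)
    {
        return 0m; // stub standing in for real data access
    }
}

// Retrofitted later, only if measurement proves caching is needed.
public class CachingRateProvider : IRateProvider
{
    private readonly IRateProvider _inner;
    private readonly IDictionary<string, decimal> _cache = new Dictionary<string, decimal>();

    public CachingRateProvider(IRateProvider inner)
    {
        _inner = inner;
    }

    public decimal RateFor(string region)
    {
        decimal rate;
        if (!_cache.TryGetValue(region, out rate))
        {
            rate = _inner.RateFor(region);
            _cache[region] = rate;
        }
        return rate;
    }
}

Swapping new DatabaseRateProvider() for new CachingRateProvider(new DatabaseRateProvider()) at the composition root is the entire change; the rest of the code base never knows the difference.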

Reversibility also guides my team in what technologies and techniques we use. Because we use an incremental delivery process (Kanban with Extreme Programming engineering practices), we definitely favor technologies and practices that promote higher reversibility. Our system will probably have to support multiple database engines at some point in the future. To that end, we use an Object Relational Mapping framework to largely decouple our middle tier from the actual database engine. Just as important, we've got a fairly comprehensive set of automated tests that exercise our database access. When it's time to swap database engines, we can use those tests to be confident that our system works with the new database engine—or at least point out exactly where we're incompatible.
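As a sketch of how such a test suite might be organized (the types and the factory below are hypothetical stand-ins, not our actual code), the assertions can live in an abstract base fixture, with one thin subclass per database engine, so every engine is exercised by exactly the same tests:

using NUnit.Framework;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    void Save(Customer customer);   // assumed to assign Id on insert
    Customer FindById(int id);
}

// Hypothetical stand-in for whatever ORM configuration the project uses.
public static class RepositoryFactory
{
    public static ICustomerRepository For(string engine)
    {
        throw new System.NotImplementedException("Configure your ORM here.");
    }
}

public abstract class CustomerPersistenceTests
{
    // Each engine-specific subclass supplies its own configuration.
    protected abstract ICustomerRepository CreateRepository();

    [Test]
    public void CanRoundTripACustomer()
    {
        ICustomerRepository repository = CreateRepository();

        var original = new Customer { Name = "Acme" };
        repository.Save(original);

        Customer loaded = repository.FindById(original.Id);
        Assert.AreEqual("Acme", loaded.Name);
    }
}

// One thin subclass per engine runs the entire shared suite.
[TestFixture]
public class SqlServerCustomerTests : CustomerPersistenceTests
{
    protected override ICustomerRepository CreateRepository()
    {
        return RepositoryFactory.For("SqlServer");
    }
}

[TestFixture]
public class OracleCustomerTests : CustomerPersistenceTests
{
    protected override ICustomerRepository CreateRepository()
    {
        return RepositoryFactory.For("Oracle");
    }
}

When a new engine comes along, supporting it in the test suite is one more small subclass, and a red or green test run tells you immediately where the engines differ.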

YAGNI and the Simplest Thing that Could Possibly Work

To do continuous design, we have to make our code easy to change, but we'd really like to prevent a lot of rework in our code as we're making changes to it. To do incremental delivery, we want to focus on building only the features we're tasked with building right now, but we don't want to make the next feature impossible or harder to develop by making the design incompatible with future needs.

Extreme Programming introduced two sayings to the development vernacular that are relevant here: "You aren't gonna need it" (YAGNI, pronounced "yawg-nee") and "the simplest thing that could possibly work."

First, YAGNI forbids you to add any code to the system now that will not be used by current features. "Analysis paralysis" in software development is a very real problem, and YAGNI cuts through this problem by forcing you to focus on only the immediate problem. Dealing with complexity is hard, but YAGNI helps by reducing the scope of the system design you need to consider at any one time.

Of course, YAGNI can sound scary and maybe even irresponsible because you might very well need the level of complexity you bypassed the first time around. Following YAGNI shouldn't mean that you eliminate future possibilities. One of the best ways to ensure that is to employ "the simplest thing that could possibly work."

I like Alan Shalloway's definition of the simplest thing that could possibly work, shown in the following list. (The once-and-only-once rule refers to the elimination of duplication from the code; it's another way of describing the "don't repeat yourself" principle.) You should choose the simplest solution that still conforms to these rules:

  1. Runs all the tests.
  2. Follows the once-and-only-once rule.
  3. Has high cohesion.
  4. Has loose coupling.

These structural qualities of code make code easier to modify later.

The point of these complementary sayings is that each piece of complexity has to earn its right to exist. Think about all the things that can happen when you choose a more complex solution over a simpler one:

  1. The extra complexity is clearly warranted.
  2. The extra complexity isn't necessary and represents wasted effort over a simpler approach.
  3. The extra complexity makes further development harder.
  4. The extra complexity turns out to be flat-out wrong and has to be changed or replaced.

The results of adding complexity include one positive outcome and three negative outcomes. In contrast, until proven otherwise, a simple solution may be adequate. More important, the simple approach will probably be much easier to build and to use with other parts of the code, and if it does have to be changed, well, it's easier to change simple code than complex code. The worst-case scenario is that you have to throw away the simple code and start over, but by that time you're likely to have a much better understanding of the problem anyway.

Sometimes a more complex solution will definitely turn out to be justified and the correct choice, but more often than not, using a simpler approach is better in the end. Consistently following YAGNI and "the simplest thing" when you're in doubt is simply following the odds.

How Much Modeling Before Coding?

Let's put documentation requirements aside for the moment. Here's a classic question in software development: "How much design and modeling should I do before starting to code?" There is no definitive answer because every situation is different. The key point is that when you're unsure how to proceed, you are in learning mode. Whether you do some modeling or some exploratory coding first depends strictly on which approach helps you learn faster about the problem at hand. And, of course, I have to repeat this classic quote from Bertrand Meyer: "Bubbles don't crash."

  • If you're working with an unfamiliar technology or design pattern, I think that modeling isn't nearly as useful as getting your hands dirty with some exploratory coding.
  • If a design idea is much easier for you to visualize in a model than in code, by all means draw some models.
  • If you have no idea where to start in the code, don't just stare at the IDE window hoping for inspiration. Take out a pen and paper and write down the logical steps and responsibilities involved in the task you're working on.
  • Switch to coding the second that you reach a point of diminishing returns with modeling. (Remember, bubbles don't crash!) Better aligning the boxes in your diagram does not help you write better code!
  • If you do jump straight into coding and begin to struggle, stop and go back to modeling.
  • Remember that you can switch between coding and modeling. Many times when you're confronted with a difficult coding problem, the best thing to do is pick out the simplest tasks, code those in isolation, and use the form of that code to help you determine what the rest of the code should look like.

Another thing to keep in mind is that some forms of modeling are more lightweight than others. If UML isn't helping you with a problem, switch to CRC cards or even entity relationship diagrams.

What's Ahead

This article has been light on code, but I feel strongly that these concepts apply to almost all design decisions. In a future column, I'll talk about some specific concepts and strategies for developing designs that allow you to use continuous design principles. I'll also describe in much more detail how refactoring fits into continuous design.

Send your questions and comments to mmpatt@microsoft.com.

Jeremy Miller, a Microsoft MVP for C#, is also the author of the open-source StructureMap tool for Dependency Injection with .NET and the forthcoming StoryTeller tool for automated acceptance testing in .NET. Visit his blog, "The Shade Tree Developer," part of the CodeBetter site.