Have Your Layer Cake and Eat It, Too


David Jobling

March 2007

Summary: As a design pattern, layering is an established best practice, but it can be used for more than just providing structure to an application. (6 printed pages)


Application Layering
The Productivity of a Plebe
Spooky Prediction
Bigger (Not Better) Things
Brave New World
Onwards and Upwards
Critical-Thinking Questions

Application Layering

In the closing months of 1997, I started my career in IT. Back then, however, I was actually a Management Consultant. The boom had yet to arrive, and the shift to IT that the consulting industry would see in the next few years was not even on the horizon.

As a consultant, you can pretty much end up anywhere, doing anything, and I was originally assigned to the business-process area—destined for a life of meetings and spreadsheets. However, by an act of God (I like to believe), I was assigned as a junior developer to a small supply-chain management project (code name "MTV"), and the rest, as they say, is history.

The Productivity of a Plebe

I have to admit that, since then, my assignments have been part random luck and part deviant coercion on my part. But I've been lucky enough to be in the right place at the right time, to stay on the breaking wave of technology that crashed its way through the millennium and is now washing up on the shores of service-oriented architecture (SOA).

A lot has changed in the world of application architecture since 1997, but there is one constant that can be used to draw a line through that period—a concept that existed then and that is still core to application architecture today: the concept of application layering. For me, 1997 was the void, and it was dark. And then my development manager said, "Let there be layering," and thus it was.

This was my first foray into programming, and I was what I would refer to as a plebe, a new developer with no previous experience, on a steep learning curve. I was going to make mistakes, and it would take some time to get up to speed. I was joined on the project by two fellow plebes; therefore, the application architect deemed it wise to try to isolate us from much of the complexity of the client-server application that we were going to develop by implementing a layer of common services.

This was my first experience of layering and, at first, it was pleasant. I did not have to worry or even know about database connections or commands; all I had to do was call gStrExecStoredProcedure.
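In today's terms, that common-services veneer might look something like the following Python sketch, with sqlite3 standing in for the real database. Everything here except the idea of a single entry point (the article's gStrExecStoredProcedure) is invented for illustration:

```python
import sqlite3

# Hypothetical common-services layer: callers never see connections or
# commands, only one entry point. sqlite3 stands in for the real database.
_conn = sqlite3.connect(":memory:")
_conn.execute("CREATE TABLE orders (id INTEGER, product TEXT)")
_conn.execute("INSERT INTO orders VALUES (1, 'widget')")

def exec_query(sql, params=()):
    """The whole 'layer': connect/execute/fetch hidden behind one call."""
    return _conn.execute(sql, params).fetchall()

# A plebe-friendly call site: no connection handling in sight.
rows = exec_query("SELECT product FROM orders WHERE id = ?", (1,))
```

The convenience is real; the danger, as the rest of this story shows, is what accumulates beneath that one call.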

It was all good until, towards the end of the project, I was tasked with remaining behind to support final testing and deployment. A few bugs in, and I was knee-deep in the code beneath the common-services veneer, and it was nasty. It was not so much a layer as a carpet, underneath which was swept the most complex and hack-ridden code ever written.

It worked for the most part, but it was an absolute nightmare to maintain. Thus, the first lesson of layering was learned:

It's all well and good to isolate the complexity in order to make plebes more productive. However, never lose sight of it; and keep in mind that it will never, ever be a black box.

Spooky Prediction

My degree is in civil engineering, and I could not help but draw parallels between application architecture and building architecture when it comes to construction.

Quite a few of our later issues with this application were due to problems in the common-services code, which I equated to the foundations of a building. It goes without saying that problems with foundations are harder to fix after the building has been constructed, and techniques such as underpinning are equivalent to some of the solutions (hacks) that we had to put into the common-services layer to prevent having to rewrite the rest of the code.

There had to be a better way, and it was late one night, pondering this thought, that I drew a diagram with a big circle around the point at which the application code called the common-service code.

I used the engineering notation denoting a pinned support (one structural member that is not rigidly connected to another) and labeled it "flexibility."


Figure 1. Flexibility at the layering interface

If we had known then what we know now, we would have designed it differently. I put it down to one of those things and left it.

Bigger (Not Better) Things

"MTV" was a small development project with a small team, each of whose members had control of their own areas and knew them intimately. I then moved on to a much larger project that had been in existence for over 6 years in some form or other (code name "DVA"). I entered during the development stage of Phase 2, while Phase 1 was being tested.

It was a very large data-access and edit application. The concept of the application was not that complex, but there were a lot of data entities to build (person, car, residence) and many, many screens to give the appropriate level of access to each of them and establish the relationships.

If the "MTV" project could be described as light on layers, "DVA" was layering on steroids. From database to screen, the application had no fewer than 12 logical layers to get through—each layer adding a little bit more to the party, such as decoding or mapping.

The reason for so many layers was that the vendor had at its disposal lots of relatively unskilled people. Faced with having to build all these different entities and screens, they broke down the work packages to such a level that they could be understood and completed by a history graduate with no (and I mean absolutely no) previous computer experience.

This resulted in a very structured and layered application; hence, the architecture was primarily based on how they were going to build it as opposed to any end-product functional or non-functional requirements, such as scalability.

Henry Ford would have been proud of the level of industrialization. It did not surprise me to hear that it took nearly as long to design this factory process as it did to build the resulting application.

There were other benefits to this highly structured approach. Work packages could be very accurately estimated (regardless of who was doing them) and, as a result, the entire program was largely on-time and on-budget. The framework was so tight that defects were rare; and, thanks to a very detailed tracing mechanism (introduced, once again, with the development process—and not final operations—in mind), they were easily located and rectified.

Until then (and it was well into the project before I plucked up the courage to ask "WHY!!??"), I had never considered architecting an application to suit the construction phase, but in this, they were entirely justified. The purists among us will probably be aghast at that statement; but as a practical, pragmatic, and practicing architect, you must consider it. This particular instance was definitely overkill. But ever since I started architecting my own solutions, I have pushed this aspect of design further and further—each time, getting a better return on investment.

Thus came my second mantra of application layering:

Adding layering complexity to make life easier during design, development, or testing is all right—WITHIN REASON!

Brave New World

Two projects down, and I had seen both poles of enterprise-scale development projects. How lucky was that? However, the landscape was changing. Less than a year later, my laptop was full of MP3s, and the Internet revolution had begun. But how had it affected application layering?

Disappointingly, not much.

After "DVA," I went on a string of projects—edging closer and closer to my goal of Internet work. I had played with Active Server Pages (ASP) while on "MTV" and liked what I saw. We went through such hell deploying the client application that, one night, I had seriously considered rewriting the entire thing in ASP. My next project gave me just such an opportunity. It was a browser-based application that used ASP and Visual Basic (VB) in the background to deliver content to a showroom intranet (code name "PointThree").

Okay, so it wasn't the Internet. But it was ASP, nevertheless, and I was eager to get my hands dirty. Unfortunately, I joined the project after it had been designed—during the build phase—and, instead of the Distributed interNet Applications (DNA) architecture that I was expecting, I got a client-server application, with the client being awkwardly and poorly presented by way of an HTML interface.

The same mistakes were being made, and it was a surprise to see the same issues here that we had on "MTV." The server code was so tightly coupled to the front end that the small changes that came up so regularly had a catastrophic effect on the rest of the code base. Drawing a parallel back to my engineering days, a similar situation would occur when the building was bolted so tightly to the ground that even the slightest tremor made the entire structure shake. Too many small changes, and the entire structure would fall and the project would fail.

I returned to the diagram that I had drawn on "MTV" and isolated the cause of the problems on "PointThree." I updated it, based on a TV program that I had seen a couple of nights earlier about new construction techniques used in San Francisco to combat earthquakes, in which vibration dampeners were placed between the foundations and the main building. But before I had a chance to fix it, the project ended.


Figure 2. The flexible interface, rethought as part of both design and development

It was clear that these issues were not isolated incidents. They were common problems, but they could be resolved. At this stage, I had not tested the mantra and, from "DVA," I was wary about adding too many layers. However, I was certain that:

An architecture that is more flexible than the end application strictly requires will pay for itself in reduced development costs.

For me, this was an important revelation. From this point onwards in my career, I moved from being a developer to having more responsibilities and leading development teams. I had to deliver, and tools and techniques that would reduce the issues and risks during development were welcomed.

But what about the hit on performance? "DVA" suffered heavily as a result of its layered architecture. Could there really be some middle ground? Could I have my layer cake and eat it, too?


In the Microsoft world, DNA was the key. This was an architecture pattern that allowed server-based solutions that could scale as required. At the simplest level, the reduced performance of a solution due to its multilayered architecture could be absorbed by scaling out the system to achieve the desired capacity; that is, whack in three servers, instead of two.

This architecture was now even more suitable, because another requirement of such systems was resilience. Only one server was required for capacity, but two were required for resilience. So, why not use the extra capacity to speed up development?

It was not too long before I had my chance to put into practice what I had learned. Through insistence and coercion, I became the development manager for a dot-com project right at the height of the dot-com boom—another lucky stroke.

I used DNA and designed a presentation tier on a Web server, an application tier on the application server, and the data on its own database server. My initial stab at layering was to convert these physical tiers into logical ones.

The data layer would do database things, such as select and insert; the application layer would do all sorts of cool business logic; and the presentation layer would create the HTML.
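Those three logical layers can be sketched as plain functions (a Python sketch; all names and data here are invented, not from the project):

```python
# Illustrative three-layer split: each function touches only its own concerns.

def data_layer_get_customer(customer_id):
    """Data layer: database-style concerns only (here, a stub record)."""
    return {"id": customer_id, "name": "ada", "balance_cents": 12500}

def app_layer_customer_summary(customer_id):
    """Application layer: business logic, e.g. shaping a display model."""
    row = data_layer_get_customer(customer_id)
    return {"name": row["name"].title(), "balance": row["balance_cents"] / 100}

def presentation_layer_render(customer_id):
    """Presentation layer: turns the model into HTML."""
    model = app_layer_customer_summary(customer_id)
    return f"<p>{model['name']}: ${model['balance']:.2f}</p>"

html = presentation_layer_render(42)
```

Each layer calls only the one beneath it, which is what makes the boundaries worth drawing in the first place.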

I then applied the mantras that I had painstakingly learned:

  1. Good code at all levels. If I hide something, it will only come back to bite me later. I made sure that I had a specific workflow for foundation code, and that it was not just built when and where it was needed.
  2. Play to the strengths of the team. I had quite a good team and, therefore, could be pretty generous in assigning work packages. As a result, I didn't have to subdivide the layers into sublayers to hide complexity from the plebes. My plebes were actually quite good.
  3. At last, I could implement the pinned-support model that I had drafted many years earlier. So, I drew up plans for how each layer would interface with the others.

The team was skeptical, and the initial reaction was, "Why do you need to map data between layers? Surely, I can just take what the lower layer produces. Why do I have to do all this extra work?"

My only real answer was, "Because, in all likelihood, this specific aspect of the application will change—leading to the need for change here, here, and here—and it will cost more time and introduce more defects than if we build in that flexibility now." Unfortunately, what they heard was, "Because I say so."
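The mapping the team objected to amounts to a thin translation at each layer boundary, so that a lower-layer change lands in exactly one place. A minimal Python sketch (field names and shapes invented):

```python
# The 'pinned support' in code: the presentation layer depends on its own
# view shape, never on whatever the lower layer happens to return.

def data_layer_fetch():
    # Imagine this shape changing when the database is restructured.
    return {"CUST_NM": "Ada Lovelace", "CUST_TELNO": "555-0100"}

def map_to_view(record):
    """The only place a lower-layer shape change has to be absorbed."""
    return {"name": record["CUST_NM"], "phone": record["CUST_TELNO"]}

def render(view):
    # Presentation code is written against the mapped view only.
    return f"{view['name']} ({view['phone']})"

output = render(map_to_view(data_layer_fetch()))
```

The extra work is the one small mapping function; the payoff is that render() never changes when data_layer_fetch() does.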

I learned to be a little less honest with the answers, as I continue to have the same challenge today. If someone asks me why I have designed the system in a certain way, it is nearly always due to experience that might or might not be easily communicated: "Trust me, I'm an architect" only cuts it with some people.

But I digress. Suffice it to say that it worked. Being a dot-com project, there were tight deadlines and many, many changes. But thanks to the flexibility that the architecture introduced, we were able to accommodate them without massive changes to our code; and, in the end, we came in ahead of time and budget.

During this project, I also learned another lesson. Halfway through the build phase, we noticed that the presentation-layer team was lagging behind—due to the complexity that was introduced in screen flows in the later stages. Building both the flows and the screen was getting confusing, and the developers were tripping over each other.

So, I split the team in two and, to explain the new structure to them, I introduced the concept of a new layer into the presentation tier called the process layer. "You can't just add in a new layer!" they exclaimed. "I can and—oh, look—I just did," I replied.

But this was layering in name only. Most of the code remained the same, but segregation of responsibility was now clear, and it was easy to see where one layer stopped, where the other one started, and how they communicated in a consistent and repeatable way.
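That split of the presentation tier can be sketched as two small pieces: a process layer that owns screen flow, and a screen layer that owns rendering (screen names and transitions here are invented):

```python
# Illustrative process/screen split within a presentation tier.

FLOW = {("basket", "checkout"): "payment", ("payment", "pay"): "receipt"}

def process_layer_next(current_screen, action):
    """Process layer: decides which screen comes next; knows no HTML."""
    return FLOW[(current_screen, action)]

def screen_layer_render(screen):
    """Screen layer: renders one screen; knows nothing about flow."""
    return f"<h1>{screen.title()}</h1>"

page = screen_layer_render(process_layer_next("basket", "checkout"))
```

Two teams can now work on flow and screens without tripping over each other, which was the whole point of the new layer.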

Thus, I learned lesson number four:

Layering is a tool—not only for architecture, but also as a method of helping teams understand the application and how it all hangs together.

Onwards and Upwards

The next few years saw more projects. Each one was Web-based. With each, I was able to refine the model, using my experiences with different problems and different teams. Each time, there was a balance to be struck between solving the business problem, the team to build the solution, and the architectural elegance of the resulting application. One thing that I have noticed is that technological progress has played an important part in striking this balance. As technology has evolved over the past 9 years, so has the productivity and power of the plebe.

With tools such as IntelliSense and software factories, a small low-skilled team can do a lot more now than it could back then. Better compilers mean that, in some cases, you can add many logical layers to simplify development with very little (if any) impact on runtime performance. As some facets of this equation become less important, others become more prominent. My last few projects have been large integration projects, and a major challenge with this type of project has been the end-to-end testing.

With a process that passed through four systems in turn, testing the last interface meant going through the first three. This was difficult in assembly testing, but it was an absolute nightmare when it came to testing in the full end-to-end environment with test versions of all four systems.

To combat this problem, when the next development phase came around, I added in a new interface layer to disconnect the main application from the interfaces, such that this layer could be invoked directly by the testers. With direct access, the test team was able to perform 90 percent of the interface testing in the end-to-end environment within a few days, as opposed to taking weeks to get all the systems synchronized.
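The shape of that interface layer can be sketched as a single seam between the application and its downstream systems, which testers can drive directly (system names, payloads, and the transport stub are all invented):

```python
# Illustrative interface layer: one seam between the application and its
# downstream systems. In production the transport is a real gateway; in
# testing it can be a stub, and testers can call send() directly.

class InterfaceLayer:
    def __init__(self, transport):
        self.transport = transport

    def send(self, system, payload):
        """Single seam: the main application and the testers both call this."""
        return self.transport(system, payload)

# End-to-end testing: exercise the fourth interface on its own, without
# pushing a transaction through the first three systems.
calls = []
layer = InterfaceLayer(lambda system, payload: calls.append((system, payload)) or "ACK")
reply = layer.send("system4", {"order": 99})
```

Because the application only ever talks to the seam, the testers can too, which is what collapsed weeks of system synchronization into days.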


This last example demonstrates how an application can be layered in such a way that it accommodates requirements other than those of the business or the operational environment. As we step into the era of service orientation, this becomes even more important.

In this world, applications are expected to act as services and display a large degree of reusability. While this is the dream, in reality applications and services will continue to need to be tweaked or versioned to accommodate new consumers. By layering the application as in my last example, this is more easily achieved without affecting the main application.

With service orientation comes a change in nomenclature. Application layers are becoming service layers, and this change predicts a much larger change in the way in which applications are structured. Even now, it is common for a particular layer of an application to be provided by a completely different system as a service, and the lines continue to blur.

In the end, however, as a practicing architect, you should know the right tool for the right job. Application layering, in any form, is the Leatherman tool of patterns.

Critical-Thinking Questions

  • When drawing up your initial architecture, be honest with yourself about why you are segregating components, functionality, or features into layers. Is it to make the diagram look better, or just to conform to industry practice or what somebody else expects?
  • Take time to walk through the entire project life cycle of design, development, testing, and implementation on a whiteboard. Are there any aspects of the programmers or environment that could complicate things down the line? If so, you might want to cater to these eventualities in your initial design.
  • Are you asking too much of your team? Breaking the work down into bite-size chunks makes it easier to understand, code, review, and test.


About the author

David Jobling is a senior solution architect with Avanade. David has expertise in enterprise and application architecture across multiple industries, concentrating most recently in financial services on corporate banking and capital-market requirements and applications.

Prior to Avanade, David spent 4 years with Accenture as a technical architect in their Dublin Solution Centre. Throughout his career, David has become skilled in many technologies, but he has gained recognition as a solution architect and thought leader.

This article was published in Skyscrapr, an online resource provided by Microsoft. To learn more about architecture and the architectural perspective, please visit