Pragmatic Architecture: Layering

 

Ted Neward

October 2006

Applies to:
   .NET Framework

Summary: Why, precisely, do we build systems using the n-tier architectural model, again? It's a basic article of faith in software that, when presented with a new project, we divide the system cleanly into three tiers: the presentation tier, the business-logic tier, and the data-access or resource tier. Doing things "just because they've always been done that way" deserves re-examination. (6 printed pages)

Some things make their way so solidly into the popular culture that nothing short of a catastrophic event will ever dislodge them, even if the reason for their inclusion has long since passed. Such examples are particularly easy to find in the legal world, where U.S. cities, states, and even the federal government have laws on the books that make no sense in our current era—or any other era, for that matter (rumor has it that, in a particular small town in Arizona, it is illegal to do the backstroke in the middle of a highway). It's a bit reminiscent of the old joke:

A man, recently married, watches his wife prepare a roast for dinner. To his amazement, just before placing the roast into the pan, she slices off about two inches of meat on either end and throws them away. When he expresses surprise at her actions, she replies, "It makes the roast taste better. Besides, my mother always did it that way." Curious, he calls his wife's mother and asks her if she also cuts the ends off of her roasts, and why. "Because it makes the roast taste better. Besides, my mother always did it that way." Determined to find out where this custom comes from, he calls his wife's grandmother, and when she agrees that she, too, cuts off the ends of her roasts when she prepares them, he asks why. Promptly, she replies, "Because my pan is too small to hold it all."

This obvious scenario of doing things "just because they've always been done that way" clearly deserves re-examination. Unfortunately, other situations are not nearly as transparent.

Such as, why, precisely, do we build systems using the n-tier architectural model, again? It's a basic article of faith in software that, when presented with a new project, we divide the system cleanly into three tiers: the presentation tier, where we handle all user input and data-display issues; the business-logic tier, where we put all "business logic" (which is itself a fairly amorphous term that can mean pretty much whatever the speaker wants it to mean); and the data-access or resource tier, where we put all the code that retrieves, modifies, or stores data. And, when asked why we do it this way, many have no answer other than the generic, "Because my mentor (or last team lead, or the last book I read) always did it that way." Cross-tier communication carries with it a significant performance cost, to the tune of three to six orders of magnitude over a normal, in-proc method call—in essence, making this a comparison between a trip down to the grocery store versus a trip to Pluto (see Effective Enterprise Java, Item 17). Are we throwing away good meat… er, CPU cycles, just because that is the way Mom always did it?

After all, bear in mind that building applications in a tiered approach is not the only option available to developers. Consider, for example, something that UNIX developers have known for years, and that .NET developers are beginning to discover, thanks to the forthcoming Microsoft Windows PowerShell: composing small tools that "feed" into one another can create some incredibly powerful composite tools, without the commensurate complexity of maintaining a monolithic application. For example, creating a tool that can search through text files for a particular string sequence and replace it might seem like a trivial task to the average C# or Visual Basic developer, but it's even more trivial to the PowerShell-savvy, in that it's simply a concatenation of several "cmdlets": one to iterate through the files, another to search the files' contents and replace the desired text with the new text, and a third to write the new contents to disk. It's a beautiful model, particularly since each component can focus on a specific task (iteration, search, write), thus simplifying its design and maintenance.
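As a sketch of the idea (the file filter and the search-and-replace strings below are placeholders, not anything prescribed here), the entire tool reduces to a single pipeline of stock cmdlets:

    # One cmdlet per concern: iterate, transform, write.
    Get-ChildItem -Path . -Filter *.txt |
        ForEach-Object {
            (Get-Content $_.FullName) -replace 'oldText', 'newText' |
                Set-Content -Path $_.FullName
        }

Each stage knows nothing about the others, so any one of them can be swapped out or tested in isolation.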

Before there's a mad rush to judgment against the n-tier model, it deserves a chance at salvation. For starters, we can refine our use of the overly generic "n-tier" term a bit, and note the important distinction between tiers and layers. (See Martin Fowler's Patterns of Enterprise Application Architecture for a full treatise on the point.)

A layer is a logical separation of software, a basic separation of concerns at the developer level, so we can more easily partition the responsibilities of the system. This is further documented in [POSA1], where the Layers pattern states that using Layers "helps to structure applications that can be decomposed into groups of subtasks in which each group of subtasks is at a particular level of abstraction." In other words, it's classic separation of concerns: split the various tasks involved in an enterprise system—the retrieval of data, the storage of data, the execution of business rules against that data, the display of data, the collection of input, and so on—into components or subsections, so that we can more easily track what is happening where and when. Naturally, the most common division of labor is into presentation, logic, and data-access layers. Note, however, that we're not immediately making any presumption about where each of these layers will run—not yet.
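As a minimal illustration of that division of labor (the type and interface names below are invented for this sketch, not drawn from any particular framework), the layers might look like this in C#:

    // The domain data that the layers pass around.
    public class Customer
    {
        public int Id;
        public decimal Discount;
    }

    // Data-access layer: retrieves and stores data; knows nothing of UI or rules.
    public interface ICustomerRepository
    {
        Customer Load(int id);
        void Save(Customer customer);
    }

    // Business-logic layer: executes rules against the data; knows nothing of
    // how the data is displayed or physically stored.
    public class CustomerService
    {
        private readonly ICustomerRepository repository;

        public CustomerService(ICustomerRepository repository)
        {
            this.repository = repository;
        }

        public void ApplyDiscount(int customerId, decimal percent)
        {
            Customer customer = repository.Load(customerId);
            customer.Discount = percent;   // the "business rule" lives here
            repository.Save(customer);
        }
    }

    // Presentation layer (not shown): collects the customer ID and percentage
    // from the user and calls CustomerService; it never touches the repository.

Nothing in this sketch says where any of these types will execute; that is precisely the point of keeping layers and tiers distinct.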

A tier, on the other hand, is a physical layer of hardware, usually a computer of some form, on which some or all of our system can run. Traditional client/server computing—writing a program that executes SQL statements against a database running on a separate server—is a two-tier system. The World Wide Web we all enjoy on a daily basis is also built on the backs of a two-tier approach, where one tier, the client machine, sits in somebody's home or office and remotely accesses a second tier, sitting in a server room somewhere. And so on.

This might seem an irrelevant line of questioning to some. After all, isn't the presentation tier always going to sit on a client machine, the data-access tier on a database server, and the business-logic tier on a machine somewhere in between the two? Consider a closer examination of the "classic" three-tier Web-based application model:

  • The presentation layer in a Web-based application is a Web browser, and therefore not under our control. The HTML that the browser will display has to be produced and sent to it, typically from a code component of some form (ASP or ASP.NET) running on the server. This means that the "presentation" layer is now split across two tiers.
  • Similarly, the data-access layer is not entirely on the database tier, because the commands necessary to access and manipulate the data (that is to say, SQL) must be generated and sent from outside the database tier itself. This means that, like the presentation layer, data-access layer code is now being split across at least two tiers.

What we're left with is the realization that, even though it seems natural to consider the presentation layer to be running on the client tier, in reality this is only true for what we're now calling "rich-client" or "smart-client" applications. Beyond that, the connection between "tier" and "layer" is mostly accidental and definitely not the one-to-one mapping originally believed.

The principles and reasons behind layering our software designs are well-known and, for the most part, accepted across the software-architecture sphere. (If you're not convinced, or want to read more discussion on the subject, look at the "Design Patterns" column in the August 2006 MSDN Magazine, which discusses the Model-View-Presenter pattern in great detail, along with the intent and reasoning behind separating user interface from logic.) Tiering, however, remains an open discussion point; given the expense of pushing data around the network, it's worth ensuring that doing so is really necessary.

At one level, it's fairly easy to recognize why we would want at least two tiers, since we generally don't want to put server-class machines in front of users, for reasons of both cost efficiency and data centralization. Most n-tier discussions put a third tier in place, however, on which business components or logic is hosted; in the canonical Web-application diagram, sometimes a fourth tier is present, giving us a client tier, the Web-server tier, a business-logic tier, and the database tier. Why four? Why any more than two, for that matter?

Historically, two forces came together to create the n-tier approach. The first was the need for scalability: As the Internet grew and the Web became more end-user approachable, businesses realized that they could push their enterprise systems out to individual customers, and move much of the work historically done by internal systems (and call-center operatives) out of the company and on to the Web. For example, in 1980, a customer had to call a shipping company and ask a customer-service representative where a particular package was in transit. The customer-service rep would ask for the tracking number, and then use an internal software system to discover the package's location. In 2005, the same customer simply points their Web browser of choice at the shipping company's Web site, and types in the tracking number. The same back-end algorithms are searching through the same back-end data-storage systems, but now the input is being fed directly from the customer, instead of through an internal employee. The added "reach" of the enterprise system came with a commensurate cost, however: Where the internal system only had a few hundred users (the customer-service representatives), now the system has potentially hundreds of thousands of users (the customers). And here we reach a bottleneck, in that most database servers can support several hundred concurrent connections, but several hundred thousand concurrent connections will bring the database to its knees pretty quickly.

As it turns out, however, an interesting property regarding these concurrent connections became apparent: For most client/server applications, the connection established against the database spent most (more than 95 percent) of its time idle, waiting for requests to carry out against the database. This meant that the bottleneck was in the number of connections, not in the work being performed. This implied that to increase the database's scalability, we needed somehow to increase the amount of work done over each of these connections. So, an interim tier was created, and clients connected to that interim tier to multiplex requests to the database. Put simply, if the database can only hold 100 connections, and each per-client connection is being used 1 percent of the time, we can increase the scalability of the database by having 100 clients connect to an intermediate server, which then uses 1 connection (100 percent of the time, 1 percent for each client) to act against the database. Voilà: a hundred-fold increase in scalability. Not bad.
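In .NET, ADO.NET connection pooling is the everyday incarnation of this multiplexing. Here is a hedged sketch of a middle-tier lookup (the server, database, and table names are hypothetical), in which any number of client requests share a small, fixed set of physical database connections:

    using System.Data.SqlClient;

    public class PackageLookup
    {
        // Hypothetical connection string: cap the pool at 10 physical
        // connections, shared by every request flowing through this tier.
        private const string ConnString =
            "Server=dbserver;Database=Shipping;Integrated Security=true;" +
            "Pooling=true;Max Pool Size=10";

        public static object FindLocation(string trackingNumber)
        {
            // Open() borrows a connection from the pool; Dispose() returns it.
            // Each request holds the connection only for the milliseconds the
            // query takes, so 10 connections can multiplex hundreds of clients.
            using (SqlConnection conn = new SqlConnection(ConnString))
            using (SqlCommand cmd = new SqlCommand(
                "SELECT Location FROM Shipments WHERE TrackingNumber = @tn",
                conn))
            {
                cmd.Parameters.AddWithValue("@tn", trackingNumber);
                conn.Open();
                return cmd.ExecuteScalar();
            }
        }
    }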

That said, how many applications really need that kind of scalability? Certainly, bringing enterprise applications to your end-user customer base might, but large numbers of applications—Web-based or not—remain deployed entirely internally, where fewer than 100 (and sometimes fewer than 10) clients will access the system concurrently. Is there still a call for the n-tier approach in internal, small user-base applications?

Security factors come into play here. For an application running on an end-user's machine (be it Web-based or "rich-client"), it's not likely that any system administrator or security consultant will recommend that a database containing mission-critical data sit directly behind the outer firewall, accessed directly from machines operating outside the security perimeter. Putting an intermediate machine in the way, with another firewall behind it, creates what is commonly called a demilitarized zone, or DMZ, in which access to the database can be further restricted. Such a DMZ hardens the security infrastructure significantly and reduces the likelihood of successful penetrations. This not only protects the data from theft, but also helps protect the servers (and, thus, the rest of the application or system) from successful denial-of-service attacks.

A second factor, one that made n-tier systems attractive to a lot of large system owners, concerns deployment—the act of physically putting the software on a machine where clients can access it. In the traditional client/server environment, business logic intertwined with presentation logic and data-access logic led programmers to an uncomfortable realization: Every time a new update was required (such as changes to the way the business processes data, or a new view on that data was desired), the "fat client" sitting on the users' desktops had to be replaced and/or augmented with new code. That meant, at least at the time, that somebody—usually, whichever developer or system administrator was lowest on the totem pole—had to go around from machine to machine, installing the new code. Or the users were asked to download the latest code off the network, which, naturally, most users either ignored or carried out incorrectly. Neither scenario inspired much confidence in the wisdom of frequent releases. Deployments took time and, during that time, the system had to come to a halt, to avoid any sort of semantic data corruption caused by mixed versions of the application banging away at the database.

This deployment factor contributed significantly to the adoption rate of the n-tier model, and more specifically of the Web-based application. Now, instead of having to roll out code to individual user desktops, code could be deployed to the (single) Web server, and the end-user's browser would simply pick up the changes without any further work. In itself, though, deployment is no longer a reason to roll out an n-tier system; several alternatives, not available during the days of traditional client/server applications, have since added to the list of deployment possibilities, including No-Touch Deployment (in .NET 1.x) and ClickOnce (in .NET 2.0)—not to mention the rising interest in AJAX and various hybrid combinations. In fact, it's become common to release a rich-client application that updates itself on startup, as we see with iTunes, Windows Media Player, or even the popular .NET development tool Reflector.

A third reason to consider n-tier was popularly cited, but not frequently implemented: the idea of the middle tier as a gathering point for presentation-agnostic logic that could be accessed by multiple presentation layers. The canonical example of this is the combination intranet/extranet application, where internal employees use a WinForms (or, in the near future, Windows Presentation Foundation) application to access a middle-tier system that, in turn, accesses the database, whereas external users (partners and/or customers) use an ASP.NET or, possibly, SharePoint-based Web site to do the same: access the middle tier, which in turn accesses the database.

This idea, while seemingly trivial in concept, turns out to be deceptively difficult to pull off in an architecturally solid manner. This is also where it becomes crucial to distinguish between tiers and layers: If the presentation layer and the business-logic layer are cleanly separated, it becomes possible to embed the business-logic layer in the client tier (in the case of the rich-client application front end) rather than on the middle tier, and achieve some significant performance savings by avoiding network access.

Designing this business-logic layer to be used in two different tiers can be tricky, however. It means that the business-logic layer must avoid any assumptions about whether the presentation layer or the data-access layer is co-located in the same tier, and thus has to assume that neither one is. In particular, this can kill an application when the presentation layer performs data binding against business objects that are in actuality remote objects (by way of .NET Remoting) running on the middle-tier server. Now, every property access and every method call is a network traversal, and performance will drop faster than… well, faster than the credibility of an architect who builds a system that stinks.
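To make the trap concrete, here is a hedged sketch (the type names are invented for illustration). Under .NET Remoting, a MarshalByRefObject hands the client a proxy, so every databound property read crosses the network; a [Serializable] snapshot crosses it exactly once:

    using System;

    // Chatty: a middle-tier object marshaled by reference. The client holds
    // a proxy, and each databound property read is a network round-trip.
    public class RemoteCustomer : MarshalByRefObject
    {
        private string name = "";
        private decimal balance;

        public string Name { get { return name; } }         // one network hop
        public decimal Balance { get { return balance; } }  // another hop
    }

    // Coarse-grained: a serializable snapshot marshaled by value. The client
    // pays one network traversal to fetch it, then binds to it locally.
    [Serializable]
    public class CustomerSnapshot
    {
        public string Name;
        public decimal Balance;
    }

    public class CustomerGateway : MarshalByRefObject
    {
        public CustomerSnapshot GetCustomer(int id)
        {
            CustomerSnapshot snapshot = new CustomerSnapshot();
            // ...populate the snapshot from the data-access layer...
            return snapshot;   // copied across the wire in a single call
        }
    }

The second shape is exactly the "coarse-grained communication" the next paragraph describes.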

Fortunately, architects the world over have begun to realize the perils of the "distributed-object" approach, and now the mantra of the day reads as "loose coupling" and "coarse-grained communication"—all under the aegis of the service-oriented approach to architecture. Like everything else in software development, service orientation has its own pitfalls, but that's another column for another month.

 

About the author

Ted Neward is an independent consultant specializing in high-scale enterprise systems, working with clients ranging in size from Fortune 500 corporations to small 10-person shops. He is an authority in Java and .NET technologies, particularly in the areas of Java/.NET integration (both in-process and by way of integration tools like Web services), back-end enterprise software systems, and virtual machine/execution engine plumbing.

He is the author or co-author of several books, including Effective Enterprise Java, C# In a Nutshell, SSCLI Essentials, and Server-Based Java Programming, as well as a contributor to several technology journals. Ted is also a Microsoft MVP for Architecture, BEA Technical Director, INETA speaker, Pluralsight trainer, frequent conference speaker, and a member of various JSRs. He lives in the Pacific Northwest with his wife, two sons, two cats, and eight PCs.

Reach him at ted@tedneward.com, or visit Ted's blog.
