Real Frameworks for a Service-Oriented World


Joel Jeffery

February 2007

Applies to:
   Application Architecture
   Architecture Frameworks

Summary: The tools and practices discussed in this article should give customers better value and more predictable results. (10 printed pages)


Open-Source Announcement
Capgemini Frameworks and Solution Accelerators
Special Thanks

Open-Source Announcement

Capgemini plans to open-source some of the tools mentioned in this article in Q2 2007.


The beginning of every custom software delivery project is the same:

  • We check the field color: green or brown? It's usually brown, let's face it.
  • We draw the same diagrams. How many ways are there of representing N-tier architecture?
  • We write the same stock code. Let's map objects to relational data. Let's marshal those objects between tiers.
  • We compartmentalize. Let's decide where to put business logic, validation rules, and common functions.
  • We then double-check that everything we did was according to best practice.

Wouldn't it be good if there was another way?

Capgemini Frameworks and Solution Accelerators

Well, there is another way.

At Capgemini, we accelerate the start-up and implementation of all our projects, small-scale and large-scale alike, through the use of home-grown frameworks.

A recent show of hands at the Microsoft Architect Insight Conference showed that not enough of us are doing this. Please consider the rest of this paper a call to arms.

In this paper, I will discuss our enterprise-grade Capgemini Integrated Architecture Framework (IAF), the Capgemini Development and Architectural Framework (CDAF) Architectural Reference Model, and the associated toolset for the Microsoft technology stack, including our Messaging and Service Framework (MeSH).

Even if you don't use our tools or our methodologies, we hope this article will at least be thought-provoking and get a few more of us thinking along these lines.

Integrated Architecture Framework

Comparable to the work of John Zachman and the ZIFA, or The Open Group's TOGAF, the Capgemini Integrated Architecture Framework, which has been around for more than a decade, is the backdrop to most of the projects we run. It gives a flexible structure to medium-scale and large-scale projects alike. It is composed of a collection of Enterprise and Solution Architectural artifacts and patterns arranged in a teleological fashion. It gives structure to every aspect of a project, from the business down to the physical tin.

IAF is concerned with artifacts and views over the architecture. Consider how basic building blocks such as services, contracts, components, and interaction models fit into the following framework.


Figure 1. Integrated Architecture Framework v4.0, Copyright © 2000-2007 Capgemini


The Contextual layer of the IAF is concerned with the history and background of the problem. The context of an architecture looks to find out why things are currently the way they are, answering questions such as:

  • What business problem are we solving?
  • Why are we solving this business problem?
  • What is the scope of the problem?
  • What is the as-is architecture?
  • How will the new solution fit within the organization?


The Conceptual layer deals with the vision of what is going to be delivered and what the key elements are within the scope. Describing the functional and non-functional requirements, this layer is solution-independent and driven by benefits.


The Logical layer describes the future state of the architecture. It defines how the requirements can be delivered in a technology-neutral manner, giving both long-term direction and guidance for short-term decisions.


The Physical layer defines what the architecture will be delivered with: everything from paper documents in the Business and Information pillars, through compiled assemblies in the Information Systems pillar, to physical infrastructure in the Information Technology pillar.

Security and Governance

The Security and Governance dimensions form two further iterations over the Conceptual, Logical, and Physical layers. We do this because Security and Governance are both so wide-reaching that they deserve investigations of their own.

CDAF Architectural Reference Model

The CDAF Architectural Reference Model gives us a model architecture for the Information Systems pillar of the IAF. It is not groundbreaking or controversial. But it provides prescriptive guidance on how to split your application across logical layers and physical tiers. It tells the architect how to marshal information across boundaries. It tells the developer where to put code—presentation logic, business logic, validation functions and rules, data access and integration.

The key point is that each tier we implement has the same architecture. Figure 2 is used across each tier for which we develop software.


Figure 2. CDAF architectural reference model

Figure 3 shows the same architecture being applied within a familiar N-tier scenario.


Figure 3. CDAF architectural reference model, N-tier example

And, lastly, Figure 4 shows how the same application architecture fits within an Enterprise Service Bus or other EAI approach.


Figure 4. CDAF architectural reference model, ESB example

Presentation/Service Layer

This layer corresponds to the user interface of an application or the service interface it exposes. It is split into two further layers.

(User) Interface

This layer contains all the UI-/Service-specific primitives. For instance, for a desktop application, this would contain references to drop-down lists, progress bars, menus, and all the UI primitives used by the application. This is the only layer in which those components may be manipulated.

(User) Interface Process

This layer is event driven and handles all the events trapped in the layer above in a non-UI specific way. No UI/Service primitives may be passed into this layer. As much of the UI/Service logic as possible is pushed down into this layer. This gives us two tangible results:

  1. If we want to change the UI technology at the top of the stack from, say, a desktop application to a browser-based application, although difficult, it should be tractable. The same applies to services.
  2. We get a much higher possible code coverage figure for our unit tests. Trying to implement unit tests in code at the UI level is notoriously difficult.
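To make the separation concrete, here is a minimal sketch of the two layers. This is illustrative only, not CDAF code; the LoginView and LoginProcess names and signatures are invented for the example.

```csharp
using System;

// Interface Process layer: no UI primitives, only plain data in and out.
// Because it is UI-agnostic, it can sit under a desktop form, a Web page,
// or a unit-test harness unchanged.
public class LoginProcess
{
    public bool HandleLoginRequested(string userName, string password)
    {
        // Real logic would call down into the Business Layer.
        return !string.IsNullOrEmpty(userName) && password.Length >= 8;
    }
}

// (User) Interface layer: the only place UI primitives may be touched.
// In a real desktop application this would be a Form; here it is simulated.
public class LoginView
{
    private readonly LoginProcess _process = new LoginProcess();

    // The event handler traps the UI event, extracts plain values from the
    // controls, and delegates immediately to the process layer.
    public void OnLoginButtonClick(string userNameBoxText, string passwordBoxText)
    {
        bool ok = _process.HandleLoginRequested(userNameBoxText, passwordBoxText);
        Console.WriteLine(ok ? "Logged in" : "Login failed");
    }
}
```

Because LoginProcess deals only in plain values, a unit test can drive it directly without instantiating any UI, which is where the improved code-coverage figure comes from.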

Business Layer

This layer contains the business logic of the application. Remembering that the same architecture is used across each tier in the application, it follows that the business layer on the desktop will have different functionality from the business layer inside a Web service. One way of deciding where logic belongs is the principle of locality of data: the more data you need to process to make business decisions, or the more up-to-the-minute that data needs to be, the closer to the data that logic has to be.

Business Process

This layer contains a minimum of one business process class per UML Use Case or business activity, depending upon the complexity of the Use Case. A Business Process cannot invoke another Business Process.

Business Components

This layer is a repository for shared business functionality and integration points to other systems; Business Components can be viewed as UML Use Case "Includes." Where business logic is dependent upon connected systems within an EAI, or even outside the enterprise, the code to call those systems goes here.

To illustrate the difference between Business Processes and Business Components, consider a trivial example of three business activities: Deposit, Withdraw, and Transfer. These exist in the Business Process Layer, as shown in Figure 5, as BPDeposit, BPWithdraw, and BPTransfer. These, in turn, map on to components in the Business Component Layer—BCDeposit and BCWithdraw. BPDeposit and BPWithdraw obviously invoke BCDeposit and BCWithdraw, respectively, to perform the real work. The odd-man-out process BPTransfer would therefore invoke components BCDeposit and BCWithdraw to move money from one account to another.


Figure 5. Business Processes and Business Components
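The arrangement in Figure 5 can be sketched as follows. This is an illustrative reading of the diagram rather than actual CDAF source: all class shapes and signatures are invented, balances are passed around as plain decimals to keep the example self-contained, and the trivial BPDeposit and BPWithdraw wrappers are omitted for brevity.

```csharp
using System;

// Business Component Layer: shared, reusable units of business work.
public class BCDeposit
{
    public decimal Execute(decimal balance, decimal amount) => balance + amount;
}

public class BCWithdraw
{
    public decimal Execute(decimal balance, decimal amount)
    {
        if (amount > balance) throw new InvalidOperationException("Insufficient funds");
        return balance - amount;
    }
}

// Business Process Layer: one class per business activity. Processes may
// call components but never each other, so BPTransfer composes BCWithdraw
// and BCDeposit rather than invoking BPWithdraw and BPDeposit.
public class BPTransfer
{
    private readonly BCWithdraw _withdraw = new BCWithdraw();
    private readonly BCDeposit _deposit = new BCDeposit();

    public void Execute(decimal amount, ref decimal fromBalance, ref decimal toBalance)
    {
        fromBalance = _withdraw.Execute(fromBalance, amount);
        toBalance = _deposit.Execute(toBalance, amount);
    }
}
```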

Validation Components

A Validation Component is a specific, less general, instance of a Business Component (Figure 5) concerned solely with validating input.

Validation functionality is split into two parts: Validation Components and Validation Rules. Validation Rules—within the Cross Layer, covered later in this article—hold the values against which Validation Components perform the validation. For instance, a regular expression to validate an e-mail address format could be held as a string in the Validation Rules layer, whereas the functionality to perform the validation would go here. Both Validation Rules and Validation Components can be shared across tiers.

This gives us three important things:

  • Maintainability of rules separately from the rest of the code.
  • Reuse of rules in environments that have features that can take advantage of rules directly—for example, in a Web service you might validate an e-mail address by calling the IsEmailValid() function in the Validation Components layer, whereas in a Web page, you might use an ASP.NET Regular Expression Validator component and pass the value of the e-mail rule from the Validation Rules layer.
  • A consistent approach to applying the same rules across tiers; supporting client-side validation for performance and server-side validation for security.
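A minimal sketch of the split, using the e-mail example above. The names are invented; the point is only the division of responsibility: the rule is a bare value, and the component applies it.

```csharp
using System;
using System.Text.RegularExpressions;

// Cross Layer / Validation Rules: bare values only, no behavior. The same
// value can be handed directly to environments that consume rules natively,
// such as an ASP.NET RegularExpressionValidator.
public static class ValidationRules
{
    // Deliberately simplified pattern, for illustration only.
    public const string EmailPattern = @"^[^@\s]+@[^@\s]+\.[^@\s]+$";
}

// Validation Components: the behavior that applies the rules.
public static class ValidationComponents
{
    public static bool IsEmailValid(string candidate) =>
        candidate != null && Regex.IsMatch(candidate, ValidationRules.EmailPattern);
}
```

A Web service would call IsEmailValid(); a Web page could instead bind ValidationRules.EmailPattern to a client-side validator, applying the identical rule in both places.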

Data Transfer Objects

Here we put our Business Entities. We use the Data Transfer Object (DTO) pattern from Martin Fowler. A DTO holds the structure of an instance of row data held within an entity in the domain model. This can map onto a table or tables in the database through multitable inheritance, or onto a selection of rows. This layer can be referenced from all layers above and is used to move data from one layer to the next, and even between tiers.

Data Transfer Object Management

This layer contains all the logic to update and manipulate the DTOs. We use the Row Data Gateway pattern, also from Martin Fowler. DTO Management objects can also implement Martin Fowler's Unit of Work pattern.
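A hedged sketch of the two layers working together, with an in-memory dictionary standing in for real data access. The CustomerDto and CustomerManager names are invented; a real Row Data Gateway would issue SQL rather than touch a dictionary.

```csharp
using System;
using System.Collections.Generic;

// Data Transfer Object: a plain structure holding one row of entity data.
// It carries data between layers and tiers but contains no behavior.
public class CustomerDto
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
}

// DTO Management, in the style of Fowler's Row Data Gateway: an object
// encapsulating access to row data, exposing CRUD-shaped operations.
public class CustomerManager
{
    private readonly Dictionary<int, CustomerDto> _store = new Dictionary<int, CustomerDto>();

    public void Insert(CustomerDto dto) => _store[dto.CustomerId] = dto;

    public CustomerDto Find(int customerId) =>
        _store.TryGetValue(customerId, out var dto) ? dto : null;

    public void Update(CustomerDto dto)
    {
        if (!_store.ContainsKey(dto.CustomerId))
            throw new InvalidOperationException("No such row");
        _store[dto.CustomerId] = dto;
    }

    public void Delete(int customerId) => _store.Remove(customerId);
}
```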

Data Abstraction Layer

This layer is technology specific. It contains an implementation of data abstraction classes that will need to be rewritten only when targeting a new technology. In this example, we will talk about the Microsoft technology stack.

Business Entity Controller

In a Microsoft technology stack, the Business Entity Controller class provides a set of static and instance methods that you can use to execute a variety of methods against the IDBProvider interface for database interoperability. Currently, we have providers for both SQL Server and Oracle; adding support for a different database is a simple matter of writing a new provider. The purpose of the Business Entity Controller is to remove the Data Transfer Object/Business Entity classes' reliance on SqlHelper and to simplify the logic for Create, Retrieve, Update, and Delete (CRUD) operations. It also provides a delegate-based mechanism that allows non-atomic actions to be included as part of your main transaction, and allows business-compensating transactions to be defined and fired automatically upon failure.
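The delegate-based compensation mechanism might look something like the following sketch. This is a guess at the general shape, not the actual Business Entity Controller code: IDBProvider here shows only one plausible method, and BusinessTransaction is an invented name.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical minimal shape of the IDBProvider abstraction; the real
// interface is richer. Supporting a new database means supplying a new
// implementation of this interface.
public interface IDBProvider
{
    int ExecuteNonQuery(string commandText);
}

// Sketch of delegate-based compensation: each action is enrolled together
// with a compensating action. If a later action fails, the compensators
// fire automatically, newest first, undoing the work already done.
public class BusinessTransaction
{
    private readonly List<Action> _compensations = new List<Action>();

    public void Execute(Action action, Action compensate)
    {
        try
        {
            action();
            _compensations.Insert(0, compensate); // newest first
        }
        catch
        {
            foreach (var undo in _compensations) undo();
            _compensations.Clear();
            throw;
        }
    }
}
```

A non-atomic step such as calling a remote service could be enrolled with, say, a cancellation call as its compensator, so that a failure later in the transaction unwinds it.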

Business Controlled Transaction

The Business Controlled Transaction class is contained within the Business Entity Controller and provides an object that can be passed to the CRUD methods in the Business Entity Helper class to hold the method calls under a single transaction. For projects that use ADO.NET 2.0, this is replaced by the TransactionContext object.

Cross Layer

The Cross Layer is a bucket for functionality that is used across many layers. The following is a non-exhaustive list showing what we commonly expect in a project.

Validation Rules

As discussed earlier, here we put the values—strings, regular expressions, maxima, minima—that make up the meat of a Validation Component.


Security

The Security layer contains all the custom classes required to implement security for each tier. This will be different for each project depending upon security requirements. An example would be utility functions to check whether a user is in a specified role, or functions to create a security token to be passed between tiers.
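As a hedged illustration of the first kind of utility mentioned, the sketch below checks role membership against an in-memory table. All names are invented; a real project would back this with its chosen identity store.

```csharp
using System;
using System.Collections.Generic;

// Illustrative role-check utility of the kind described above. The
// hard-coded dictionary stands in for a real identity store.
public static class SecurityUtility
{
    private static readonly Dictionary<string, HashSet<string>> Roles =
        new Dictionary<string, HashSet<string>>
        {
            { "joel", new HashSet<string> { "Architect", "Developer" } }
        };

    public static bool IsUserInRole(string userName, string role) =>
        Roles.TryGetValue(userName, out var roles) && roles.Contains(role);
}
```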


Communication

Communication-specific functionality and utility functions go in this layer. Examples from recent projects include cryptography, utility functions that use .NET Remoting to pass data between two different AppDomains, and Winsock code to perform UDP broadcasting and multicasting in a push environment.

Logging and Exception Handling

In a Microsoft technology stack, we use the Microsoft Exception Management and Logging Application Blocks from the Enterprise Library. We wrap these, providing a simple, configurable tracing and exception-handling capability. We use refactoring tools to help developers remember to include calls to the tracing framework at the header and footer of each method or property, and a home-grown tool to add the calls to a source tree after the fact.
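The header-and-footer convention looks roughly like the sketch below, with Console standing in for the wrapped Enterprise Library blocks. The Tracer and AccountService names are invented.

```csharp
using System;

// Stand-in for the tracing wrapper; a real project would delegate these
// calls to the wrapped Logging Application Block.
public static class Tracer
{
    public static void Enter(string method) => Console.WriteLine("-> " + method);
    public static void Exit(string method) => Console.WriteLine("<- " + method);
}

public class AccountService
{
    public decimal Deposit(decimal balance, decimal amount)
    {
        Tracer.Enter("AccountService.Deposit"); // header call
        try
        {
            return balance + amount;
        }
        finally
        {
            // The footer call sits in a finally block so the exit trace
            // fires even when the method body throws.
            Tracer.Exit("AccountService.Deposit");
        }
    }
}
```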

CDAF Toolset for Microsoft.NET

The CDAF also includes tooling to help speed all this up.

Firstly, the layers in the Architectural Reference Model have been captured as a set of reusable, templated project documents and a set of Enterprise Templates for Visual Studio 2005. At the start of an implementation or coding phase this gives us an immediate head start by providing both the formal documentation and skeleton projects for Visual Studio.NET.

Secondly, and perhaps more importantly, the CDAF provides accelerators (the "CDAF Toolset") in the form of a Domain-Specific Language (DSL) for Visual Studio.NET 2005, and a flexible, template-driven, code generator. The DSL lets you specify a domain model from which the generator can build scripts to create your database. Alternatively, with a few mouse clicks, one can be reverse-engineered from an existing database. It is this toolset we are open-sourcing later this year.


Out of the box, it comes with templates to generate stored procedures for CRUD and Data Transfer Objects and their associated Manager classes for each entity. The templates are commented and presented according to our internal naming conventions and guidelines. It also builds unit tests for each class it generates.

Templates can be defined using either C# or Visual Basic.NET. We are currently adding support for more .NET languages—J# and X++ are on the road map. The output of the templates can, of course, be in any language and output file names can be parameterized.

The CDAF toolset is pretty mature and is now in its third year. Later this year will see templates for the presentation tier targeting both desktop and Web applications.

Data Abstraction

The data abstraction layer generated by CDAF is based upon two things: the templates, which are all editable, and the CDAF Schema. As shown in Figure 6, the Schema has two main nodes: Domain and Database.


Figure 6. CDAF Schema top-level nodes

The Database node describes the underlying database tables and their relationships and constraints. The Domain node describes the business entities that will eventually become Data Transfer Objects. The toolset allows you to prepopulate the CDAF Schema by importing the schema from a specified database.

This will give you a deterministic, one-to-one mapping between database tables and domain entities. The problem with this is that your canonical data format now exactly reflects your database. In other words, if you expose your Data Transfer Objects at a tier boundary—say, your Web services—then your database structure is potentially visible to the whole world. The good news is that you are free to change the Domain part of the CDAF Schema to a more useful and abstracted model, although this is no substitute for contract-first service design, which is the preferred model within the CDAF Architectural Reference Model.

In addition, the Domain subtree of the CDAF Schema supports multitable inheritance, and multiple views on the same tables. This means that, if desirable, you can generate Data Transfer Objects that are made up from a subset of fields from two or more database tables. You can also have the same database table populating more than one Data Transfer Object.
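As an illustration, a CDAF Schema along the lines described might look like the following fragment. All element names and the Person/Account example are invented; the real schema's vocabulary may differ.

```xml
<!-- Hypothetical sketch of the two top-level nodes. The Customer entity
     draws fields from two tables, illustrating multitable inheritance. -->
<CdafSchema>
  <Database>
    <Table name="Person">
      <Column name="PersonId" type="int" primaryKey="true" />
      <Column name="Name" type="nvarchar(100)" />
    </Table>
    <Table name="Account">
      <Column name="PersonId" type="int" foreignKey="Person.PersonId" />
      <Column name="Balance" type="money" />
    </Table>
  </Database>
  <Domain>
    <Entity name="Customer">
      <Field source="Person.Name" />
      <Field source="Account.Balance" />
    </Entity>
  </Domain>
</CdafSchema>
```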

Visual Studio 2005 Integration

Figure 7 shows an example: The code generated is pictured in the top-left corner of the graphic, an XML representation of the domain model in the top-right corner, the template library in the bottom-left corner, and the template property window in the bottom-right corner.


Figure 7. CDAF toolset screen shot


Messaging and Service Framework (MeSH)

MeSH sits at the top of the CDAF Architectural Reference Model for services.

Its purpose is to abstract away from the developer the complexity of the service transport—WSE, WCF, or a custom solution. MeSH implements the Pipes and Filters architectural style (described by Gregor Hohpe and Bobby Woolf in Enterprise Integration Patterns), following the Chain of Responsibility pattern (from Design Patterns, 1995, by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides—the Gang of Four—ISBN 0201633612). It allows the developer to write Filters to handle incoming and outgoing messages. Filters are components that implement a very simple interface and can be composed at deploy time by way of application.config files.


The purpose of the Chain of Responsibility pattern is to allow the sender of a request to be decoupled from the receiver by allowing multiple objects to handle the request. The message is passed from one Filter to the next until it is handled. The onus of ensuring that the message is passed down the Chain is upon the Pipe infrastructure, and is taken away from the developer. This means that there is less room for a developer to cause a blockage in the Pipe through incorrect coding.
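Stripped of transport concerns, the arrangement can be sketched like this. The interface and filter names here are invented for illustration and differ from MeSH's actual AbstractMessageFilter contract; messages are plain strings to keep the sketch self-contained.

```csharp
using System;
using System.Collections.Generic;

// A Filter implements one very simple interface.
public interface IMessageFilter
{
    // Returns the (possibly transformed) message for the next Filter.
    string ProcessMessage(string message);
}

public class DecryptFilter : IMessageFilter
{
    // Toy "decryption": strip a marker prefix.
    public string ProcessMessage(string message) => message.Replace("[enc]", "");
}

public class AuditFilter : IMessageFilter
{
    public string ProcessMessage(string message)
    {
        Console.WriteLine("AUDIT: " + message);
        return message;
    }
}

// The Pipe owns the chain: it passes each message from one Filter to the
// next, so a Filter author cannot forget to forward the message and block
// the Pipe.
public class Pipe
{
    private readonly List<IMessageFilter> _filters = new List<IMessageFilter>();

    public void AddFilter(IMessageFilter filter) => _filters.Add(filter);

    public string Send(string message)
    {
        foreach (var filter in _filters)
            message = filter.ProcessMessage(message);
        return message;
    }
}
```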

Figure 8 shows three commonly used Filters implemented on some of our most recent projects and a series of four messages going through the Pipeline.


Figure 8. Real-life Pipes and Filters


In Figure 9, we have a code snippet showing a simple Filter.


Figure 9. MeSH sample Filter

Line 4 shows us subclassing the AbstractMessageFilter abstract class. At line 9, we override the ProcessMessage method. The Pipe invokes this method to pass the message for processing into the Filter. Line 21 would be replaced with the actual work of the Filter. In this simple example, we invoke the ValidateMessageAgainstSchema() method. At line 31, ProcessNextFilter() is called, to place the message back on the Pipe. It is the responsibility of the Pipe to pass the message on to the next Filter.


The Filters are assembled at runtime when the Pipe is instantiated. The Filters used and their order is specified in .config files such as the simplified one shown in Figure 10.


Figure 10. MeSH deploy-time configuration

Line 1 establishes a name for the Pipe and its endpoint. Line 2 shows an example Filter. The order in which the Filters are chained corresponds to their ordinal position in the XML configuration file. The Filter has an action attribute that specifies whether a Filter should be used for all messages, or only a subset of those going into the Pipe. Lines 3 and 6 show how to specify the assembly and class that implements the Filter.
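Since Figure 10 itself is not reproduced here, the following hypothetical fragment shows the shape the description above implies. Every element and attribute name, and the endpoint URI, is inferred from the description rather than taken from MeSH.

```xml
<!-- Inferred shape only: a Pipe with a named endpoint, chaining Filters
     in document order. The action attribute scopes a Filter to all
     messages ("*") or to a subset. -->
<pipe name="OrderPipe" endpoint="soap.tcp://localhost:9000/orders">
  <filter action="*">
    <assembly>Capgemini.MeSH.Filters</assembly>
    <class>Capgemini.MeSH.Filters.SchemaValidationFilter</class>
  </filter>
  <filter action="SubmitOrder">
    <assembly>MyProject.Filters</assembly>
    <class>MyProject.Filters.AuditFilter</class>
  </filter>
</pipe>
```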


The IAF gives us direction as Enterprise and Solution architects and helps us bridge the gap between architecture and design.

The CDAF Architectural Reference Model gives us more detailed guidance in the Information Systems space.

The CDAF toolset helps us by focusing our efforts on getting the domain model right, and by generating hundreds of thousands of lines of code that we would otherwise have to write by hand.

MeSH adds value by abstracting away the detail of messaging from the developer, enabling reuse and deploy-time flexibility.

Together, all of these methods mean that we can dedicate a higher proportion of time to ensuring we get the business processes right, and that we can do more with less. It means we get to skip to the fun part, while our customers get better value and more predictable results.

The CDAF toolset is planned to be released as an open-source project in April 2007. Why are we doing this? It's our way of saying, "Come on in, the water's lovely!"

Special Thanks

Special thanks go to those kind folks who reviewed this article:

  • Drew Jones (Capgemini)
  • Graham Stevens
  • David Helps (SunGard Vivista)
  • Richard Fawsitt (Charteris)


About the author

Joel Jeffery joined Capgemini in 2003. He is an architect with over 20 years' experience in IT. Joel has operated as architect and technical design authority on many major engagements in the Microsoft space. His favorite toys are the .NET Framework and Microsoft BizTalk. You can reach him at