Chapter 2: Messaging and Services

“SOA is not something you buy,

it's something you do.”

– Jason Bloomberg



Reader ROI
Understanding Services
A Services Lifecycle
SOA Scenarios
SOA and the End User
SOA Case Study: Commonwealth Bank of Australia

Reader ROI

Readers of this Chapter will build upon the concepts introduced in Chapter One, specifically focusing on the Communications architectural capability.


Figure 1. Recurring Architectural Capabilities

The Messaging and Services architectural capability focuses on the concept of service orientation and how different types of services are used to implement SOA. This Chapter also touches upon the role of the user – specifically how a user will interact with services within a SOA.

Topics discussed in this Chapter include:

  • A maturity model for SOA
  • A services lifecycle
  • Sample SOA scenarios
  • The role of the user within a SOA

The concepts covered in this Chapter are not necessarily new. SOA Maturity Models and Service Lifecycles have been published by a broad variety of vendors, consultants and industry analysts. Like SOA itself, there is no single Maturity Model or Service Lifecycle that everyone agrees upon. Readers should review several of these efforts and draw from them the aspects that best fit their organizational needs.


This Chapter consists of work from the following individuals: Mark Baciak (Service Lifecycle), Atanu Bannerjee (OBA), Shy Cohen (Service Taxonomy), William Oellermann (Enterprise SOA Maturity Model), and Brenton Webster (SOA Case Study).

Understanding Services

A SOA Maturity Model (Another One?)

There is an abundance of SOA Maturity Models available from vendors, consulting firms, analysts and book authors. Most of these Maturity Models are either based upon or inspired by the Software Engineering Institute’s (now retired) Capability Maturity Model (CMM). A recent search on the terms “SOA Maturity Model” returned almost 10,000 relevant hits (including articles on what to look for in a Maturity Model). Given the intense interest and variety of maturity models available, why introduce another? Unlike other Maturity Models, the one discussed here doesn’t attempt to simply apply service orientation to the CMM. ESOMM (Enterprise Service Orientation Maturity Model) borrows CMM’s notion of capability-driven maturity models and applies these principles to service-orientation paradigms, essentially building a road map from scratch. Unlike CMMI, the ESOMM Maturity Model doesn’t focus on processes because the focus is on IT capabilities, not organizational readiness or adoption. While there are some conceptual similarities with CMM, ESOMM is a decidedly different application of the maturity model concept. The layers, perspectives, and capabilities defined in the ESOMM are designed as a road map to support services—not any specific service with any specific use, implementation, or application, but any service, or more specifically, any set of services.

Developing an enterprise-wide SOA strategy is not a trivial undertaking and should not be treated as a short-term or one-time effort. SOA is an attempt to enable a higher level of agility for the entire organization, facilitating an expedient response to the needs of the business and customers. Many aspects of SOA will be far more critical in the near term than in the long term, so it is important to align your group’s efforts accordingly. To do so successfully, developing a prioritized roadmap should be an early priority.

An ESOMM can provide a capability-based, technology-agnostic model to help identify an organization’s current level of maturity, the short- and long-term objectives, and the opportunity areas for improvement.

Figure 2 provides an overview of ESOMM. ESOMM defines 4 horizontal maturity layers, 3 vertical perspectives, and 27 capabilities necessary for supporting SOA.

Figure 2. An Enterprise Service Orientation Maturity Model (ESOMM)

Most organizations will not implement all of one layer’s capabilities before moving up the model, but jumping too far ahead can be risky: poor decisions concerning key building-block capabilities can severely impact your ability to mature at higher layers.

Capabilities are categorized into one of three perspectives: Implementation, Consumption, and Administration. Implementation capabilities target the development and deployment of Web services from the provider’s perspective. Consumption capabilities are those that cater to the consumers of your services, making them easier to implement and therefore more widely and successfully adopted. Administration capabilities include those that facilitate the operational and governance aspects of Web services across the organization. While an organization can choose to focus on a single perspective, the overall maturity, and hence value, of your SOA depends on the appropriate level of attention to all three.

ESOMM is a tool that can be used to:

  • Simplify the discussion of a massively complex, distributed solution.
  • Identify and discuss an organization’s adoption of service orientation.
  • Understand the natural evolution of service adoption (for example, skipping certain capabilities at lower layers carries certain risks).
  • Provide a cohesive and comprehensive service orientation plan for customers based on their objectives.
  • Align an organization’s activities and priorities with a distinct level of value providing specific benefits through specific capabilities.

In a SOA assessment, each capability is assessed independently as a level of strength, adequacy, or weakness. A level of strength demonstrates that the organization is well positioned to safely grow their use of Web services without fearing pitfalls in this area down the road. A level of adequacy signals that an organization has sufficient capabilities to meet today’s needs, but is vulnerable to a growth in Web services that could cause problems in that area. A weakness represents a deficiency that is a problem for today’s use of Web services and could be a serious detriment in the future. Any capabilities that are obviously not part of the organization’s objectives and appear to have minimal impact in the near term for the organization are classified as not applicable. These areas may quickly become weaknesses if business or technical objectives were to change or accelerate.

As in the CMM, individual capability levels drive an overall layer grade ranging from 1 to 5. A grade is assessed based on the average of the capability maturities within the layer. A rating of 5 represents a mastery of that level with little or no room for improvement. A 4 represents a solid foundation within an area that can be successfully built upon. A 3 is assigned when good efforts are established in that layer, but extending efforts to the next layer carries some risks. 2 means that only initial efforts have been made in the area and much more work is needed to extend to the next layer. A grade of 1 represents no effort to address the capabilities at that layer.
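The layer-grading arithmetic can be sketched in a few lines. This is an illustrative reading of the scheme, not ESOMM’s official algorithm; the capability names and scores below are hypothetical, and the half-point rounding follows the increments described elsewhere in this section.

```python
# Illustrative sketch of ESOMM layer grading: each capability in a layer is
# scored 1-5, and the layer grade is the average of those scores rounded to
# the nearest half point. Capability names and scores are hypothetical.

def layer_grade(capability_scores):
    """Average the capability maturities and round to the nearest half point."""
    scores = list(capability_scores)
    avg = sum(scores) / len(scores)
    return round(avg * 2) / 2

# Hypothetical assessment of one layer's capabilities:
implementation_layer = {"service authoring": 4, "deployment": 3, "versioning": 2}
print(layer_grade(implementation_layer.values()))  # 3.0
```

Because each layer is graded independently, the output is a profile of grades rather than a single organization-wide number.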

Now that we've had a chance to briefly review the components of ESOMM, you are hopefully already thinking about how it can be applied to your organization. To be clear, applying this model should not be looked at as a one-time activity or short-term process. Instead, the model is best leveraged as a working plan that can be modified over time as the usage of services and your experience grows.

Unfortunately, the downside of using the term maturity with a model is that people will immediately want to know what layer their organization is at to get a sense of their status or identity. As it happens, there is no appropriate way to answer the question, "what layer is my organization?" Instead of giving an overall grade based on one of the layers, we take the approach of giving each layer its own level of maturity, ranging from one to five, based on half-point increments.

ESOMM is intended to be leveraged as a road map, more so than as an SOA “readiness” or assessment tool. While it is important to know where you are, getting an exact bearing is less important than identifying the capabilities you need to address to continue advancing the value of service enablement in your organization. As long as you are willing to ask yourself some hard questions in an objective manner across all the relevant groups, you should be able to get a good understanding of your current challenges. If you apply the strategy and objectives of your organization, you should be able to identify which capabilities you will need to address in the near, medium, and long term.

ESOMM is one of many possible maturity models that can be used to assess the capabilities of the enterprise in adopting and implementing SOA. It is terribly difficult to have a concise, constructive conversation about a service-enabled enterprise without some common agreement on capabilities and maturity – ESOMM is one of many maturity models that attempt to address this issue. Unlike other maturity models, however, ESOMM can also be leveraged as a prioritized roadmap to help enterprises identify the capabilities necessary for a successful implementation. The goal of ESOMM and other maturity models is to empower your organization with the tools and information needed for a successful SOA implementation.

In a way, SOA does render the infrastructure more complex because new capabilities will need to be developed that did not exist before, such as registry and repository. In some sense, it can be compared to the construction of a highway network—broad, safe roads are more expensive to build, and the users need to upgrade their means of transportation to make the most of that infrastructure, but the cost per trip (in time and safety) is driven down. Until the network reaches a critical mass, drivers still need to be prepared to go “off road” to reach their destination.

Many enterprise applications were not developed with SOA in mind, and are either incapable of leveraging an SOA infrastructure, or will need to be upgraded to take advantage of it. However, with new applications being created constantly, there is a great opportunity to drive down interoperability costs as the various technology vendors enable their products for SOA.

The biggest challenge for SOA is convincing the application owners to invest more today to achieve those promised long term savings.

While maturity models like ESOMM can help clarify the capabilities necessary for SOA, the question of which types of services your organization will need usually remains unanswered. A simple service taxonomy can assist you in better understanding the breadth of services that will typically exist within a SOA.

A Service Taxonomy

As we examine service types we notice two main types of services: those that are infrastructural in nature and provide common facilities that would not be considered part of the application, and those that are part of the application and provide the application’s building blocks.

Software applications utilize a variety of common facilities, ranging from the low-level services offered by the Operating System, such as memory management and I/O handling, to high-level runtime-environment-specific facilities such as the C Runtime Library (RTL), the Java Platform, or the .NET Framework. Solutions built using a SOA make use of common facilities as well, such as a service-authoring framework (for example, Windows Communication Foundation) and a set of Services that are part of the supporting distributed computing infrastructure. We will name this set of services Bus Services.

Bus Services further divide into Communication Services which provide message transfer facilities such as message-routing and publish-subscribe mechanisms, and Utility Services which provide capabilities unrelated to message transfer such as service-discovery and federated security.

The efficiency of software application development is further increased through reuse of coarse-grained, high-level building blocks. The RAD programming environments that sprang up in the Component Oriented era (such as Delphi or Visual Basic) provided the ability to quickly and easily compose the functionality and capabilities provided by existing building blocks with application-specific code to create new applications. Examples of such components range from the more generic GUI constructs and database access abstractions, to more specific facilities such as charting or event-logging. Composite applications in a SOA also use building blocks of this nature in their composition model. We will name these building blocks Application Services.

Application services further divide into Entity Services which expose and allow the manipulation of business entities, Capability Services and Activity Services which implement the functional building blocks of the application (sometimes referred to as components or modules), and Process Services which compose and orchestrate Capability and Activity Services to implement business processes.
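The taxonomy just described can be summarized as a simple nested structure. The category names come from the text above; the example services listed under each are illustrative placeholders drawn from examples used later in this Chapter.

```python
# The service taxonomy described above, expressed as a nested dictionary.
# Category names follow the text; the leaf examples are illustrative.
SERVICE_TAXONOMY = {
    "Bus Services": {
        "Communication Services": ["bridge", "router", "publish-subscribe", "queue", "gateway"],
        "Utility Services": ["discovery", "security token service", "message transformation"],
    },
    "Application Services": {
        "Entity Services": ["Customers", "Orders"],
        "Capability Services": ["Credit Card Processing", "Email Gateway"],
        "Activity Services": ["Vacation Eligibility Confirmation", "Blacklist"],
        "Process Services": ["Purchase Order Processing"],
    },
}

def categories(taxonomy):
    """Flatten the two-level taxonomy into (group, category) pairs."""
    return [(group, cat) for group, cats in taxonomy.items() for cat in cats]

print(categories(SERVICE_TAXONOMY))
```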

The following diagram shows a possible composition of services in the abovementioned service categories.

Figure 3. Bus Services

Bus Services are common facilities that do not add any explicit business value, but rather are a required infrastructure for the implementation of any business process in a SOA. Bus Services are typically purchased or centrally built components that serve multiple applications and are thus typically centrally managed.

Communication Services

Communication Services transport messages into, out of, and within the system without being concerned with the content of the messages. For example, a Bridge may move messages back and forth across a network barrier (i.e. bridging two otherwise-disconnected networks) or across a protocol barrier (e.g. moving queued messages between WebSphere MQ and MSMQ). Examples of Communication Services include relays, publish-subscribe systems, routers, queues, and gateways.

Communication Services do not hold any application state, but in many cases they are configured to work in concert with the applications that use them. A particular application may need to instruct or configure a Communication Service on how to move the messages flowing inside that application such that inter-component communication is made possible in a loosely coupled architecture. For example, a content-based router may require the application to provide routing instructions such that the router will know where to forward messages. Another example may be a publish-subscribe server which will deliver messages to registered subscribers based on a filter that can be applied to the message’s content. This filter will be set by the application. In both cases the Communication Service does not process the content of the message but rather (optionally) uses parts of it, as instructed by the application in advance, for determining where it should go.
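As a sketch, a content-based router of the kind described above might look like the following. All class and field names are hypothetical; the point is that the application supplies the routing rules in advance, and the router inspects only the message parts those rules name.

```python
# Illustrative content-based router: the application registers routing rules
# up front; the router holds no application state and does not otherwise
# process message content. Names and destinations are hypothetical.

class ContentBasedRouter:
    def __init__(self, default_destination="default-queue"):
        self.rules = []  # (predicate, destination) pairs set by the application
        self.default_destination = default_destination

    def add_rule(self, predicate, destination):
        """Application-supplied instruction: where messages matching predicate go."""
        self.rules.append((predicate, destination))

    def route(self, message):
        """Consult the rules in order; fall back to the default destination."""
        for predicate, destination in self.rules:
            if predicate(message):
                return destination
        return self.default_destination

router = ContentBasedRouter()
router.add_rule(lambda m: m.get("type") == "purchase-order", "orders-service")
print(router.route({"type": "purchase-order", "total": 120}))  # orders-service
```

A publish-subscribe server with application-set content filters would follow the same shape, with each subscriber registering a predicate instead of a destination.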

In addition to application-specific requirements, restrictions imposed by security, regulatory, or other sources of constraints may dictate that in order to use the facilities offered by a particular Communication Service users will need to possess certain permissions. These permissions can be set at the application scope (i.e. allowing an application to use the service regardless of the specific user who is using the application), at the user scope (i.e. allowing a specific user to use the service regardless of the application that the user is using), or at both scopes (i.e. allowing the specific user to access the service while running a specific application). For example, a publish-subscribe service may be configured to restrict access to specific topics by only allowing specific users to subscribe to them.
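The three permission scopes can be illustrated with a small lookup table. The grant table, application names, and operations below are all hypothetical.

```python
# Illustrative permission check covering the three scopes described above:
# application scope, user scope, and application-user scope. The grant
# table entries are hypothetical examples.

GRANTS = {
    ("billing-app", None): {"publish"},           # application scope
    (None, "alice"): {"subscribe"},               # user scope
    ("hr-app", "bob"): {"publish", "subscribe"},  # application-user scope
}

def is_allowed(app, user, operation):
    """Allow the operation if any applicable scope grants it."""
    for scope in ((app, user), (app, None), (None, user)):
        if operation in GRANTS.get(scope, set()):
            return True
    return False

print(is_allowed("billing-app", "carol", "publish"))  # True (application scope)
```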

Other application-level facilities that may be offered by Communication Services pertain to monitoring, diagnostics, and business activity monitoring (BAM). Communication Services may provide statistical information about the application such as an analysis of message traffic patterns (e.g. how many messages are flowing through a bridge per second), error rate reports (e.g. how many SOAP faults are being sent through a router per day), or business-level performance indicators (e.g. how many purchase orders are coming in through a partner’s gateway). Although they may be specific to a particular application, these capabilities are no different from the configuration settings used to control message flow. This information is typically provided by a generic feature of the Communication Service, which oftentimes needs to be configured by the application. The statistical information being provided typically needs to be consumed by a specific part of the application that knows what to do with it (e.g. raise a security alert at the data center, or update a BAM-related chart on the CFO’s computer screen).

Utility Services

Utility Services provide generic, application-agnostic services that deal with aspects other than transporting application messages. Like Communication Services, the functionality they offer is part of the base infrastructure of a SOA and is unrelated to any application-specific logic or business process. For example, a Discovery service may be used by components in a loosely coupled composite-application to discover other components of the application based on some specified criteria (e.g. a service being deployed into a pre-production environment may look for another service which implements a certain interface that the first component needs and that is also deployed in the pre-production environment). Examples of Utility Services include security and identity services (e.g. an Identity Federation Service or a Security Token Service), discovery services (e.g. a UDDI server), and message transformation services.

As in the case of Communication Services, Utility Services, too, may be instructed or configured by a particular application on how to perform an operation on its behalf. For example, a Message Transformation service may transform messages from one message schema to another message schema based on a transformation mapping that is provided by the application using the Message Transformation service.
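A minimal sketch of such an application-supplied transformation mapping follows; the field names and mapping are hypothetical, and a real Message Transformation service would typically work on XML schemas rather than dictionaries.

```python
# Illustrative message transformation driven by an application-supplied
# mapping from source field names to target field names. Fields not named
# in the mapping are dropped. All names here are hypothetical.

def transform(message, mapping):
    """Project the source message onto the target schema per the mapping."""
    return {target: message[source]
            for source, target in mapping.items()
            if source in message}

# The application provides this mapping to the transformation service:
order_mapping = {"custId": "customer_id", "amt": "amount"}

print(transform({"custId": "C42", "amt": 99.5, "internal": True}, order_mapping))
# {'customer_id': 'C42', 'amount': 99.5}
```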

Although Utility Services do not hold any application state, the state of a Utility Service may be affected by system state changes. For example, a new user being added to the application may require an update to the credential settings in the Security Token Service. Unlike in the case of Communication Services, client services directly interact with Utility Services, which process and (if needed) respond to the messages that the clients send to them.

Users of Utility Services may require a permission to be configured for them in order to use the service, be it at the application, user, or the application-user scope. For example, a Discovery service may only serve domain-authenticated users (i.e. users who have valid credentials issued by a Windows domain controller).

Like Communication Services, Utility Services may provide application-level facilities for monitoring, diagnostics, BAM, etc. These may include statistical information about usage patterns (e.g. how many users from another organization authenticated using a federated identity), business-impacting error rates (e.g. how many message format transformations of purchase orders failed due to badly formatted incoming messages), etc. As with Communication Services, these facilities are typically generic features of the Utility Service and they need to be configured and consumed by the particular solution in which they are utilized.

Application Services

Application Services are services which take part in the implementation of a business process. They provide explicit business value and exist on a spectrum: at one end are generic services that are used in any composite-application in the organization; at the other end are specialized services that are part of a single composite-application; in between are services that may be used by two or more applications.

Entity Services

Entity Services unlock and surface the business entities in the system. They can be thought of as the data-centric components ("nouns") of the business process: employee, customer, sales-order, etc. Examples of Entity Services include services like a Customers Service that manages the customers’ information, an Orders Service that tracks and manages the orders that customers placed, etc.

Entity Services abstract data stores (e.g. SQL Server, Active Directory, etc.) and expose the information stored in one or more data stores in the system through a service interface. Therefore, it is fair to say that Entity Services manage the persistent state of the system. In some cases, the information being managed transcends a specific system and is used in several or even all the systems in the organization.

It is very common for Entity Services to support a CRUD interface at the entity level, and add additional domain-specific operations needed to address the problem-domain and support the application’s features and use-cases. An example of a domain-specific operation is a Customers service that exposes a method called FindCustomerByLocation which can locate a customer ID given the customer’s address.
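A minimal in-memory sketch of such a Customers Entity Service follows, showing entity-level CRUD plus the FindCustomerByLocation domain-specific operation. The storage, method signatures, and field names are illustrative assumptions, not a prescribed interface.

```python
# Illustrative Customers Entity Service: entity-level CRUD plus one
# domain-specific operation. The dict stands in for the data store(s)
# the service would front; all names are hypothetical.

class CustomersService:
    def __init__(self):
        self._store = {}  # stands in for the fronted database(s)

    # --- CRUD at the entity level ---
    def create(self, customer_id, record):
        self._store[customer_id] = dict(record)

    def read(self, customer_id):
        return self._store.get(customer_id)

    def update(self, customer_id, changes):
        self._store[customer_id].update(changes)

    def delete(self, customer_id):
        self._store.pop(customer_id, None)

    # --- domain-specific operation ---
    def find_customer_by_location(self, address):
        """Locate a customer ID given the customer's address."""
        for customer_id, record in self._store.items():
            if record.get("address") == address:
                return customer_id
        return None
```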

The information that Entity Services manage typically exists for a time span that is longer than that of a single business process. The information that Entity Services expose is typically structured as entities, in contrast to the relational or hierarchical form of the data stores being fronted by the service. For example, a service may aggregate the information stored in several database tables or even several separate databases and project that information as a single customer entity.

In some cases, typically for convenience reasons, Entity Service implementers choose to expose the underlying data as DataSets rather than strongly-schematized XML data. Even though DataSets are not entities in the strict sense, those services are still considered Entity Services for classification purposes.

Users of Entity Services may require a permission to be configured for them in order to use the service, be it at the application, user, or the application-user scope. These permissions may apply restrictions on data access and/or changes at the “row” (entity) or “column” (entity element) level. An example of a “column”-level restriction: an HR application might have access to both the social security and home address elements of the employee entity, while a check-printing service may only have access to the home address element. An example of a “row”-level restriction: an expense report application that lets managers see and approve expense reports for employees that report to them, but not for employees who do not report to them.
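The two restriction levels can be sketched together in a few lines. The caller profiles, grant table, and employee fields below are hypothetical, combining the HR/check-printing and reports-to examples into one illustration.

```python
# Illustrative "row" and "column" level restrictions on an employee entity.
# The grant table and fields are hypothetical examples.

EMPLOYEES = {
    "e1": {"manager": "m1", "ssn": "123-45-6789", "home_address": "1 Elm St"},
    "e2": {"manager": "m2", "ssn": "987-65-4321", "home_address": "9 Oak Ave"},
}

# "Column" grants: which entity elements each application may read.
COLUMN_GRANTS = {
    "hr-app": {"ssn", "home_address"},
    "check-printing": {"home_address"},
}

def read_employee(app, caller_manager, employee_id):
    record = EMPLOYEES[employee_id]
    if record["manager"] != caller_manager:      # "row" level: only your reports
        return None
    allowed = COLUMN_GRANTS.get(app, set())      # "column" level: per-app fields
    return {k: v for k, v in record.items() if k in allowed}

print(read_employee("check-printing", "m1", "e1"))  # {'home_address': '1 Elm St'}
```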

Error compensation in Entity Services is mostly limited to seeking alternative data sources, if at all. For example, if an Entity Service fails to access a local database it may try to reach out to a remote copy of the database to obtain the information needed. To support system-state consistency, Entity Services typically support tightly-coupled distributed atomic transactions. Services that support distributed atomic transactions participate in transactions that are flowed to them by callers and subject any state changes in the underlying data store to the outcome of these distributed atomic transactions. To allow for a lower degree of state-change coupling, Entity Services may provide support for the more loosely-coupled reservation pattern, either in addition to or instead of supporting distributed atomic transactions.
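The reservation pattern mentioned above can be sketched as a reserve/confirm/cancel protocol over an inventory entity. The method names and bookkeeping here are illustrative, not a standard interface; the point is that the caller holds a revocable claim instead of an atomic transaction lock.

```python
# Illustrative reservation pattern over an inventory Entity Service:
# a caller reserves stock, then later confirms or cancels, instead of
# enlisting the service in a distributed atomic transaction.

class InventoryService:
    def __init__(self, on_hand):
        self.on_hand = on_hand
        self.reservations = {}  # reservation_id -> quantity held

    def reserve(self, reservation_id, quantity):
        """Hold stock for the caller if enough remains unreserved."""
        available = self.on_hand - sum(self.reservations.values())
        if available < quantity:
            return False
        self.reservations[reservation_id] = quantity
        return True

    def confirm(self, reservation_id):
        """The business process succeeded: make the state change durable."""
        self.on_hand -= self.reservations.pop(reservation_id)

    def cancel(self, reservation_id):
        """The business process failed: release the held stock."""
        self.reservations.pop(reservation_id, None)
```

The loose coupling comes from the fact that the reservation can outlive any single request/response exchange and be released without coordinating a two-phase commit.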

Entity Services are often built in-house as a wrapper over an existing database. These services are typically implemented by writing code to map database records to entities and exposing them on a service interface, or by using a software factory to generate that mapping code and service interface. The Web Services Software Factory from Microsoft’s Patterns & Practices group is an example of such a software factory. In some cases, the database (e.g. SQL Server) or data-centric application (e.g. SAP) will natively provide facilities that enable access to the data through a service interface, eliminating the need to generate and maintain a separate Entity Service.

Entity Services are often used in more than one composite-application and thus they are typically centrally managed.

Capability Services

Capability Services implement the business-level capabilities of the organization, and represent the action-centric building blocks (or "atomic verbs") which make up the organization’s business processes. A few examples of Capability Services include third-party interfacing services such as a Credit Card Processing service that can be used for communication with an external payment gateway in any composite-application where payments are made by credit card, a value-add building block like a Rating Service that can process and calculate user ratings for anything that can be rated in any application that utilizes ratings (e.g. usefulness of a help page, a book, a vendor, etc.), or a communication service like an Email Gateway Service that can be used in any composite-application that requires the sending of emails to customers or employees. Capability Services can be further divided by the type of service that they provide (e.g. third-party interfacing, value-add building block, or communication service), but this further distinction is out of scope for this discussion.

Capability Services expose a service interface specific to the capability they represent. In some cases, an existing (legacy) or newly acquired business capability may not comply with the organization’s way of exposing capabilities as services, or may not expose a service interface at all. In these cases the capability is typically wrapped with a thin service layer that exposes the capability’s API as a service interface that adheres to the organization’s way of exposing capabilities. For example, some credit card processing service companies present an HTML-based API that requires the user to fill a web-based form. A capability like that would be wrapped by a façade service, created and managed in-house, that provides easy programmatic access to the capability. The façade service is opaque, and masks the actual nature of the capability that’s behind it to the point where the underlying capability can be replaced without changing the service interface used to access it. Therefore, the façade service is considered to be the Capability Service, and the underlying capability becomes merely an implementation detail of the façade service.
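A façade of this kind might be sketched as follows. The wrapped form-post function, its field names, and the approval rule are pretend stand-ins for a provider’s HTML-based API, not any real payment gateway.

```python
# Illustrative façade Capability Service wrapping a non-service capability.
# _legacy_form_post stands in for a provider's form-based HTML API; the
# field names and approval rule are entirely hypothetical.

def _legacy_form_post(form_fields):
    """Pretend provider endpoint that expects form-style string fields."""
    return "APPROVED" if float(form_fields["amount"]) < 1000 else "DECLINED"

class CreditCardProcessingService:
    """Façade: callers see a programmatic service interface, not the form API.

    The underlying capability could be swapped for a different provider
    without changing this interface.
    """

    def charge(self, card_number, amount):
        # Translate the programmatic call into the provider's form vocabulary.
        result = _legacy_form_post({"cc": card_number, "amount": str(amount)})
        return result == "APPROVED"

gateway = CreditCardProcessingService()
print(gateway.charge("4111-1111-1111-1111", 500))  # True
```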

Capability Services do not typically directly manage application state; to make state changes in the application they utilize Entity Services. If a Capability Service does manage state, that state is typically transient and lasts for a duration of time that is shorter than the time needed to complete the business process that this Capability Service partakes in. For example, a Capability Service that provides package shipping price quotes might record the fact that requests for quotes were sent out to the shipping providers until the responses come back, erasing that record thereafter. In addition, a Capability Service that is implemented as a workflow will manage the durable, transient execution state for all the currently running instances of that workflow. While most of the capabilities are “stateless”, there are obviously capabilities such as event logging that naturally manage and encapsulate state.

Users of Capability Services may require a permission to be configured for them in order to use the service, be it at the application, user, or the application-user scope. Access to a Capability Service is typically granted at the application level. Per-user permissions are typically managed by the Process Services that make use of the Capability Services to simplify access management and prevent mid-process access failures.

Error compensation in Capability Services is limited to the scope of meeting the capability’s Service Level Expectations (SLE) and Service Level Agreements (SLA). For example, the Email Gateway Service may silently queue up an email notification for deferred delivery if there’s a problem with the mail service, and send it at a later time, when email connectivity is restored. A Shipping Service which usually compares the rates and delivery times of 4 vendors (e.g. FedEx, UPS, DHL, and a local in-town courier service) may compensate for a vendor’s unavailability by ignoring the failure and continuing with the comparison of the rates that it was able to secure, as long as it received at least 2 quotes. These examples illustrate that failures may result in lower performance. This degradation can be expressed in terms of latency (as in the case of the Email Gateway Service), the quality of the service (e.g. the Shipping Service comparing only 2 quotes instead of 4), and many other aspects, and therefore needs to be described in the SLE and SLA for the service.
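The Shipping Service’s compensation rule can be sketched directly. The at-least-2-quotes threshold follows the example above; the vendor callables, function names, and error types are hypothetical.

```python
# Illustrative error compensation in a shipping Capability Service:
# unavailable vendors are ignored, and the comparison proceeds as long
# as at least 2 quotes arrive (the threshold from the SLE example above).

def best_quote(vendors, shipment):
    """Return the cheapest available vendor, compensating for outages."""
    quotes = {}
    for name, get_quote in vendors.items():
        try:
            quotes[name] = get_quote(shipment)
        except ConnectionError:
            continue  # compensate: ignore the unavailable vendor
    if len(quotes) < 2:
        raise RuntimeError("SLA not met: fewer than 2 quotes received")
    return min(quotes, key=quotes.get)
```

Note that the degraded comparison still succeeds from the caller’s point of view; only the quality of the answer drops, which is exactly why the behavior belongs in the SLE/SLA.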

Capability Services may support distributed atomic transactions and/or the reservation pattern. Most of the Capability Services do not manage resources whose state needs to be managed using atomic transactions, but a Capability Service may flow an atomic transaction that it is included in to the Entity Services that it uses. Capability Services are also used to implement a reservation pattern over Entity Services that do not support that pattern, and to a much lesser extent over other Capability Services that do not support that pattern.

Capability Services can be developed and managed in-house, purchased from a third party and managed in-house, or “leased” from an external vendor and consumed as SaaS that is externally developed, maintained, and managed.

When developed in-house, Capability Services may be implemented using imperative code or a declarative workflow. If implemented as a workflow, a Capability Service is typically modeled as a short-running (atomic, non-episodic) business-activity. Long-running business-activities, where things may fail or require compensation, typically fall into the Process Service category.

A Capability Service is almost always used by multiple composite-applications, and is thus typically centrally managed.

Activity Services

Activity Services implement the business-level capabilities or some other action-centric business logic elements (“building blocks”) that are unique to a particular application. The main difference between Activity Services and Capability Services is the scope in which they are used. While Capability Services are an organizational resource, Activity Services are used in a much smaller scope, such as a single composite-application or a single solution (comprising several applications). Over the course of time and with enough reuse across the organization, an Activity Service may evolve into a Capability Service.

Activity Services are typically created to facilitate the decomposition of a complicated process or to enable reuse of a particular unit-of-functionality in several places in a particular Process Service or even across different Process Services in the application. The forces driving the creation of Activity Services can stem from a variety of sources, such as organizational forces, security requirements, regulatory requirements, etc. An example of an Activity Service created in a decomposition scenario is a Vacation Eligibility Confirmation Service that, due to security requirements, separates a particular part of a vacation authorization application’s behavior such that that part can run behind the safety of the HR department’s firewall and access the HR department’s protected databases to validate vacation eligibility. An example of an Activity Service used for sharing functionality would be a Blacklist Service that provides information on a customer’s blacklist status such that this information can be used by several Process Services within a solution.

Like Capability Services, Activity Services expose a service interface specific to the capability they represent. It is possible for an Activity Service to wrap an existing unit of functionality, especially in transition cases where an existing system with already-implemented functionality is being updated to or included in a SOA-based solution.

Like Capability Services, Activity Services do not typically manage application state directly; if they do manage state, that state is transient and exists for a period shorter than the lifespan of the business process that the service partakes in. However, due to their slightly larger granularity, and in the cases where Activity Services are used to wrap an existing system, it is more likely that an Activity Service will manage and encapsulate application state.

Users of Activity Services may require a permission to be configured for them in order to use the service, be it at the application, user, or application-user scope. As with Capability Services, access to an Activity Service is typically granted at the application level and managed for each user by the Process Services that use the Activity Service.

Activity Services have the same characteristics for error compensation and transaction use as Capability Services.

Activity Services are typically developed and managed in-house, and may be implemented as imperative code or a declarative workflow. As with a Capability Service, an Activity Service implemented as a workflow is typically modeled as a short-running business activity.

Activity Services are typically used by a single application or solution and are therefore typically managed individually (for example, at a departmental level). If an Activity Service evolves into a Capability Service, management of the service typically transitions to a central management facility.

Process Services

Process Services tie together the data-centric and action-centric building blocks to implement the business processes of the organization. They compose the functionality offered by Activity Services, Capability Services, and Entity Services and tie them together with business logic that lives inside the Process Service to create the blueprint that defines the operation of the business. An example of a Process Service is a Purchase Order Processing service that:

  • receives a purchase order and verifies it,
  • checks the Customer Blacklist Service to make sure that the customer is OK to work with,
  • checks the customer's credit with the Credit Verification Service,
  • adds the order to the order list managed by the Orders (Entity) Service,
  • reserves the goods from the Inventory (Entity) Service,
  • secures the payment via the Payment Processing Service,
  • confirms the reservation made with the Inventory (Entity) Service,
  • schedules the shipment with the Shipping Service,
  • notifies the customer of the successful completion of the order and the ETA of the goods via the Email Gateway Service, and
  • finally marks the order as completed in the order list.
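The composition described above can be sketched as a thin orchestration layer. This is an illustrative sketch, not a definitive implementation; every service interface and call signature below is a hypothetical stand-in for the Entity, Capability, and Activity Services named in the example.

```python
class PurchaseOrderProcess:
    """Hypothetical Process Service composing Entity/Capability/Activity Services."""

    def __init__(self, blacklist, credit, orders, inventory, payment, shipping, email):
        self.blacklist, self.credit, self.orders = blacklist, credit, orders
        self.inventory, self.payment = inventory, payment
        self.shipping, self.email = shipping, email

    def process(self, po):
        # Each step delegates to one of the services from the example above.
        if self.blacklist.is_blacklisted(po["customer"]):
            return "rejected: blacklisted"
        if not self.credit.verify(po["customer"], po["total"]):
            return "rejected: bad credit"
        order_id = self.orders.add(po)                    # Orders (Entity) Service
        self.inventory.reserve(po["items"])               # Inventory (Entity) Service
        self.payment.charge(po["customer"], po["total"])  # Payment Processing Service
        self.inventory.confirm(po["items"])
        self.shipping.schedule(order_id)                  # Shipping Service
        self.email.notify(po["customer"], order_id)       # Email Gateway Service
        self.orders.complete(order_id)
        return "completed: " + order_id
```

Note that the business logic (the ordering of the steps and the rejection rules) lives entirely inside the Process Service; the composed services know nothing about the overall process.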

Process Services may be composed into the workflows of other Process Services but will not be re-categorized as Capability or Activity Services due to their long-running nature.

Since Process Services implement the business processes of the organization, they are often fronted with a user interface that initiates, controls, and monitors the process. The service interface that these services expose is typically geared towards consumption by an end-user application, and provides the level of granularity required to satisfy the use cases that the user-facing front end implements. Monitoring the business process will at times require a separate monitoring interface that exposes Business Activity Monitoring (BAM) information. For example, the Order Processing Service may report the number of pending, in-process, and completed orders, along with some statistical information about them (median time spent processing an order, average order size, etc.).

Process Services typically manage the application state related to a particular process for the duration of that process. For example, the Purchase Order Processing service will manage the state of the order until it completes. In addition, a Process Service will maintain and track the current step in the business process. For example, a Process Service implemented as a workflow will hold the execution state for all the currently running workflow instances.

Users of Process Services may require a permission to be configured for them in order to use the service, be it at the application, user, or the application-user scope. Access to a Process Service is typically granted at the user level.

Process Services very rarely support participating in a distributed atomic transaction since they provide support for long-running business activities (a.k.a. long-running transactions) where error compensation happens at the business logic level and compensation may involve human workflows. Process Services may utilize distributed atomic transactions when calling into the services they use.
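Business-level compensation of this kind can be sketched as recording an undo action for each completed step and running the recorded actions in reverse when a later step fails. This is a minimal illustration of the idea; the step and compensation callables are hypothetical, and a real workflow engine would also persist this state.

```python
class CompensatingProcess:
    """Runs steps in order; on failure, runs recorded compensations in reverse."""

    def __init__(self):
        self.compensations = []   # undo actions for steps that completed

    def run(self, steps):
        # steps: list of (action, compensation) pairs of callables
        try:
            for action, compensation in steps:
                action()
                self.compensations.append(compensation)
        except Exception:
            # Business-level undo, not a transactional rollback: each
            # compensation is ordinary logic (refund, release, notify).
            for compensation in reversed(self.compensations):
                compensation()
            return "compensated"
        return "completed"
```

A refund is not the same as a charge never having happened, which is exactly why the text distinguishes compensation from distributed atomic transactions.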

Process Services are typically developed and managed in-house since they capture the value-add essence of the organization: the “secret sauce” that defines the way in which the organization does its business. Process Services are designed to enable process agility (i.e., to be easily updatable), and the processes that they implement are typically episodic in nature (i.e., execution consists of short bursts of activity spaced by long waits for external activities to complete). Therefore, Process Services are best implemented as declarative workflows using an integration server (such as BizTalk Server) or a workflow framework (such as Windows Workflow Foundation).

Process Services are typically used by a single application and can therefore be managed individually (for example, at a departmental level). In some cases a reusable business process may become a commodity that can be offered or consumed as Software as a Service (SaaS).

When designing business software, we should remind ourselves that the objective is delivering agile systems in support of the business, not service orientation (SO). SO is the approach by which we can enable business and technology agility; it is not an end in itself. This must particularly be borne in mind with regard to Web services. The agility that so often accompanies Web services is not just a consequence of adopting Web service protocols in the deployment of systems, but also of following good design principles. In this chapter, we consider several principles of good service architecture and design from the perspective of their impact on agility and adaptability.

A Services Lifecycle

Now that we have examined the types of services that may exist within a SOA, a more holistic look at services is needed. A Services Lifecycle can be used to understand the activities, processes and resources necessary for designing, building, deploying and ultimately retiring the services that comprise a SOA.

A service comes to life conceptually as a result of the rationalization of a business process, and the decomposition and mapping of that business process onto existing IT assets as well as the new IT assets needed to fill the gaps. Once identified, the new IT assets will be budgeted and planned for SDLC activities that result in deployable services (assuming that our goal is to create reusable IT assets). The following are important activities that happen (not necessarily in this strict order) during the lifetime of a service, from the service provider's perspective:

Figure 4. A Services Lifecycle

Service Analysis

Service Analysis is the rationalization of business and technical capabilities with the express intent of enabling them via services. Other aspects, such as SLAs, localization/globalization, and basic service contracts, will be established for future use in the lifecycle.

Service Development

Rationalization of contracts (XML Schemas) and designing new contracts will be one of the primary activities in this phase. Object libraries supporting the service implementation will be acquired or designed. Security policies, trust boundaries, authentication/authorization, data privacy, instrumentation, WSDL, etc. will be the outcome of this phase. Distributing WSDL or service consumer proxies will be strategized during this phase.

Services will be developed using the selected IDE, Web services stack and the language of choice.
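While the contracts themselves would be expressed as XML Schemas and WSDL, the underlying discipline (validating every inbound message against the contract before it reaches the service implementation) can be sketched in a few lines. The message type and its fields below are hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class VacationRequest:
    """Hypothetical contract for a service message (stands in for an XML Schema)."""
    employee_id: str
    days: int


def parse_vacation_request(payload: dict) -> VacationRequest:
    """Validate an inbound message against the contract before dispatching it."""
    if not isinstance(payload.get("employee_id"), str):
        raise ValueError("employee_id must be a string")
    if not isinstance(payload.get("days"), int) or payload["days"] <= 0:
        raise ValueError("days must be a positive integer")
    return VacationRequest(payload["employee_id"], payload["days"])
```

Keeping the contract as a distinct, versionable artifact, separate from the implementation behind it, is what makes the later change-management activity tractable.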

Service Testing

Services will be unit, smoke, functional and load tested to ensure that all the service consumer scenarios and SLA ranges are met.

Service Provisioning

Service metadata, as identified in the “Service Consumption” activity, will be deployed into the directory. This will be associated with a deployment record in a repository that models the deployment environment. Supported SLA policies are important metadata for the successful operation of a service. The service gets a production endpoint in an appropriately designed production infrastructure. Support teams will be trained, and appropriate processes for support among the various roles (business versus IT) will be established. Access to service consoles and reports will be authorized for these roles.

Service Operation

This is the most important activity as the ROI will be realized through the operation of the services in production. The management infrastructure will do the following:

  • Service Virtualization
  • Service Metering (client usage metering and resource metering)
  • Dynamic discovery of service endpoints
  • Uptime and performance management
  • Enforce security policies (authentication, authorization, data privacy, etc.)
  • Enforce SLAs based on the provisioning relationship
  • Generate business as well as technology alerts for a streamlined operation of the service
  • Provide administrative interfaces for various roles
  • Generate logs and audit trails for non-repudiation
  • Dynamic provisioning (additional instances of the service as necessary)
  • Monitor transactions and generate commit/rollback statistics
  • Integrate well with the systems management tools
  • Service, contract and metadata versioning
  • Enforce service decommissioning policies
  • Monetization hooks
  • Reporting
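Several of these capabilities (virtualization, metering, SLA enforcement) amount to interposing a management layer between the consumer and the real endpoint. The sketch below is a deliberately minimal, hypothetical illustration of metering with a simple call-quota SLA; a production management infrastructure would of course do far more.

```python
class ManagedService:
    """Wraps a real service endpoint to meter calls and enforce a simple quota SLA."""

    def __init__(self, endpoint, max_calls):
        self.endpoint = endpoint    # the real service callable (virtualized away)
        self.max_calls = max_calls  # provisioned SLA: calls allowed for this consumer
        self.calls = 0              # metering counter (client usage metering)

    def invoke(self, *args):
        if self.calls >= self.max_calls:
            raise RuntimeError("SLA quota exceeded")
        self.calls += 1
        return self.endpoint(*args)
```

Because consumers only ever see the wrapper, the endpoint behind it can be moved, scaled, or versioned without their knowledge, which is the essence of service virtualization.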

Service Consumption

This activity is equally applicable to service consumers and providers as providers may consume services as well. During this activity, services will be discovered to understand the following:

  • Service security policies
  • Supported SLA policies
  • Service semantics (from the lifecycle collateral attached to the service definition)
  • Service dependencies
  • Service provisioning (will be requested by the consumer)
  • Pre and post-conditions for service invocation
  • Service development schematics (proxies, samples, etc.)
  • Service descriptor artifacts
  • Service impact analysis
  • Other documentation (machine readable as well as for human consumption)

During this activity, service consumers will be authorized to discover the service and its metadata. SLAs will be tuned to meet the desired level of availability based on the negotiated contract.

Service Change Management

A service, like any IT application asset, will go through several iterations during its lifetime. Service contracts will change, service security and SLA policies will change, the implementation will change, and the technology platform may change. Some of these changes may be breaking changes, so the management infrastructure has to be resilient to all of these mutations by providing the necessary deployment support across all of the changing dimensions above.

Service Decommission

As a result of a change in business strategy, the availability of better alternatives, or waning consumer interest, a service may be slated for decommissioning. The management infrastructure should be able to enforce retirement policies by gracefully servicing consumers until the last request.

SOA Scenarios

There are a number of business and technical scenarios for which SOA delivers a clear benefit. This section lists several of the most commonly used scenarios for SOA (this is not a comprehensive list).

Information Integration

The Information Integration scenario is sometimes referred to as “the single view of the customer” problem. The complete description of a customer might be spread across a dozen business applications and databases. This information is rarely completely in sync, and aggregating it for optimal customer (or partner or employee) interaction is poorly supported. Information integration services are an effective means both for presenting your application portfolio with a unified view of these key entities and for ensuring the consistency of the information across all of your back-end systems. Information integration projects can run from the tactical to the broadly strategic, incrementally re-engineering information access and management across the enterprise. This scenario is frequently associated with the following industry acronyms (each of which is sometimes used interchangeably with the others):

  • MDM: Master Data Management is an approach for providing and maintaining a consistent view of the organization’s core business entities (not just customers).
  • EII: Enterprise Information Integration is broader than MDM, using data abstraction to address the challenges typically associated with data heterogeneity and context.
  • CDI: Customer Data Integration is the combination of the technology, processes and services needed to create and maintain a complete view of the customer across multiple channels, business lines and enterprises. CDI is typically associated with CRM systems.
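At its core, building a single view of the customer means correlating partial records from several systems on a shared key and merging their fields. The toy sketch below assumes dict-shaped records keyed by customer ID; real MDM products add record matching, survivorship rules, and data stewardship workflows on top of this basic idea.

```python
def single_customer_view(sources):
    """Merge partial customer records from several systems into one view.

    Each source maps customer_id -> {field: value}. Earlier sources win;
    later sources only fill in fields that are still missing or None.
    """
    view = {}
    for source in sources:
        for customer_id, fields in source.items():
            merged = view.setdefault(customer_id, {})
            for key, value in fields.items():
                if value is not None and merged.get(key) is None:
                    merged[key] = value
    return view
```

The "earlier sources win" rule stands in for a survivorship policy; in practice, which system is authoritative for which field is itself a governance decision.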

Chapter Four discusses the Information Integration scenario in greater detail.

Legacy Integration

The Legacy Integration scenario focuses on the tactical use of services to preserve existing investments in business applications, while extending the functionality of the capabilities upon which they deliver. For example, a service might add support to comply with new regulations in front of an existing ERP package. Applications would be engineered to exchange messages with the service, which would extract the compliance-relevant data and then communicate the request to the ERP package.
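The compliance service described here is a message-level facade: it extracts the compliance-relevant data from each message and then forwards the request to the ERP package unchanged. A hypothetical sketch, with the compliance rule and the ERP interface both stand-ins:

```python
class ComplianceFacade:
    """Fronts a legacy ERP endpoint, capturing compliance data before forwarding."""

    def __init__(self, erp_submit):
        self.erp_submit = erp_submit  # callable representing the existing ERP package
        self.audit_log = []           # compliance-relevant data extracted per message

    def submit_order(self, message):
        # Extract only what the (hypothetical) regulation requires.
        self.audit_log.append({"customer": message["customer"],
                               "total": message["total"]})
        # Pass the request through to the ERP package unchanged.
        return self.erp_submit(message)
```

The existing investment is preserved: the ERP package is untouched, and only the applications' message destination changes.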

Process Governance

Process Governance is far broader than either Information or Legacy Integration. In a Process Governance scenario, "header" elements are used to communicate key business metadata, from the turnaround time on customer requests to the identity of the approvers for specific business decisions. This metadata is captured by a utility service (as discussed previously) for real-time and/or aggregated analysis. "Service native" processes would include this information in SOAP headers, while non-native applications would need to be re-engineered to transmit the metadata as a message to the governance server.
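Capturing header-borne metadata can be sketched as a utility that copies selected header fields into a store for later analysis while letting the message body continue on its way. The envelope shape and the header field names below are hypothetical simplifications of a SOAP message.

```python
GOVERNANCE_FIELDS = ("approver", "turnaround_hours")  # hypothetical header fields


def collect_governance_metadata(envelope, store):
    """Capture business metadata carried in message headers for later analysis.

    envelope: a dict standing in for a SOAP message ({"header": ..., "body": ...}).
    store: a list accumulating metadata for real-time or aggregated analysis.
    """
    headers = envelope.get("header", {})
    store.append({k: headers[k] for k in GOVERNANCE_FIELDS if k in headers})
    return envelope.get("body")  # the payload continues to its real destination
```

Because the metadata rides in headers, "service native" processes pay no cost in their message bodies, which is what makes this scenario minimally invasive for them.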

Consistent Access

Consistent Access is a more technical and subtly different scenario than any of the scenarios previously discussed. This scenario enables a services layer to ensure consistent enforcement of a variety of operational requirements when a diverse set of applications needs to connect to a critical back-end resource. By mandating that all access be routed through a service facade, an organization might enforce consistent access authorization, cost distribution and load management.
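A service facade of this kind can be sketched as a single choke point that checks authorization and records per-application usage (for cost distribution) before touching the back-end resource. All names below are illustrative.

```python
class AccessFacade:
    """All access to a back-end resource is routed through this facade,
    so authorization and usage accounting are enforced in one place."""

    def __init__(self, backend, allowed_apps):
        self.backend = backend              # the critical back-end resource
        self.allowed_apps = set(allowed_apps)
        self.usage = {}                     # per-application call counts

    def query(self, app_id, request):
        if app_id not in self.allowed_apps:
            raise PermissionError(app_id + " is not authorized")
        self.usage[app_id] = self.usage.get(app_id, 0) + 1  # cost distribution
        return self.backend(request)
```

The value here is organizational as much as technical: with access mandated through the facade, no application can quietly bypass the authorization or load-management policy.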

Resource Virtualization

A Resource Virtualization scenario can be utilized to help enforce loose coupling between resources and consumers, effectively insulating consumers from the implementation details of the targeted resources. Typical examples of Resource Virtualization may include:

  • Context-sensitive and content-sensitive routing of requests, such as sending a real-estate inquiry to the agent in the specified geography who specializes in farm properties.
  • Routing of requests to partitioned information stores (without requiring the requestor to understand partitioning schemes).
  • Load balancing requests across available resources; from customer service representatives to streaming video feeds.
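The first of these examples, content-sensitive routing, can be sketched as a function that selects a target by inspecting the request, so the requester never needs to know how agents are organized. The record fields are hypothetical.

```python
def route_inquiry(inquiry, agents):
    """Pick the agent matching the inquiry's geography and specialty.

    The requester sends one inquiry to one logical endpoint; this router
    insulates it from how the pool of agents is actually partitioned.
    """
    for agent in agents:
        if (agent["region"] == inquiry["region"]
                and agent["specialty"] == inquiry["specialty"]):
            return agent["name"]
    return None  # no matching resource; a real router might queue or escalate
```

The same shape covers the other examples: swap the matching rule for a partition-key lookup or a least-loaded selection and the consumer's view is unchanged.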

Process Externalization

Process Externalization scenarios utilize Web services to help securely negotiate common processes such as payroll processing, employee expense reimbursement, and logistical support. Cell phone service providers and Internet portals frequently use Web services to aggregate content while customer-facing organizations may use services to build composite offers (such as travel packages that include airfare and rental cars). The key to successful process externalization on today's technology stack is to manage your own expectations; compromise your requirements to the limits of the existing technologies so that you don't spend your profits or savings on building infrastructure services that you will replace in a few years' time.

Other Scenarios

There are far too many SOA scenarios to document them all. The scenarios discussed above represent some of the more common scenarios in which we have seen SOA succeed. Another common scenario is human interaction. How can services within a SOA interact with end users?

SOA and the End User


Figure 5. Comparing Systems and Users

Enterprise Application Integration (EAI) typically deals with system-to-system integration, ignoring computer-human interactions. System-to-system interactions tend to be very structured in terms of both process and data. For example, an Order to Cash process will use well-defined processes (Process PO) and business documents (Purchase Order). System-to-system interactions rarely mirror the real world: processes tend to follow what is traditionally termed the “happy path,” since exceptions are rarely handled effectively.

How people work in the real world is much different. Processes are ad hoc and may change frequently within a given time period. The data we work with is equally unstructured, since it may take the form of Office documents, videos, sound files, and other formats. Where these two worlds meet, system-to-system interactions intersect with human workflows (we will discuss this topic in greater detail in Chapter Three). Applications that effectively support these human workflows can be difficult to design and develop.

The vast majority of end users today use Microsoft Office for email and information work. Microsoft Office represents the results of many years of research and investment geared towards effectively supporting human workflows (in terms of both data and processes). Microsoft Office 2007 has evolved to become a first-class integration platform, providing a familiar user experience for service consumption, enabling:

  • Models for many business concepts including business entities, business events and event-driven business rules, task assignment and fulfillment, modeled workflows, and many others.
  • An application lifecycle model with customization and versioning, self-contained zero-touch deployment, and management through the whole lifecycle.
  • Tools support for creating models, composing applications and managing through the lifecycle, based upon Visual Studio Tools for Office (VSTO) and other platform tools.

Microsoft Office 2007 provides simple event handling for common line-of-business (LOB) events, including workflow synchronization events. Since most requirements for integrating LOB content with Office revolve around surfacing business entities such as Account, Invoice, Quote, and Product, Office Business Entities are exposed in a way that creates unique value for the knowledge worker. Entity relationship models are hardly new; the design of business applications has traditionally been based on business entities along with the business rules and logic associated with the data. Office Business Entities (OBEs) have a few special aspects:

  • OBEs can be turned into Office-native content, maintaining the link to the LOB sources while being subject to all the rules and features of the Office application to which the content is native. For instance, a list of parts in a product entity can be inserted as a table in Word while maintaining the ability to refresh and drill into the data in the LOB application source. This is a kind of self-describing smart tag: it does not have to be recognized, because its special behavior is already built into the content.
  • OBEs enable business entities to be treated as first-class citizens of the Office world. OBEs will be used offline, attached to e-mails, created, correlated, shared and edited in collaborative environments like SharePoint and Groove with user control over the commitment of their data to LOB data sources. Most of the knowledge work in a business happens in applications like Excel, Word, and Outlook, before, after and aside from creating the data of record—for instance a quote to be created or updated must be worked on by many people before being committed to the system of record. Business work is like a 3-dimensional creative and collaborative world from which the transactional applications capture a 2-dimensional projection of committed data. With the Office-friendly behavior of OBEs, the full lifecycle of business data usage can maintain coherence without resorting to awkward and error-prone cut-and-paste transitions between the document world and the business application world.
  • OBEs are coupled with reusable UI experiences that can be used in rapid application development (RAD) to quickly produce context-driven business solutions. UI parts can be associated with OBE views, which can be used in a drag-and-drop design experiences to surface LOB data within Office. Relationships between UI Parts can be navigated dynamically using links, creating a web-like experience around business entities. The client runtime also provides a declarative programming model that allows user experience to be driven by standard Office context events (such as item open) with the experience tailored by parameters such as role and locale.
  • The content of OBEs can be bound to the content of Office entities, without becoming a part of it. This is easiest to observe in Outlook items, where, for instance, a contact item in Outlook can be bound to a customer contact in a CRM system. Both the Outlook entity and the CRM entity exist independently, each with its own identity and behavior, but some of their properties (such as address) are conceptually shared by being bound to each other and synchronized automatically. Thus a hybrid Outlook/CRM entity is created conceptually with correlation and data synchronization rather than data sharing. This becomes visible in Outlook as extended data and user experience for such hybrid contacts; surfacing some CRM contact data and behavior as extensions of the Outlook contact UX. Hybrid entities create a deep but non-invasive association. The use of hybrid Office/LOB entities is most interesting for Outlook items today because Outlook items possess a firm identity which is needed for correlation with OBEs. As document sharing occurs in more controlled SharePoint/Groove environments as part of processes like document assembly, based for instance on Word templates such as “contract” or “RFP”, more Office entities will gain stable identities and become available for two-way correlation with OBEs.

LOB entities in many cases are fragmented into data silos and the data in these silos is often of questionable quality. OBEs can mask these problems while creating a rich user experience linked deeply to the real work context of the knowledge worker.

What are Composite Applications?

A composite application is a collection of software assets that have been assembled to provide a business capability. These assets are artifacts that can be deployed independently, enable composition, and leverage specific platform capabilities.


Figure 6. High-level representation of a composite application

In the past, an enterprise's software assets were usually a set of independent business applications that were monolithic and poorly-integrated with each other. However, to get the business benefits of composition, an enterprise must treat its software assets in a more granular manner, and different tiers of architecture will require different kinds of assets such as presentation assets, application assets, and data assets. For example, a Web service might be an application asset, an OLAP cube might be a data asset, and a particular data-entry screen might be a presentation asset.

An inventory of software assets by itself does not enable composite applications. This requires a platform with capabilities for composition—that is, a platform that provides the ability to deploy assets separately from each other, and in combination with each other. In other words, these assets must be components, and the platform must provide containers.

Containers provided by the platform need to be of different types, which map to the different tiers in the architecture. Enterprise architectures are usually decomposed into three tiers: presentation, application (or business logic), and data, so the platform needs to provide containers for these. However, the three-tier architecture assumes structured business processes and data, where all requirements are made known while designing and building the system. By their very nature, composite applications presume that composition of solutions can occur after assets have been built and deployed, and so they need to explicitly account for the people-to-people interactions between information workers that are essential to completing any business process. Usually these interactions are not captured by structured processes or traditional business applications, and it is therefore critical to add a fourth tier, the productivity tier, to account for these human interactions. This is shown in Figure 7.


Figure 7. The four tiers of a composite application

Traditional discussions of business application architecture tend to focus on the application tier as the connection between people and data. Typically, however, the application tier contains structured business logic; this holds for discussions of Service Oriented Architectures (SOAs), Enterprise Service Buses (ESBs), Service Component Architectures (SCAs), and most other architectural perspectives in the industry today, including first-generation discussions of composite applications. Building a composite application, however, requires the mindset that the productivity tier is not only a critical element of the stack, but also the tier that contains the most business value.

To expand on the comparison between composite applications and SOA, both of them target flexibility and modularization. However, SOA provides flexibility at just one tier: the structured business logic in the middle tier. Composite applications target flexibility at all four tiers. That said, a composite application is a great way to surface information out of an SOA, and having line-of-business (LOB) applications exposed as services makes it easier to build support for cross-functional processes into a composite application.

Therefore to design a composite application, a solutions architect must:

  • Choose a composition stack – Pick one or more containers from each tier, and a set of component types that are deployable into those containers.
  • Choose components – Define the repository of assets that must be built from this set of component types, based on business needs.
  • Specify the composite application – Define the ways in which those assets will be connected, to provide a particular cross-functional process. The platform should enable these connections to be loosely-coupled.

Then after deployment, users will have the opportunity to personalize both assets and connections, as the composition stack should enable this through loose coupling and extensibility mechanisms.
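Under these assumptions, the composition stack itself can be sketched as a registry of containers (one per tier) holding independently deployed components, with connections recorded separately so that assets stay loosely coupled. This is an illustrative model, not a real platform API.

```python
class CompositionStack:
    """Toy model of a composition platform: containers per tier, plus
    loosely coupled connections between independently deployed assets."""

    def __init__(self):
        self.containers = {}   # tier name -> container (dict of deployed components)
        self.connections = []  # (source, target) links between deployed assets

    def add_container(self, tier):
        self.containers[tier] = {}

    def deploy(self, tier, name, component):
        # Assets deploy independently of one another, into a chosen container.
        self.containers[tier][name] = component

    def connect(self, source, target):
        # Connections are data, not code, so they can be changed after deployment.
        self.connections.append((source, target))
```

Keeping connections as data rather than hard-coded calls is what leaves room for the post-deployment personalization the text describes.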


Figure 8. A Composite Application Architecture

What does a Composite Application Look Like?

Figure 8 shows a figurative representation of a composite application: an abstract view of an enterprise solution, deconstructed along the lines of Figure 7.

At the top are information workers, who access business information and documents through portals that are role specific views into the enterprise. They create specific documents during the course of business activities, and these activities are part of larger business processes. These processes coordinate the activities of people and systems. The activities of systems are controlled through process specific business rules that invoke back end LOB applications and resources through service interfaces. The activities of people plug into the process through events that are raised when documents specific to the process are created, or modified. Then business rules are applied to the content of those documents, to extract information, transform it, and transfer it to the next stage of the process.


Figure 9. Deconstructing an enterprise application

Today, most line-of-business (LOB) applications are a collection of resources, hard-coded business processes, and inflexible user interfaces. Based on the previous section, however, it is clear that enterprise solutions need to be broken down into a collection of granular assets that can be assembled into composite applications. A high-level approach for doing this for any business process is listed below:

  • Decompose the solution for a business process into software assets corresponding to the elements shown in Table 1 below.
  • Package all assets corresponding to a given business process into a “process pack” for redistribution and deployment. This would contain metadata, software components, and solution templates that combine them. The process pack would also contain service interface definitions that would enable connections to other IT systems. These connections would be enabled by implementing the service interfaces, for example, to connect to LOB applications and data. The goal is to be able to easily layer a standardized business process onto any heterogeneous IT landscape.
  • Deploy the process pack onto a platform that provides containers for the types of assets that the solution has been decomposed into. The platform should provide capabilities for rapid customization, personalization, reconfiguration, and assembly of assets.
  • Connect the assets within the process pack, to existing LOB systems, and other enterprise resources by implementing the services interfaces. These connections could be made using Web services technologies, other kinds of custom adapters, or potentially even Internet protocols like RSS.

  • Business activities
  • Business rules
  • Interfaces to connect to back-end systems (Web service APIs)
  • UI screens
  • Data connections

Table 1. List of application assets for composition

Expected Benefits of Composition, and How to Achieve Them

Deployment of enterprise applications should be tied to business benefits in the Triple-A sense (agility, adaptability, alignment). These benefits need to be demonstrated from two perspectives:

  • The Solution Provider Perspective (or Development Perspective) – This is the perspective of the organization that builds an enterprise application. This might be an Independent Software Vendor (ISV), or a Systems Integrator (SI), or even an in-house IT department. The solution provider perspective is concerned primarily with benefits gained in activities relating to designing, implementing, and deploying enterprise applications.
  • The Solution Consumer Perspective (or User Perspective) – This is the perspective of the organization that uses an enterprise application. Typically this is the business unit that commissioned the enterprise application. The solution consumer perspective is concerned primarily with benefits gained by the business after the solution has gone into production.

The benefits of composition that can be reasonably expected in each of these two perspectives are listed here, along with some high-level best practices to achieve these expected benefits.


In this Chapter we examined the concept of SOA from the perspective of services: architectural and organizational maturity, service types, service lifecycles, scenarios and the role of users and composite applications within a SOA initiative.

Service-oriented architecture (SOA) is a design approach for organizing existing IT assets so that a heterogeneous array of distributed, complex systems and applications can be transformed into a network of integrated, simplified, and highly flexible resources. A well-executed SOA project aligns IT resources more directly with business goals: it helps organizations build stronger connections with customers and suppliers, provides more accurate and more readily available business intelligence for better decision making, and streamlines business processes and information sharing for improved employee productivity. The net result is an increase in organizational agility.

In an SOA, the concept of applications will still exist, especially when one considers an enterprise’s IT investments. However, the concept of one vendor supplying a complete “SOA solution” with a monolithic set of products is being replaced with a “best-of-breed” approach, enabling customers to adopt a capabilities-based approach to implementing their architectural requirements.

Organizations should remain focused on solving their business problems and avoid being distracted by integration trends and buzzwords. SOA should be a means for making the business more agile, not the end goal. Designing and implementing SOA should be an incremental process with rapid deployments and ROI realization. SOA should not be a top-down, multi-year “boil-the-ocean” effort – these types of projects rarely succeed because they are unable to keep up with the shifting needs of the organization.

Users will also undergo a transformation in the way they work with applications. Depending on the type of application, a user might be exposed to specific tasks of a process, for example, working in the context of a document workflow in Microsoft Office SharePoint Server. Alternatively, an application might encapsulate a business process internally, letting a user start the process but not interact with it during its execution.
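The two interaction styles just described can be contrasted in a short sketch. All class and method names here are hypothetical, invented for illustration; they do not correspond to SharePoint or any workflow product API.

```python
# Hypothetical sketch of the two user-interaction styles; all names
# are illustrative assumptions, not a real workflow API.

class DocumentWorkflow:
    """Task-level participation: the user acts at specific steps of the process."""
    def __init__(self, steps):
        self.steps = list(steps)
        self.completed = []

    def next_task(self):
        """The step currently awaiting a user, or None when the workflow is done."""
        done = len(self.completed)
        return self.steps[done] if done < len(self.steps) else None

    def complete_task(self, user):
        """Record that the given user completed the pending step."""
        task = self.next_task()
        if task is not None:
            self.completed.append((task, user))
        return task

class EncapsulatedProcess:
    """Fire-and-forget: the user starts the process but does not
    interact with it during its execution."""
    def __init__(self, steps):
        self.steps = list(steps)
        self.log = []

    def start(self, user):
        self.log.append(f"started by {user}")
        for step in self.steps:  # runs to completion without further user input
            self.log.append(f"executed {step}")
        return self.log
```

In the first style the application surfaces each task to the user; in the second, the only user-visible action is `start`.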

Chapter 3 provides a more detailed discussion of the Workflow and Process recurring architectural capability.

SOA Case Study: Commonwealth Bank of Australia

This case study describes how the Commonwealth Bank of Australia designed, developed, and implemented its CommSee application – a relationship banking solution, custom-built by the Commonwealth Bank of Australia using Microsoft® .NET technologies. This solution was developed as a Microsoft® Windows® Forms–based Smart Client that consumes standards-based .NET Web services. The Web services are responsible for orchestrating data from a variety of back-end data sources, including mainframes, databases, and various legacy systems. At the time of this writing, CommSee has been successfully deployed to 30,000 users at more than 1,700 sites across Australia. After considering the business needs that CommSee was designed to address, this case study examines the solution architecture, technical project details, and best practices employed in this major software development effort.
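The orchestration pattern described above – a smart client making one call to a Web service that aggregates several back-end sources – can be sketched as follows. Every function and field name here is hypothetical; none of this comes from the actual CommSee implementation.

```python
# Hypothetical illustration of the orchestration pattern described above.
# The stub functions stand in for calls to real back-end systems; none of
# these names come from the actual CommSee code base.

def fetch_mainframe_profile(customer_id):
    return {"name": f"customer-{customer_id}"}     # stand-in for a mainframe call

def fetch_account_balances(customer_id):
    return [{"account": "chq", "balance": 100.0}]  # stand-in for a database query

def fetch_legacy_notes(customer_id):
    return ["met at branch"]                       # stand-in for a legacy system

def get_customer_view(customer_id):
    """The single operation a smart client would invoke: the service layer,
    not the client, orchestrates the individual back-end lookups."""
    return {
        "profile": fetch_mainframe_profile(customer_id),
        "accounts": fetch_account_balances(customer_id),
        "notes": fetch_legacy_notes(customer_id),
    }
```

The design point is that the client sees one coarse-grained service operation, while the fan-out to heterogeneous back ends stays behind the service boundary.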

Figure 10. CBA Architectural Overview

The entire Case Study is available online at

See other SOA case studies at