
Deploying Composite Applications in the Enterprise

 

Atanu Banerjee

Microsoft Corporation

December 2006

Contents

Introduction
Step 1: Build Team-Collaboration Sites to Host Local Documents and Processes
Step 2: Connect Multiple Departments
Step 3: Connect Business Processes to Line-of-Business Applications
Step 4: Add Data Connections for Cross-Functional Processes
Step 5: Connect Business Processes to Edge Systems
Suggested Reading

Introduction

This white paper will drill down into architectural patterns for deploying composite applications in the enterprise. These patterns will then be leveraged in subsequent chapters to provide guidance that is specific to particular industry verticals.

One approach to deploying a composite application in the enterprise is as follows:

  1. Build team-collaboration sites to host local documents and processes.
  2. Connect multiple departments.
  3. Connect business processes to line-of-business (LOB) applications.
  4. Add data connections for cross-functional processes.
  5. Connect business processes to systems "at the edge."

The best way to do this is to follow a pragmatic approach: deploy the application to support a single business process, rather than trying to re-architect all of the back-end systems in one big bang. This assumption is implicit in the rest of this white paper.

Step 1: Build Team-Collaboration Sites to Host Local Documents and Processes

Microsoft Office SharePoint sites should be set up at the departmental level to enable team collaboration, as shown in Figure 1. These sites will have document libraries to store in-process documents. Information workers within the team will have their own personalized pages, customized from the templates available on the team site.

Note that this is a logical view of the architecture; all of these departmental sites would not typically run on separate servers. Instead, multiple sites could run on a single server, as in Figure 2, and the physical architecture (that is, the deployment landscape for Office SharePoint servers) would be chosen based on other factors, such as load, availability, and the teams' geographic dispersion.

In-process documents are stored in document libraries and are associated with workflows that get invoked whenever a document is created or edited. Such a workflow might run validation rules on documents; apply approval policies and actions to the data; cleanse, validate, or filter the data contained within; or update back-end systems.

In addition to business-process workflows, in-process documents undergo a life cycle of their own—from authoring and collaboration, through management and publication, to archiving or destruction. Whenever a document reaches one of these stages, an appropriate workflow can be triggered, such as one that manages the archival process. The sketch after Figure 1 shows the general shape of such a workflow.


Figure 1. Departmental sites for team collaboration, with folders for shared documents and tasks
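
The following C# is that sketch. It is illustrative only: the Document type and the validation, cleansing, and notification helpers are stand-ins, not Office SharePoint APIs.

using System;

// Illustrative sketch only: the Document type and the helpers below are
// stand-ins for a document workflow hosted in Office SharePoint.
public class DocumentApprovalWorkflow
{
    // Invoked whenever a document is created or edited in the library.
    public void OnDocumentChanged(Document doc)
    {
        if (!Validate(doc))
        {
            doc.Status = "Rejected";        // validation rules failed
            return;
        }
        Cleanse(doc);                       // cleanse, validate, or filter the data within
        doc.Status = "Pending Approval";    // apply the approval policy
        NotifyApprover(doc);                // notify the responsible role
    }

    private bool Validate(Document doc) { return doc.Content != null; }
    private void Cleanse(Document doc) { doc.Content = doc.Content.Trim(); }
    private void NotifyApprover(Document doc) { Console.WriteLine("Review: " + doc.Name); }
}

public class Document
{
    public string Name;
    public string Content;
    public string Status;
}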

Pattern: Segment user personas and work scenarios, and select user-interface technologies appropriately.

The user experience can cover a wide range of work scenarios. These work scenarios typically string together a set of activities in which a user is interacting with a particular type of document, screen, or other type of interface. However, it is hard to build and maintain all of the individual user interfaces that make up day-to-day work scenarios in a way that also lets different users personalize them. Segment user personas and work scenarios, and choose user-interface technologies appropriately for each segment. For example:

  • Processors and synthesizers of business documents
    • Capabilities required: Support for document-centric workflows and processes.
    • Proposed user experience: Provide document processing with Microsoft Office client–based applications. Provide server-side storage of these documents to avoid proliferation of document copies (using Office SharePoint, for example), and process the information stored within these documents (such as Open XML) in server-side application logic (workflows deployed into Office SharePoint). In effect, Office SharePoint becomes the role-based access point into the enterprise.
    • There should be support for different types of collaboration—both ad-hoc (informal document sharing) and structured approval processes. To facilitate server-side processing, make an association between the content within the document and XML schemas of business-entity models that server-side application logic can access. For example:
      • Target data entry with Office InfoPath forms.
      • Target data analysis with Office Excel spreadsheets.
      • Target proposals with Office Word documents.
  • Business decision makers
    • Capabilities required: Support for business intelligence and insight.
    • Proposed user experience: Ability to search for and view business performance metrics in the form of spreadsheets, reports, and dashboards through browser-based interfaces.
  • Users of high-volume data-entry systems (power users)
    • Capabilities required: High-volume data entry, keyboard shortcuts, and rich navigation.
    • Proposed user experience: Thick clients.

Step 2: Connect Multiple Departments

As shown in Figure 2, business-process models coordinate activities both within a team and across departments. Within a department, business processes can be modeled using Windows Workflow Foundation (WF) activities that are deployed into the Office SharePoint server supporting that department. Coordination of activities across departments can be accomplished using collaboration processes that manage both the life cycle of individual business entities (orders) and the life cycle of business processes (procure-to-pay).

Figure 2. Business-process models coordinating activities within a team, as well as across departments

The interdepartmental business processes could be located either centrally in IT data centers or closer to the information workers on the departmental servers. For example, long-running workflows hosted centrally might run on Microsoft BizTalk Server, whereas departmental processes would run as WF workflows within the Office SharePoint server.

Pattern: Use workflow to model business processes that coordinate the activities of automated services and humans.

To gain flexibility, we must externalize our business-process workflows from our enterprise applications' business services. Figure 3 shows what this might look like.


Figure 3. Separating workflows from services

A natural question to ask is whether already-installed LOB systems (such as an ERP system) have all the necessary business processes built in. If so, is there any need to externalize business processes in this manner? In practice, it is not easy to synchronize business processes across multiple systems. For most organizations, data pertaining to business processes is kept in multiple systems, either legacy systems or systems inherited through business acquisitions. Also, the business will frequently want to introduce new business models, such as reselling sub-assemblies to new channel partners or selling new modules assembled from components procured from around the world. In many cases, the existing LOB systems will not handle these new business models out of the box, and a lot of custom development will be required.

The solution to this is to decompose business processes into units of work, and represent these as activities. This decomposition need not be top-down (activities can be added as needed), and over time you will build out a library of these software assets. These assets should be self-contained, with encapsulated logic and the associated metadata required to configure their behavior. Examples of such activities could be:

  • Document-centric. Send notification e-mail, move document to approved document library, publish document.
  • Domain-specific. Expedite purchase order, generate lead, create production order.

Workflows then become a reconfigurable organization of those work units, or activities. The natural representation of data flow between activities in a workflow is an XML document. This mirrors the real world, where activities represent units of work performed by different people or systems, and documents are exchanged between them. Because 2007 Microsoft Office System documents use the Open XML document format, you can create workflows that closely mirror the real world.
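
As a minimal sketch of this idea, the following C# models activities as self-contained units of work that pass an XML document from one to the next. The activity and class names are examples; this is not the Windows Workflow Foundation programming model.

using System.Xml;

// Minimal sketch: activities as self-contained units of work that exchange
// an XML document, and a workflow as a reconfigurable arrangement of them.
public abstract class Activity
{
    public abstract XmlDocument Execute(XmlDocument input);
}

public class ExpeditePurchaseOrderActivity : Activity
{
    public override XmlDocument Execute(XmlDocument input)
    {
        // Encapsulated, domain-specific logic: mark the order as expedited.
        XmlElement priority = input.CreateElement("Priority");
        priority.InnerText = "Expedited";
        input.DocumentElement.AppendChild(priority);
        return input;
    }
}

public class Workflow
{
    private readonly Activity[] activities;
    public Workflow(params Activity[] activities) { this.activities = activities; }

    public XmlDocument Run(XmlDocument doc)
    {
        foreach (Activity activity in activities)
            doc = activity.Execute(doc);    // the document flows between activities
        return doc;
    }
}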

Pattern: Segment business processes, and model their workflows differently.

Most business processes can be segmented into strategic, tactical, and operational levels.

Figure 4. Hierarchy of enterprise business processes

Each of these different kinds of business processes has different technology and architectural needs. Enterprise solutions at different levels in this hierarchy have different functional requirements, and also make use of different technical capabilities of the underlying platform. For example:

  • Strategic solutions will require more analytics and business intelligence.
  • Operational solutions will require more real-time integration (such as through messaging and collaboration), and instrumentation and processing closer to the edge of the network (such as through RFID services and smart devices).
  • Tactical solutions will need both business-process management tools and reporting tools to manage operational workflows.

While business processes in all three of these levels will require workflows to be modeled and then managed, the types of workflow model for each level might be different. For example, operational processes might be best modeled using state machines, where outside events or entities are in control and processes can be cancelled or modified at any time. Strategic processes might be best modeled using sequential workflows, where the workflow is in control and there are well-defined sequences of activity. Tactical processes might require a mix of sequential workflows and state machines. In some cases, it might be better to use business rules instead of either sequential workflows or state machines, especially when the set of actions is determined by the interaction of a number of independent rules. For both tactical and strategic processes, consider using long-running and state-driven workflows.

Figure 5 illustrates that workflows can organize activities in two ways: sequentially, as in a flow chart, or as a state diagram.


Figure 5. Types of workflow

The primary difference between these two styles is how control gets passed from one activity to another. For sequential workflows, this is determined by the workflow itself. However, for state machines, this control gets passed based on events that are triggered externally to the workflow, although the state machine controls the set of choices.
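
The following C# sketch contrasts the two styles for a hypothetical approval process. In the sequential version, the workflow itself fixes the order of activities; in the state-machine version, externally triggered events drive the transitions, while the machine constrains which transitions are legal. The states and events are examples.

using System;
using System.Collections.Generic;

// Sequential style: the workflow dictates the order of activities.
public class SequentialApproval
{
    public void Run()
    {
        Console.WriteLine("Validate request");
        Console.WriteLine("Obtain approval");
        Console.WriteLine("Publish result");   // order is fixed by the workflow
    }
}

// State-machine style: outside events are in control; the machine only
// constrains the set of legal choices, and cancellation is possible at any time.
public class StateMachineApproval
{
    private string state = "Draft";

    // Legal transitions: "state:event" -> next state.
    private static readonly Dictionary<string, string> transitions =
        new Dictionary<string, string>
        {
            { "Draft:Submit", "PendingApproval" },
            { "PendingApproval:Approve", "Approved" },
            { "PendingApproval:Reject", "Draft" },
            { "Draft:Cancel", "Cancelled" },
            { "PendingApproval:Cancel", "Cancelled" }
        };

    public void Fire(string evt)
    {
        string next;
        if (transitions.TryGetValue(state + ":" + evt, out next))
            state = next;
        else
            Console.WriteLine("Event " + evt + " not allowed in state " + state);
    }
}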

Pattern: Workflows should be decentralized.

In a world of globalization, specialization, and the virtual enterprise, there is increasing pressure on IT to enable decentralized decision making and to empower the individual information worker. In such a world, there are no useful, omniscient views of business processes within an organization: it would take too long to document them, and doing so would add little value. That is why traditional BPM technologies have not worked out very well, as they emphasize modeling business processes to a high level of detail and use these models to drive end-to-end flows of information across applications, systems, and information-worker activities.

However, workflow and business-process modeling technologies can be extremely useful for building composite solutions—if used appropriately. Workflows should be fluid—easy to assemble and easy to change. Workflows should be local; ownership should live close to the activities that are being coordinated. It is all right for small snippets of process to be modeled independently of each other. Another important need is for built-in flexibility in the process. This does not mean attempting to model every possible decision point and every possible outcome. Instead, it means empowering information workers to choose the business process, or workflow, in which they must participate, depending upon their current situation. This could be achieved by allowing users to initiate a workflow explicitly from their client applications (as is possible with Microsoft Office client applications such as Office Excel) or by controlling the flow of documents on the server (as is possible through Office SharePoint).

In practice, what does it mean to say that workflows must be decentralized? Consider the following levels of business process, and imagine workflows running at each of these levels and operating on data models that are aggregated to that level.

First, consider business processes that operate on the following data streams, moving from the factory floor to the center of the enterprise:

  • Smart devices on the factory floor (sensors, actuators, controllers, operator terminals)
  • Edge servers that process (filter, aggregate, transform) data streams from smart devices on the factory floor
  • Factory execution systems (MES, SCADA, advanced process control)
  • Factory business applications (planning, scheduling)
  • Enterprise business applications (ERP, CRM, SCM)

Second, consider business processes that relate to the following levels of decision making:

  • Corporate decision making
  • Divisional decision making
  • Factory decision making
  • Workgroup decision making

Step 3: Connect Business Processes to Line-of-Business Applications

Service orientation is one way to expose current business applications to an Office Business Application (OBA), as shown in Figure 6.

Figure 6. Provision OBAs in departmental sites for team collaboration, coordinate activities with business-process models, and connect to LOB systems through a services backbone.

In that sense, SOA promotes modularization in the application tier—a basic requirement for composition—which is why OBAs and SOA are complementary. This allows the assembly of new cross-functional business applications that extend beyond the boundaries of current applications.

SOA is not the only way to connect LOB applications to OBAs. A services backbone can also be exposed using other integration technologies, such as custom adapters.

Pattern: Wrap existing IT systems with a services layer that maps to the specific capabilities to be surfaced into the composite application.

Service orientation enables reuse of existing IT assets by wrapping them into modular services that can be plugged into any business process that you design. The goals for doing this should be:

  • Connect into what is already there. Layer business-process management, collaborative workflows, and reporting on top of existing IT assets.
  • Extract more value from what is already there. Enable existing applications to be reused in new ways.
  • Extend and evolve what we already have. Create IT support for new cross-functional business processes that extend beyond the boundaries of what the existing applications were designed to do.

For years, software development has focused on how best to reuse the code that we write. Ultimately, people and businesses want long-term return on their short-term investments in code. One way to do this is the following, as described by Pat Helland in his 2003 presentation called "Thoughts on Data and Process":

  1. Cleave applications apart into services.
    • Disentangle data.
    • Separate functionality.
    • Add messaging between them.
  2. Wrap applications with services.
    • Decide the functionality to wrap. Often, a subset of functionality is OK, depending on the scope of the business process.
    • Create messaging.
    • Tap into the business logic or user interface of the application.
  3. Build new solutions with services.
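
As a sketch of the wrapping step, the following C# places a service facade over a hypothetical legacy inventory system. Only a subset of the legacy functionality (stock lookup) is wrapped, matching the scope of the business process; the legacy API shown is an assumption, not a real product interface.

// Sketch of wrapping an existing application with a service. The legacy
// inventory API below is hypothetical.
public interface IInventoryService
{
    int GetStockOnHand(string sku);    // the service boundary: small surface area
}

public class LegacyInventoryFacade : IInventoryService
{
    private readonly LegacyInventorySystem legacy = new LegacyInventorySystem();

    public int GetStockOnHand(string sku)
    {
        // Tap into the existing business logic; translate between the
        // service contract and the legacy representation.
        return legacy.QUERY_STK(sku.ToUpperInvariant());
    }
}

// Stand-in for the existing application being wrapped.
public class LegacyInventorySystem
{
    public int QUERY_STK(string item) { return 42; }
}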

Pattern: Follow the four tenets of service orientation while designing services.

The four tenets of service orientation are the following, as outlined by Don Box in the article "Code Name Indigo: A Guide to Developing and Running Connected Systems with Indigo" from MSDN Magazine (January 2004):

  • Boundaries are explicit. Services must interact across service boundaries, but crossing service boundaries might be costly. Therefore, it is important to know your boundaries. Keep service surface areas small, avoid RPC interfaces, and avoid leaking implementation details outside a service boundary.
  • Services are autonomous. Ideally, services should be stable, but business needs change frequently. The space between services changes more frequently than the service boundaries themselves. Avoid assumptions on the environment into which the services are deployed. Design, deploy, and manage services independently from one another. Communicate only through contract-driven messages and policies.
  • Services share schema and contract, not class. Service consumers will rely upon a service's contract to invoke and interact with a service, and will insist that the contract remain stable over time. However, changing business needs will force change upon a service. Therefore, service interaction should be based solely upon a service's policies, schema, and contract-based behaviors. Ensure stability of services (public data, message-exchange patterns, policies). Form explicit contracts that are designed for extensibility. Version services when change occurs. Avoid blurring the line between public and private data representations.
  • Service compatibility is determined based on policy. Services must be compatible with each other—not just structurally compatible (public data, message-exchange patterns), but also compatible in the semantic capabilities that are expressed through configurable capabilities and service levels. Operational requirements for service providers should be manifested in the form of machine-readable policy expressions.
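
The following C# sketch shows how these tenets might surface in a Windows Communication Foundation (the released name for "Indigo") contract. The namespace URIs, entity fields, and operation are examples: the versioned namespace keeps the boundary explicit, and only the data contract (schema), not the class, is shared with consumers.

using System.ServiceModel;
using System.Runtime.Serialization;

// Only the schema described by this data contract crosses the boundary;
// the class itself is never shared with consumers.
[DataContract(Namespace = "http://example.org/schemas/orders/2006/12")]
public class OrderSummary
{
    [DataMember] public string OrderId;
    [DataMember] public string Status;
}

// Explicit, versioned boundary with a small surface area; no RPC-style
// leakage of implementation types.
[ServiceContract(Namespace = "http://example.org/services/orders/2006/12")]
public interface IOrderStatusService
{
    [OperationContract]
    OrderSummary GetOrderSummary(string orderId);
}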

Pattern: Segment the services that you plan to create; not all services are created equal.

What kinds of services should you create? One way of organizing services is the following:

  1. Infrastructure services
  2. Data services—Simple atomic operations on an entity
  3. Activity services—Coordinate data services for business-process execution
  4. Process services—Long-running business processes, possibly complex workflow and human interaction
  5. Event services—Notify subscribers of events

It is not necessary to create all of these types of services. A better approach is to expose services incrementally, in the context of well-defined projects that create business value. Figure 7 shows a set of sample business services that could be deployed in the enterprise. However, with the rise of Software as a Service (SaaS), you cannot assume that services will be co-located or developed in-house. Also, it might be necessary to build a hierarchy of services through aggregation and composition. The sketch after Figure 7 shows how the service categories listed above might look as contracts.

Figure 7. Sample business services that could be deployed in an enterprise
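
The following C# is that sketch (omitting infrastructure services). The entity and operation names are examples, not a prescribed service catalog.

using System.ServiceModel;
using System.Runtime.Serialization;

[DataContract]
public class Customer
{
    [DataMember] public string Id;
    [DataMember] public string Name;
}

// Data service: simple atomic operations on an entity.
[ServiceContract]
public interface ICustomerDataService
{
    [OperationContract] Customer GetCustomer(string id);
    [OperationContract] void UpdateCustomer(Customer customer);
}

// Activity service: coordinates data services for business-process execution.
[ServiceContract]
public interface ICreditCheckService
{
    [OperationContract] bool CheckCredit(string customerId, decimal amount);
}

// Event service: notifies subscribers that something has happened.
[ServiceContract]
public interface IOrderEventService
{
    [OperationContract(IsOneWay = true)] void OrderShipped(string orderId);
}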

Step 4: Add Data Connections for Cross-Functional Processes

The Business Data Catalog (BDC; see Figure 8) can be used to connect back-end data stores to the 2007 Microsoft Office System in order to surface data into Office SharePoint lists and Web parts. This makes it possible to build composite applications for cross-functional processes in the Office SharePoint portal, using a combination of the BDC, Office SharePoint lists, and workflow. For example, the BDC can be used to define entities that have a parent-child relationship (such as order header and order details), and Office SharePoint lists could display them. By following the parent-child relationships, a user could drill down from the list displaying header information to the corresponding details. Furthermore, actions can be modeled in BDC metadata. This means that these actions can be surfaced as menu items on the Office SharePoint list, and selecting an item from this menu passes context from the currently selected row into the URL defined for the action. The sketch after Figure 8 reduces these ideas to illustrative code.


Figure 8. Adding data connections for cross-functional processes
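
The following C# is that sketch: an entity definition with a parent-child link, and an action whose URL template receives context from the selected row. Real BDC metadata is XML consumed by Office SharePoint; the types and the URL template here are illustrative assumptions.

using System.Collections.Generic;

// Illustrative stand-ins for BDC concepts; real BDC metadata is XML.
public class EntityDefinition
{
    public string Name;                        // e.g., "OrderHeader"
    public EntityDefinition ChildEntity;       // e.g., the related "OrderDetail" entity
    public List<ActionDefinition> Actions = new List<ActionDefinition>();
}

public class ActionDefinition
{
    public string DisplayName;                 // surfaced as a menu item on the list
    public string UrlTemplate;                 // e.g., "http://crm/orders.aspx?id={0}" (assumed)

    // Context from the currently selected row is passed into the URL.
    public string ResolveUrl(string selectedRowId)
    {
        return string.Format(UrlTemplate, selectedRowId);
    }
}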

Pattern: Break away from the model of monolithic data stores, and adopt a model of federated data.

Over the last 30 years, enterprises have moved toward data consolidation: from multiple data-processing systems, to OLTP databases, to single ERP instances. However, it is difficult ever to get to a single ERP instance and maintain it. Some of the reasons lie in the normal way in which business is conducted nowadays: mergers, acquisitions, globalization, matrixed organizational structures, and outsourcing all tend to pull data apart. A combination of service orientation and the BDC provides a way to manage this distribution of data. SOA discussions typically revolve around messaging, but there are important issues around how data is used by services, such as: How does data flow between services? How are messages defined? What data is shared? How is data inside a service different from data outside a service? How is data represented inside and outside services?

Pattern: Treat data inside services differently from data outside services.

Typically, data outside services should just be messages passing from one service to another, while data inside a service is private to that service, bounded by transactions, and encapsulated by service code. Usually, transactions take a database from one consistent state to another; however, ensuring data consistency across services is hard. For example, services deal with data in the present, but data on the outside exists in the past. Maintaining shared collections of data across services can be hugely challenging, and distributed transactions are not the answer for a loosely coupled, service-oriented world. Pat Helland examines this problem in the article "Data on the Outside vs. Data on the Inside," in the MSDN Library, and studies the implication that data must be segmented into different types and treated differently. For example:

  • Reference (or Master) data (such as a product catalog)—This data can be used to create service requests in a format interpretable by all parties. This data can be replicated and cached multiple times, because it doesn't have to be 100 percent consistent. For example, it might be OK to order parts in April from a March catalog. However, reference data does need to have a stable identifier, so that it can be referenced by a service request (for example, Item 23 in the March 2005 catalog). These entities are usually not dependent on other entities (such as through foreign key relationships). Typically, updates to this data are infrequent and tightly controlled to prevent synchronization problems arising from replication.
  • Resource data (SKUs, inventory)—These entities have a very long lifetime. The format and update of these entities are private to the service that owns them. Updates to this data are very frequent, such as updates to the stock on hand for an item. Typically, this kind of data has dependencies (such as through foreign key relationships) on reference data.
  • Activity data (orders, itineraries)—These entities have a life cycle that is bounded by business activities, such as an order life cycle from creation to fulfillment to payment receipt. However, these life cycles can be extended by reporting needs, although expiration policies must be put into place. The format of these entities should be private to the service that owns them. Typically, this kind of data has dependencies (such as through foreign key relationships) on reference data.
  • Service-interaction data (PO request forms)—These are the messages that travel between services. It is important to ensure guaranteed delivery of these messages. XML is a good representation for such data, and this data is implicitly immutable.
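
The following C# sketch illustrates service-interaction data that respects these distinctions: an immutable message that refers to reference data through a stable, versioned identifier, rather than copying mutable state from inside a service. The field names are examples.

using System;

// Sketch of "data on the outside": an immutable message that references
// reference data by a stable, versioned identifier.
public sealed class OrderLineMessage
{
    public readonly string CatalogId;       // e.g., "Item-23"
    public readonly string CatalogVersion;  // e.g., "2005-03" (the March catalog)
    public readonly int Quantity;
    public readonly DateTime SentUtc;       // outside data always exists in the past

    public OrderLineMessage(string catalogId, string catalogVersion,
                            int quantity, DateTime sentUtc)
    {
        CatalogId = catalogId;
        CatalogVersion = catalogVersion;
        Quantity = quantity;
        SentUtc = sentUtc;
    }
    // No setters: once sent, a message is implicitly immutable.
}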

Pattern: Use entity aggregation to create an authoritative service for managing the state of a shared data collection.

In most enterprises, data is spread across multiple data stores and across geographic and organizational boundaries. It is common to see data-integration approaches (both real-time and batch) being used to move data between these disparate stores, in an effort to maintain data consistency (see Figure 9). However, data integration is often applied in an inconsistent fashion, which leads to duplication, and it might not be clear which system contains the single version of the truth for a particular entity. This problem is particularly common for reference (master) data. For example, it is not uncommon to have duplicates of product, vendor, or customer data. Therefore, what the enterprise needs is less data integration and more data synchronization. The goal should be to synchronize data among multiple enterprise systems to ensure a single system of record for any data entity.

There is a need to create an authoritative service to manage the state of a shared collection of data and distribute a recent version to requesting (consuming) services. This service would provide a holistic view of an entity and its relationships with other entities, enforce business rules on access to that entity, and interact with the system of record for that entity.

Ideally, entity-aggregation services could hide complexity in a processing layer that applies straight-through processing (STP) to queries. However, as Ramkumar Kothandaraman points out in the article "SOA Challenges: Entity Aggregation," in the MSDN Library, STP is not adequate to handle complex cases.

Figure 9. Data architecture before and after implementing entity-aggregation services

Based on complexity, an entity-aggregation service could be implemented using STP, or with partial or even full data replication. In each of these cases, there are design issues to consider around schema reconciliation, ownership determination, instance reconciliation, CRUD semantics, and life-cycle issues. When attributes of an entity are maintained in multiple systems, there are multiple systems of record to update when change occurs. Also, the schema for an entity should include enterprise-wide standard fields, but might also require extensions used by special applications.
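
The following C# sketches an entity-aggregation service that assembles a holistic customer view from two hypothetical systems of record, using straight-through processing (no local replica). The back-end interfaces, and the rule that each attribute has exactly one owning system, are assumptions for illustration.

// Sketch of an entity-aggregation service over two systems of record.
public class CustomerAggregationService
{
    private readonly IErpCustomerStore erp;    // owns billing attributes
    private readonly ICrmCustomerStore crm;    // owns contact attributes

    public CustomerAggregationService(IErpCustomerStore erp, ICrmCustomerStore crm)
    {
        this.erp = erp;
        this.crm = crm;
    }

    public CustomerView Get(string customerId)
    {
        // Schema reconciliation: map each system's fields onto the
        // enterprise-wide standard view; each attribute has one owner.
        return new CustomerView
        {
            Id = customerId,
            BillingTerms = erp.GetBillingTerms(customerId),
            ContactEmail = crm.GetContactEmail(customerId)
        };
    }
}

public interface IErpCustomerStore { string GetBillingTerms(string id); }
public interface ICrmCustomerStore { string GetContactEmail(string id); }

public class CustomerView
{
    public string Id;
    public string BillingTerms;
    public string ContactEmail;
}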

Pattern: Align enterprise-data models for effective performance management.

You cannot measure business performance, gain business insight, and convert data into information and information into actionable plans without consistent, aligned enterprise data. The goal should be instrumentation of the enterprise at the operational, tactical, and strategic levels to enable effective performance management, as shown in Figure 10.

For example, a manufacturing corporation might have multiple facilities and be organized into multiple divisions or business units. Executives will want to measure the performance of the enterprise. They will want visibility into performance at the corporate, division, and factory levels. They must ensure that service levels are high, costs are low, and the business is growing. To do this, they need IT systems that are integrated all the way from the plant to the enterprise. However, more than just connectivity, it is necessary to have data models and data systems at each level that synchronize vertically, so that meaningful management decisions can flow down and meaningful business insight can flow up. To further complicate matters, these data models must be aligned across multiple functional areas. These might include sales, marketing, distribution, procurement, inventory, financials, operations, and product engineering.

Figure 10. Data must be aligned between strategic, tactical, and operational processes.

Unfortunately, in many enterprises, data models are not aligned all the way up through the organization. This is the plant-to-enterprise (P2E) problem. This problem has also been described as data not being aligned from "top floor to shop floor."

Pattern: Manage the life cycle of individual entities.

The goal should be to manage the life cycle of a single entity:

  • Product data record—All the way from new product introduction through to obsolescence
  • Customer order—All the way from order creation, through delivery, to financial settlement
  • Enterprise event, or exception—All the way from the event being raised, to assignment to a person, to root-cause analysis, and finally to resolution

For example, some initial steps in a new product introduction might be as follows (although a complete product life cycle is a lot more complex).

  1. Engineering design:
    • A manufacturer might design a new item, collaborating with suppliers to do so.
  2. Preliminary planning:
    • Sales forecasts for the new product, based on projected demand.
    • Product-configuration decisions.
    • Planning for how the new product must flow through the supply chain.
  3. Setting up the item for sale at retail outlets:
    • Getting the item entered into the retailer's system; not just physical attributes of the product, but also profiling and program setup, based on the profile.
    • Preliminary forecast and replenishment collaboration between manufacturer and retailer, in the context of the program.

Now, as an individual business entity goes through its life cycle, different business processes are affected, different events are generated, and different roles are involved. Typically, this might lead to many different kinds of cross-functional (or collaborative) business processes that require assembly of composite applications, which require data connections to the back end.
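
The following C# sketch tracks the life cycle of one such entity, a customer order, and raises an event at each stage so that different workflows and roles can react. The stage names and event hook are illustrative.

using System;

// Sketch of managing the life cycle of a single entity (a customer order)
// from creation, through delivery, to financial settlement.
public class CustomerOrder
{
    public enum Stage { Created, Delivered, Settled }

    public string OrderId;
    public Stage CurrentStage = Stage.Created;

    // Different roles and processes subscribe to life-cycle events.
    public event Action<CustomerOrder, Stage> StageChanged;

    public void Advance()
    {
        if (CurrentStage == Stage.Settled)
            throw new InvalidOperationException("Life cycle already complete.");
        CurrentStage = CurrentStage + 1;
        if (StageChanged != null)
            StageChanged(this, CurrentStage);  // e.g., trigger the settlement workflow
    }
}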

Step 5: Connect Business Processes to Edge Systems

Often, the scope of an OBA is not contained within an organization. For example, an OBA might support a business process that must consume a service that is offered by a hosting provider (SaaS scenario). Alternatively, the OBA might have to support a business process that offers a service to another organization. This is especially common in supply-chain management scenarios in which trading partners are involved. Here, there must be a way to transfer documents from one information worker to a counterpart in the trading-partner organization. This process must be secure, reliable, asynchronous, and transparent.

Figure 11. Connecting the OBA to systems at "the edge"

One way of putting together an end-to-end architecture for this is shown in Figure 11. Message brokers have been set up at the edge of the organization to send and receive messages and documents from trading partners. Messages from different trading partners can potentially be received in multiple message formats and delivered over multiple channels, such as Web services, EDI, e-mail, RosettaNet, and so on. Furthermore, messages can be exchanged in a variety of different patterns: one-way, asynchronous two-way, or synchronous two-way messaging. These message brokers must handle each combination of these message-interchange patterns and message formats.

After the message is received by the message broker, it is processed into the single canonical format that is required for downstream services, and the transformed message is persisted to a message queue to decouple public processes from private ones. Next, the message is retrieved from the queue by a routing service that examines the message and routes it to the intended recipient.

But before the document reaches its intended recipient, it might have to be preprocessed by enterprise application services, such as LOB applications or BPM orchestrations. Business rules might be applied to messages, to ensure validity and enforce corporate policies. The result of all this processing is a document with enough information that it can be processed by a human who can make a quick decision. For example, a purchase-order request from a customer might be fed into an order-promising service, and the response from this service might be used to generate an XML document that corresponds to an Office InfoPath form with a candidate PO Confirmation. Next, this generated form might be placed into an Office SharePoint forms library for an information worker in the sales department to approve.

After the information worker has reviewed the proposed confirmation and made any necessary changes, the worker submits the form. This kicks off the workflow for the return trip, which updates the LOB systems and then posts the information from the form as an XML document into a queue for outbound messages, as a response to the original request. The message broker then converts the response back into the format used by the trading partner.

There are multiple ways to implement the message brokers at the edge. One is BizTalk Server, which provides a scalable and manageable solution that also comes with standards-based accelerators and adapters, such as the RosettaNet accelerator for trading-partner collaboration. The queues that decouple internal and external processes could be implemented using Service Broker in Microsoft SQL Server 2005.
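
The following C# sketch shows the broker pipeline just described: normalize each inbound message to a canonical XML format, persist it to a queue that decouples public processes from private ones, and then route it. The parser interface and in-memory queue are stand-ins for what BizTalk Server pipelines, adapters, and a durable queue would provide in a real deployment.

using System;
using System.Collections.Generic;
using System.Xml;

// A parser normalizes one inbound format (EDI, RosettaNet, e-mail, and so
// on) into the canonical XML required by downstream services.
public interface IFormatParser
{
    bool CanParse(string rawMessage);
    XmlDocument ToCanonical(string rawMessage);
}

public class MessageBroker
{
    private readonly List<IFormatParser> parsers = new List<IFormatParser>();
    private readonly Queue<XmlDocument> queue = new Queue<XmlDocument>();

    public void AddParser(IFormatParser parser) { parsers.Add(parser); }

    public void Receive(string rawMessage)
    {
        foreach (IFormatParser parser in parsers)
        {
            if (parser.CanParse(rawMessage))
            {
                queue.Enqueue(parser.ToCanonical(rawMessage));  // decoupling point
                return;
            }
        }
        throw new ArgumentException("No parser for this message format.");
    }

    // The routing service examines the next message and dispatches it.
    public void RouteNext(IDictionary<string, Action<XmlDocument>> recipients)
    {
        XmlDocument message = queue.Dequeue();
        string recipient = message.DocumentElement.GetAttribute("recipient");
        recipients[recipient](message);
    }
}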

Pattern: Employ federated identity technologies to share identities and entitlements across organizational boundaries.

IT organizations face two conflicting needs. They must:

  1. Enable collaboration across organizational boundaries and networks—for example, collaboration to support trends, such as globalization and outsourcing, and new business models, such as virtual enterprises.
  2. Protect organizational assets by providing ever-tighter network security.

Trying to accommodate both needs has led to a proliferation of passwords. This leads to two problems: lost productivity and a greater security risk through a larger attack surface. For example, some industry estimates show enterprise users averaging 12 external user IDs and passwords, which require 15 to 20 minutes per day to manage. This not only saps user productivity, but also opens security risks, as users who are unable to memorize so many passwords jot them down on paper, which can be lost or seen by others.

IT architectures must enable federation of identity, which means delegation of identity-related functions, such as authentication, authorization, or profile management, to partners, in line with previously established relationships of trust. This requires identity-management technologies, such as Active Directory Federation Services, that can share digital identities and entitlement rights (or "claims") securely across security and enterprise boundaries.

Pattern: When federating identity, think of identity as a set of claims, instead of just a combination of user name and password, along with a set of accompanying attributes.

The traditional model of identity has been the following:

  • A digital identity is a combination of identifier, credentials, and attributes. Attributes can include a set of core attributes and a collection of context-specific attributes.
  • Directory services are stores for digital identities. Along with digital identities, a directory service will also store the entitlements (rights and privileges) associated with each digital identity, in line with the security policies of the organization.
  • An identity provider (IP) creates (or issues) digital identities by making entries into a directory service. A relying party (RP) consumes digital identities by authenticating users against the entries in the directory service.

The traditional model works when identity providers and relying parties are members of a single organization (or domain), as it is easier for identity providers and relying parties to have a shared understanding of identity models, processes, and technologies. Federated identity requires that an organization be able to consume identities that are issued by other organizations. This means that an administrator in an organization can control resources that users in that organization can access—both within the organization and at partner organizations. It also enables an administrator to configure resources that users in other organizations can access. Thus, the idea of a unique identity model for each user becomes less meaningful when federating identities across organizations.

In the article "Microsoft's Vision for an Identity Metasystem," in the MSDN Library, Kim Cameron points out that the new model of identity is as follows:

  • A digital identity is a set of claims that one party makes about a subject.
  • A claim is an assertion of the truth of something.
  • A subject is the person or object that the claims describe.
  • Claims are typically packaged into a security token that can travel across process and machine boundaries.
  • A digital identity is issued by an identity provider (IP) and consumed by a relying party (RP). There can be multiple digital identities issued to a single subject, issued by multiple IPs, each making a different set of claims.

Therefore, a claim is an expression of a right to access a protected resource or operation—much like a key. Access to a service or resource is determined by comparing the claims that a user has to the set of claims that the user requires. A claim is implemented as a structure that contains the name of a claim type, the type of right that is being claimed, and the name of a resource. For instance, the statement "an entity can read the file C:\temp.txt" might be modeled as follows:

ClaimType=File
Right=Read
Resource=C:\temp.txt

In addition, a claim might also describe a property that an entity possesses. For instance, the statement "an entity's name is John Smith" might be modeled as follows:

ClaimType=Name
Right=PossessProperty
Resource=John Smith
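
For comparison, the same two claims can be expressed with the System.IdentityModel.Claims types that ship with .NET Framework 3.0. Note that only Rights.Identity and Rights.PossessProperty are built in, so the file claim type and Read right below are illustrative URIs, not framework constants.

using System;
using System.IdentityModel.Claims;   // ships with .NET Framework 3.0 (WCF)

public class ClaimExamples
{
    public static void Main()
    {
        // "An entity's name is John Smith." PossessProperty is one of the
        // two built-in rights (the other is Rights.Identity).
        Claim name = new Claim(ClaimTypes.Name, "John Smith", Rights.PossessProperty);

        // "An entity can read C:\temp.txt." The file claim type and Read
        // right here are illustrative URIs, not built-in constants.
        Claim fileRead = new Claim(
            "http://example.org/claims/file",     // claim type (assumed)
            @"C:\temp.txt",                       // resource
            "http://example.org/rights/read");    // right (assumed)

        Console.WriteLine(name.ClaimType + " -> " + name.Resource);
        Console.WriteLine(fileRead.ClaimType + " -> " + fileRead.Resource);
    }
}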

The implications are that, in federated scenarios, a digital identity becomes analogous to a driver's license, as follows:

  • Identity provider: Department of Motor Vehicles (DMV)
  • Claim: Ability to drive
  • Relying party: Rental-car agency that accepts the claim made by the DMV that the bearer can drive a car

Suggested Reading

The following articles provide more information on the topics in this chapter.

References on Service Orientation and Messaging

  1. Service Orientation and Its Role in Your Connected Systems Strategy (Mike Burner)
  2. Service-Oriented Architecture: Considerations for Agile Systems (Lawrence Wilkes and Richard Veryard)
  3. Principles of Service Design: Service Patterns and Anti-Patterns (John Evdemon)
  4. Razorbills: What and How of Service Consumption (Maarten Mullender)
  5. Principles of Service Design: Service Versioning (John Evdemon)
  6. Dealing with the Melted-Cheese Effect: Contracts (Maarten Mullender)
  7. An Introduction to the Web Services Architecture and Its Specifications (Luis Felipe Cabrera, Christopher Kurt, Don Box)
  8. Messaging Patterns in Service-Oriented Architecture, Part 1 and Part 2 (Soumen Chatterjee)
  9. Metropolis: Envisioning the Service-Oriented Enterprise (Pat Helland)

References on Workflow and Process

  1. Build Applications on a Workflow Platform (David Green)
  2. Developer Introduction to Workflows for Windows SharePoint Services 3.0 and SharePoint Server 2007 (Andrew May)
  3. Of People, Processes, and Programs (Barry Briggs)
  4. Simplify Development with the Declarative Model of Windows Workflow Foundation (Don Box, Dharma Shukla)
  5. Understanding BPM Servers (David Chappell)

References on Integrated User Experience

  1. Choosing the Right Presentation Layer Architecture (David Hill)
  2. Metadata-Driven User Interfaces (John deVadoss)

References on Federated Data

  1. Data on the Outside vs. Data on the Inside (Pat Helland)
  2. SOA Challenges: Entity Aggregation (Ramkumar Kothandaraman)

References on Identity and Access

  1. Identity and Access Management (Fred Chong)
  2. Microsoft Identity and Access Management Series
  3. Microsoft's Vision for an Identity Metasystem (Kim Cameron)
  4. Web Service Security Patterns
  5. The Laws of Identity (Kim Cameron)