Architecting Industry Standards for Service Orientation
Summary: Many messages that flow among businesses are formatted to specific industry standards. As we move toward Web services and service orientation, these standards-based payloads can add value to the messaging infrastructure in a flexible, interoperable, and composable way. (19 printed pages)
On any given day, you'll find analysts and engineers from different companies together in some location discussing tag names, data formats, element groupings, and business messages in an attempt to define industry standards. And in this process, they determine the fate of enterprise message description for any particular industry or group. These consortia gain general technical agreement on data types, messaging, or data models, yet the industry can still struggle with how to relate these efforts to a cohesive service oriented architecture (SOA) where the industry standard comprises the data on the service orientation backbone.
There are a few great examples out there, and I hope more on a daily basis. But for the most part, industry data standards are either used in haphazard non-standard architectures, one-off pilots, or simply to preserve a legacy infrastructure. And the technology marches on. Platform technology vendors like Microsoft have been developing advanced Web services specifications for better platform and message interoperability. These standards are aimed at furthering the already breakthrough work in loosely coupled messaging to incorporate aspects that were often taken for granted in a tightly coupled object-based architecture.
Because volumes have been and will be written about service-orientation, I'm not going to go into depth here. (See http://msdn.microsoft.com/architecture/overview/series/ for a good primer on service orientation and its associated principles.) However, there are a few salient points to emphasize about service-orientation as it pertains to industry standards. The basic concept of service-orientation describes methods to expose and consume discrete operations or services that can be composed into a larger architecture for an enterprise application. Additional service-oriented building blocks you have to architect include security, routing, eventing, and other core mechanisms that are critical in enterprise application building.
There are four core tenets of service-orientation to keep in mind when looking at industry standards-based implementations:
- Service boundaries are explicit. Services are discrete units of operation. They perform a task and that task alone. Each service should be constructed to stand alone with no external dependencies on other services. This is what is meant by loose coupling.
- Services are autonomous. Services are not to be confused with application workflow or business process. Those might incorporate multiple services, but a service operates independently from other services. Standards like BPEL4WS are being developed to orchestrate processes made of services. Products like Microsoft BizTalk Server 2004 also provide orchestration that may leverage many services and be a service itself. But each service should still be built with reuse in mind, and therefore be autonomous.
- Services share schema and contract, not class. This is a key message for industry standards. Many standards try to act like object technologies, mandating an "object's" time to live or describing when applications should delete or cancel or act in a certain way. Services behave differently, sharing only a schema and contract: once a schema instance is sent, the service's internal implementation decides what to do with the data and returns whatever each contract calls for. Service consumers and providers act independently of each other's internal implementations.
- Service compatibility is determined based on policy. This grossly overlooked enforcement mechanism is one that most standards have not yet tried to tackle, even though so many wrangle with version compatibility, security considerations, and message granularity, incorporating quasi-policy structures into the standard itself and essentially creating non-standard policy enforcement. Instead, all of these elements can be shared by way of WS-Policy statements.
These guiding principles will be important when considering mechanisms for implementing messaging standards into a service-oriented implementation. It is entirely possible to build industry standards into your SOA, but to do so you must think about a few architectural elements:
- Mechanisms to construct the message payload.
- Interoperability levels found in standards.
- Using policy to handle version compatibility and data granularity.
- Using SOAP headers to embed standards-based processing elements.
Make no mistake, service orientation is moving forward. The core tenets are rooted in sound principles inherited and extended from successful architectures. Yet even those architectures struggled with the proper way to embed messaging standards that took advantage of the lightweight potential of service messaging. Service orientation, as a corollary to the object technologies of the past, is natively message- and document-based, providing broad support for a variety of protocols and message formats. It also takes advantage of the speed and efficiency of internal binary and object technologies to provide implementation and performance scalability.
Starting at the coarsest layer we have the basic message. Forget for a moment all the infrastructure elements that can be included in a basic message. We will get to those later, but focusing in on the business message, what we find is a myriad of constructions and assumptions about message formation. Experience in industry standards has borne out four key standards architectures:
- Large and Bulky. This is a single schema construction. Often this single schema is not given a namespace, making it largely incompatible with composable service-based architectures. Composable architectures are those that are assembled from common architectural pieces and are supported by development tools and platforms. Even worse, with this method there are often issues using this large schema with parsing technology, as it can potentially have thousands of optional elements, types, aggregates, and collections of data. Many individual messages in this one schema probably do not relate to each other. Those that do use common elements may do so by reference, making versioning difficult, not to mention making code generation and parsing virtually impossible because of the sheer number of elements and the size of the XML schema or the resultant object classes.
- Service Message Grouping. This technique is similar in construction, and possibly size, to "Large and Bulky"; however, the grouping of service messages into functional groups is much more logical. These groups may be based on a business grouping or, in some cases, on a functional grouping like Updating, New, or Cancel. However, individual messages can have several uses based on various workflows or scenarios, so this grouping might be artificially imposed and cause issues when implementing across message groupings. It also may impose artificial boundaries on implementations that would prefer another way to segment services, such as a "Payment" grouping that contains features needed in a "Settlement" grouping.
- Message Granularity. This matches perhaps most closely with service orientation, having message schemas broken down into single messages. Based on a naming taxonomy, these can be linked into request/response pairs or just left as single messages. The downside is that usually something else is needed to sequence messages in a business workflow, like Microsoft BizTalk Server 2004. Also, because each pair or single message would be in its own namespace, there is conceivably a large number of namespaces whose versions must be managed. For some large standards, which have thousands of messages, that could be quite an undertaking.
- Bits and Pieces. Most standards have some semblance of this concept of a data dictionary. But most don't know how to use it to build composable service messages in real time, or under a more flexible discovery-and-consumption methodology. In many cases these data dictionaries are internally referenced parts for use by "Large and Bulky" architectures, but are not exposed for "sanctioned" use externally. Many observers and passive users of standards, however, use this industry standard data dictionary repository to build a unified custom enterprise data dictionary, so we address formal messaging using these elements below.
These four categories all have merits and value, or at the least quite a lot of existing content and infrastructure to build on. One thing is clear across all of them: they represent an enormous amount of work from committees and working groups to define data pertinent to businesses.
Constructing a standards-based message does introduce complexity into an already confusing services and messaging environment. But such complexity need not be the bottleneck to reaching service orientation using industry standards. In considering standards-based message composition, it is also worth noting some best practices or patterns for defining message interfaces or signatures. One such practice is to use a comprehensive data dictionary as the root of complex schemas.
Figure 1. Standards composition levels
This example shows what is generally considered poor design for services. It is unclear where this service would be used and contains no context for the main data of "customer" in the definition.
Web Service Signature 1: AddCustomer

<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
             targetNamespace="urn:MyCompany.MyServices"
             xmlns:svc="urn:MyCompany.MyServices"
             xmlns:MyServices="urn:MyCompany.MyServices"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">
  <import namespace="urn:MyCompany.MyServices"
          location="http://MyCompany.com/MyServices.AddCustomer.xsd" />
  <types>
    <!-- type information removed -->
  </types>
  <message name="AddCustomerRequest">
    <part name="parameters" element="svc:AddCustomerRq" />
  </message>
  <message name="AddCustomerResponse">
    <part name="parameters" element="svc:AddCustomerRs" />
  </message>
  <portType name="AddServiceSoap">
    <operation name="AddCustomer">
      <input message="MyServices:AddCustomerRequest" />
      <output message="MyServices:AddCustomerResponse" />
    </operation>
  </portType>
  <binding name="AddService" type="MyServices:AddServiceSoap">
    <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document" />
    <operation name="AddCustomer">
      <soap:operation soapAction="AddCustomer" style="document" />
      <input>
        <soap:body use="literal" />
      </input>
      <output>
        <soap:body use="literal" />
      </output>
    </operation>
  </binding>
</definitions>
Code Sample 1. A poorly factored service
In this case the input and output types would be XML documents defined in WSDL type declarations. This represents a poorly named message to use with industry standards. Finding context requires an inference between the attached schema and the method signature. Moreover, the schema document requires inspection before taking action. Even with that inspection, it may still not be clear what action would likely be taken on the data. The interface name does not give the service consumer enough information about its possible uses and could leave data or a transaction in an incomplete state at implementation time. This is commonly referred to as CRUD (Create, Read, Update, and Delete) factoring and is considered a poor design technique for a service architecture. (See CRUD, Only When You Can Afford It for a deeper discussion of CRUD.) The coarse naming convention also makes the type or purpose of the XML document that would be inserted as a parameter ambiguous.
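To make the ambiguity concrete, consider what a consumer of the generic AddCustomer operation is reduced to: inspecting the payload and guessing intent from whatever elements happen to be present. The following is a minimal sketch (Python used purely for illustration; the instance document and element names are hypothetical):

```python
import xml.etree.ElementTree as ET

# A hypothetical instance document sent to the generic AddCustomer operation.
# The consumer cannot tell from the signature alone what "add" means here.
payload = """<AddCustomerRq xmlns="urn:MyCompany.MyServices">
  <Customer>
    <FirstName>Ada</FirstName>
    <LastName>Lovelace</LastName>
  </Customer>
  <Mortgage>
    <AccountNumber>12345</AccountNumber>
  </Mortgage>
</AddCustomerRq>"""

root = ET.fromstring(payload)

# All the consumer can do is inspect the root element and guess intent
# from whichever child elements are present in this particular instance.
ns = root.tag.split('}')[0].lstrip('{')
children = [child.tag.split('}')[-1] for child in root]
print(ns)        # urn:MyCompany.MyServices
print(children)  # ['Customer', 'Mortgage']
```

Nothing in the interface says whether the customer is being added to the mortgage, to a marketing list, or to something else entirely; that knowledge lives only in the service internals.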
The following example builds on the concept of the first, but uses a more granular message definition to help the service consumer understand what will happen with the service and data when invoked.
Web Service Signature 2: AddSecondaryBorrowerToMortgage

<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
             targetNamespace="urn:MyCompany.MyServices"
             xmlns:svc="urn:MyCompany.MyServices"
             xmlns:MyServices="urn:MyCompany.MyServices"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">
  <import namespace="urn:MyCompany.MyServices"
          location="http://MyCompany.com/MyServices/AddBorrower/Payload.xsd" />
  <types>
    <!-- type information removed -->
  </types>
  <message name="AddSecondaryBorrowerToMortgageRequest">
    <part name="parameters" element="svc:SecondaryBorrowerMortgageRq" />
  </message>
  <message name="AddSecondaryBorrowerToMortgageResponse">
    <part name="parameters" element="svc:SecondaryBorrowerMortgageRs" />
  </message>
  <portType name="AddServicesSoap">
    <operation name="AddSecondaryBorrowerToMortgage">
      <input message="MyServices:AddSecondaryBorrowerToMortgageRequest" />
      <output message="MyServices:AddSecondaryBorrowerToMortgageResponse" />
    </operation>
  </portType>
  <binding name="AddService" type="MyServices:AddServicesSoap">
    <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document" />
    <operation name="AddSecondaryBorrowerToMortgage">
      <soap:operation soapAction="AddSecondaryBorrowerToMortgage" style="document" />
      <input>
        <soap:body use="literal" />
      </input>
      <output>
        <soap:body use="literal" />
      </output>
    </operation>
  </binding>
</definitions>
Code Sample 2. An accurately factored message
This is a little more satisfactory as it more clearly defines the purpose. The method signature, AddSecondaryBorrowerToMortgage, adds detail to the instance document that would be part of the message. Additional interface naming construction could give even more guidance on embedding the payload.
In both cases the input and output parameters are XML documents that are easily serialized and parsed with available XML tooling. The output need not always be an XML document; in some simple message scenarios, a simple acknowledgement through HTTP may be enough. But more often than not, the return will be some form of XML document, such as a response pairing or another status acknowledgement containing response data or status codes.
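As a sketch of that pattern (Python for illustration; the response document, element names, and status codes are all hypothetical), a consumer might read the status acknowledgement in a response before touching any business data:

```python
import xml.etree.ElementTree as ET

# Hypothetical response payload for the AddSecondaryBorrowerToMortgage service.
response = """<SecondaryBorrowerMortgageRs xmlns="urn:MyCompany.MyServices">
  <Status>
    <StatusCode>0</StatusCode>
    <StatusDesc>Borrower added</StatusDesc>
  </Status>
</SecondaryBorrowerMortgageRs>"""

NS = {"svc": "urn:MyCompany.MyServices"}
root = ET.fromstring(response)

# Read the status acknowledgement first; a non-zero code would signal
# a failed request before any response data is processed.
code = int(root.findtext("svc:Status/svc:StatusCode", namespaces=NS))
desc = root.findtext("svc:Status/svc:StatusDesc", namespaces=NS)
print(code, desc)  # 0 Borrower added
```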
Ideally the method signature and the XML data should work in concert, the one complementing the other. This doesn't mean that they should be identically named; it means that the data implied by the action should be consistent. Also remember the tenets of service orientation - that services are autonomous. In the first case, AddCustomer has no boundaries around it. To what is the customer being added? To answer that question, you would have to parse the XML instance, but even then context is not guaranteed. What is more likely in that scenario is that the XML instance will be bloated in order to contain all the data needed to give context to the method. In some cases the XML document itself would contain action instructions outside the service definition or internal code implementation. Or worse, the interface would need pre- and post-intervention to process the message, thus breaking the SOA tenet of autonomy.
The second example allows the message to contain only the data that it needs to perform the action. In this case, just the details about the secondary borrower and which mortgage to add that person to would be sent as part of the message. It's not really important how the port types are defined, as interrogating the service will yield the method signature as the key interface. Port and operation configurations are largely enterprise-specific for transaction systems and workflow processors. They also are closely tied to the messaging infrastructure, which must be flexible enough to span both industry standards messaging and company-specific internal message processing.
Now that you have your message interface named properly, you can start to construct your standard messages as the payload. We will get to headers and processing instructions later, but for now let us look at some ways to construct a message starting with the basic building blocks of the XML document, the data dictionary.
Most data dictionaries consist of a namespace that contains type definitions of varying size, shape, and function. They are often not inclusive of elements that could form an instance document alone. There are usually two main kinds of definitions: simple types and complex types. The simple types use base XML data types and restrict them for a business purpose. For those familiar with standards that cover multiple lines of business, it's no surprise that gaining consensus around those properties can be tedious. But it is possible to construct the simple types with the right scope, so that new types are added only for needs not already filled by an existing type. Complex types are essentially entities assembled from simple types. Sometimes a standard is structured so that one complex type can reference another (for example, a savings account extending a base account type), but for a simple data dictionary this should be discouraged. By using entity extension and reference you get closer to an object model or monolithic schema, as already suggested. That exercise may produce something closer to an internal database or application object model, which is best left to each enterprise or application as a competitive advantage, not to a standards effort.
Below are common things you might find as simple and complex types. You see the AcctAlias_Type here as a character element restricted to 32 characters. That same element is used as one element in a complex type called the AcctProperties_Type, which also contains other simple types. By using this sort of construction, any type of building block can be built in the data dictionary, creating a large vat of usable and composable entities that can then be integrated with some larger message sets, as we will see.
<xsd:simpleType name="AcctAlias_Type">
  <xsd:restriction base="NC">
    <xsd:maxLength value="32"/>
  </xsd:restriction>
</xsd:simpleType>

<xsd:complexType name="AcctProperties_Type">
  <xsd:sequence>
    <xsd:element name="AcctAlias" type="AcctAlias_Type"/>
    <xsd:element name="AcctNum" type="AcctNum_Type"/>
    <xsd:element name="AcctBal" type="AcctBal_Type" minOccurs="0"/>
    <xsd:element name="AcctOpenDt" type="AcctOpenDt_Type" minOccurs="0"/>
    <xsd:element name="Desc" type="Desc_Type" minOccurs="0"/>
  </xsd:sequence>
</xsd:complexType>
Code Sample 3. Data dictionary segment
This data dictionary construct can be refined even further to provide naming consistent with a naming guideline that identifies the standard being applied. This also allows identifying the standard version or showing other metadata. For instance, AcctProperties_Type could become StandardX_AcctProperties_Type_15 to reflect a standard and version number for that entity. This is an interesting way to build a message that references several versions for compatibility, or that uses similar entities from different versions for backward/forward compatibility. This could also be done through attribution on the entity, but that could cause some namespace collision when doing the composition mentioned above.
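Such a naming convention is easiest to keep consistent when tooling generates and parses the decorated names rather than leaving it to hand-editing. A small illustrative sketch (the helper functions and the Standard_Entity_Version pattern are hypothetical, following the StandardX_AcctProperties_Type_15 example above):

```python
import re

def versioned_name(standard, entity, version):
    """Compose a decorated type name: <Standard>_<Entity>_<Version>."""
    return f"{standard}_{entity}_{version}"

def parse_versioned_name(name):
    """Split a decorated type name back into (standard, entity, version)."""
    m = re.fullmatch(r"([A-Za-z0-9]+)_(.+)_(\d+)", name)
    if not m:
        raise ValueError(f"not a versioned type name: {name}")
    return m.group(1), m.group(2), int(m.group(3))

name = versioned_name("StandardX", "AcctProperties_Type", 15)
print(name)                        # StandardX_AcctProperties_Type_15
print(parse_versioned_name(name))  # ('StandardX', 'AcctProperties_Type', 15)
```

Keeping the convention mechanical means a message can mix entities from several standard versions while tools can still recover the standard and version for each one.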
After this exercise, you have a complete (and probably fairly large) schema that is composed of many data types, both complex and simple. From those types, you can form almost any permutation of "object" or super-complex entity. This might best be described with an automobile metaphor. The data dictionary needed to construct an automobile would consist of simple types like tire width, wheel diameter, and tread type, which could then be used to assemble a complex type called "Wheel." From there, another schema could be assembled called "Car," which would use the complex type for the wheel and relate it to those for the chassis, engine block, gas tank, and so on. In so doing you would build a car. You would also identify the make and model of the car through data dictionary elements, but relating those individual parts proximally and hierarchically could also determine just what make/model/year that car was. The same base data dictionary could just as easily construct a "Tractor" by arranging the elements, and the base types, in the way a tractor requires. The power that a data dictionary allows for composing the final product also gives the flexibility of defining customized cars and aftermarket modifications. Industry message exchanges are much the same, in that one company may have a business need calling for custom data elements or relationships, and yet still want those based on standard data types that are widely understood. In our analogy, the car manufacturer could communicate with a vendor to buy wheels, engines, or other subassemblies defined by complex data types.
From that comprehensive schema, you will be able to create a larger message instance. This could be a request/response message pair for asynchronous messaging, or it could simply be a one-way message. Regardless, these messages are the payload to the already well defined and described services. In these messages you define nesting and object-like hierarchies that relate complex and simple types to a business message context. Coupled with the "activity" that is outlined in the Web service interface definition, this provides the "what" and the "how" that is needed to do that unit of work. The "what" is the method signature; in the case above this is the AddSecondaryBorrowerToMortgage. This tells the consumer of the service what is going to happen, and hints at what might be in the payload of the message. The "how" is the payload itself. In our earlier case, this would have data about the borrower and identify which mortgage needed the addition. At the least it needs to provide the base data needed to process the request, but may contain more as the flexibility of the message construction and the base data on which it is built supports those business application customizations.
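To make this concrete, here is an illustrative sketch (Python; the element names follow the payload and data dictionary samples in this article, and the helper functions are hypothetical) of assembling such a request payload from dictionary-style building blocks:

```python
import xml.etree.ElementTree as ET

PAYLOAD_NS = "http://MyXMLObject.Payload"
ET.register_namespace("", PAYLOAD_NS)

def borrower(cust_id, first, last):
    """Assemble a Borrower element from data-dictionary-style fields."""
    el = ET.Element(f"{{{PAYLOAD_NS}}}Borrower")
    ET.SubElement(el, f"{{{PAYLOAD_NS}}}CustID").text = cust_id
    ET.SubElement(el, f"{{{PAYLOAD_NS}}}FirstName").text = first
    ET.SubElement(el, f"{{{PAYLOAD_NS}}}LastName").text = last
    return el

def mortgage(number, kind):
    """Assemble a Mortgage element from data-dictionary-style fields."""
    el = ET.Element(f"{{{PAYLOAD_NS}}}Mortgage")
    ET.SubElement(el, f"{{{PAYLOAD_NS}}}AccountNumber").text = number
    ET.SubElement(el, f"{{{PAYLOAD_NS}}}AccountType").text = kind
    return el

# Compose the request payload for AddSecondaryBorrowerToMortgage from parts:
# only the borrower's details and the identifying mortgage cross the wire.
rq = ET.Element(f"{{{PAYLOAD_NS}}}SecondaryBorrowerMortgageRq")
rq.append(borrower("C42", "Ada", "Lovelace"))
rq.append(mortgage("1001", "Mortgage"))

print(ET.tostring(rq, encoding="unicode"))
```

The same building blocks could just as readily be composed into a different request, which is exactly the reuse the data dictionary is meant to enable.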
The following simple example is a mechanism for defining the payload for the above method signature. Obviously it is simplified. What is important is that the elements listed below inherit from base types. The inherited elements are aptly named to designate a function and to align with the method signature.
<?xml version="1.0" encoding="utf-16"?>
<xs:schema targetNamespace="http://MyXMLObject.Payload"
           xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns="http://MyXMLObject.Payload"
           xmlns:dd="http://MyXMLObject.DataDictionary">
  <xs:import namespace="http://MyXMLObject.DataDictionary"
             schemaLocation=".\datadictionary.xsd"/>
  <xs:element name="Borrower">
    <xs:complexType mixed="false">
      <xs:complexContent mixed="false">
        <xs:extension base="dd:CustomerType"/>
      </xs:complexContent>
    </xs:complexType>
  </xs:element>
  <xs:element name="Mortgage">
    <xs:complexType mixed="false">
      <xs:complexContent mixed="false">
        <xs:extension base="dd:AccountType"/>
      </xs:complexContent>
    </xs:complexType>
  </xs:element>
</xs:schema>
Code Sample 4. Payload schema inheriting from base types
Where the base types are defined in the following way:
<?xml version="1.0" encoding="utf-16"?>
<xs:schema targetNamespace="http://MyXMLObject.DataDictionary"
           xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns="http://MyXMLObject.DataDictionary">
  <xs:complexType name="CustomerType">
    <xs:sequence>
      <xs:element name="CustID" type="xs:ID"/>
      <xs:element name="FirstName" type="xs:string"/>
      <xs:element name="LastName" type="xs:string"/>
      <xs:element name="PhoneNumber" type="xs:string"/>
      <xs:element name="Email" type="xs:string"/>
      <xs:element name="Age" type="xs:integer"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="AccountType">
    <xs:sequence>
      <xs:element name="AccountNumber" type="xs:double"/>
      <xs:element name="AccountType" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
Code Sample 5. Base type definitions in the data dictionary
One might argue that with this approach the Mortgage element should be parent to the Borrower, or the converse. First, it is up to the schema architect to implement those rules, supported by a flexible and composable data dictionary. Second, the Web service method signature implies that those rules are embedded in the application code; this is indeed a code concern in dealing with the schema. If the method signature and description say this will add the borrower to the mortgage, and ask for an industry standards-composed schema containing those two main elements, it is logical to assume that this will occur in the code with the same degree of confidence as with any other method. By uncoupling these two entities in this schema, the same instance could be used for other methods where the same data are needed. This multi-use property can optimize standards development and a messaging infrastructure.
There is a key concept to capture in these examples: do not include extraneous data elements in the target schema for the service. If this schema had included multiple Mortgage elements, it would be unclear to the consumer of the schema which one to use or which one to add the secondary borrower to. The schema included with the service should match the service intent and action and allow only the necessary data to cross the wire for that transaction. This streamlines and simplifies construction, and it also allows faster messaging.
There are many valid ways to include the global type elements in the Payload schema. The way depicted above references them through an import statement that is supported by many parsing tools, including BizTalk Server 2004, which would most likely be building the resultant message or involved in the application workflow. The benefit of using the <xs:import> statement is that it references a data dictionary for global types. If you were to include these as complex types (or simple types) directly in the same Payload schema, then those would have to be copy/pasted into other schemas, or other schemas would reference the Payload schema to get access to these types. Both practices are poor schema management models and the second creates unnecessary bloat in the referencing schema. Unfortunately this is a common practice in industry schemas. By separating the two, the dependency between the payload and the type definition is kept loosely coupled and becomes an implementation concern of the payload instance either in the standards process or as part of a custom implementation.
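The mechanics of that loose coupling are easy to see by walking the payload schema and resolving its import declarations. A minimal sketch (Python for illustration; a real schema processor such as BizTalk Server would load and compile the referenced schemas rather than just list them):

```python
import xml.etree.ElementTree as ET

XS = "http://www.w3.org/2001/XMLSchema"

# An abbreviated version of the payload schema from Code Sample 4.
payload_schema = """<?xml version="1.0"?>
<xs:schema targetNamespace="http://MyXMLObject.Payload"
           xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:dd="http://MyXMLObject.DataDictionary">
  <xs:import namespace="http://MyXMLObject.DataDictionary"
             schemaLocation=".\\datadictionary.xsd"/>
  <xs:element name="Borrower" type="dd:CustomerType"/>
</xs:schema>"""

root = ET.fromstring(payload_schema)

# Collect every imported namespace and where its definitions live; a schema
# processor would fetch each location to resolve the dd:* type references.
imports = {
    imp.get("namespace"): imp.get("schemaLocation")
    for imp in root.findall(f"{{{XS}}}import")
}
print(imports)
```

Because the dependency is expressed only as a namespace plus a resolvable location, the data dictionary can be revised and redeployed without copying type definitions into every payload schema that uses them.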
There are probably a few readers who believe that I have only addressed the first basics of standard schema construction, and they are right. It is a topic for a much larger paper that would also cover other mechanisms for building complex standards schemas, including the architecture for extension, redefinition, and custom extensions. Needless to say, that is a complex topic that deserves a dedicated space. It does not nullify the fact that a comprehensive data dictionary, as described before, should be the root of that complex schema.
At this point you have all the needed building blocks to factor a Web service with the appropriately named service method descriptor. Using a rich data dictionary of types to assemble standards-based messages as part of the Web service method signature simplifies creating the correctly architected payload in XML. The next step is to find out just how the Web service wrapper can help in furthering proper standards usage.
Recent developments in advanced Web service technology have led, most importantly, to using the Simple Object Access Protocol (SOAP) header for processing statements that have bearing on the payload and on the behavior of the Web service being described. The header also contains settings and the types of algorithms for elements such as WS-Security.
We can also use elements from other Web service specifications in the header to call for particular types of processing for industry standards. Using these specifications in connection with industry standards can maintain version control as part of the Web service, supply schema namespace locations, and apply "should understand" attributes to the Web service payload as a whole.
As the current vertical standards meet the steamroller of horizontal advanced Web services standards, there will be ongoing development of "profiles" for standards in which certain behaviors of Web services are recommended or mandated as part of the standards implementation. This work is underway with some standards bodies including those in healthcare, financial services, government, retail and manufacturing. Until that time and even after, it is possible to mandate behaviors at the developed service level using the Web Services Policy Framework (WS-Policy). (See Web Services Policy Framework (WS-Policy).)
The core goal of WS-Policy as stated in the specification abstract is to "describe the policies of a Web service," and it can be used to "describe a broad range of service requirements and capabilities" of a Web service. What that means to industry data standards is fairly widely interpreted. Certainly the payload of a Web service is something that could be considered to be a capability of a Web service. For the sake of drawing a line in the sand, I would like to explore a few critical policy statements that industry standards–based Web services should investigate.
Web service security and integrity
Security is a fairly obvious construct to mandate in an industry standard. But the varied requirements of applications demand that security be defined by policy, depending largely on the application space or the business need. For instance, simple e-commerce scenarios pass a username and password as credentials with requests for information; others use smart cards or biometrics; a transaction at an ATM will use a combination of a personal ID number (PIN) and magnetic stripe data to secure the communication. To structure the correct message-to-security-policy correlation, you must relate a service binding to a policy statement about the security required for that service. By doing this at an industry standard level you arrive at an industry standard profile for secure service definition. By doing this at a service level you ensure adequate security controls for each individual service.
Each service creates a policy assertion for the security requirements of that service. These policy assertions are easily parsed in and out of .NET objects with the Web Services Enhancements toolkit. As part of a standards-based service, you should make only one security assertion for each service. One of the key benefits of the Web services infrastructure is the inclusion of a security infrastructure, which negates the need for that infrastructure to exist in the payload of the message itself. In the absence of a standardized security infrastructure, many industry standards previously included username and password, and perhaps other encryption or security scheme information, in the payload of their messages. Those standards are now in the process of moving that information outside the message and ensuring compatibility of those elements with the Web services specifications, such as WS-Security and WS-Addressing.
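As an illustration of moving credentials out of the payload and into the header, the following sketch builds a wsse:Security header carrying a UsernameToken (Python for illustration; the namespace shown is the OASIS WS-Security 1.0 secext namespace, and a production system would rely on the platform's WS-Security support rather than hand-built XML):

```python
import xml.etree.ElementTree as ET

# OASIS WS-Security 1.0 secext namespace (the wsse: prefix in most toolkits).
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def username_token_header(username, password):
    """Build a wsse:Security header carrying a UsernameToken, so that
    credentials ride in the SOAP header instead of the business payload."""
    security = ET.Element(f"{{{WSSE}}}Security")
    token = ET.SubElement(security, f"{{{WSSE}}}UsernameToken")
    ET.SubElement(token, f"{{{WSSE}}}Username").text = username
    ET.SubElement(token, f"{{{WSSE}}}Password").text = password
    return security

header = username_token_header("svc-user", "s3cret")
user = header.find(f"{{{WSSE}}}UsernameToken/{{{WSSE}}}Username").text
print(user)  # svc-user
```

The business payload stays purely business data, and the security scheme can change (to certificates, Kerberos tokens, and so on) without touching the industry standard schema at all.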
Message versioning is just as important to the functionality of the service as the payload content itself. But it is inefficient to have to open the payload and parse top-level elements simply to discover that the message payload is not compatible with a desired implementation. Rather, it is more helpful to allow the consumer of the service to discover the mandated versions of document payloads, and also to compose the payload by referencing the available namespaces. This can be done by way of policy statements as well, using the <wsp:Policy> element and asserting custom attributes with the version of the payload needed as part of the service.
<wsp:Policy xmlns:foo="http://mynamespace.com/policyschema.xsd"
            xml:base="http://mynamespace.com/policies"
            wsu:Id="VERSION">
  <foo:Version InputVersion="1.4.5"
               URI="http://mynamespace.com/myservicename/schema.xsd" />
</wsp:Policy>
Code Sample 6. Service policy definition
When inspected, this policy statement attached to the service informs the service consumer which version of the specification or custom schema instance to use. The attribute in the example above also gives the location of the schema URI, so that a service consumer can interrogate that schema at development time. One notable value of WS-Policy files as part of a standards-based service infrastructure is that elements can be changed and redeployed as policies without affecting the service implementation or the consumers bound to that service. In this case, the schema version and the implementation could change, but the service interface remains the same. The consumer would see the version change as a new policy assertion (perhaps kept backward compatible through a policy alternative for a time) and be able to adapt their code before their implementation breaks. The same approach works for changes in security requirements. By creating a custom policy file, you can add a few attributes to the assertions, including expirations for versions or other data explaining how the service operates.
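The inspection step can be sketched in a few lines. The snippet below, a platform-neutral illustration rather than any toolkit's API, parses the policy from Code Sample 6 and pulls out the required payload version and the schema location; the URIs are the placeholder values from that sample.

```python
import xml.etree.ElementTree as ET

# "foo" matches the custom assertion namespace from Code Sample 6;
# the URIs here are placeholders from that example, not real endpoints.
FOO = "http://mynamespace.com/policyschema.xsd"

policy_xml = f"""
<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2002/12/policy"
            xmlns:foo="{FOO}">
  <foo:Version InputVersion="1.4.5"
               URI="http://mynamespace.com/myservicename/schema.xsd" />
</wsp:Policy>
"""

def payload_version(policy_doc: str) -> tuple[str, str]:
    """Extract the required payload version and its schema location."""
    root = ET.fromstring(policy_doc)
    version = root.find(f"{{{FOO}}}Version")
    return version.get("InputVersion"), version.get("URI")

ver, uri = payload_version(policy_xml)
# A consumer can now refuse to send a payload whose schema version does
# not match `ver`, before ever serializing the message body, and can
# fetch `uri` at development time to generate proxy types.
```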
By using policy statements, you get a flexible way to provide added information about the service and to mandate behavior for consumers interacting with it. These assertions, especially those about schema version, should match exactly the schema that is passed as the input parameter. The use of WS-Policy helps put more constraints around consuming the Web service. Most importantly, it helps lock down a standards-based infrastructure for using standards in a service-oriented environment.
Often I find myself locked in a "riveting" conversation with standards purists about the levels at which interoperability can occur with messages developed by standards groups. Remember my first classification of message types: those who hold the position that large and bulky is better are usually the ones who oppose an architecture that provides flexible and composable schemas. According to them, the immutable message and the coarse wrapping of that technology are the only way to reach interoperability.
There are two levels of interoperability. The first is the understanding of the message. This has already been discussed in a variety of composition mechanisms based on a rich data dictionary. This is then composed into payload schemas with structures and namespaces dictated by the provider of the services.
The second level of interoperability occurs at the service level using Web service standards. This level then has two key sublevels, the message transmission protocols level and the message content level.
Figure 2. Levels of interoperability in standards
Before going much further into interoperability methods: the source for interoperability at Microsoft is http://msdn.microsoft.com/architecture/, where you will find authoritative links and information on interoperating with key platforms. But beyond platform interoperability, messaging through standards using the methods above allows interoperability among platforms, applications, and vendors.
Platforms are the most common boundary to interoperate across. XML, SOAP, and WSDL, as the backbone of Web services, are supported in several dozen messaging stacks. Because the horizontal specifications are so widely adopted by platform vendors, each stack acts as a black box in processing the messages. They do this with varying degrees of fidelity, but can generally be relied on to behave consistently.
However, it is in messaging interoperability among application silos and software application vendors that most of the benefits of using industry standards are found. Because industry standards are domain-specific and business-specific, application vendors must usually choose to support them. And not only must they support the industry messaging standards, they must also support Web service usage.
Application software vendors and custom application developers can smooth interaction between these applications using Web services and industry standards, by following certain best practices. These apply to architects as they build and implement industry standards with Web services and service orientation in mind. They are provided here as a summary.
Support standards through message composition
- Compose granular messages. Use a data dictionary to build discrete and granular messages that leverage a namespace to align the data payload to the service and data.
- Avoid payload bloat. Include no more and no less information than the service instance requires. Using the flexibility of the data dictionary, the architect can create only the data that is needed to process the message.
- Create service-to-message correlation. As already shown, you should create a tight logical coupling between the message payload and the service method signature. The service definition and the data therein should align; by inspecting a service definition, a client should be able to infer the data that will be required or returned.
- Use strong naming techniques. In complex and simple types, as well as in schemas, use strong naming that carries the industry standard and version. This assists in multi-standards message composition as well as versioning compatibility.
- Use <import> of global types. Centralize global simple and complex types in a data dictionary, avoiding copy-and-paste duplication of standard definitions.
- Avoid schema bloat. Yes, there are many fancy things you can do with XML: schema tricks, architectures, and hierarchies. And there is probably a vast amount of data that you would like to support, or that is part of some (often legacy) industry standard. Some of it will be useful in your schema architecture, but don't overdo it. Include only what you need to process as part of that specific service. Also keep the hierarchy and schema architecture simple, as most of the time application internals will take it apart anyway. A schema is just data, which cannot act on itself, so making it as legible as possible to a service consumer is important. Remember that consumers of your service need not only to understand the schema, but also to be able to parse it with readily available technology. Sadly, some industry standard schemas are unimaginably difficult to parse in common development tools.
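The first two practices above, granular composition without payload bloat, can be sketched as follows. The namespace, service name, and field names are hypothetical; the point is that the composer emits exactly the fields the service signature needs and silently drops everything else the caller supplies.

```python
import xml.etree.ElementTree as ET

# Hypothetical data-dictionary namespace; in practice this would be the
# industry standard's published namespace for the versioned types.
DD = "http://mynamespace.com/datadictionary/1.4.5"
ET.register_namespace("dd", DD)

# The (hypothetical) GetAccountBalance service needs only these fields,
# so the composed payload carries nothing else -- no payload bloat.
REQUIRED_FIELDS = ("CustomerId", "AccountNumber")

def compose_message(values: dict[str, str]) -> bytes:
    """Compose a granular payload containing only the required fields."""
    root = ET.Element(f"{{{DD}}}GetAccountBalanceRequest")
    for field in REQUIRED_FIELDS:
        child = ET.SubElement(root, f"{{{DD}}}{field}")
        child.text = values[field]
    return ET.tostring(root)

msg = compose_message({"CustomerId": "C-100",
                       "AccountNumber": "A-200",
                       "MailingAddress": "ignored"})  # extra data dropped
```

Because every element is qualified with the data-dictionary namespace, the payload stays aligned to the service and to the versioned types it was composed from.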
Support standards through infrastructure
- Support industry standards. Although this seems like a fairly pedantic observation, many software vendors and developers do not instinctively think to support or include industry messaging standards in their products. Often this support is a forcing function brought on by integration with other solutions. By factoring it into the project and development early, you can not only save cost down the road, but also provide leadership in shaping the messaging infrastructure your product performs in.
- Use WS-Policy statements to enforce compatibility. Use the WS-Policy Framework to enforce versioning and namespace compatibility for the exposed service. This can also assist in multi-standards message creation.
- Support the XML Schema discovery and Web Service Proxy Model. There are many ways to code parameters for a Web service. The most flexible way to allow the standards to be seen through discovery and in schema format is to use a serialized schema namespace object and Web service proxy as the input and return parameters of your Web service method. Also, by providing the URI for the schema in your policy file, you give added help to a developer (on any platform) parsing that schema. It is not good practice to use a simple SOAP document input parameter, because that does not allow the developer consuming the service to interrogate the schema and create proxy classes for the elements.
- Follow interoperability guidelines for services. Support mainstream Web services stacks, industry standards, schema discovery and Web services proxies.
- Support a mainstream Web services stack. Messaging is a key component of enterprise architecture. Not only do you need to be able to send and receive messages; in many cases this must be done quickly and reliably. This is not an area in which to eke out a few pennies of savings with some freeware tool you find online. By building on a mainstream platform, you assure application and platform interoperability.
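One infrastructure practice worth illustrating is version enforcement at the service boundary. The sketch below, an assumption-laden illustration rather than any stack's built-in behavior, reads only the root element's namespace of an incoming payload and compares it with the version the service's policy mandates, so an incompatible message can be rejected without parsing the whole payload.

```python
import io
import xml.etree.ElementTree as ET

# The expected namespace would come from the service's WS-Policy version
# assertion; it is hard-coded here for illustration.
EXPECTED_NS = "http://mynamespace.com/myservicename/1.4.5"

def namespace_of(payload: str) -> str:
    """Read only the root element's namespace, not the whole payload."""
    # iterparse yields the root's 'start' event before its children are
    # processed, so we can stop after the very first event.
    for _, elem in ET.iterparse(io.StringIO(payload), events=("start",)):
        tag = elem.tag
        return tag[1:].split("}", 1)[0] if tag.startswith("{") else ""
    return ""

def accepts(payload: str) -> bool:
    """Gate an incoming message on its declared payload version."""
    return namespace_of(payload) == EXPECTED_NS
```

A gateway or dispatcher would call `accepts` before handing the message to the implementation, returning a policy-driven fault for mismatched versions.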
Using these methods and the practices defined in the MSDN Interoperability Center, you will successfully be able to transmit messages from one platform and application to the next.
By uniquely identifying the standard using this naming convention (or a similar construct), you gain the ability to certify messages as "compliant" with a standard data dictionary. These certifications would be attached as policy statements for each segment or standard included in the desired schema. Later, these certifications could be used at design/discovery time both to inspect a service and to determine its usage based on these compliance elements.
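Such a certification policy might be generated mechanically, one assertion per standard segment in the schema. The sketch below is purely hypothetical: the `cert` namespace and the `Compliant` assertion element are invented for illustration, since no agreed compliance-assertion vocabulary exists.

```python
import xml.etree.ElementTree as ET

WSP = "http://schemas.xmlsoap.org/ws/2002/12/policy"
# Hypothetical namespace for compliance assertions; a real profile would
# publish an agreed URI and element vocabulary for these certifications.
CERT = "http://mynamespace.com/compliance"

def compliance_policy(segments: dict[str, str]) -> bytes:
    """Emit one compliance assertion per standard segment in the schema."""
    ET.register_namespace("wsp", WSP)
    ET.register_namespace("cert", CERT)
    policy = ET.Element(f"{{{WSP}}}Policy")
    for standard, version in segments.items():
        ET.SubElement(policy, f"{{{CERT}}}Compliant",
                      {"Standard": standard, "Version": version})
    return ET.tostring(policy)

# Certify a schema that draws on two standards' data dictionaries.
policy_bytes = compliance_policy({"Foo": "12.1.104", "Bar": "1.4"})
```

At design time, a tool could read these assertions back to decide whether a discovered service speaks the segments a consumer needs.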
However, there is still much work to do. One area of interest is creating industry standards amalgams. No, I'm not referring to forming new consortia or working groups. What I'm referring to is the ability to take types of many different standards and cobble them together into a message that can service several application silos. This is increasingly important in diversified corporations that share common data or object models and wish to streamline their operations.
One way that this can be done is to create standards types as strongly named with a unique identifier attached to each one. This would allow the standard name and the version to be tracked through the key value pair found in the name and GUID. Using one of the examples above, this would be accomplished as follows:
<?xml version="1.0" encoding="utf-16"?>
<xs:schema targetNamespace="http://MyXMLObject.DataDictionary"
           xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns="http://MyXMLObject.Standard_Foo_12_1_104_DataDictionary"
           etc.>
  <xs:complexType name="Standard_Foo_12_1_104_CustomerType" id="<GUID here>">
    Removed...
  </xs:complexType>
  <xs:complexType name="Standard_Bar_1_4_AccountType" id="<GUID here>">
    Removed...
  </xs:complexType>
  <xs:complexType name="Standard_Foo_12_1_103_PayeeType" id="<GUID here>">
    Removed...
  </xs:complexType>
</xs:schema>
Code Sample 7. Implementing strongly named standards types
Figure 3. Composing message using multiple standards
This need only be done at the data dictionary level for each complex and simple type; the types would then be referenced through the name, with the ID attribute used for metadata. Through importation of the namespaces of multiple standards, you can construct a conglomerate standard schema with the hybrid elements. One of the other benefits is that this allows you to build a schema that incorporates different versions of the same functional element. Though such a schema is slightly larger and more complex, it allows a level of backward or multi-version compatibility for a service that exposes it. The same information could also be used in WS-Policy statements, with a new statement for each version and standard needed as part of that service.
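The naming-plus-GUID pairing from Code Sample 7 can be captured in a small helper. This is a sketch under stated assumptions: the registry, the `strong_name` format, and the use of a random GUID per type are illustrative choices mirroring the sample's `Standard_<name>_<version>_<Type>` convention, not a prescribed mechanism.

```python
import uuid

# Hypothetical registry mapping a strongly named type to a stable GUID,
# mirroring the name/id pairing shown in Code Sample 7.
TYPE_REGISTRY: dict[str, str] = {}

def strong_name(standard: str, version: str, type_name: str) -> str:
    """Compose 'Standard_<name>_<version>_<Type>' as used in the schema."""
    return f"Standard_{standard}_{version.replace('.', '_')}_{type_name}"

def register_type(standard: str, version: str, type_name: str) -> str:
    """Assign (once) a GUID to a strongly named type and return the name."""
    name = strong_name(standard, version, type_name)
    # One GUID per strongly named type lets tools track standard and
    # version through the name/GUID key-value pair.
    TYPE_REGISTRY.setdefault(name, str(uuid.uuid4()))
    return name

# Two versions of the same standard coexist in one conglomerate schema
# without name collisions -- the multi-version compatibility case above.
a = register_type("Foo", "12.1.104", "CustomerType")
b = register_type("Foo", "12.1.103", "PayeeType")
```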
If we can assume that service orientation is going to be a core part of our architectural palette for years to come, then we have to find a way to build and incorporate industry standards into that architecture. Granular construction of service messaging and tight correlation among service signature, metadata, and schema instance provide instant benefits when constructing a flexible, high-performance, standards-based Web service. The key to interoperability is the ability to process incoming and outgoing messages effectively. Clear type definition and composition from a comprehensive data dictionary give architects the ability to compose nearly any message in their service-oriented architecture. Because those messages are based on the industry standards, they will be compliant.
Standards purists will harangue me with epithets, but the fact remains that in order to remove application silos through messaging interoperability you must use fast and flexible standards-based payloads in a service-oriented environment. This can only happen as service providers and schema architects are able to assemble multipurpose standards-based schemas. You can accomplish this through more flexible schema composition and importation methods. With these methods, the future of industry standard messaging and Web service architecture become closely aligned.