Enterprise Integration and Distributed Computing: A Ubiquitous Phenomenon
Sandeep J. Alur
Summary: In this era of cloud computing, an Enterprise is put through a series of iterative renovations to its IT assets. Innovations from technology vendors like Microsoft, Google, and Amazon have a major impact and influence on Enterprises that strive to compete, and the ‘Hype Cycle’ is something we cannot ignore. This phenomenon remains central to an enterprise and definitely has an impact on the way it positions its IT; it is one of the external factors influencing IT. Amid all of this, ‘Enterprise Integration’ remains the ubiquitous core challenge. This paper addresses some of the challenges faced by an Enterprise in the area of ‘Distributed Computing’ and suggests recommendations and good practices for overcoming them.
‘Cloud Computing’ is the current buzzword, and it seems to have gotten the attention of the entire computing industry. This is the era of consumers, and in a consumer-driven market it is of central importance to provide on-demand services, be it financial transactions or requests for information. The Enterprise community has undergone an overhaul over the years in its thinking, offering ‘services’ instead of software. We have come a long way from cross-process communication to cloud computing, and ‘Distributed Computing’ has enabled this transition. Incredible innovation has taken place in this area, and its evolution cannot be sidelined. Technology has been instrumental in keeping pace with business, and the distributed-computing capabilities of a technology platform have enabled this progress.
We have come a long way from COM/DCOM to Windows Communication Foundation (WCF), which is undoubtedly one of the greatest innovations in distributed computing. In today’s fast-paced world, in which acquisitions are a means to growth and expansion, Enterprises face enormous challenges with acquired systems en route to business integration. One aspect that stands out is the ‘Heterogeneity’ of such an environment.
Heterogeneity of an Enterprise introduces a considerable number of challenges, and with growing business demands the complexity multiplies. This paper introduces a variety of challenges that an Enterprise faces in a distributed-computing environment, and provides recommendations/good practices for overcoming those challenges. The following list presents a few of the challenges that are considered:
- Identity and Access Management
- Concurrency and Load Balancing
- Security (Controlling access to Service Operation)
- Message Exchange Models
- Distributed Computing on Mobile Devices
Furthermore, we will discuss the reality of adopting architectural patterns such as service-oriented architecture (SOA) or an enterprise service bus (ESB) in the context of distributed computing; adoption varies considerably between, for example, a developed market and an emerging market (environmental factors). The challenges may be the same across the board, but the way we deal with them has a great impact on their successful adoption.
The challenges addressed in this paper are the ones that we perceive to be present in a majority of Enterprises. Below is an elaboration of these challenges, along with observed good practices for overcoming them.
· Identity and Access Management
User identity is the culmination of an end user’s presence in an application; what follows identification is control over the user’s actions on each feature and capability of the application. In today’s distributed application environment, achieving Single Sign-On (SSO) sometimes poses challenges beyond what we would imagine. Consider a scenario in which an end user is expected to access various applications in an Enterprise, but has to navigate through the security gateway of each one. This implies that the end user has to remember more than one identity (and password), as indicated in Figure 1.
This scenario is typical; most of us are used to encountering it in a distributed application environment. The ideal scenario is to have a single identity that provides access to any and all of the systems the end user accesses. This is a definite architectural and implementation challenge that an Enterprise faces with its legacy of applications. The challenge becomes more complex in a distributed scenario, in which the identity is expected by all the integrating systems (represented in Figure 1) across system and machine boundaries. From a mission-criticality perspective, it is important that each end user’s identity is captured and logged for various purposes (auditing, compliance, troubleshooting during a crisis, and so on). One under-utilized option is Microsoft Windows Active Directory as an identity store.
Access Management is a second layer of defense, after authentication, to control access to various features of the system. The general practice is to persist user privileges local to the application for better control and management.
Recommendations and Good Practices
In this context, here are a few of the recommended practices that Enterprises can adopt to overcome the challenges in the area of Identity and Access Management:
- Active Directory (AD): AD is the much-sought-after infrastructure for defining and managing user identities. As indicated in Figure 1, often no association exists between the application and AD; that is, AD is not leveraged as a central Identity Management store at all. In a Windows environment, the easiest way to take the burden of Identity Management away from applications is to use Directory Services.
The recommended approach is to centralize definitions of user identity and access control groups. By doing so, applications can query a central repository, instead of individual stores, for authentication as well as for authorization. Also, Enterprises that are in the process of putting together an AD infrastructure can leverage a miniature version of AD called Active Directory Lightweight Directory Services (AD LDS). With AD LDS, applications can quickly leverage the capability of Directory Services with reduced infrastructure costs. Applications desiring to use identity stores can leverage AD LDS without affecting the AD Directory Services (AD DS). Most of all, AD LDS provides flexibility in installation and integration without the overhead costs of a Domain Controller.
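The centralization described above can be sketched in a few lines. The sketch below is illustrative only: it uses Python for platform neutrality, and all names (`IdentityStore`, `authenticate`, `is_member`) are hypothetical stand-ins for the role a directory service such as AD or AD LDS would play, where every application queries one repository for both authentication and authorization.

```python
# Illustrative sketch: applications delegate authentication and group
# membership checks to one central identity store (the role AD / AD LDS
# plays) instead of keeping per-application user tables.
import hashlib


class IdentityStore:
    """A stand-in for a central directory service such as AD LDS."""

    def __init__(self):
        self._users = {}    # user -> salted password hash
        self._groups = {}   # group name -> set of member names

    def add_user(self, user, password):
        self._users[user] = hashlib.sha256((user + password).encode()).hexdigest()

    def add_to_group(self, group, user):
        self._groups.setdefault(group, set()).add(user)

    def authenticate(self, user, password):
        digest = hashlib.sha256((user + password).encode()).hexdigest()
        return self._users.get(user) == digest

    def is_member(self, user, group):
        return user in self._groups.get(group, set())


# Every application queries the same store, so one identity works everywhere.
directory = IdentityStore()
directory.add_user("jdoe", "s3cret")
directory.add_to_group("Payroll-Readers", "jdoe")

assert directory.authenticate("jdoe", "s3cret")          # authentication
assert directory.is_member("jdoe", "Payroll-Readers")    # authorization
```

The point of the sketch is the single shared `directory` object: once applications stop owning their user tables, SSO follows from all of them trusting the same store.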
- Microsoft Windows Live Identity (ID): When a user base is very large, it is not viable for an Enterprise to take the onus of user management. The desired route is to outsource the entire management of users to an external entity that is trusted and reliable. This is where Live ID (formerly known as Microsoft Passport) comes into play, by providing a platform that facilitates management of user identity and enables authentication of a user from an application.
The advantage of this platform is that it takes the entire task of managing user identities out of the application space. Enterprises can rely on the Windows Live ID Software Development Kit (SDK) to integrate Live ID authentication into their application space. Authorization that is private to an application can remain within the application space, but from a manageability perspective Enterprises can position a central ‘Access Management’ solution in which every application would touch down post-authentication. This amounts to an application that is hosted on-premise leveraging certain cloud services like Windows Live ID. It is a classic example of Enterprises resorting to capabilities and entities (the cloud) outside the operational boundary.
· Load Balancing and Concurrency
Enterprises strive to position a highly available (HA) infrastructure. As businesses grow, leading to increased traffic on their applications, load balancing and concurrency are two crucial considerations that must be dealt with and closely observed. While it is true that the infrastructure should be scalable, it is often the application architecture that must extend the flexibility to attain HA.
The two kinds of load balancing are software and hardware. Messaging infrastructures that expose end points to enable integration need to scale out in order to address the HA aspect. While redundant servers are a prerequisite for HA, it is the distribution of load across those redundant servers that achieves optimal performance.
‘Software Load Balancing’ is the load distribution that software performs when it routes calls equally among the participating servers. Tools such as AmberPoint (discussed later in this paper) perform software load balancing and are efficient for a basic infrastructure. Software-based load balancing can be cost effective compared to hardware-driven load balancing, particularly when packages already exist that can also manage traffic. As an infrastructure grows, however, software load balancing may not be the most efficient way to balance traffic: load addressed at the network layer is handled more efficiently than traffic tackled at the software and service entry point.
Here is where the need for hardware load balancing is realized. Hardware-based load balancing is definitely more robust than software-based load balancing, and it has the capability to work with any operating system or platform. The agnostic nature of this solution makes it preferable to software load balancing for an Enterprise-specific solution. There can also be a combination of hardware and software load balancing in a solution. (The load is balanced across data centers (and servers within a data center) by means of a physical unit, and further routing is performed by a software load balancer).
In Figure 2, numbers 1 and 2 indicate hardware load balancing and number 3 indicates software load balancing. Tools like AmberPoint are expected to perform activity number 3. Instead of positioning a physical unit between the Web servers and the ‘Services’ stack, software load balancing is an apt fit here, in addition to taking control of the overall data movement between the servers.
In addition, from a cloud computing perspective, it does not matter if the services are hosted on-premise or off-premise since the communication and routing happen over HTTP.
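The routing behavior attributed to software load balancing above can be sketched minimally as a round-robin rotation. This is an illustrative Python sketch, not a real balancer; the server names are hypothetical, and a production balancer (hardware or a product like AmberPoint) would also track server health and actually forward the traffic.

```python
# Minimal software load balancer sketch: route each incoming call to the
# next server in a round-robin rotation, so that participating servers
# receive an equal share of the traffic.
import itertools


class RoundRobinBalancer:
    def __init__(self, servers):
        self._rotation = itertools.cycle(servers)

    def route(self, request):
        server = next(self._rotation)
        return server  # a real balancer would forward `request` here


lb = RoundRobinBalancer(["web-01", "web-02", "web-03"])
routed = [lb.route(f"msg-{i}") for i in range(6)]
print(routed)  # each server appears twice: equal distribution
```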
Concurrency is inherent to computing and, in today’s consumer-driven market, application infrastructures should support high concurrency. This is where the capacity of a specific machine reaches saturation and hence demands redundant machines. Application scale-out models complement high concurrency, and application architecture plays a critical role in extending the flexibility to address scale-out aspects.
Distributed architecture is preferred for addressing aspects such as performance and scalability. It is important for the entire architecture and development team to be aware of such scenarios so that the system architecture and design take into account the behavior of the end product.
From a cloud computing perspective, companies like Amazon, Google, and Microsoft are putting together enormous infrastructures that deliver high scale and absorb overwhelming load. Such infrastructure is a boon for Enterprises that can host applications off premise: they can be at peace instead of worrying about the capital expenditure necessary to put together hardware infrastructure across the board. What the infrastructure promises is on-demand ‘Scale Out’ to meet the demands of tomorrow.
· Security (Controlling Access to Service Operations)

Security is the first and foremost concern when it comes to integration in a distributed environment. Compliance with regulatory policies is another important factor that drives the creation of a robust security infrastructure for message exchange. Be it point-to-point (PTP) or mediated integration, security of the transport layer, as well as of the actual messages, is of central importance.
Another aspect of security to consider is how to control access to service operations. (Service in this context can be WCF Services and Web services that encapsulate operations). In an SOA world, service operation has business relevance; controlling access at the operation level is therefore a must.
In a distributed-computing environment, because messages cross machine boundaries, Transport Layer Security (TLS) as well as message-level security needs attention. Below are some recommendations for enabling secure communication between entities:
- Internet Protocol Security (IPSec) is a protocol that operates at the network layer. In a distributed-computing environment, we can secure the data exchange between two participating peers (machines) by means of message encryption at the network layer. IPSec has two modes of operations: one that encrypts the data (Transport Model), and another that encrypts the entire IP packet (Tunnel). IPSec ensures that communication is restricted to only between the participating machines, and it therefore increases the manageability of a distributed-computing environment.
- Web Services Security (WS-Security) is the most discussed security model for Web services. From the early days of the Microsoft .NET Framework 1.x to now (.NET Framework 3.5 SP1), implementing WS-Security on the .NET Framework has been greatly simplified. In a distributed-computing environment, when we have set communication to happen via Web services between the participating Web servers, a client-certificate (X.509 certificate) driven model is one of the recommended WS-Security options, as shown in Figure 3.
With this approach, the invoking application presents a certificate as an identity and establishes trust before exchanging messages. What is important for Enterprises here is to position a process to create client certificates for applications that intend to collaborate with the services stack. Once this process is in place, positioning a robust security model for the application to service interaction will be smoother.
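The trust step described above can be illustrated with a TLS context. This sketch uses Python's standard `ssl` module for platform neutrality (the paper's own context is WCF); the certificate file names in the commented-out line are hypothetical placeholders, which is why that call is not executed here.

```python
# Sketch of the client-certificate (X.509) model: the invoking application
# presents a certificate as its identity before any messages are exchanged.
import ssl

# PROTOCOL_TLS_CLIENT enables certificate verification by default.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED   # the peer must present a trusted cert
context.load_default_certs()

# This is where the Enterprise-issued client certificate would be loaded;
# the file names are hypothetical, so the call is left commented out:
# context.load_cert_chain(certfile="client.pem", keyfile="client.key")

assert context.verify_mode == ssl.CERT_REQUIRED
```

Once an Enterprise process exists for issuing such certificates, every collaborating application configures a context like this before talking to the services stack.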
- Partial Message Encryption: At times, encrypting the entire message is unnecessary, but situations arise in which certain elements of the message carry critical data that must be protected. In such cases, it is highly recommended to adopt partial message encryption to ensure data confidentiality.
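Partial message encryption can be sketched as follows: only the elements flagged as sensitive are transformed, while the rest of the message stays readable. This is an illustrative Python sketch; the XOR keystream is a toy stand-in for a real cipher such as AES (as used with WS-Security), and the element names are hypothetical.

```python
# Encrypt only the sensitive elements of an XML message, leaving the
# remainder in the clear. toy_encrypt is NOT real cryptography; it is a
# placeholder showing where a real cipher would plug in.
import base64
import hashlib
import xml.etree.ElementTree as ET


def toy_encrypt(plaintext, key):
    stream = hashlib.sha256(key.encode()).digest()
    data = plaintext.encode()
    mixed = bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))
    return base64.b64encode(mixed).decode()


def encrypt_elements(xml_text, sensitive_tags, key):
    root = ET.fromstring(xml_text)
    for element in root.iter():
        if element.tag in sensitive_tags:      # encrypt only flagged elements
            element.text = toy_encrypt(element.text, key)
    return ET.tostring(root, encoding="unicode")


message = ("<payment><payee>ACME Corp</payee>"
           "<cardNumber>4111111111111111</cardNumber></payment>")
secured = encrypt_elements(message, {"cardNumber"}, key="shared-secret")
print(secured)  # payee stays readable; cardNumber is no longer in the clear
```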
- Operation level security: In a distributed-computing environment, we need to control the end points defined for integration. Though such control qualifies as an ‘Identity and Access’ issue, controlling access at each of the end points is also a challenge that must be considered from a security standpoint. Let us define the end points as ‘Service,’ and the activities that can be performed as part of functional invocation as ‘Operations’ (Figure 4 - the equivalent of Web service and Web methods). Review the metadata (WSDL) of the service in order to understand which operations and methods are available.
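Operation-level control can be sketched as a check that runs at each end point before the operation executes. The sketch below is hypothetical Python (role names, operation names, and the `requires_role` helper are all illustrative); in a WCF deployment the equivalent check would sit in the service layer rather than in application code.

```python
# Sketch of operation-level security: each service operation declares the
# role allowed to invoke it, and a decorator enforces the check at the
# end point before the operation body runs.
import functools


class AccessDenied(Exception):
    pass


def requires_role(role):
    def decorator(operation):
        @functools.wraps(operation)
        def wrapper(caller_roles, *args, **kwargs):
            if role not in caller_roles:
                raise AccessDenied(f"{operation.__name__} requires role {role!r}")
            return operation(caller_roles, *args, **kwargs)
        return wrapper
    return decorator


@requires_role("Order-Managers")
def cancel_order(caller_roles, order_id):
    return f"order {order_id} cancelled"


result = cancel_order({"Order-Managers"}, 42)   # authorized caller succeeds
print(result)
try:
    cancel_order({"Order-Viewers"}, 42)         # unauthorized caller is blocked
except AccessDenied as exc:
    print(exc)
```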
When it comes to security for the messaging infrastructure, building a custom solution will prove to be a daunting task. Adopting a commercial off-the-shelf (COTS) product will be more valuable in managing the entire infrastructure (see the next section for more details). Partner products like ‘AmberPoint’ and ‘SOA Software’ can achieve most of the recommendations discussed here.
A detailed rundown from a ‘Governance’ perspective is covered in the next section. However, from a security standpoint, the recommended practice is to institute fine-grained control, at the end points, over access to each of the service operations (from a mission-critical perspective).
· Service Governance

Service governance, or service management, is the commonly used term in the field of messaging-infrastructure governance. Having positioned an SOA/ESB infrastructure, it is important to check and maintain the health of the services being deployed. In addition, in a distributed-computing environment in which messages converge or are dispersed based on the needs of the end user and application, it is critical that we keep an eye on the following considerations:
- Security:
  - Authentication and authorization
  - Message security/encryption
  - Centralized monitoring of messages (alerts/exceptions)
- Non-functional aspects:
  - Response time
- Infrastructure health:
  - Server availability
  - Application availability
  - Database availability
Enabling all of these is a necessity for an Enterprise as it positions a messaging infrastructure or ESB. This is where tools such as Microsoft System Center Operations Manager (SCOM), AmberPoint, and SOA Software come in handy. These tools are sophisticated enough to address all of the listed governance needs, and they qualify as foundation services when implementing SOA/ESB. A solution without these aspects is less manageable and only grows more complex as more and more services qualify to be hosted under the messaging infrastructure.
From the application perspective, we often learn the real behavior and performance of a solution only once we put it into production. Performance testing is one phase that gives a feel for that reality ahead of time. The recommended approach, however, is to take the route of ‘Build for Performance’:
- Performance aspects should be tested during unit development itself; do not wait for the performance testing phase to discover anomalies.
- Development teams should ensure that the services built perform as per the Enterprise standards. The central need is to have a blueprint of the host environment during development.
- It is important that the developers understand the runtime behavior, test and diagnose for functional accuracy, and fine-tune the performance of the service.
- The tools that are expected to manage the production should be put to use during development, as well.
- A version of AmberPoint called AmberPoint Express ships along with Microsoft Visual Studio 2005 so that developers can leverage it and get a feel for performance during development. The same is available for download free of charge, and works with Visual Studio 2008. Such practices bring much-desired agility into the overall development of the solution.
· Message Exchange Models
A debate often occurs over the choice between synchronous and asynchronous communication. Enabling an application for either option depends on the following:
- Technology limitation on the source/consumer system
- Size and amount of data
- Criticality of the data being accessed
Usually, in a distributed-computing environment, the source systems dictate terms to the consuming systems. That is, if an interaction is expected to happen with an Enterprise Resource Planning (ERP) back office, the interfaces exposed by the ERP system become the rule, and consumers have to conform to its communication as well as its message protocols. Having said that, making changes to the source sometimes makes more sense than making changes to the consuming applications.
Other important aspects to consider are ‘Push’ and ‘Pull.’ ‘Push’ is a mechanism by which a publisher will broadcast information to all the subscribers of a service, while ‘Pull’ refers to a consumer request for information from the publishing service. Both of them can operate either in synchronous or asynchronous modes.
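The push/pull distinction above can be shown in a few lines. This is an illustrative Python sketch with hypothetical class names: the publisher pushes a broadcast to every subscriber, while a pull consumer requests the latest information on demand.

```python
# Sketch of 'Push' vs 'Pull': publish() broadcasts to all subscribers
# (push), while request_latest() serves a consumer-initiated query (pull).
class Publisher:
    def __init__(self):
        self._subscribers = []
        self._latest = None

    def subscribe(self, inbox):
        self._subscribers.append(inbox)

    def publish(self, message):
        self._latest = message
        for inbox in self._subscribers:   # push: broadcast to every subscriber
            inbox.append(message)

    def request_latest(self):
        return self._latest               # pull: consumer asks when it wants


inbox_a, inbox_b = [], []
service = Publisher()
service.subscribe(inbox_a)
service.subscribe(inbox_b)
service.publish("price-update: 101.5")

print(inbox_a, inbox_b)            # both subscribers received the push
print(service.request_latest())    # a pull consumer fetches on demand
```

Either method could be wired to a synchronous call or to a queue, which is why the paper notes that both models operate in either mode.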
It is important to understand the dynamics of the systems being considered for integration before deciding on the communication aspects. Challenges arise when there are multiple stakeholders involved in deciding on the integration mode. As indicated in the figure 5, there are multiple options available, but choosing the right one is crucial. The source system should also be capable of providing multiple interface options, depending on the consumer need.
Following are some of the aspects that should be kept in mind when deciding on synchronous or asynchronous communication:
- When performance is the most important criterion, a synchronous call will yield faster results because of the active nature of the connection. The consumer invokes a service at the server and waits for the response while program control remains at the point of the call. Such communication has lower latency, and it is preferred when a response is expected immediately following the call.
- When the expectation is real-time data exchange and the data size is small, synchronous communication is preferred. Small data packets lead to less overhead on the network and complement high ‘Throughput’ requirements.
- When multiple calls to various back-end systems are wrapped in a transaction, it is desirable to make synchronous calls. Synchronous calls within a transaction are manageable and easily tracked for failures or exceptions.
- End-of-day Batch Processes are common in the computing arena. Typically, these operate in silos as back-office operations and perform the expected tasks. They are either scheduled to run at a specified time or interval, or they are invoked by an application.
- Large Enterprises often have applications built on legacy software, which would be performing a significant portion of back office operations. Exchanging messages with such applications in a synchronous mode is a challenge introduced by the state of the legacy technologies. Under such circumstances, exchanging messages by means of file drop or via queues is looked upon as a generic trend.
- Exchanging a Large Volume of Data in a synchronous mode is not a feasible option. Connectivity and bandwidth issues pop up, leading to the failure of such transactions. Asynchronous message exchange comes in handy, wherein an asynchronous program can take the onus of pulling or pushing relevant data over a specific time period, without affecting the overall functioning of the application.
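The asynchronous pattern recommended for large volumes can be sketched with a queue and a background worker: the producer drops batches and continues, while the worker drains the queue at its own pace. This is an illustrative Python sketch; names like `outbox` and `batch-N` are hypothetical, and a real solution would transmit each batch over the wire instead of appending to a list.

```python
# Sketch of asynchronous message exchange for large data volumes: the
# producer is never blocked waiting for delivery; a background worker
# pulls batches off the queue and processes them independently.
import queue
import threading

outbox = queue.Queue()
delivered = []


def worker():
    while True:
        batch = outbox.get()
        if batch is None:          # sentinel: no more batches to send
            break
        delivered.append(batch)    # a real worker would transmit the batch
        outbox.task_done()


thread = threading.Thread(target=worker)
thread.start()

for i in range(5):                 # producer enqueues and moves on
    outbox.put(f"batch-{i}")
outbox.put(None)                   # signal completion
thread.join()

print(delivered)
```

Because the producer and the worker are decoupled by the queue, a connectivity hiccup delays delivery rather than failing the whole transaction, which is the property the bullet above relies on.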
Another model that is making the rounds from a cloud computing perspective is the ‘Internet Service Bus (ISB).’ Similar to the currently in-vogue ESB, ISB offers the benefits of Identity and Connectivity to on/off-premise-hosted services, coupled with workflow capabilities. This is a very interesting philosophy that brings to the cloud the service orientation familiar from on-premise scenarios. The message-exchange modes that we discussed previously would become a reality in the cloud with ISB.
· Distributed Computing on Mobile Devices
Enterprise mobility has become a prominent concept that complements the concept of ‘Anytime-Anywhere-Any device’ access. As businesses expand beyond boundaries, people expect data to be made available to them via the most widely used communication device – handheld/mobile. Enterprises expect a similar computing experience in the office as well as when on the move. ‘Field Force’ (operating outside the purview of office premises) personnel are today equipped with handheld devices (smart phones) that provide them access to Enterprise data. Having positioned a service infrastructure, the communication between the consumer (in this case the application residing on the handheld) and the services is expected to happen over HTTP protocol.
Field force personnel may operate in geographies where they have no connectivity, a situation we refer to as ‘Occasionally Connected Systems’ (see Figure 6). Below are the three distinct characteristics expected of an occasionally connected system:
- Offline computing capability. (Provide application accessibility at times when connectivity does not exist.)
- Ability to access a cloud service. (From a mobility perspective, services hosted on/off premise can be termed as a cloud service.)
- Ability to synchronize data with the back office as and when connectivity is restored. (Push and pull of information from the device.)
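The three characteristics above can be sketched together in one small client. This is an illustrative Python sketch with hypothetical names; a real field-force application would use a synchronization framework and a proper local store rather than in-memory lists.

```python
# Sketch of an occasionally connected client: changes recorded while
# offline are queued locally, then synchronized with the back office
# when connectivity is restored.
class OccasionallyConnectedClient:
    def __init__(self):
        self.connected = False
        self._pending = []        # local store for offline changes
        self.server_state = []    # stand-in for the back-office system

    def record(self, change):
        if self.connected:
            self.server_state.append(change)
        else:
            self._pending.append(change)      # offline: queue locally

    def reconnect(self):
        self.connected = True
        while self._pending:                  # sync once connectivity returns
            self.server_state.append(self._pending.pop(0))


client = OccasionallyConnectedClient()
client.record("visit: customer-17")           # captured while offline
client.record("order: 3 units")
client.reconnect()
print(client.server_state)                    # both changes synchronized
```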
Distributed computing through a handheld/mobile device has to be considered with caution because of the limited computing power on these devices. Below are a few of the pointers we have to keep in mind when conceptualizing the solution:
- ‘Service Infrastructure’ has to expose lightweight services as specific endpoints that transmit small packets of data between the device and the server.
- Reduce the overhead of encryption/decryption while exchanging data.
- Leverage technologies like synchronization framework, which is built to provide agility to the entire paradigm of applications that qualify under the umbrella of ‘Occasionally Connected Systems.’
There is an enormous drive to extend Enterprise-computing capability onto mobile devices, and synchronization framework is a definite innovation that would fuel the next generation of field force applications.
SOA/ESB has been the means to address some Enterprise-integration challenges and to make systems collaborate in a heterogeneous environment. Interoperability has been a challenge for Enterprises with a very prominent footprint of legacy systems. A robust messaging infrastructure that keeps operations ticking is the lifeline of any business. The magnitude and extent of automation in this area varies from Enterprise to Enterprise, and this aspect says much about the maturity of an Enterprise. Typically, there are two kinds of Enterprises:
a. Early adopters, or "trendsetters"
b. Late adopters, or "trend followers"
Early adopters are the ones who take the plunge with a specific vision and intent. They are the ones who push technology and innovation in the market, and they strive to stay on top and compete. For example, Wal-Mart was in the news as a trendsetter in making the retail community realize the relevance of Radio Frequency Identification (RFID) technology in the early 2000s. Constructs like SOA/ESB have been around for a decade now, and the Enterprises that have succeeded are the ones that had a clear vision and set out goals.
On the other hand, trend followers ride the hype-wave based on certain evidence they witness in their competitive landscape. Here, there would be a definite intent in terms of what they want to achieve, but the major driving factor would be to ‘catch up with the competition.’ A lot depends on the maturity of an organization to think ahead of time and implement certain overhaul activities to attain operational efficiency.
Irrespective of these factors, on the whole the distributed-computing landscape enabled by patterns like SOA has not lived up to the expectations of Enterprises. Does this indicate any flaw in the pattern itself? Absolutely not. The reasons for failing to energize the distributed-computing landscape of an Enterprise with patterns like SOA can be numerous. Burton Group calls this ‘SOA Fatigue,’ the result of SOA initiatives not achieving their promised potential in the expected timeframe.
When it comes to integration, there are two options (irrespective of synchronous/asynchronous communication):
a. PTP integration
b. Mediated integration
PTP integration is the first option for addressing integration challenges, and at times it is considered the most efficient one. Message exchange can happen over a proprietary protocol or over open standards. Resorting to open standards gives much-desired agility, but comes with its own set of drawbacks. For example, the source and consumer systems chosen for integration may both be on the .NET platform; in that case, the first choice in the context of distributed computing is Transmission Control Protocol (TCP)-based communication (netTcpBinding-enabled interaction in WCF), which gives higher performance. However, if the source is on .NET and the consumer is on another platform or technology, then the recommended option is open standards (basicHttpBinding/wsHttpBinding-enabled integration in WCF).
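The binding choice described above reduces to a simple rule, sketched below for illustration. The `choose_binding` helper is hypothetical, not part of WCF; it only captures the decision logic: same-platform .NET peers prefer the faster TCP binding, while mixed platforms fall back to interoperable open standards.

```python
# Sketch of the binding-selection rule: netTcpBinding when both peers are
# on .NET (highest performance), wsHttpBinding otherwise (interoperability).
def choose_binding(source_platform, consumer_platform):
    if source_platform == ".NET" and consumer_platform == ".NET":
        return "netTcpBinding"    # same-platform: fastest option
    return "wsHttpBinding"        # cross-platform: open standards


print(choose_binding(".NET", ".NET"))   # netTcpBinding
print(choose_binding(".NET", "Java"))   # wsHttpBinding
```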
Mediated integration is suggested when an Enterprise envisions the need for a common messaging infrastructure to overcome the inefficiencies of a growing number of PTP integrations. PTP beyond a certain point becomes unmanageable and starts to look like an entangled bunch of wires. At this point, Enterprises feel the need for a manageable environment that qualifies as a one-stop shop for all message exchange; this infrastructure cannot be realized on day one. This is the evolution that an Enterprise goes through in positioning a messaging infrastructure by adopting popular architectural patterns like SOA/ESB. Mediated integration takes away the core responsibility of protocol/message-format compliance from both the source and the consumer; the onus is on the messaging infrastructure to cater to the needs of both.
Enterprise integration is the result of the growing needs that arise within Enterprises. While the phenomenon is considered global, so are the challenges. The intensity of these challenges varies from Enterprise to Enterprise, bringing out unique patterns of need versus solution. With Enterprises becoming more and more heterogeneous, distributed computing is one area that is extremely critical, and Enterprises are taking steps to put together infrastructures that support Enterprise-wide message flow across systems.
In addition, cloud computing is expected to introduce a considerable amount of flexibility for Enterprises that face challenges in putting together a hosting infrastructure that complements their Enterprise application needs. There seems to be no boundary, from a distributed-computing perspective, for Enterprises that embrace off-premise infrastructure.
The Enterprise integration challenges, though global, should be dealt with in the context of the technology landscape of individual Enterprises.
Resources

- Software Plus Services
- Burton Group Report: Addressing SOA Fatigue, Version 1.0, August 12, 2008
- Active Directory Lightweight Directory Services
About the Author
Sandeep J. Alur works as an Enterprise Architect Advisor with Developer and Platform Evangelism Group at Microsoft Corporation in India. Enterprise Integration is one of his favorite topics and he blogs quite regularly on such topics (http://blogs.msdn.com/sandeepalur). If you have any questions or suggestions, do write to him at email@example.com.