Guidelines for Application Integration
This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
Summary: This chapter discusses the security and operations considerations you need to examine to ensure that your application integration environment continues to operate in a reliable and secure fashion.
As you consider the design of your application integration environment, it is very important to examine the security requirements of your organization. A well-designed application integration environment rapidly becomes an integral part of your organization. Therefore, security vulnerabilities in application integration have the potential to cause wide-ranging problems.
Similarly, your operational practices must be well-defined so that your application integration environment can continue to operate effectively and reliably over time. This chapter covers both the security and operations considerations you need to examine to ensure that your application integration environment continues to operate in a reliable and secure fashion.
Because different applications have different security requirements and features, it can be quite a challenge to ensure that your application integration environment functions properly without compromising your security requirements. Security is particularly important in application integration because a breach in an integration service may result in security breaches in other integrated systems.
The first step toward effective security in any environment is creating a written security policy. Many factors can affect the security policy, including the value of the assets you are protecting, the threats that your environment faces, and the vulnerabilities that are currently present.
The security policy should form the basis of any security measures you take in your organization. Before you make modifications to the environment, you should ensure that they are consistent with your security policy. You should also examine the policy itself periodically to determine whether it needs to be redefined in the wake of new business requirements and to verify that the procedures and standards that implement the policy adhere to industry best practices.
Note This chapter examines only security policy that is related to integration technologies. Your policy, however, should cover all aspects of IT security, including physical security.
From an integration standpoint, your security policy should define:
- A mechanism for evaluating and classifying threats. Your evaluation mechanism should ensure that consistent and relevant information is gathered about any threat. This information helps you classify threats to ensure that high-level threats are not ignored and that low-level threats are not unnecessarily escalated.
- A mechanism for acting on threats. This should ensure that the appropriate people are involved in dealing with a threat and that they are equipped to deal with it in an appropriate manner.
- A boundary for information security. Most application integration environments include some form of integration with other organizations or individuals outside the boundary of your own IT environment. It is therefore very important to define the type of information that must be protected within the organization, between business partners, and from the public. You also must determine how that information should be protected in each of the different cases.
- A plan for communication and enforcement. One of the main problems in maintaining good security in an organization is ensuring that individuals are aware of the security policy. It is therefore vital that the security requirements (as defined in the policy) are clearly communicated to your employees, including both IT staff and the rest of the organization. You also need ways of ensuring that people comply with the requirements, including clearly defined disciplinary procedures where appropriate.
- General security guidelines. If you make the security guidelines in your security policy too specific, you run the risk of them quickly becoming obsolete. However, if they are too generic, you run the risk of making them too vague. Therefore, you need to maintain a delicate balance in defining an effective and lasting policy. Ensuring compliance to security guidelines across an entire enterprise can be problematic when different systems are being used. However, it is possible to provide some generic non-technology-specific guidelines that are unlikely to age too quickly. A good example of such a guideline is one that specifies a minimum level of encryption for public key/private key pairs in your environment. Of course, even a guideline such as this is likely to change as technology advances.
- Reference to other documents. Your policy should reference more specific security policy documents and an Incident Response Plan. Linking to other documents enables you to define more specific requirements that are likely to change frequently without having to modify the main security policy in your organization.
- A mechanism for modifying security policies. As your environment evolves and new threats and vulnerabilities emerge, you must modify your policies to make sure that they continue to reflect the requirements of your organization. Therefore, your security policy must define the mechanism for security policy change and must identify the people responsible for making changes.
A number of basic capabilities are usually required to ensure effective security in an application integration environment. Table 3.1 shows these security capabilities.
Table 3.1: Security Capabilities
|Capability|Description|
|---|---|
|Authorization|Determines whether a particular connection attempt should be allowed.|
|Authentication|Verifies credentials when an application attempts to make a connection.|
|Information Protection|Prevents unauthorized users from easily viewing or tampering with information.|
|Identity Management|Manages multiple sets of credentials and maps them to the correct application.|
|Nonrepudiation|Uses digital signatures to verify identity.|
|Profile Management|Manages principal profiles.|
|Security Context Management|Determines how credentials are provided to applications.|
For more information about each of these capabilities, see Appendix A, "Application Integration Capabilities."
Defining Your Security Requirements
The precise security requirements of your application integration environment depend on a number of factors, including the following:
- Security requirements of your organization
- Business requirements for application integration
- Technical requirements for application integration
- Capabilities of the applications you are integrating
- Platforms on which applications are running
- Budgetary constraints
As a starting point for defining your security requirements, you should perform a risk analysis. Doing so allows you to identify the threats and vulnerabilities that you face and identify the countermeasures you can deploy to keep risk at an appropriate level.
The following paragraphs discuss security requirements that are common to many application integration environments.
Multiple forms of authentication are available to you when integrating applications, and these forms provide varying levels of security. For example, HTTP Basic Authentication passes a user name and password back and forth in each request. The user name and password are not encrypted (just encoded), so capturing network traffic could potentially give an attacker easy access to the information.
HTTP Basic Authentication is not appropriate for most environments, although it can be used in some cases where the channel itself is secured (for example, when using Secure Sockets Layer, or SSL). However, because different operating systems implement different authentication protocols, one of the challenges of application integration is finding a secure form of authentication that is used on each of the platforms you need to support.
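The point that Basic credentials are merely encoded, not encrypted, is easy to demonstrate. The following sketch (using Python's standard library; the credentials shown are hypothetical) builds the `Authorization` header a client would send and then reverses it, exactly as an attacker capturing network traffic could:

```python
import base64

# Build the Authorization header a client sends with HTTP Basic Authentication.
# Credentials are joined with a colon and Base64-encoded -- not encrypted.
username, password = "jane", "s3cret"   # hypothetical credentials
header_value = "Basic " + base64.b64encode(f"{username}:{password}".encode()).decode()

# Anyone who captures the request can trivially reverse the encoding.
encoded = header_value.split(" ", 1)[1]
recovered = base64.b64decode(encoded).decode()
print(recovered)  # jane:s3cret
```

Because the decoding step requires no key or secret, Basic Authentication offers no confidentiality on its own.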
One common authentication protocol is Kerberos, because it is supported by both UNIX and the Microsoft® Windows® operating system (Windows 2000 and later). Kerberos is a network authentication protocol, defined by the Internet Engineering Task Force (IETF), that relies on secret key (symmetric) cryptography. However, the use of Kerberos for authentication with third parties requires trust relationships to be established and access to the Kerberos Key Distribution Center (KDC). The KDC is where the secret keys of principals (users or systems) are kept for encrypting information.
One way of increasing the security of authentication is to use multifactor authentication. This form of authentication is increasingly used in situations where people interact with systems and is commonly referred to as something you know, something you have, something you are, defined as follows:
- Something you know—for example, a password or PIN
- Something you have—for example, a smart card
- Something you are—for example, the unique patterns of your iris (the colored part of your eye)
Requiring two or more of these factors for authentication can dramatically increase the security of any environment. However, this form of authentication is not commonly used in system-to-system authentication.
In some cases, applications require authentication that is not integrated with the operating system. In such cases, you should investigate separately how authentication occurs (is the password sent over the network?) and how the password is stored at the target application (is it maintained in a plain text file?). In circumstances where the requesting application must present a password, you should also ensure that the requesting application stores the password securely.
One other important consideration is whether to use the same password across multiple applications. If you change the password in one location, how will that change be reflected in other locations? If you are using different passwords for each application, how can you ensure that the password information is kept current on each system? In complicated situations such as these, your application integration environment may need the Identity Management capability, which enables credentials for multiple applications to be associated with a single identity.
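The Identity Management capability described above can be pictured as a mapping from one identity to per-application credentials, so that a password change in one place stays current for every connection made on behalf of that identity. A minimal sketch (the application names, fields, and plain-text storage are illustrative assumptions; a real store would encrypt its contents):

```python
# Map a single corporate identity to the credentials each integrated
# application expects. Hypothetical example data for two applications.
credential_store = {
    "jane": {
        "erp": {"user": "jdoe",   "password": "erp-pass"},
        "crm": {"user": "jane.d", "password": "crm-pass"},
    }
}

def credentials_for(identity: str, application: str) -> dict:
    """Return the stored credentials for one identity on one application."""
    return credential_store[identity][application]

print(credentials_for("jane", "crm")["user"])  # jane.d
```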
Two methods of authorization are most commonly used. One method is to perform authorization based on the user entity obtained through authentication. The other is to perform authorization based on user roles.
Authorization based on user identities is becoming less common in many systems and applications available today because it is more cumbersome to manage, particularly in large-scale implementations. In many cases, when a user changes a job role, or begins work on a new project, he or she requires access to an entirely new group of systems. You may need to do a lot of administrative work to give the user permissions that correspond to the new job role.
Role-based authorization works around this problem by allowing you to decouple user identities from the roles the users assume and resources or services they can access. A role is a category or set of users who share the same security privileges. For example, imagine that today Jane is a credit manager who is allowed to view customer credit details, but next month Jane will be transferring to another department as a human resources manager. With role-based authorization, the credit manager role will still be allowed to access the resource, but Jane will not be able to access the resource when she commences her new role.
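The Jane example can be sketched as a role check: access is granted to the role, so reassigning Jane changes what she can do without touching the resource's permissions. The names below are hypothetical and the structure is deliberately minimal:

```python
# Resources grant access to roles, not to individual users.
resource_roles = {"customer_credit_details": {"credit_manager"}}

# Users are assigned roles; a transfer is a one-line change.
user_roles = {"jane": {"credit_manager"}}

def is_authorized(user: str, resource: str) -> bool:
    """A user may access a resource if any of their roles is granted it."""
    return bool(user_roles.get(user, set()) & resource_roles.get(resource, set()))

print(is_authorized("jane", "customer_credit_details"))  # True
user_roles["jane"] = {"hr_manager"}   # Jane moves to Human Resources
print(is_authorized("jane", "customer_credit_details"))  # False
```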
From an application integration perspective, authorization can occur at three different levels:
- System. System-level authorization is the most commonly used type of authorization, where the system protects resources. A typical example of resources is files stored in the file system that are protected by the operating system. System-level authorization is also commonly used for network shares.
- Functional. Functional-level authorization protects resources based on functional ability, which usually ties authorization to specific applications or services. For example, an integration application may expose two services named GetCustomerDetailsInternal and GetCustomerDetailsExternal. The former service can only be called by systems located within the organization (internal systems). The latter can be called by internal systems as well as external systems (possibly from a business partner).
- Data. Data-level authorization provides the finest-grained level of authorization. This capability is usually tied very closely to the business logic of the service. The integration application from the previous example may merge the GetCustomerDetails services into a single smarter service. When internal systems call the resulting service, it provides additional data that it excludes when external systems call the service.
In many cases, these different types of authorization are implemented using different technologies and are located in different layers of the systems. For example, the system-level authorization may be performed and maintained by the operating system; the functional authorization level may be performed and maintained by an integration product; and the data authorization may be custom-coded, because it is usually very closely linked with the business logic and requires detailed knowledge of the data to be protected. However, even though the three types of authorization levels may be implemented in different layers of the systems, all of the principals and access control information may be placed within a single security directory.
Often systems issue unique tokens or tickets after authentication and authorization has been performed. The idea behind token-based security is to allow the system to quickly recognize and trust the requester, which reduces the authentication and authorization overhead. There are a number of different ways to implement tokens, but most of the standard authentication protocols provide token-based security after the initial authentication and authorization. Token-based mechanisms have the advantage that the actual requester's credentials are not always passed around the network.
When you use security tokens, replay attacks can be a problem. These attacks involve the use of specialized software that is able to capture network data packets. The captured packets are then modified and replayed. To protect against replay attacks, you can use rolling tokens, which are changed or renewed within a short interval of time. Limiting the time available to capture, modify, and replay packets greatly reduces the chance of a successful attack. Kerberos specifically addresses the prevention of replay attacks.
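The rolling-token idea, limiting how long a captured token remains useful, can be sketched with an expiry timestamp. This is illustrative only; real protocols such as Kerberos also bind tokens to the communicating parties and use per-request authenticators:

```python
import time
import secrets

TOKEN_LIFETIME = 300  # seconds; a short window limits replay opportunities

issued = {}  # token -> expiry time

def issue_token() -> str:
    """Issue an unguessable token that is only valid for a short window."""
    token = secrets.token_hex(16)
    issued[token] = time.monotonic() + TOKEN_LIFETIME
    return token

def validate(token: str) -> bool:
    """A replayed or forged token is rejected once its window has passed."""
    expiry = issued.get(token)
    return expiry is not None and time.monotonic() < expiry
```

A captured token replayed after `TOKEN_LIFETIME` seconds fails validation, which is what shrinks the attacker's window.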
Security Context Management
When implementing security context management, you need to determine whether to use impersonation, consolidation, or both. Impersonation is not commonly implemented in application integration scenarios because of the difficulty of maintaining user identities across the different applications being integrated; a matching user identity must be present in each of the systems. However, with the increasing availability and capability of identity management systems, this may change. Using the requester's exact credentials to access the various systems makes it easier to trace a request across systems and allows authorization to occur at the systems that need to enforce it. Figure 3.1 shows impersonation using an identity management system.
Figure 3.1. Using impersonation for security context management
If you use consolidation for security context management, a single identity is used to identify the requester to each application. This allows all requesters to have the same level of authority. Composed applications commonly use consolidation because it provides simple user management to the existing systems and a better opportunity for connection pooling capabilities. Figure 3.2 shows consolidation being used.
Figure 3.2. Using consolidation for security context management
You must determine when it is appropriate to use encryption, hashing, and obfuscation in your application integration environment, and how to implement them.
Encryption is often used in an application integration environment to protect application data as it passes across the network. It may also be used to protect user names and passwords if they have to be passed as plain text.
The two main implementation choices for encryption are secret key encryption and public key encryption. One of the biggest issues with secret key encryption is the requirement for the sender and recipient of the data to have the same secret key. The more parties involved in the communication chain, the more places the secret key has to be distributed and the greater the risk of compromising the key. The encrypted data is usually tied to the exact key used to encrypt the information. If the key is changed, existing encrypted data must be decrypted and reencrypted with the new key. For this reason, public key encryption is often a better choice; however, if you do not currently have a public key infrastructure, you must implement one to support public key encryption.
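The key-distribution burden of secret key encryption grows quickly with the number of parties: n parties need n(n-1)/2 pairwise secret keys, whereas public key encryption needs only one key pair per party. A quick calculation makes the difference concrete:

```python
def secret_keys_needed(parties: int) -> int:
    """Every pair of communicating parties must share its own secret key."""
    return parties * (parties - 1) // 2

def public_key_pairs_needed(parties: int) -> int:
    """Each party publishes one public key and guards one private key."""
    return parties

for n in (5, 20, 100):
    print(n, secret_keys_needed(n), public_key_pairs_needed(n))
# 5 parties: 10 secret keys vs 5 key pairs
# 20 parties: 190 secret keys vs 20 key pairs
# 100 parties: 4950 secret keys vs 100 key pairs
```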
Web Services Security
As Web services become an increasingly common communication protocol for application integration, it is likely that the Web Services Security (WS-Security) specification and related extension specifications will emerge as the industry-accepted security communication protocol.
WS-Security is a higher-level protocol, but it does not itself perform authentication; it merely provides a common language for a variety of systems on different platforms. You will need to consider the best mechanism for securing the service while ensuring that the target audiences can participate with minimal or no interoperability issues. For now, you may still need to rely on the combination of SSL, user name and password, and tokens to authenticate Web service requests when using HTTP as the transport mechanism.
Even after you build an application integration environment and it is running successfully, your work does not stop there. The environment will need to be monitored and maintained over time. This section discusses the various operational management considerations involved. Implementing the capabilities listed in this section should help to ensure that your architecture continues to function as smoothly as possible.
Operational management is a large topic that covers a broad range of technical and process-oriented issues. Many books, software, and Web resources discuss the various aspects of operational management. In this guide, the discussion is restricted to technically oriented aspects of operational management, specifically to those that are relevant to application integration.
Note: This guide does not discuss process-oriented considerations; however, effective operations require technology to provide system information, correct interpretation of that information, and processes to ensure adherence to an overall plan. Deficiency in any of these areas leads to poor operational management.
Defining an Operational Management Policy
As with security, an important part of effective operations is to have a predefined operational management policy, which defines how operations occur throughout your organization. If you currently have a policy for operational management, you should make sure that your application integration environment meets the requirements specified there. If you do not have such a policy, designing your application integration environment represents an excellent opportunity to institute a more organized approach to operations.
Your operational management policy, at the minimum, should provide the following information:
- Clearly defined terminologies and target metrics
- Methodology and/or formulas for measuring the metrics to ensure that results are consistent
- Service-level prioritization to ensure that the most important policies are followed first
You should make sure that any policy and service-level metrics clearly define terminology. For example, if you simply state that a system needs to be available 99.999 percent of the time, you may be stating the requirement too broadly. When you examine the situation in more detail, you often find that the business unit requires very high availability only during business operations hours.
Operational Management Services
A number of basic capabilities are usually required to ensure effective operational management in an application integration environment. Table 3.2 shows these capabilities.
Table 3.2: Operational Management Capabilities
|Capability|Description|
|---|---|
|Business Activity Management|Monitors, manages, and analyzes business transactions processed by the integration system.|
|Event Handling|Receives events and acts on them.|
|Configuration Management|Tracks hardware and software configuration.|
|Directory|Maintains information on applications, subscriptions, and services.|
|Change Management|Manages change within the application integration environment.|
|System Monitoring|Determines whether the hardware, operating system, and software applications are functioning as expected, and within the desired operating parameters or agreed-upon service levels.|
For more information about each of these capabilities, see Appendix A, "Application Integration Capabilities."
Defining Your Operational Management Requirements
As with security requirements, the precise operational management requirements of your application integration environment depend on a number of factors, including the following:
- The operational management requirements of your organization
- The business requirements for application integration
- The technical requirements for application integration
- The capabilities of the applications you are integrating
- The platforms on which applications are running
- Budgetary constraints
The following paragraphs discuss operational management considerations that are common to many application integration environments.
Your application integration environment typically involves multiple applications running on multiple computer systems. To keep your environment running successfully, you must monitor the system to ensure that each element of the application integration environment is functioning properly and is meeting its performance goals.
One challenging aspect of system monitoring in an application integration environment is that information often is spread across multiple systems in different geographic locations. It is therefore particularly useful to have a system that can aggregate the performance data in one centralized location where it can be analyzed. Because data aggregation is one of the goals of application integration itself, you can often use the data capabilities of your application integration environment to support your System Monitoring capability.
Your system monitoring should focus on the following areas:
- System and application health. Tracking the health status of the system and the application. A healthy system or application can perform its operations within expected parameters or agreed service levels.
- System and application performance monitoring. Tracking the system and application response times for service requests.
- Security monitoring. Monitoring security related events and audit trails.
- Service-level monitoring. Monitoring the system and application adherence to agreed or predefined service levels.
The following paragraphs discuss each type of system monitoring in more detail.
System and Application Health Monitoring
In an application integration environment, you need to ensure that the operating system, the applications, and the capabilities that facilitate application integration are all functioning properly. In some cases, you may need to design a special module within the integration logic to perform basic diagnostics of all the components it uses, or create a special test case using specific data (called from a probe) that returns a well-known result if the system is functioning properly.
In some cases, your applications may be instrumented and therefore may provide useful information about their health. However, in situations where an application is not generating that data, either because it is not configured to, or because it is unable to, you can get further information by issuing a probing test from an external system or another application within the system. By using regular probes, you can detect unavailable systems and raise an alert. Very frequent probes may affect system or application performance, but infrequent probes mean that you will be unable to detect unavailable systems quickly. The probe interval you set should be based on the maximum acceptable detection delay. For example, if you set probes to occur 10 minutes apart, in the worst-case scenario the alert will not be generated until 10 minutes after the system becomes unavailable.
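The trade-off between probe frequency and detection delay is easy to quantify: in the worst case, a failure that occurs immediately after a successful probe is not detected until the next probe runs. A small helper makes the relationship explicit (the probe-timeout parameter is an assumption added for realism, since a failed probe usually must time out before the alert fires):

```python
def worst_case_detection_seconds(probe_interval_s: float,
                                 probe_timeout_s: float = 0.0) -> float:
    """If a system fails just after a successful probe, the failure is not
    noticed until the next probe runs and, if that probe must time out
    before it is declared failed, until its timeout elapses as well."""
    return probe_interval_s + probe_timeout_s

# Probes every 10 minutes with a 30-second probe timeout:
print(worst_case_detection_seconds(600, 30))  # 630.0
```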
Often system health is thought of simply in terms of whether the system is up or down. In reality, however, health is not just a binary value. Just as human health is not just measured according to whether the person is dead or alive, a computer can be unhealthy and yet still function, albeit at reduced capacity.
For example, if your application is designed and tested to handle 100 concurrent users and respond within 2 seconds for requests, you can consider it healthy if it meets this criterion. If it takes 30 seconds to respond to requests, you will probably consider it to be unhealthy.
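Treating health as a spectrum rather than a binary up/down value can be captured with simple thresholds against the agreed service level. The tiers and multiplier below are illustrative assumptions, not part of the original guidance:

```python
def classify_health(response_time_s: float, agreed_s: float = 2.0) -> str:
    """Classify an application by how its measured response time compares
    with the agreed service level, rather than as a binary up/down."""
    if response_time_s <= agreed_s:
        return "healthy"
    if response_time_s <= agreed_s * 5:   # degraded but still functioning
        return "unhealthy"
    return "critical"

print(classify_health(1.5))   # healthy
print(classify_health(30.0))  # critical
```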
By detecting and diagnosing unhealthy applications early, you can prevent a system or application outage. There are numerous possibilities for systems to become unhealthy during operations, including:
- Virus or malicious attack. A virus usually consumes system resources or alters system behavior. The impact of viruses can range from having no noticeable effect to rendering the system useless. Malicious attacks come in two forms: internal intrusion and external attacks. Internal intrusion is when the attacker tries to gain control and penetrate the system. This form of attack is usually difficult to detect because the attacker intends to be discreet. A problem may not be apparent from a system and application health perspective, even though a security breach has occurred. External attacks usually come in the form of denial of service (DoS). The main purpose of these attacks is to prevent the system from providing services. A system that is hit by a DoS attack can be considered as sick or dead, depending on whether it is still able to process requests. Installing antivirus software and ensuring that it is updated is a good start to battling viruses. However, you should also monitor system requests and probe your applications to ensure that your service levels are being met. If they are not, this is a potential indication that you are undergoing some form of malicious attack.
- Unplanned increase in usage. A sudden and unforeseen increase in usage generally affects externally facing systems that provide services to anonymous service requesters. Such usage increases can render a system sick, because it was never designed to handle the increased load while maintaining the agreed-upon service level. If you impose artificial limits or queuing, you can allow additional load to be handled in a predictable manner. Systems that create new threads to handle requests can be very susceptible to spikes (or storms). If the spikes are large enough to cause the system to create an enormous number of threads, the operating system can become too busy managing the threads to allocate resources to handle the actual requests.
- Failure in resilient systems. To provide fault tolerance in your environment, you may use resilient systems, such as Web farms. If one of the servers fails in this type of environment, the system can still function at reduced capacity. It is extremely important that you detect failed servers and fix them as soon as possible so that the system can return to full capacity. You should make sure that you know exactly how many types and levels of failure your resilient systems can handle before the system itself fails. The majority of Web farms implemented today use ping-level health checks, which means an application can be unavailable while its server still receives requests. In these cases, failures can be very difficult to detect because the service to the end user does not always suffer in a predictable manner.
Integration applications generally rely quite heavily on message-based communications. This type of asynchronous communication provides good scalability, but it can also make it difficult for you to detect failures in communication paths. Most message-based products use dead letter queues to inform the application if any messages fail to arrive or fail to be consumed by the receiving system.
System and Application Performance Monitoring
You should use performance monitoring to ensure that application integration is adhering to agreed-upon or predefined service levels. Performance monitoring also gives you an early indication of potential failures or future capacity issues. The two main areas of performance monitoring that are important to integration applications are:
- System response time measurements
- Resource usage measurements
By measuring system responses for all aspects of a request, you can help to ensure early detection of potential bottlenecks. It is fairly easy to measure performance at a general level, such as the average number of requests per second and the average amount of time it took to process a request. However, one of the more useful items you can track through performance monitoring is delayed responses to requests as they go through the system. Doing so allows you to determine if a delay was the result of a particular request type or other factors. For example, for a purchase order handling system that relies on a number of back-end systems, a particular purchase order that calls back-end system A with more than 100 items may result in an unusually slow response from that system. Without adequate monitoring and tracking, you would find it very difficult to trace the problems based on pattern analysis.
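Tracking latency per request type, rather than only overall averages, is what makes the pattern analysis described above possible. A minimal sketch (the request-type names and threshold are hypothetical):

```python
from collections import defaultdict
from statistics import mean

# Record the processing time of every request, keyed by request type.
latencies = defaultdict(list)

def record(request_type: str, seconds: float) -> None:
    latencies[request_type].append(seconds)

def slow_request_types(threshold_s: float) -> list:
    """Request types whose average latency exceeds the threshold --
    for example, large purchase orders routed to a slow back-end system."""
    return [rt for rt, times in latencies.items() if mean(times) > threshold_s]

record("small_order", 0.4)
record("small_order", 0.6)
record("large_order_system_a", 12.0)   # >100 line items, slow back end
print(slow_request_types(2.0))  # ['large_order_system_a']
```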
Resource usage measurements help you to determine whether your systems have adequate resources to run at their full potential. Inadequate resources can lead to resource contention, where the lack of resources causes degradation to system performance. If you keep historical information on resource usage, you can track and predict increases in resource usage.
You can also use longer-term observations of performance and resource usage tracking to establish useable baselines. This technique is particularly useful if you need to increase performance or reduce resource usage.
Your application integration environment generally requires secure communications between applications within your organization. As mentioned earlier in this chapter, security is vital in all integration application design. However, the security of your systems depends not only on the design and implementation of the software, but also on appropriate monitoring or auditing capabilities. As an example, imagine that an operating system does not provide the ability to track failed logons. It is very difficult to check if someone is trying to attack a particular account, because system operators have no way of detecting such attacks.
At the minimum, your security monitoring should provide the following capabilities:
- Logon audit log. For tracking logon information. You may want to track all logons, but you should certainly track all unsuccessful logons. The integrity of the security audit log is paramount. The log should be stored securely, with its contents available only in read-only mode and accessible to the minimum number of people.
- Data access log. For tracking all access to the data repository. This capability is very important if the original requester identity is used to access the data. As mentioned earlier in this chapter, some systems provide the ability to perform single sign-on or impersonation. The value of the data access log diminishes if the system consolidates the various requester identities into a single identity used to access the data.
- Security policy modification log. For tracking all changes made to the security policy. This capability is important in detecting changes that relax the security policy, including changes due to human error and changes by hackers or disgruntled operators.
- Alert mechanism. For actively flagging suspicious or unusual occurrences. Relying on logging alone is not adequate, because the system can generate a large volume of log information. You should provide a rule-based alerting mechanism that performs critical or important analysis automatically.
One very important part of security monitoring is ensuring that the security logs themselves are secure. You must ensure that security logs are accessible only to authorized personnel and that the information captured cannot be modified. Solutions for protecting the log information can involve storing the information on a read-only device and also providing signatures, as a secondary measure, to allow verification of information integrity.
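One way to make log tampering detectable, in the spirit of the signature approach mentioned above, is to chain entries together so that each entry's signature covers the one before it. The following sketch uses an HMAC chain; the key name and log format are illustrative assumptions, and in practice the signing key would be held on a separate, secured host.

```python
import hmac
import hashlib

# Hypothetical sketch: a hash-chained, signed audit log. Each entry's
# signature covers the previous signature, so altering or removing any
# entry invalidates every entry that follows it.
KEY = b"audit-signing-key"  # assumption: kept off the logging host in practice

def sign(prev_sig, message):
    return hmac.new(KEY, (prev_sig + message).encode(), hashlib.sha256).hexdigest()

def append(log, message):
    prev_sig = log[-1][1] if log else "genesis"
    log.append((message, sign(prev_sig, message)))

def verify(log):
    prev_sig = "genesis"
    for message, sig in log:
        if not hmac.compare_digest(sig, sign(prev_sig, message)):
            return False
        prev_sig = sig
    return True

log = []
append(log, "logon failure: user=alice")
append(log, "logon failure: user=alice")
print(verify(log))                              # True
log[0] = ("logon ok: user=alice", log[0][1])    # simulate tampering
print(verify(log))                              # False
```

Storing such a log on a read-only device, as the text suggests, then becomes a second, independent layer of protection rather than the only one.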
Business Activity Management
Business Activity Management probably offers the greatest potential to clearly demonstrate return on investment to business owners. Nonetheless, it is one of the areas most often left out of integration environments. The short development time of many IT projects means that the monitoring aspects of a system are often designed last, or not at all, because they are viewed as an added benefit rather than a core requirement. To provide efficient and valuable Business Activity Management capabilities, you should ensure that the correct information is captured during design and development.
To provide you with added value in the longer term, your Business Activity Management should at the minimum provide the following capabilities:
- Business transaction exception handling. This capability allows you to handle transactions that generate business-level exceptions. For example, your system has paused the processing of a loan approval because the credit rating of the applicant was borderline. However, after checking manually, your loan officer has decided to approve the loan. Rather than rejecting the application and forcing the applicant to start again from the beginning, your system should provide the capability for the loan officer to reroute the transaction.
- Contextual monitoring. You should be able to track the progress of any business transaction through the process chain. Providing response times at each level of the business step (whether it was processed by a system or by a person) allows you to determine any cause of delays in the process.
- Rules-based alerting. This capability allows you to generate alerts due to business events. This means that you can detect anomalies and potential delays in processing. Early detection of potential delays gives you the opportunity to contact the party that originated the transaction and inform them of the problem in advance, or to fix the problem before it affects them.
- Historical data mining. This capability allows you to capture useful business process information, such as the time taken to process each step, the data sources for the business process, and the next step in the process. You can analyze the information and then modify your business processes if needed. Generally, the more information you can capture the better, because more information means more ways to slice and analyze the data. It is also helpful because you cannot be sure now what information will be useful in the future.
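The rules-based alerting capability from the list above can be sketched as a set of predicates over business events, each paired with the alert it raises. The rule conditions, event fields, and thresholds below are purely illustrative.

```python
# Hypothetical sketch: rule-based alerting over business events. Each
# rule is a predicate plus the alert text raised when the rule matches.
rules = [
    (lambda e: e["type"] == "order" and e["elapsed_h"] > 24,
     "order processing delayed beyond 24 hours"),
    (lambda e: e["type"] == "payment" and e["amount"] > 100_000,
     "unusually large payment: review before settlement"),
]

def evaluate(event):
    """Return the alerts raised by all rules that match the event."""
    return [alert for predicate, alert in rules if predicate(event)]

print(evaluate({"type": "order", "elapsed_h": 30}))
# ['order processing delayed beyond 24 hours']
```

Because rules are data rather than hard-coded logic, business owners can add or tune them as processes change, which is what makes early detection of delays practical at scale.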
In an application integration environment, an event raised by one application often leads to actions in other applications. An unexpected event or exception in one application may lead to the failure of another application, for example. To ensure that your application integration environment is stable, you should have the capability to receive system events from your applications and take action so that other applications and systems react appropriately to maintain service.
In many cases, an exception at the system level leads to particular failures at the business process level. You should therefore have capabilities for dealing with events at the business process level as well. However, it is often useful for the events at the system level to be passed up to the business process level, because a system event may well be the first indication of a potential failure at the business process level.
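Passing system-level events up to the business process level can be sketched as a simple escalation step that maps a failing system to the business processes that depend on it. The system names, event shape, and dependency mapping here are hypothetical.

```python
# Hypothetical sketch: escalate system-level errors to the business
# process level, since a system event is often the first indication of
# a potential business-level failure.
business_alerts = []

def processes_using(system):
    # Assumed dependency map: which business processes rely on which system.
    dependencies = {"credit-service": ["loan-approval", "card-issuance"]}
    return dependencies.get(system, [])

def on_system_event(event):
    if event["severity"] == "error":
        for process in processes_using(event["system"]):
            business_alerts.append(
                (process, f"at risk: {event['system']} reported an error"))

on_system_event({"system": "credit-service", "severity": "error"})
print(business_alerts)
```

Maintaining the system-to-process dependency map is the real work here; without it, a system failure cannot be translated into a warning about the business transactions it puts at risk.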
Change and Configuration Management
Application integration environments are notoriously difficult to manage, because they tend to involve increasing numbers of applications communicating on multiple disparate systems. You should therefore consider the benefits that good change and configuration management bring to your application integration environment.
Implementing change and configuration management effectively is a major project in itself, which can generate significant initial costs. Fortunately, however, you will complete some of the significant work required for effective change and configuration management as you define your application integration environment, such as developing an understanding of how applications communicate with each other and which systems they run on. If you can define the requirements of your change and configuration management system before defining your application integration requirements, you can significantly reduce the costs of implementing change and configuration management.
Your application integration environment may contain multiple directories that contain information about identities, profiles, subscriptions (used in a publish/subscribe scenario), application configuration, and capabilities. Alternatively, this information may all be located in separate parts of a single directory. As you define your application integration environment, you should determine which of this information you need to store and where you will store it.
Many modern operating systems include an extensible directory that can be used to store the information required by application integration. Such a directory can be particularly useful when your organization uses a single operating system. In cases where multiple operating systems are used, it is possible to synchronize each directory into a meta-directory. Alternatively, you may want to store your directory information separately from the operating system and use the capabilities of your application integration environment itself to facilitate replication.
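The meta-directory idea can be illustrated as merging identity entries from several per-system directories into a single consolidated view. This is a toy sketch; real directory synchronization involves schemas, conflict policies, and protocols such as LDAP, and all names below are invented.

```python
# Hypothetical sketch: consolidate entries from several per-system
# directories into a meta-directory keyed by user id. On conflicting
# attributes, the first directory in the list wins (an assumed policy).
def build_meta_directory(directories):
    meta = {}
    for directory in directories:
        for user_id, attrs in directory.items():
            merged = meta.setdefault(user_id, {})
            for key, value in attrs.items():
                merged.setdefault(key, value)
    return meta

windows_dir = {"alice": {"mail": "alice@example.com"}}
unix_dir = {"alice": {"shell": "/bin/bash"}, "bob": {"shell": "/bin/sh"}}
meta = build_meta_directory([windows_dir, unix_dir])
print(meta["alice"])   # {'mail': 'alice@example.com', 'shell': '/bin/bash'}
```

The choice of conflict policy (which directory is authoritative for which attribute) is the central design decision in any real meta-directory deployment.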
An application integration environment cannot function successfully if you do not carefully consider the security and operations requirements you face. These considerations are particularly important because application integration issues are likely to span many departments within your organization. As you define your application integration environment, you should use the opportunity to look again at the security and operations practices within your organization and determine whether they should be modified. If you consider security and operations practices early and give them sufficient emphasis, you will increase the chance of your application integration environment operating securely and reliably over time.