
A Business-Driven Evaluation of Distributed-Computing Models

Salman Shahid
Solution Architect, ZE PowerGroup Inc., Richmond, BC

September 2008

Applies to:
   Distributed Computing

Summary: The strength of any business or technical strategy can be measured by how well it selects a best-fit, differentiating set of activities, makes calculated trade-offs, and ensures strong linkage between its processes, people, technology, and assets. (12 printed pages)

Contents

Introduction
Distributed Computing: A Short Trip Down Memory Lane
Endgame: Business Solution or Technical Excellence?
The Evaluation Framework
Conclusion

Introduction

Demands for better strategic positioning and lean operations are pushing businesses across the globe to inject agility, efficiency, and effectiveness into their business processes and supporting information systems. These information systems are going through radical transition and transformation, and are being affected heavily by the rapidly changing technological landscape. The exercise is no longer just a matter of technology-capability evaluation; it requires deeper insight into the business goals, objectives, and associated metrics. Distributed-computing models have come of age, and the real challenge now lies in connecting the dots all the way from business strategy to effective utilization of these models, to fulfill operational needs effectively.

Distributed Computing: A Short Trip Down Memory Lane

For the last couple of decades, distributed-computing models as an architectural choice have been playing a critical role in serving up the mission-critical needs of business processes across the entire value chain. Recent ICT advances have blurred the boundaries between platforms and platform services, and have opened up all kinds of possibilities at both the micro and macro levels. Technology decision makers and evaluators have extensive ground to cover when it comes to choosing best-fit distributed-computing models.

Various definitions exist to describe the notion of distributed computing; each of them is geared toward a specific audience. In its simplest form, it is a deployment model in which various parts of an application run simultaneously on multiple computing engines. Having a distributed deployment model in turn requires the availability of application platform services that pertain to networking and communications, in addition to standard application features.

Historically, the driving factors behind using a distributed-computing model have been about improving resource utilization. Legacy client-server architecture has been the basis of major operating systems, including UNIX, Linux, and Windows, as well as the enabling foundation of many Internet services and protocols, such as HTTP, FTP, and DNS.

With the advances and economies of scale around infrastructure and communication technologies, the ease of managing complex information systems has lately been the major force behind the maturity and broad acceptance of distributed-computing models. Component-based deployment, standardized interfaces, and object-based message exchanges provided application-level integration on a much wider scale. Remotely accessible component models and frameworks, such as CORBA, COM, DCOM, RMI, and EJB, as well as message-oriented middleware and APIs, such as MQSeries (IBM), JMS (Sun), and MSMQ (Microsoft), have dominated this era. The ability to perform distributed transactions and asynchronous message transfer has been the hallmark of this model.


Figure 1. Evolution of distributed-computing models

Ubiquity of TCP/IP-based networks and breakthroughs in wireless technologies further pushed the envelope, making the execution of distributed computing possible all the way from an IP-enabled device firmware to a cluster of enterprise-grade relational databases. New integration and collaboration standards have been developed, giving rise to new ways of performing distributed computing that use SOAP- and REST-based Web services, AJAX, JSON, RSS, and various other proprietary remoting technologies. SOA emerged as a business-driven architecture, providing a notion of software services that are totally independent of locality and infrastructure. New frameworks for distributed computing emerged to leverage the most effective use of computing resources. SCA, SDO, grid computing, peer-to-peer computing, OSGi, Bluetooth, and ZigBee are a few examples in a continuously expanding range of frameworks and protocols that provide real business solutions that use concepts from distributed computing.

What we have seen here is a gradual decoupling of business services that a software solution offers from the underlying infrastructure that it uses. The trend has been to execute the software functionality where it makes most sense from an operational point of view, while ensuring that the location of the end user or system is not a hindrance in using the functionality effectively. Business processes have been heavily dependent on information systems, and every strategic and tactical initiative tends to test the agility and effectiveness of these systems.

With the exception of classical single-user desktop applications, all mission-critical enterprise systems execute in a distributed mode to some degree. This distribution can be physical, as well as logical, and keeps requiring additional network-based intelligence, as the model gets more complex. Different business scenarios need different kinds of computing models, each one having a varying degree of distributed processing. Additionally, the platform services that are offered by each one of these models differ in number, as well as quality, and it is the business-process context that decides which aspects of the model are mandatory and which ones are optional.

Endgame: Business Solution or Technical Excellence?

Information technology as a business enabler is responsible primarily for three main tasks:

·       Fulfilling day-to-day operational needs by using efficiently managed information systems.

·       Incrementally adding functionality on top of an existing system to augment the effectiveness of business processes.

·       Maintaining strategic alignment with the current and new business process by providing an agile architectural base.

Most of the changes that are happening apart from the operational needs are either of a transitional or transformational nature and require significant input from the business side. This input can come in various ways, but it should include at least these three sources:

·       A long-term business-strategy map that outlines current positioning and a desired future state.

·       A midterm business-execution plan that entails core business capabilities that must be developed and maintained.

·       A transition plan that includes governance and change-management procedures to enhance the current capabilities. This should include a technology-program portfolio that pertains to key business activities.

Because the end goal is a value-adding business activity that requires a technically strong foundation, the next logical step would be to bring in the enterprise architects to bridge the gap between business goals and solution requirements. This is a critical stage, as any "impedance mismatch" here has long-term and wide-scale consequences. Because technology-based process differentiation can be a key component in achieving a lean value chain, it is imperative at this stage to have a broad view of the business and technology landscape.

Massive commoditization and ubiquity of technology mean that, in order to create and sustain a strategic advantage, architecture has to play a critical role. Given a range of standardized distributed models, commercial and open-source development tools, and global access to human expertise, the competency advantage now translates to how quickly and efficiently information-system architecture can deliver the required business capability.

The Evaluation Framework

Evaluating the business effectiveness of a distributed model is part of a still evolving enterprise-architecture domain. Considered more of an art than an engineering process, this is common territory that is served by the likes of business strategists, business architects, business analysts, enterprise architects, solution architects, system architects, and technical architects, to name a few.

While each of these roles has a different focus, the task at hand is largely the same: to ensure that the required business capabilities are served up in the most efficient and effective manner. A simple and effective framework is to analyze the model and its adoptability across business processes, technology fit, availability of human resources, and corporate assets.

Have Insight into Your Business, Processes, and Activities

Before the technical merits and demerits of a distributed-computing scenario can be discussed, it is imperative to have the business context set up. There are some fundamental questions that must be answered, and they include—but are not limited to—the following:

·       What business are you in?

·       How are you strategically positioned?

·       What has worked for you in the past?

·       Is the business process a primary or supporting activity in the value chain?

·       What level of information sharing must be available across different lines of business?

·       How would the business process be affected after the desired change, and how are these changes going to be measured?


Figure 2. The Porter Value Chain Model can be used as an aid to help understand the positioning of a business process and an associated technical-solution space.

The existence of a governance framework can greatly accelerate the evaluation and decision-making process. Many organizations might be going through the transformation exercise and might not yet have a formal structure defined in this regard. This should not be held as a disadvantage, as long as there are processes in place around value generation and risk management.

The value-generation process is what the enterprise and solution architects should be dealing with; their activities are focused around translating loosely defined business goals into "SMART" business objectives (Specific, Measurable, Actionable, Realistic, Time-bound). There is no point in starting a project without hammering out concretely defined business objectives.

For example, improvement of portability and scalability is a coarsely defined business goal, and corresponding objectives to achieve this goal might include the following:

a)  Applications that adhere to open systems standards will be portable, leading to increased ease of movement across heterogeneous computing platforms. Portable applications can allow sites to upgrade their platforms as technological improvement occurs, with minimal impact on operation.

b)  Applications that conform to the model will be configurable, allowing operation on the full spectrum of required platforms.

Another example of a business goal might be to improve security, which can be achieved by defining a concrete objective statement, such as: “A 25 percent reduction in help-desk calls that are related to a security issue” or “Application deployment can use the security policy and mechanism around authentication and authorization that is appropriate to the particular environment, if there is a good layering in the architecture.”

Agility, effectiveness, and efficiency are the required generic attributes around any business activity. Agility translates into how quickly a change in the business scenario can be implemented by modifying or creating a new process with the aid of technology that is mastered and managed by the people within the constraints of available assets. Mergers and acquisitions are a common reality in the corporate world these days; a properly aligned technology-service portfolio need not go through painful transformational processes, as services get consolidated and integrated.

Effectiveness would mean selecting the right process steps, activities, and collaboration levels, as well as choosing the right technology and people with the appropriate skill set to implement it, ensuring that the end business goal gets achieved and the process generates the desired result as part of the value chain. Many case studies document failures or extensive delays in massive rollouts of CRM and ERP deployments, even though best-of-breed solutions were being used. The common thread in all of these unsuccessful attempts was the fact that business activities were forced to adopt the methodologies that were provided by these solutions, without asking why they were being done in a certain way or what would be the impact of changing it.

Operational efficiency ensures optimal use of human resources and technology in order to sustain a lean value chain. One way of achieving it would be to refine the process iteratively, consolidate information systems, and constantly retrain human resources to drive the effort. Operating-system virtualization is a classical example of attaining operational efficiency across technical, financial, and human-resource levels.

Understand the Technology Landscape

After these objectives are fleshed out and agreed upon, the next step is to establish the bridge between business objectives and the required technical services to provide a complete business capability. The focus should be on understanding the current technical-capability landscape and performing a gap analysis to ascertain what additional capabilities are required. This is the stage at which most companies are involved in either transitioning or transforming their information systems, and need the most help to understand the value and risk that are associated with the available distributed-computing models. The real challenge lies in ascertaining the optimal degree of distribution that can be balanced with the management overhead, while still providing the required business effectiveness and agility. Some of the common scenarios that are faced by companies include:

·       Opening up mainframe-based transactional back-office legacy systems to other corporate systems.

·       Establishing real-time information flow between field operations and the corporate office.

·       Enhancing the efficiency of a business process by introducing notification and workflow services as part of an overall process automation.

·       Developing customer-facing online applications that provide self-service operations.

·       Enabling "boundary-less information flow" across the entire supply chain.

·       Consolidating applications to improve efficiency, and generating additional value by creating composite applications.

·       Acting as a technical service broker, and providing an expanded range of remote services either as SaaS or ASP.

·       Employing business-intelligence solutions to fulfill the operational and analytical needs of key decision makers around the business process.

·       Integrating existing corporate systems, such as CRM and ERP packages (for example, SAP), where doing so makes the most sense to obtain strategic advantage.

·       Introducing a Master Data Management (MDM) system to consolidate various transactional data islands across the enterprise to provide “a single version of truth.”

Driving the solution architecture to deliver concrete solutions for these business objectives is the main goal here. Each of these business challenges requires some aspect of distributed computing; despite differences in implementation tactics and quality attributes, there is a common set of platform services that the underlying model must provide. In its most generic form, the technical reference model would comprise three main building blocks: Applications, Application Platform, and Communications Infrastructure, where a standard set of interfaces would exist to glue them together.


Figure 3. Generic architectural building blocks

The key to leveraging the maximum architectural advantage from a distributed-computing environment is to minimize the variations in the communications-infrastructure interface standards, while retaining the diversity in the offering of platform services. This generally means looking for service diversity at the higher OSI layers, while looking for more consolidation at the lower layers. A case in point would be the business challenge that a utility company faces in executing automated collection of meter data. All kinds of metering devices and proprietary protocols are out there, and it would be challenging for any collection engine to accommodate them all. A better solution would be to phase out legacy meters gradually in favor of IP-enabled meters, thus reducing the need for protocol conversions and data transformations.
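While legacy meters are being phased out, the collection engine still has to accommodate both worlds. A common tactic is an adapter layer that converts each proprietary protocol into one canonical reading format at the edge. The following is a minimal Python sketch of that idea; the class names, the watt-hour legacy format, and the hard-coded sample values are illustrative assumptions, not any particular vendor's API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class MeterReading:
    """Canonical reading format that the collection engine consumes."""
    meter_id: str
    kilowatt_hours: float

class MeterAdapter(ABC):
    """Common interface that hides each device's wire protocol."""
    @abstractmethod
    def read(self) -> MeterReading: ...

class IpMeterAdapter(MeterAdapter):
    """IP-enabled meters already report in the canonical unit; pass through."""
    def __init__(self, meter_id: str, kwh: float):
        self.meter_id, self.kwh = meter_id, kwh
    def read(self) -> MeterReading:
        return MeterReading(self.meter_id, self.kwh)

class LegacyMeterAdapter(MeterAdapter):
    """Hypothetical legacy meter that reports watt-hours over a proprietary
    protocol; the adapter converts units so the engine sees one format."""
    def __init__(self, meter_id: str, watt_hours: float):
        self.meter_id, self.wh = meter_id, watt_hours
    def read(self) -> MeterReading:
        return MeterReading(self.meter_id, self.wh / 1000.0)

def collect(adapters):
    """The collection engine never branches on device type."""
    return [adapter.read() for adapter in adapters]
```

As adapters for legacy devices are retired, only the pass-through path remains, which is the consolidation at the lower layers that the text argues for.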

The next step would be to investigate the platform services that are required to provide the necessary capability. These platform services eventually would become the building blocks for two broad categories of applications:

a)   Business applications—Implementing business processes for an enterprise or vertical industry. These applications enable the execution of core business processes or primary activities in the value chain and are critical for establishing a strategic edge for a business operation. These core competency applications, owned by specific lines of business, are heavily focused on generating business value, and their operational health cannot be compromised. Inventory-management systems, commodity-trading systems, and risk-management systems are examples of these primary activities. These applications get most of the customization attention and usually are under constant upgrade.

b)  Infrastructure applications—Providing general-purpose business functionality using infrastructure services. These are normally used as shared services across different business areas. User interaction and interoperability are a common requirement for these applications; not being part of the primary activities in the value chain, they often are purchased as off-the-shelf solutions. Common examples might include corporate Wikis, Project Management tools, Calendaring and Scheduling Systems, Messaging and Notification Systems, Workflow Services, and so on.

Irrespective of the category to which a required solution is going to belong, a minimum set of platform services is required and should be provided by the computing model. It should be noted that the process of identifying platform services is industry- and business-specific and might differ per scenario.


Figure 4. Information systems and generic platform services

These are top-level service categories, and various other types of technical services can be developed by using a combination of any of these. From the perspective of a distributed model, certain services have more emphasis than others. These include Data-Interchange, Location and Directory, Security and Systems, and Network-Management services.

Data-interchange services include data-transformation, conversion, filtering, processing, publishing, distribution, and synchronization functions. Consider the example of using a homegrown UNIX-shell–based document-management system. Would it also readily scrape the data off of a Microsoft Office 2007 suite of applications, convert it into a canonical XML, and publish it as a formatted XHTML?
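The transformation-and-publishing pipeline described above can be sketched in a few lines of Python using only the standard library. The record fields and the XHTML rendering below are illustrative assumptions; a real canonical schema would be agreed upon across the enterprise.

```python
import xml.etree.ElementTree as ET

def to_canonical_xml(records):
    """Convert source records (here, plain dicts) into a canonical XML form
    that downstream systems can consume uniformly."""
    root = ET.Element("documents")
    for rec in records:
        doc = ET.SubElement(root, "document", id=rec["id"])
        ET.SubElement(doc, "title").text = rec["title"]
    return ET.tostring(root, encoding="unicode")

def to_xhtml(canonical_xml):
    """Publish the canonical form as a formatted XHTML fragment."""
    root = ET.fromstring(canonical_xml)
    items = "".join(
        f"<li>{doc.findtext('title')}</li>" for doc in root.findall("document")
    )
    return f"<ul>{items}</ul>"
```

The point of the intermediate canonical step is that each new source format needs only one converter into canonical XML, rather than one converter per publishing target.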

Location and directory services would include functions such as name resolution, registry service, brokers, and query services. An implementation scenario would be the need to choose between UDDI registry and ESB-embedded registry functionality for service lookups and location discovery.
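Whichever registry technology is chosen, the core contract is the same: services register an endpoint under a logical name, and consumers resolve that name at run time instead of hard-coding locations. A minimal in-memory sketch of that contract, with hypothetical service names, might look like this in Python:

```python
class ServiceRegistry:
    """Minimal name-resolution sketch: a stand-in for a UDDI or
    ESB-embedded registry, reduced to register/lookup semantics."""
    def __init__(self):
        self._endpoints = {}

    def register(self, name, endpoint):
        """A provider advertises an endpoint under a logical name."""
        self._endpoints.setdefault(name, []).append(endpoint)

    def lookup(self, name):
        """A consumer resolves the logical name at run time; a real
        registry might load-balance instead of returning the first entry."""
        endpoints = self._endpoints.get(name)
        if not endpoints:
            raise LookupError(f"no endpoint registered for {name!r}")
        return endpoints[0]
```

Because consumers see only logical names, a service can be redeployed to a new host, or swapped between on-premise and remote provisioning, without touching consumer code.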

Given the nature of distributed computing, availability of security services must be given top priority. Functional decomposition over physically distributed nodes means that a lot more data would be exposed on wires for longer periods. Centralized management of authentication and authorization services will be more of a requirement in a distributed world, and the platform should provide all of the necessary services to accommodate that.

A case in point would be a scenario in which a multitier J2EE application that is running on Solaris would have to authenticate a user by using Windows Active Directory, which also would control the application access level. Questions about securing data on the wire also must be answered. What types and level of encryption and digital certification does the model support? Does the platform have the ability to support extended security models, such as Kerberos tickets or Sandboxed execution?
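The layering that the security-objective example earlier calls for amounts to coding the application against an authentication-and-authorization interface, so that the concrete directory (Active Directory over LDAP, a local store, and so on) can vary per environment. The sketch below is an illustrative abstraction only; the in-memory directory stands in for a real store, and the user data is invented.

```python
from abc import ABC, abstractmethod

class Authenticator(ABC):
    """The application depends on this interface, not on any one directory."""
    @abstractmethod
    def authenticate(self, user: str, credential: str) -> bool: ...
    @abstractmethod
    def roles(self, user: str) -> set: ...

class InMemoryDirectory(Authenticator):
    """Stand-in for a real directory service; a production deployment
    would substitute an implementation backed by, e.g., LDAP."""
    def __init__(self, users: dict):
        self._users = users

    def authenticate(self, user, credential):
        entry = self._users.get(user)
        return entry is not None and entry["password"] == credential

    def roles(self, user):
        return set(self._users.get(user, {}).get("roles", []))

def can_access(auth: Authenticator, user: str, credential: str, required_role: str) -> bool:
    """Application-level access check, directory-agnostic."""
    return auth.authenticate(user, credential) and required_role in auth.roles(user)
```

With this layering in place, the J2EE-on-Solaris application in the scenario above would simply receive an Active Directory-backed implementation of the same interface at deployment time.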

Distributed systems tend to put a considerable degree of intelligence onto the network stack. Nodes, interfaces, and communication networks provide the backbone that makes distributed processing possible. The dependency chain in distributed computing usually is very deep, and there are potentially multiple points of failure all along. Having robust and real-time insight and control into the entire network is a must for smooth operation and business continuity.

The ability of distributed platforms to be in harmony with network- and systems-management tools must be evaluated and ranked. An example would be a scenario in which a company that is purchasing an ESB as part of an SOA stack must know if the ESB would support Microsoft System Center Operation Manager. If it is not supported, what is the alternative, and how would it affect the tactical goals and operational objectives that are defined to support the business use case?

Various other technical capabilities also must be looked into, including the support for messaging backbone, integration capabilities with other corporate systems, compliance observance, platform independence, scalability, transaction processing, and support for new and emerging technologies.

Select the Service-Delivery Model

Business processes and corresponding architectural building blocks eventually will be realized with concrete solutions. With the rapid adoption of SOA as an architectural approach and maturity of the platforms and tools around it, it has become a de facto standard to which all other architectural activities would adhere. Leveraging legacy applications, enabling distributed processes, creating composite applications, and facilitating application-portfolio consolidation are few of the business cases in which SOA is making an immediate impact. Because SOA opens up the possibility of utilizing a packaged unit of functionality (service) in a location-independent way, the potential and push for having an “extended enterprise” are obvious.

This notion of "boundary-less information flow" requires an understanding of the attributes of available service-delivery models, as well as how they would relate to both the business processes in question and the governance model that is being followed. The notion of delivery models used to be attached to applications; but a fresh look must be taken, given the direct consequences of having tight coupling between remotely and internally provisioned services within the context of SOA.

Topographically, services can be either deployed on-premise or accessed from a remote source. Various models exist to categorize remotely accessible services, including Software as a Service (SaaS), Cloud Computing, Cloudware, Platform as a Service (PaaS), and Web-oriented architecture (WOA). Each of these can serve a somewhat different concern, but the common thread remains that none of them is deployed and managed by the service-consumer organization. Infrastructure availability, resource provisioning, and quality of the service become the service provider’s concerns.

Simultaneous availability of these different service-delivery models has sparked the debate as to which one is the most suitable. Having all of these options is welcome, but it taxes the decision-making process. Overall economic scenarios, market forces, ICT advances, and big software vendors all are turning out to be major drivers in tilting toward one model or another. The ultimate factor, however, still remains the customer's business and technical environments, and how well a given delivery model fits with its strategic and tactical vision.

From an SOA governance perspective, the choice between these two service-access scenarios can be determined largely by following these three steps:

·       Horizontally categorizing services for belonging to either primary or supporting activities within the value chain.

·       Vertically categorizing services as being business services or data services.

·       Qualitative assessment of risk versus value that is associated with remotely accessing the services, and ranking them for the suitability of remote access.
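The three steps above can be combined into a simple ranking sketch. The weights and the sample service records below are purely illustrative assumptions; a real governance exercise would calibrate them against the organization's own risk appetite.

```python
def remote_access_suitability(services):
    """Rank services for remote access: supporting-activity data services
    score highest; high-risk, primary-activity business services lowest.
    Weights are illustrative, not prescriptive."""
    def score(svc):
        s = 0
        s += 2 if svc["activity"] == "supporting" else 0   # horizontal category
        s += 1 if svc["kind"] == "data" else 0             # vertical category
        s -= svc["risk"]  # qualitative risk rank, e.g. 0 (low) to 3 (high)
        return s
    return sorted(services, key=score, reverse=True)
```

Run against a hypothetical portfolio, a low-risk currency-lookup data service would rank ahead of a high-risk trade-capture business service, matching the guidance in the paragraphs that follow.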

Depending on where in the value chain a software solution is being used, one gets some pointers as to what kind of delivery model to choose. Business activities and associated applications that are part of primary processes are good candidates for on-premise deployment models. This is to ensure the highest level of customization and product/service differentiation in the entire value chain. The heavy entry cost of in-house development can be amortized over time by providing an enhanced product/service level. Supporting business processes that do not directly add revenue for the organization are suitable candidates for a variation of the remote service-access models. Low entry cost with acceptable service levels is enough of a reason to get into a "pay-as-you-go" mindset for activities that are not directly generating revenue for a company.

Services that are more focused on processing of data given complex business rules are good candidates for on-premise deployment. This will ensure the flexibility and efficiency of the underlying business process that they are servicing. Ease of integration of business rules, as well as customizability of algorithms, are major requirements for retaining strategic advantage in the process chain. This level of service requires deep vertical-domain knowledge that is developed over time and accessible only in-house. Services that are involved with data retrieval and lookups are better candidates for remote access, given their cost-efficiency. This also alleviates a service-consumer organization having to manage data islands in-house in order to support its core business processes.

Another factor that influences the choice between on-premise versus remotely consumed services is the degree of reusability of data that is acquired via these services across other information systems in the enterprise. Data-integration needs, either at the application or database level, might exist, and it makes more sense to have services deployed on-premise to support these EAI initiatives better. On-premise deployed services have an entire corporate landscape available to them and can readily integrate with legacy systems that might not expose themselves as easily as classical Web services. Remote services still can be used in this scenario as dependent services (that are not available in-house) that fill in the data or process gaps.

Whatever the category to which a service belongs, the SOA governance model should enforce that a risk profile be associated with it. The risk profile should rank every service, based at least upon the following activities:

·       Business-impact analysis, in case of service unavailability. This can be ascertained in terms of RTO (Recovery Time Objective) or RPO (Recovery Point Objective), and it should take into account the degradable level of service that a business process would accept as an alternative. It should be noted that because of the deeper dependency chain that SOA-based services exhibit, the unavailability of a single service can halt more than one process! Ideally, putting a dollar figure on the revenues that are lost because of service unavailability would bring things into clear light.

·       Security and compliance requirements for certain services would provide critical indicators as to which services can be accessed over the cloud without compromising the protection of information assets (for example, in organizations that deal with personal and financial data). The health and banking industries are more sensitive to the secure handling, processing, dissemination, and archiving of the business data with which they deal; they must ensure that the same level of due diligence is taken by a remote service provider when dealing with the data as would happen on-premise. Various compliance standards are coming up, with SAS70 being the prominent one, to provide a reasonable assurance to the consumer organization about the process maturity of the service provider. Although they reduce the risk profile of a service to some extent, these standards should not be taken as final assurance on the service availability and quality, but instead should be used as an assessment tool as part of an overall SLA.

·       Assurance of a minimum level of service quality by the provider must be part of an SLA. This can be ratified by implementing a beta program that tests out the service performance within the given constraints. Average response time, concurrent accessibility, service uptime, data throughput, service reliability, transaction handling, and security and logging controls are some of the metrics around which an evaluation can be made.

·       As part of a service life-cycle management program, services must be versioned, deployed, managed, monitored, and retired as part of a defined process. Test, staging, and production versions of services must be made available on separate environments, and a migration strategy should be developed. This process is not limited to remotely accessible services, but should be implemented also as part of governance process for internally developed, deployed, and consumed services. Help is available in the form of some service-delivery management frameworks, such as ITIL. The maturity of this process would help one to ascertain the expected reliability of the service. All other things being equal, if service life-cycle management processes are nonexistent or immature, but the business process is critical, it is time to look for stable, matching, and cost-effective service availability in the cloud.

·       Service integration and interoperability is another important factor to determine, especially in the case of remote access. Although SOAP- and REST-based Web services are accepted as standards in terms of protocol and service contracts, there are underlying differences that can affect the ease of integration of these services into an enterprise backbone. This is mainly due to the rapidly evolving standards around Web services, as well as a lack of availability of tools and platforms to match the pace of evolution. The SOAP-versus-REST debate, SOAP-version differences, WS-* specs versus WCF standards, and document-literal versus RPC-encoded issues have added additional challenges, as well as opened new opportunities.

·       Beyond classical SOAP-based Web services, the notion of so-called "WOA" promises ubiquitous access to the "service cloud" by rich Internet applications (RIA) using features that include AJAX, JSON, Flash Remoting, and a new breed of rich Internet clients, such as Microsoft Silverlight, Sun JavaFX, and Adobe AIR—facilitating direct service access via the presentation layer. It is a question of how ready internal applications are to consume these services, given that each one of these offerings might be using a different interface and data model. It is important to assess service offerings on these technical standards to ensure cost-effective interoperability with internal systems. Obviously, it is a lot easier to resolve these issues at either end, given on-premise deployment of these services.

·       The service cost and licensing model is another parameter to examine when deciding between locally and remotely deployed services. Remote services can be metered and charged in various ways, including per access, per user, per concurrent access, or on a perpetual, monthly, or annual license basis. Understanding the usage patterns of the systems that consume these services will help determine which pricing model to choose. It might turn out that heavy usage of a service makes it unfeasible to purchase as an offsite-only offering. Note that there might be additional costs associated with customization, training, and specialized infrastructure needs, such as extra storage. All of this would mean either internalizing the purchased services or developing and deploying them in-house afresh. The cost of service development, deployment, and management must be taken into consideration; it would go down considerably if a mature software-development and support model were already in place in-house.
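To make the cost trade-off concrete, here is a rough break-even sketch comparing a per-access metered remote service against in-house hosting. All figures and function names are hypothetical illustrations, not taken from any specific provider's price list:

```python
def monthly_cost_remote(calls_per_month: int, price_per_call: float) -> float:
    """Metered remote service: pay per access, no fixed cost."""
    return calls_per_month * price_per_call

def monthly_cost_inhouse(fixed_hosting: float, dev_amortized: float) -> float:
    """In-house service: fixed hosting plus amortized development/support."""
    return fixed_hosting + dev_amortized

def breakeven_calls(price_per_call: float, fixed_hosting: float,
                    dev_amortized: float) -> int:
    """Call volume above which in-house becomes cheaper than metered remote."""
    return int((fixed_hosting + dev_amortized) / price_per_call)

# Hypothetical figures: $0.02 per call vs. $500/mo hosting + $1,500/mo
# amortized development and support for the same capability in-house.
threshold = breakeven_calls(0.02, 500.0, 1500.0)  # 100,000 calls/month
```

In this illustrative scenario, the metered offering stays cheaper until usage crosses roughly 100,000 calls per month; beyond that volume, the fixed cost of the in-house deployment amortizes better, which is exactly the "extensive usage" situation described above.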

Given the flux in the SOA landscape, and in distributed computing in general, it is safe to assume that most enterprises will be using a hybrid model for service utilization at this point in time. Depending on the maturity of internal development and support activities, services that have high-risk profiles will stay in-house, while services that do not require a high degree of mitigation and contingency can be consumed remotely.
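One way to operationalize this hybrid placement is a simple decision rule that weighs a service's risk profile against in-house support maturity. The scales and thresholds below are illustrative assumptions, not a prescription from the article:

```python
def place_service(risk: int, internal_maturity: int) -> str:
    """Suggest where a service should run (illustrative rule of thumb).

    risk: 1 (low) to 5 (high) -- business-risk profile of the service.
    internal_maturity: 1 (ad hoc) to 5 (optimized) -- maturity of in-house
    development and support processes.

    A high-risk service stays in-house only when internal maturity can back
    it; otherwise a stable managed offering is the safer bet. Low-risk
    services are candidates for remote consumption outright.
    """
    if risk >= 4:
        if internal_maturity >= 3:
            return "in-house"
        return "remote (managed provider)"
    return "remote"
```

A real placement decision would, of course, fold in the cost, licensing, and interoperability factors discussed in the preceding list rather than two scalar scores.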

As utility computing matures, the scalability requirements of services, along with the cost differential, will eventually push enterprises to relieve themselves of having to host and manage services and infrastructure. To make the best use of these rapidly emerging utility-computing models, it is imperative that in-house systems rest on a solid architectural foundation: the ability to develop and maintain loosely coupled components and expose them through standard interfaces.
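That architectural foundation can be sketched as components coded against an abstract contract, so that a local implementation can later be swapped for a utility-computing one without touching consumers. The interface and class names below are illustrative only:

```python
from abc import ABC, abstractmethod

class PricingService(ABC):
    """Standard interface; consumers depend only on this contract."""
    @abstractmethod
    def quote(self, symbol: str) -> float: ...

class InHousePricingService(PricingService):
    """Locally hosted implementation backed by an internal data source."""
    def __init__(self, table: dict):
        self._table = table

    def quote(self, symbol: str) -> float:
        return self._table[symbol]

class CloudPricingService(PricingService):
    """Placeholder for a remote, metered implementation of the same contract."""
    def quote(self, symbol: str) -> float:
        raise NotImplementedError("would call the remote provider here")

def report(service: PricingService, symbol: str) -> str:
    # The consumer is indifferent to where the service is actually hosted.
    return f"{symbol}: {service.quote(symbol):.2f}"
```

Because `report` depends only on the `PricingService` contract, migrating from the in-house implementation to a cloud-hosted one is a configuration change rather than a rewrite, which is the loose coupling the paragraph above calls for.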


Figure 5. Time and Users versus TCO

Know Your People

Adoption of any distributed-computing model is a strategic decision, and it has a direct impact on the efficiency and effectiveness of the business service that is enabled by the information systems using that model. Adoption will not happen in a vacuum; it requires a robust support model from a human-expertise perspective. The range of stakeholders is wide and includes people from the business as well as the technical side. Executive and business-line managers will be concerned about the risks associated with adopting a specific platform, including the maturity of the model, the availability of a support system around the platform, and the ready availability of experienced and knowledgeable human resources to drive the solution implementation.

An architecture team should be concerned with the generic capabilities that a specific platform provides, as well as with its current and future architectural suitability. Software developers and project managers must focus on the extent to which the model supports sound software-engineering practices. They must be able to answer critical questions about the efficiency and agility of the software-development life cycle, as well as ensure that the model will satisfy the user-interface requirements of end users.

Watch Your Wallet

The scale of adoption of a distributed-computing model must be scoped properly. This helps in the cost-effective rollout of the new technology and the achievement of an early ROI. Various ways exist for slicing and dicing the enterprise to support the adoption of a distributed platform: a pilot-project–driven rollout across geographical boundaries, a line of business, or a specific domain. The mantra should be to fail early and frequently, while keeping the cost of failure to a minimum.

It is imperative to estimate the desired ROI and TCO as early as possible, and to use results from pilot projects to fine-tune them. Establishing a quantifiable link between business and financial objectives is a challenging but much-needed exercise for sustaining proper funding for a complex, wide-scale technology solution. It is bound to help answer tough questions, such as: What would the business gain by adopting enterprise-wide SOA? Framework and tool support in this area is still maturing, with Control Objectives for Information and related Technology (COBIT), Val IT, and Balanced Scorecard (BSC) techniques seeing high adoption, alongside homegrown Microsoft Office Excel–spreadsheet templates.
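As a starting point, even spreadsheet-style formulas, refined with actuals from each pilot project, keep the business case quantifiable. The functions below implement the standard simple-ROI and TCO definitions; the input figures are hypothetical:

```python
def tco(capex: float, annual_opex: float, years: int) -> float:
    """Total cost of ownership over the evaluation horizon."""
    return capex + annual_opex * years

def roi(annual_benefit: float, capex: float, annual_opex: float,
        years: int) -> float:
    """Simple ROI: net benefit divided by total cost, as a fraction."""
    total_cost = tco(capex, annual_opex, years)
    return (annual_benefit * years - total_cost) / total_cost

# Hypothetical pilot: $200k rollout cost, $50k/yr support, and an
# estimated $150k/yr benefit, evaluated over a 3-year horizon.
pilot_roi = roi(150_000, 200_000, 50_000, 3)  # ~0.286, i.e., ~29% ROI
```

Replacing the estimated inputs with measured pilot figures each iteration is what turns this from a guess into the quantifiable business-to-finance link described above.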


Figure 6. Using Control Objectives for Information and related Technology (COBIT) to derive controlled execution of solution rollout


Figure 7. Using IT Balanced Scorecard (ITBSC) to measure effectiveness of strategy execution iteratively

Conclusion

The pace of advances in ICT, the explosion of smart consumer electronics, rapidly changing work practices, and the range of available distributed-computing models all combine to present significant challenges to organizations as they try to transition or transform for the better. Architects are expected to provide concrete answers to questions such as: Why would one move away from a working mainframe transaction system to a distributed J2EE environment? Is RFID the right solution for this supply-chain model? Why should we be on the SOA bandwagon? Businesses need help and assurance that the answers to these questions deliver a maximum value proposition and a sustainable risk-mitigation strategy across all domains. Establishing the business context should precede the technical-evaluation stage, to provide the much sought-after link between business strategy and solution provisioning. At the end of the day, the strength of any business or technical strategy will be measured by selecting the best-fit and differentiating sets of activities, making calculated trade-offs, and ensuring a strong linkage between its processes, people, technology, and assets.
