
Enterprise Applications Management

Nagaraju Pappu, Satish Sukumar

Canopus Consulting, Bangalore

August 2007

Summary: Current-generation enterprise applications form a complex IT ecosystem. Enterprise architecture methodologies, SOA, and composite application architectures make it possible to create highly configurable applications that are integrated at the platform level. However, because these architectures permit dynamic, run-time configuration, the post-deployment management of enterprise applications is a very complex activity. This document describes the nature of enterprise applications from a management point of view, and it describes the issues and challenges that need to be considered when designing an enterprise application management framework.

Contents

Introduction
The Implications of a Composite Approach
Service-Oriented Computing and the Service-Driven Economy
Service-Oriented Architectures
SOA in the Context of Management
What Is Expected From an Infrastructure Management Provider?
The Challenge of Enterprise Service Management
(Applications + Infrastructure) Is Not Equal to Enterprise Systems
The Role of Standards
The Important Problem Areas
Engineering Challenges
Operational Challenges
Technical Challenges
Summary and Conclusions
About the Authors

Introduction

Current-generation enterprise systems are complex IT ecosystems that serve mission-critical business functions. The set of applications that constitutes an IT ecosystem is not simply a bunch of independent applications with application-level integration schemes and protocols; instead, the applications share complex interdependencies—they are usually integrated at a platform level instead of at an application level.

Today’s enterprises are technology-driven organizations—the technology enables the enterprise to function as a multi-function, multi-process, and multi-structured organization. Appropriate organization structure and related business processes are designed to serve a business function in the most effective fashion possible.

Enterprise architecture primarily refers to how to design composite, standards-compliant software applications and systems that fit into overall enterprise engineering goals and objectives. Technology has matured immensely in the last few years. Most of the domain problems of yesterday are routinely solved as “implementation” problems today. More importantly, the emergence of standards in the way technology is conceived, created, deployed, and managed has enabled many product and services companies to offer integrated services to end-user enterprises, thus creating many business opportunities.

However, most knowledge about enterprise architecture is primarily limited to how to conceptualize, create, build, and deploy standards-compliant, loosely coupled applications that are integrated at a platform level. There is very little information or knowledge regarding the post-deployment management issues that arise when effectively running a complex enterprise IT system.

Many software engineering methodologies and disciplines address issues only up to the point of verification and validation; they remain silent about post-deployment management complexities and the supportability aspects of the software. As a result, we do not have proper models to estimate the cost and effort of supporting complex application systems.

There are standards, such as ITIL, but they are mainly limited to infrastructure management, and they fall short in the case of complex distributed systems management.

The following issues make the management of enterprise IT ecosystems even more complicated:

  • There is an increased trend toward composite applications and SOA, which enables building highly configurable run-time assemblies. The holy grail of SOA and composite-application techniques is to push the configuration of the system from design and implementation time to post-deployment time. This increases the complexity of managing enterprise systems by at least an order of magnitude.
  • There is also an increased trend toward outsourcing the management of the entire enterprise system. However, an effective outsourcing model requires a carefully designed methodology to define the SLAs, capture the various management challenges, and provide mechanisms to integrate the applications in an overall application/system management environment. There is no unified methodology to manage, measure, and monitor enterprise systems.

The goal of this set of articles is to fill this gap. In this series, we describe a systematic approach to the management of enterprise IT ecosystems.

This article introduces SOA and composite techniques in the context of enterprise applications management. We argue that we must radically change our existing definition of infrastructure—enterprise infrastructure must include all resources that are part of the ultimate service delivery, not just the physical hardware. We also describe the various issues that arise when creating a successful enterprise applications management practice.

The Implications of a Composite Approach

Today’s application life cycle includes everything from planning, budgeting, and acquisition to development and management outsourcing—it no longer means simply “application development life cycle.”

With so much standardization across the board, application development has evolved into “composing” business functions from available services instead of developing applications from scratch. In other words, today’s application development has only two main functions: integrating and configuring the underlying resources. The resources that applications use today are not just hardware and physical infrastructure; today’s applications also use other services, applications, and even systems.

However, this pushes complexity from development time to management time. The development of new applications is now so standardized that it is performed more or less on the fly. But the entire application, together with all the components it uses, becomes a complex beast. Even a small application today uses a database engine, an application server, a Web server, and many other middleware products. Applications can be deployed as multiple instances, and more hardware can be added to them. The interactions between all the dependent components can be a nightmare to manage.
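To make this concrete, here is a minimal sketch of “development as composition.” Every name in it (OrderService, InventoryService, ServiceRegistry, OrderFulfillment) is invented for illustration; in a real platform, the registry and the service implementations would be supplied by the underlying environment (a JNDI or UDDI lookup, for example), and the programmer would write little beyond the wiring:

import java.util.HashMap;
import java.util.Map;

public class CompositionSketch {
    // Existing services, exposed through well-defined contracts.
    interface OrderService { void placeOrder(String item, int quantity); }
    interface InventoryService { int stockLevel(String item); }

    // A toy stand-in for the platform's lookup mechanism.
    static final class ServiceRegistry {
        private static final Map<Class<?>, Object> SERVICES = new HashMap<>();
        static <T> void register(Class<T> contract, T impl) { SERVICES.put(contract, impl); }
        static <T> T lookup(Class<T> contract) { return contract.cast(SERVICES.get(contract)); }
    }

    // "Development" reduces to wiring existing services together.
    static final class OrderFulfillment {
        private final OrderService orders = ServiceRegistry.lookup(OrderService.class);
        private final InventoryService inventory = ServiceRegistry.lookup(InventoryService.class);

        void fulfill(String item, int quantity) {
            if (inventory.stockLevel(item) >= quantity) {
                orders.placeOrder(item, quantity);
            }
        }
    }

    public static void main(String[] args) {
        // Stub implementations; in practice, these already exist in the ecosystem.
        ServiceRegistry.register(InventoryService.class, item -> 100);
        ServiceRegistry.register(OrderService.class,
                (item, qty) -> System.out.println("Ordered " + qty + " x " + item));
        new OrderFulfillment().fulfill("widget", 5);
    }
}

Note where the complexity went: almost none of it is in OrderFulfillment. It is in the registry, the contracts, and the deployed services—all of which must now be managed after deployment.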

Each new advance pushes a problem from a higher dimension to a lower dimension. For example, a decade ago, ERP was considered a business domain problem, but today, it is a routine engineering problem. Similarly, distributed application development was a complex architectural problem a few years ago; today, it is a technology problem.

As new standards emerge, we are better able to consolidate experience and create new sets of terms of reference. These terms of reference eventually become industry standard objects, methodologies, and processes. Today, workflow modeling is simply a question of conforming to one of the many XML standards. As standardization becomes more pervasive, the “uniqueness” of a problem vanishes mainly because standardization pushes a set of unique problems into a set of generalized problems. As the problems become more general (meaning they are now computing problems instead of business problems), more products will emerge.

Right now, the application development space has been more or less standardized, and vendor consolidation has already taken place: application development today centers on J2EE and Microsoft .NET. A few years ago, there were far too many competing application-building methodologies.

However, the management frameworks are still evolving, and there are still too many players offering too many solutions. This consolidation exercise is currently underway, and in the next couple of years the management vendor space will consolidate much as the application development environment did.

Service-Oriented Computing and the Service-Driven Economy

In the last forty years or so, the computing industry has gone through a very rapid evolution; it has finally matured to the extent that it can be paraphrased in the words of Niklaus Wirth: “After working for the machine for so long, finally it started to work for us.”

Chronologically, the evolution could be explained in four time slices: the first twenty-year period between the 1960s and 1980s, the next decade from the early 1980s until the early 1990s, the mid 1990s until 2003, and the present period.

The major evolution is from back-room data processing applications to today’s service delivery. Service-oriented computing and the service-driven economy did not happen overnight. There have been fundamental paradigm shifts in all areas of technology—in the hardware, the applications, the methodologies, the algorithms and techniques, and the nature of the applications themselves. Figure 1 brings together the evolution of the last 30 years into a single, unified perspective.

Figure 1. Evolution of Computing

The computing revolution started with big mainframe computing, led by IBM. Each application was unique, custom developed, and maintained by programmers. Most often, the applications were developed, maintained, and used by the same set of people. The applications were mostly back-room data processing applications, such as payroll processing, accounting packages, scientific computing, and so on. Programming was very ad hoc and very procedure-oriented, and application development had to start from scratch.

The major “computational problems” of that time concerned figuring out how to understand the “domain” and transform it into a set of computing problems. Programmers needed to interact with “domain experts,” who closely studied the domain and came up with a set of programming constructs. For example, a domain expert would explain accounting, and the programmer had to work out how to store, retrieve, and manipulate “accounts.” But to create an “account” as a computing construct, the programmer first had to solve many lower-level problems, such as creating an access path to the disk, ensuring data consistency, and guaranteeing that the application could not corrupt the data.

The first major evolution of this period was the IT industry’s understanding of data processing and data management. From the many turnkey projects that were completed, the industry isolated the common computing problems that had to be solved across all applications; this led to the invention of database management systems. Database systems solved the problem of how to model data, and they provided a set of common services required by all data processing applications, such as transaction control, a data creation and manipulation language, user management, and data consistency. With the evolution of database systems, the computing industry transformed itself from data processing to information management.

These changes also meant that many of the earlier domain problems were now routinely solved as data modeling problems. A business problem had become an engineering problem.

The standardization of data modeling into the relational data model and the associated interfaces led to the creation of product companies such as Oracle, Sybase, and Informix. These were the first big products of the computing community.

Standardization also enabled non-technical people to operate and use the applications, relieving the programming community of maintenance and operation work. This is when many software engineering methodologies started to emerge, primarily because many businesses had started to use computing resources and many people were involved in making the computing systems work.

The innovations of one era lead to the solutions of the next. The structured programming concepts led to the creation of database management systems, research into algorithms and computing, the identification of computing problems, and so on.

The major innovation of the database era was the invention of open systems. Because database systems rested on sound theoretical models, they could be ported to different kinds of environments and hardware. And because database systems were widely available, even medium-sized enterprises started using computing systems. This meant that a new kind of hardware was needed—not expensive mainframes, but much lower-cost computers and environments. These two developments led to many innovations of the time, including UNIX, networking, workstations, and graphical user interfaces. This heralded the era of desktop computing.

At the same time, the approach to application development became more “problem-oriented” than “procedure-oriented.” An example of this difference is that programmers were asked to build reporting systems instead of creating the steps required to print a report.

At that time, workflow systems and office automation were the major new additions to the computing systems. With workflow systems, processes and information were finally integrated.

The next step was natural and logical: bringing together all the innovations—desktop computing, graphical user interfaces, networking, and end-user applications—until, finally, the Internet was born. Information systems were transformed into content delivery systems. Companies like Yahoo and Amazon built massive distributed systems and pioneered architectural techniques for very large-scale systems that are very diverse in their functionality and services.

After large companies started adopting the Internet paradigm, they saw an enormous opportunity to cut costs, scale exponentially, and finally, to allow their customers to make decisions. The decision-making process was pushed as close to the end customer as possible. The programmers stopped making all decisions and building hard-wired processes and applications. Applications are now composed and configured by users other than programmers. This brought a massive design methodology change in the way applications are built and deployed.

Finally, all the investments in application development frameworks, standards, methodologies, large-scale system building, and data center–type deployment made it possible to build systems as a set of services. The current paradigm is no longer data processing, or even content delivery; it is service delivery.

Service delivery assumes that all the resources to make the service—such as hardware, computing resources, processes, workflows, and even applications—already exist. A service is nothing but how all these underlying “objects” are configured and put together to make the overall service. This is really the context of the SOA.

Service-Oriented Architectures

Service-oriented computing assumes that everything is standardized. The best analogy of service delivery is the way electricity is delivered to our homes:

  • The electricity generation process does not produce electricity for the ultimate end user/customer. It produces electricity in a manner that is meant for the distribution process. The distribution process distributes the electricity without any regard to the way it is generated. It does not make any distinction between whether it is generated using a nuclear reactor, hydro-electric turbines, or coal/gas turbines. Very few people care about the source, process, and technology of the generation.
  • Similarly, the distribution process uses its own complicated technology, process, and tools to efficiently distribute the electricity.
  • The manufacturers of electrical appliances do not care about the generation or distribution processes. They focus only on the efficient manufacture of the appliances and their application. In fact, their only concern—as far as electricity goes—is whether the appliance has an efficient power usage model.
  • The interface manufacturing process, such as wiring, switches, and plugs, does not concern itself with generation or distribution or the appliances. It focuses only on the efficiency of its own process.
  • The ultimate customer does not concern himself or herself about any of the preceding. In fact, customers do not care at all about electricity—they care only about its effects.

The entire process assumes several things:

  • Each process in the chain delivers a service packaged as a product to its nearest neighbor.
  • Each process in the chain works according to a standard interface, without worrying about the inner implementation details.
  • Each part of the process has its own complexities and super specialties that are very distinct from those of the other processes.
  • Each part of the process provides certain guarantees to the other parts—the guarantees are in terms of reliability, conformance, availability, capacity, correctness, and so on.

SOA and service-oriented computing work on a very similar basis.

SOA assumes that all objects that are required already exist. Its focus is on connecting and configuring the dots, instead of creating new objects. Today, most enterprise applications and systems already exist—the investments are in how to derive the maximum value from them.

It assumes that standard interfaces, such as XML, drive the way different applications are connected to each other and work with each other. XML is to applications what TCP/IP was to infrastructure: XML connects applications and services together in the same way that TCP/IP enabled infrastructure connectivity. Before long, we will see XML devices for applications management in the same way that there are SNMP devices for infrastructure management.

SOA requires that all the underlying “objects” conform to certain QoS parameters. This is where the real key to management lies. Everything—from business processes and IT systems to software systems and even physical infrastructure—is considered a set of resources that come with a certain guaranteed quality of service.
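As a rough illustration of this idea—the class and the numbers are ours, not any standard’s—a managed resource’s contract can carry its QoS guarantees alongside its functional interface, so that a management layer can check delivered quality against the promise:

public class QosSketch {
    // A hypothetical QoS contract attached to a managed resource.
    static final class QosContract {
        final double minAvailability;  // e.g., 0.999 ("three nines")
        final long maxLatencyMillis;   // e.g., 200 ms per request
        QosContract(double minAvailability, long maxLatencyMillis) {
            this.minAvailability = minAvailability;
            this.maxLatencyMillis = maxLatencyMillis;
        }
        // The management layer compares observed quality against the guarantee.
        boolean isMetBy(double observedAvailability, long observedLatencyMillis) {
            return observedAvailability >= minAvailability
                && observedLatencyMillis <= maxLatencyMillis;
        }
    }

    public static void main(String[] args) {
        QosContract contract = new QosContract(0.999, 200);
        // The observed figures would come from measurements, not constants.
        System.out.println("Within contract: " + contract.isMetBy(0.9995, 150));
    }
}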

In summary, SOA means the following:

  • The elements of SOA
    • Connecting the dots
    • Assumes that all objects already exist
    • Standards and frameworks
    • Integration, interfaces, and protocols
  • The fundamental shift in the paradigm is because of the following more recent developments:
    • Programming is a configuration of the underlying environment.
    • Decision making is pushed as close as possible to the customer.
    • Complexity is pushed to management from development.
    • SOA is nothing but integrating the available infrastructure, whether it is physical or logical.

SOA in the Context of Management

The entire enterprise IT ecosystem can now be treated as an “infrastructure”—the same methodologies and techniques that apply in infrastructure management also apply equally well in service management.

SOA answers the following key questions from a management perspective:

  • How is programming done today?
    • Most objects that are required are available as service containers with well-defined interfaces.
    • Programming consists mostly of configuring the underlying environment and making connections between the existing service objects.
    • A service container provides all aspects of the underlying infrastructure, including the management aspects.
  • Are applications really unique and different from each other?
    • Applications are unique only in the way they use the underlying infrastructure and how they connect different objects together.
    • Most of the monitoring data and several metrics are no longer defined by the application programmer; they are provided automatically by the underlying environment (see the sketch after this list).
    • Different applications are like different flavors of UNIX.
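In the J2EE world, JMX illustrates how a container can expose management data without the programmer defining a monitoring scheme: any object registered under the standard MBean naming convention becomes visible to any standards-based console. The MBean below is invented for illustration; only the JMX API itself is standard:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxSketch {
    // JMX convention: the management interface name ends in "MBean".
    public interface OrderServiceMBean {
        long getRequestCount();
        long getAverageLatencyMillis();
    }

    public static class OrderService implements OrderServiceMBean {
        // In a real service, these would be updated by live traffic.
        private volatile long requestCount = 42;
        private volatile long averageLatencyMillis = 120;
        public long getRequestCount() { return requestCount; }
        public long getAverageLatencyMillis() { return averageLatencyMillis; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new OrderService(),
                new ObjectName("example:type=OrderService"));
        // In a long-running server, any JMX console (jconsole, for example)
        // could now read these attributes; the application itself defines
        // no monitoring protocol of its own.
        System.out.println("OrderService registered with the platform MBean server.");
    }
}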

What Is Expected From an Infrastructure Management Provider?

The transition from pure infrastructure management to service management requires a fundamental transformation in the thinking of an organization. All infrastructure management organizations need to change their understanding of what constitutes infrastructure today. A new definition of infrastructure is required—one that covers the entire enterprise IT ecosystem, from business processes down to physical infrastructure, as a set of resources with well-defined management characteristics.

It is no longer feasible to view infrastructure, applications, major IT systems, and business functions separately. The “silo” approach leads to too many different kinds of management practices and too much complexity.

It is also very important to distinguish among management, measurement, and monitoring. Monitoring is now so largely automated that it is no longer a “management activity.” However, providing meaningful measurements, diagnostics, and management information is a very important management activity.

The methodology has to transform from “I received a query, and I produced data” to “I have a question, and I produce an answer.”

It is also very important to understand that a management vendor has to serve the needs of different kinds of users. Because the entire system is in the hands of the managed services vendor, various types of reports have to be produced for the various types of users. One report has to answer questions from end users, whereas another has to inform the senior management of the company. This requires investment in very detailed measurements, each tailored to a specific user community.

The Challenge of Enterprise Service Management

The challenge of any enterprise management platform is to provide up-to-the-second information about the events that occur within the enterprise, so that the enterprise can respond to them quickly and effectively. The bottom line is to enable the enterprise to become more responsive and competitive. However, an end-to-end enterprise management solution cannot be built by putting together various tools that monitor the IT infrastructure and applications. We need a holistic management platform that addresses all the operations, functions, and business systems that span the entire enterprise.

The various infrastructure and application management tools provide monitoring and management information that is relevant from the view of a particular application, but correlating this information to the corresponding business impact is a complex problem; any attempt to assemble such data and events bottom-up into a picture of business impact is bound to be futile. The CEO of a company is more interested in knowing whether the order processing and inventory management units are working together—a very different question from whether the database throughput is 50 transactions per second or 25. Surely the database throughput affects the performance of the enterprise at some level, but measuring the performance of the business process as a whole is significantly more valuable than providing measurements and metrics from the infrastructure and applications. The various stakeholders in the enterprise have different questions and need different kinds of measurements and metrics. Figure 2, in the next section, explains this in detail.

(Applications + Infrastructure) Is Not Equal to Enterprise Systems

“Infrastructure monitoring and responding to threshold conditions does not amount to Enterprise Applications Management”

If the enterprise IT system is equal to the applications and the infrastructure that runs the applications, we can provide a management platform by monitoring and measuring the performance, reliability, and availability of the applications and infrastructure. However, we must first examine whether this assumption is true.

Let’s consider a simple case of an enterprise system that consists of an order processing system and an inventory management system. The business is to receive orders over the Internet and fulfill these orders, depending on the inventory levels. The application infrastructure comprises an application server, a database, and a few Web servers.

Let’s look at the concerns of various stakeholders of the system:

Figure 2. Different Stakeholders – Different Concerns

The CEO of an organization would have little interest in the network bandwidth or the database throughput. An inventory manager would be more interested in which items are fast moving and which items need to be restocked than in exceptions and error logs.

It is obvious from Figure 2 that by monitoring the applications and infrastructure alone, we cannot provide the management information related to the SLAs and KPIs of all stakeholders. The most we can provide is availability information for the infrastructure and applications; we cannot provide real business-level performance and bottleneck information. The picture gets even more complicated in the case of global enterprises.

To give the different stakeholders relevant management information, we must provide:

  • A consistent, holistic view of the enterprise from multiple dimensions.
  • A consistent management methodology for the enterprise in terms of reliability, availability, performance, and supportability (RASP) that is meaningful at the various dimensions of the enterprise (see the sketch after this list).
  • A consistent way of defining the overall quality of service measurements, using models that are meaningful to the respective stakeholders.
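A minimal sketch of what such a consistent model could look like—the names and the roll-up rule are ours, purely for illustration: every managed resource, whether a Web server, a database, or a business process, reports the same RASP dimensions, so measurements can be rolled up across the layers of the enterprise:

import java.util.List;

public class RaspSketch {
    // One uniform measurement record for any resource, at any layer.
    static final class RaspMeasurement {
        final String resource;        // "web-server", "orders-db", "order-process"
        final double reliability;     // normalized 0..1 against the resource's benchmark
        final double availability;    // fraction of time the resource met its contract
        final double performance;     // normalized 0..1 against the resource's benchmark
        final double supportability;  // normalized 0..1 (e.g., based on mean time to repair)
        RaspMeasurement(String resource, double reliability, double availability,
                        double performance, double supportability) {
            this.resource = resource;
            this.reliability = reliability;
            this.availability = availability;
            this.performance = performance;
            this.supportability = supportability;
        }
    }

    // A crude roll-up rule, for illustration only: a business service is
    // no more available than its weakest dependency.
    static double serviceAvailability(List<RaspMeasurement> dependencies) {
        return dependencies.stream()
                .mapToDouble(m -> m.availability)
                .min().orElse(0.0);
    }

    public static void main(String[] args) {
        List<RaspMeasurement> deps = List.of(
                new RaspMeasurement("web-server", 0.99, 0.999, 0.9, 0.8),
                new RaspMeasurement("orders-db", 0.98, 0.995, 0.8, 0.7),
                new RaspMeasurement("order-process", 0.97, 0.990, 0.7, 0.9));
        System.out.println("Order service availability: " + serviceAvailability(deps));
    }
}

The value is not in the arithmetic but in the uniformity: because every layer reports the same dimensions, the same roll-up machinery can serve the inventory manager and the CEO alike.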

The Role of Standards

The key to a successful management solution offering lies in building the service offerings around certain standards. There are too many systems, applications, and infrastructures today. If we adopt tailor-made solutions for each application or system, the problem quickly becomes so complex that it gets out of hand.

“A standard pushes a higher dimensional problem into a lower dimensional problem”

The industry has made a significant investment in evolving standards, and today there are standards cutting across different business dimensions. The emergence of a standard pushes a higher dimensional problem into a lower dimension. For example, the advent of J2EE and .NET simplified the development of complex distributed applications by providing a common environment for development and deployment; this let application developers concentrate on the “business logic” instead of on managing the dynamics of the underlying environment. Relational databases transformed domain logic into data engineering problems. A standard like Basel II transforms banking processes into IT problems.

However, adopting a standard increases the overall cost of the solution, primarily because adopting and integrating a standard is not an implementation or technology problem alone; it demands organizational commitment at various levels. At the same time, adopting a standard allows an organization to make use of external resources, leading to significant performance gains and cost savings in the long run.

Standards make monitoring an essentially automated activity. Because most systems conform to a certain standard, the standard itself specifies how to monitor the system and what type of monitoring information to expect.

“We monitor infrastructure

We measure applications

We manage reliability, availability and performance of business services”

It is also possible to define measurements that conform to a standard. Because the standards are in the public domain and are widely used, industry-wide benchmarks are available. Infrastructure management today is possible largely because of the well-established notions of reliability, availability, and performance, and the variety of benchmarks related to them. Similarly, with the emergence of standards like J2EE and .NET, comparable benchmarks and consistent definitions of availability and performance for application environments are available today.

This dramatically changes what is managed, what is measured, and what is monitored. We don’t manage infrastructure—we monitor infrastructure. We don’t manage applications—we measure applications. What we manage is the overall reliability, availability, and performance of the entire IT system.
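The distinction can be made concrete with a small invented pipeline—the numbers and threshold are illustrative only: raw monitored samples from the infrastructure are aggregated into an application-level measurement, and only the final comparison against a business objective is a management decision:

import java.util.Arrays;

public class MonitorMeasureManage {
    public static void main(String[] args) {
        // Monitor: raw infrastructure samples (response times in ms),
        // typically produced automatically by standards-based agents.
        long[] samplesMillis = {120, 95, 240, 180, 110};

        // Measure: derive an application-level metric from the raw samples.
        double averageMillis = Arrays.stream(samplesMillis).average().orElse(0);

        // Manage: compare the measurement against a business-level objective
        // and decide whether to act.
        final double objectiveMillis = 200;
        if (averageMillis > objectiveMillis) {
            System.out.println("Order-entry service is breaching its objective; escalate.");
        } else {
            System.out.println("Order-entry service within objective ("
                    + averageMillis + " ms average).");
        }
    }
}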

The Important Problem Areas

This section describes some of the important problem areas that have to be addressed as part of the overall service management practice. Application management and infrastructure management are just part of overall service management.

The problem areas are divided into three major categories:

  • Engineering challenges
  • Operational challenges
  • Technical challenges

Engineering Challenges

Any comprehensive enterprise infrastructure management framework must address the following issues:

  • ITIL has many limitations when it is applied to distributed systems and business process management. ITIL takes a very vertical approach to systems—also known as a silo approach—which does not work well, because enterprise infrastructure management requires a holistic approach.
  • Service level management requires a top-down view into the infrastructure, but monitoring is essentially a bottom-up game. It is impossible to construct a useful management report from monitoring information alone. We need to create a top-down model of management—first define what is to be managed, then define the required measurements, and then integrate with the necessary monitoring products to obtain those measurements (a sketch of this top-down approach follows this list).
  • Enterprise infrastructure is a highly connected infrastructure; therefore, we need a way to manage the complex dependencies that exist between the physical infrastructure, applications, and business processes.
  • Therefore, any complex enterprise IT ecosystem requires a thorough Enterprise Systems Management Architecture. The architecture must address the following issues:
    • How do we integrate different tools and products?
    • How do we define a predefined set of objects and associated measurements that can be reused extensively across applications?
    • How do we address the limitations of ITIL in the context of end-to-end business service management?
    • How do we define a diagnostic model?
    • How do we integrate different kinds of reporting services and metrics?
    • How do we create a model that integrates management SLAs with the measurements and monitoring information?
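A minimal sketch of the top-down model mentioned above—every name here is invented, and the measurement sources are stubs standing in for real monitoring-tool adapters: the managed service and its required measurements are defined first, and the monitoring products are integrated last, as interchangeable suppliers of those measurements:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class TopDownModelSketch {
    // Step 1: define what is managed and which measurements it requires.
    static final class ManagedService {
        final String name;
        private final Map<String, Supplier<Double>> measurements = new HashMap<>();
        ManagedService(String name) { this.name = name; }
        void requireMeasurement(String metric, Supplier<Double> source) {
            measurements.put(metric, source);
        }
        void report() {
            measurements.forEach((metric, source) ->
                    System.out.println(name + "." + metric + " = " + source.get()));
        }
    }

    public static void main(String[] args) {
        // Step 2: the service is modeled top-down, before any tool is chosen.
        ManagedService orderProcessing = new ManagedService("order-processing");

        // Step 3: monitoring products are plugged in last, purely as
        // measurement sources (stubbed here with constants).
        orderProcessing.requireMeasurement("availability", () -> 0.999);
        orderProcessing.requireMeasurement("ordersPerMinute", () -> 42.0);
        orderProcessing.report();
    }
}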

Operational Challenges

The biggest operational challenge is knowledge management and the ability to deal with the rapid change of technology and applications. The thorniest operational problem is learning to deal with change management, release management, and the constant changes and upgrades to the environment. Managing system downtime and creating effective process guidelines for the different areas of applications management are central to operational success.

The tools and technology available today for service management are still evolving; many new products, offerings, and tools will emerge in the next couple of years. Once aspect-oriented programming becomes an industry standard, we can expect many tools that automatically drill down into the entire infrastructure and provide very useful “transactional”-level metrics and monitoring. Until these problems are solved at a technology level, they remain operational problems.

Technical Challenges

The major technical challenge is to integrate the various processes, measurements, tools, diagnostic models, and reporting systems into a unified architecture and create a service management platform. Such a platform cannot be assembled simply by buying some products and tools or by stringing together some ITIL processes.

Another technical challenge is to understand the various technologies and application standards—their complexities, their relationships, and how they affect the overall application—and to create a long-term knowledge management framework.

Summary and Conclusions

Applications management must be understood in the larger context of overall business services management. In this document, we presented an industry and architectural picture of how service-oriented computing is evolving and its implications for enterprise applications management. We presented an overall framework of what is expected from a managed services vendor; the kinds of tools, processes, and standards that are available; and how applications are constructed today, along with the implications for management.

We also provided a detailed description of the various challenges and issues that have to be addressed to create a long-term execution roadmap. These issues were categorized into engineering, operational, and technical issues.

In the next set of documents, we examine enterprise applications management across ten different knowledge areas, present a consistent methodology that applies the measurements used for physical infrastructure to all enterprise resources, and apply this framework to managing .NET-based application environments.

About the Authors

Nagaraju Pappu has more than 15 years of experience in building large-scale software systems. He has worked for Oracle, Fujitsu, and several technology startups. He holds several patents in real-time computing, enterprise performance management, and natural language processing. He is a visiting professor at the Indian Institute of Technology, Kanpur, and the International Institute of Information Technology, Hyderabad, where he teaches courses on software engineering and software architecture.

His areas of interest are enterprise systems architecture, enterprise applications management, and knowledge engineering.

Satish Sukumar has more than 13 years of experience in software architecture, design, development, implementation, infrastructure management, and customer support. He held various positions at Microland and Planetasia over a ten-year span, and spent three years as Vice President of Engineering at Veloz Global Solutions’ R&D center in Bangalore.

Satish specializes in enterprise software architecture. His research interests include knowledge representation, performance measurements and management, real-time data analytics, and decision support and workflow/agent based distributed computing.

Nagaraju and Satish are currently independent technology consultants, working out of Bangalore, India. They can be contacted via their company Web site: www.canopusconsulting.com. They also maintain a blog—www.canopusconsulting.com/canopusarchives—where they regularly write about topics related to design theory, software architecture, and technology.

 
