
Architecture Evaluation and Review Practices


Denise Cook

June 2007

Updated December 2007

Summary: Consider an evaluation of your work as a way to produce improved specifications by tapping into the experiences of veteran architects. See each evaluation as a valuable learning opportunity. Your projects will benefit, your organization will benefit, and so, too, will your career.

Contents

Introduction
Risk Mitigation
Five Ws and an H: An Evaluation Toolset
Why?
What?
Who?
When?
How?
Where?
Critical-Thinking Questions
Further Study

Marry your architecture in haste, and you can repent in leisure.

–Barry Boehm

Introduction

Like me, as a software architect, you might tend to be confident in your abilities and, at times, even a little arrogant. You likely have made it to where you are by paying your dues: starting out writing maintenance code, moving on to some green-field development, eventually leading successful development projects—and, usually, a few not so successful ones—before becoming an architect. After several years, you might think that you have seen it all—"Bring on new technologies, stormy political waters, and ugly legacy systems to contend with! If you can dream it up, I can architect it."

Confidence is an important trait for a technical leader, but it should be accompanied by an ongoing willingness to evolve your skills and evaluate your work. Experienced architects know that they are going to miss things. And the earlier you can detect a problem with your architecture, the better off the project will be, because the longer a fault goes undetected, the costlier it is to correct. If you have indeed "seen it all," you know that architecture evaluations are your best friends.

Risk Mitigation

In recent years, many organizations have introduced architecture evaluation as a critical component of the software-development life cycle. The objective is to identify potential issues with a proposed architecture, prior to the construction phase, to determine its architectural feasibility and to evaluate its ability to meet its quality requirements. So, before you throw up your hands thinking that this is yet another layer of process with the potential to slow you down, take time to understand the reasoning behind it. This is all about risk mitigation. It's good for your organization, and it's good for you as an architect.

Imagine that you are an investigative journalist. For each assignment, your job is to research a topic deeply to uncover the hidden facts, and to report the story in such a way that it provides context to your readers. How does what you uncovered have the potential to affect their daily lives? Unlike traditional analytical journalism, which simply reports a story from the data that is available, investigative journalism attempts to determine if what has been presented is, in fact, reality. Architecture evaluation shares that objective. The purpose of the evaluation is not simply to review and communicate the candidate-architecture specification to the stakeholders. The objective is to review and evaluate the architecture, assess its ability to meet quality requirements, detect design errors early in the software-development life cycle (SDLC), and identify potential risks to the project. In other words, the objective is to determine if the reality of the specification measures up to its claims.

Like investigative journalism, architecture evaluation is based on the old Journalism 101 fundamentals: who, what, when, where, why, and how. Whether you are preparing to have one of your candidate architectures reviewed or you are conducting an evaluation yourself, these questions address the major components of the process. Throughout your career, you will be exposed to specific methods of architecture evaluation that have emerged in this important domain. While each method has its own flavor, they all share key concepts that are relevant in any context.

Five Ws and an H: An Evaluation Toolset

The following sections examine an approach to software evaluation and review, organized by each of the fundamental journalism questions.

Why?

Why should an organization review and evaluate software architecture? The bottom line is that architecture review produces better architectures—resulting in the delivery of better systems. Too often, systems are released with performance issues, security risks, and availability problems as a result of inappropriate architectures. The architectures were defined early in the project life cycle, but the resulting flaws were discovered much later. They were exposed when the project was affected most negatively by change, when downstream artifacts were too costly to overhaul.

The most significant benefit of evaluation is to reassure stakeholders that the candidate architecture is capable of supporting the current and future business objectives; specifically, it can meet its functional and nonfunctional requirements. The quality attributes of a system—such as performance, availability, extensibility, and security—are a direct result of its architecture; therefore, quality cannot be introduced easily to your system late in the game. An evaluation of the architecture while it is still a candidate specification can reduce project risk greatly.

There are also some positive side effects of evaluation. First, the process necessitates the unambiguous articulation of the system's quality requirements. If the requirements are too vague to evaluate an architecture against, they must be elaborated upon. Poorly specified requirements result in hit-or-miss architectures. Evaluation also forces you to document the architecture clearly, so that it can be reviewed. Furthermore, as you participate in regular evaluations of your work, you learn to anticipate the questions that will be asked and the typical criteria against which your work will be measured. Over time, this process promotes stronger architectural skills.

Going further, an investigative journalist would ask why an organization wouldn't conduct software evaluations and reviews. A common response would be concern over the cost of the effort. It should be noted that, as with any process, evaluations should be right-sized for the target effort. Other reasons for not conducting architecture evaluations that you might have to overcome include a fear of exposing limitations in skill or experience, or reluctance to provide a client with visibility into the work.

What?

What is a software architecture evaluation and review? Basically, it is a process by which conclusions can be drawn about the suitability of an architecture. Architectural decisions are evaluated to determine how they enable or restrict the ability of a system to meet its architecturally significant requirements.

The objectives for a review are based upon stakeholder concerns and focus on specific aspects of the architecture. Objectives will vary from project to project, according to each system's specific requirements, but there are a few general categories under which most tend to fall. Typically, stakeholders want to ensure the quality and suitability of the architecture, identify areas in which improvement is required, open a dialogue between decision makers to address areas of risk, and negotiate any necessary trade-offs.

What are the outputs of an architectural evaluation and review? The primary output is a comprehensive report that describes the evaluation-and-review findings. This document need only be as formal as required by the project, but it should serve as a concise summary of the assessment that can be communicated to the project team, as well as the stakeholders. The report should include the scope of the review, evaluation-and-review objectives, architecturally significant requirements list, findings and recommendations, and an action plan.
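As a rough illustration, the report's required contents map naturally onto a simple data structure. The following Python sketch is purely illustrative; the type and field names are assumptions, not part of any formal review method.

from dataclasses import dataclass
from typing import List

@dataclass
class Finding:
    requirement: str      # the architecturally significant requirement examined
    observation: str      # what the evaluation revealed about it
    risk: str             # severity, such as "high", "medium", or "low"
    recommendation: str   # an actionable improvement, not a generic one

@dataclass
class EvaluationReport:
    scope: str                           # boundaries of this specific review
    objectives: List[str]                # evaluation-and-review objectives
    significant_requirements: List[str]  # criteria the architecture is measured against
    findings: List[Finding]              # findings and recommendations
    action_plan: List[str]               # agreed next steps for the project team

Whether the real report is a formal document or a wiki page, keeping these five elements explicit makes the assessment easy to communicate to both the project team and the stakeholders.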

What is the scope of an architecture evaluation and review? The scope describes the boundaries of a specific instance of a review. For example, the architecture of the entire system can be evaluated, or only part of the system. A review can evaluate the architecture against all of the system's quality requirements, or only the most critical ones. Discover the appropriate scope by prioritizing the goals of the evaluation, based on its defined objectives.

What exactly should be reviewed? Based on the defined objectives and scope, create a list of the specific criteria against which the architecture will be measured. The list might include system-wide properties, significant functional requirements to deliver, and general attributes of quality architectures. The goal is to review and assess how each item on the list is affected by the architectural decisions that are made.

For example, performance is a quality objective that ends up on most evaluation criteria lists. Working from a typical business requirement, the architecture could be expected to execute predictably within its required performance profile. To actually evaluate the architecture, however, the performance criteria must be stated explicitly. An example could be the architecture's ability to deliver 3,000 lookup requests and 4,000 transactions within a four-hour period, with a peak load of 15 percent of the transactions taking place in a 45-minute window.
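To see what such a criterion implies, the figures can be reduced to concrete request rates that a reviewer can compare against load-test results or capacity estimates. Here is a minimal Python sketch, using only the numbers from the example above:

# Figures from the example performance criterion above.
lookups = 3000             # lookup requests in the measurement period
transactions = 4000        # transactions in the measurement period
period_seconds = 4 * 3600  # four-hour measurement window
peak_fraction = 0.15       # share of transactions arriving in the peak
peak_seconds = 45 * 60     # 45-minute peak window

# Derived, testable rates.
average_rate = (lookups + transactions) / period_seconds
peak_txn_rate = (transactions * peak_fraction) / peak_seconds

print(f"Average load: {average_rate:.2f} requests/second")       # about 0.49
print(f"Peak window:  {peak_txn_rate:.2f} transactions/second")  # about 0.22

Stated this way, the criterion is no longer a matter of opinion: either the architecture can be shown to sustain these rates, or it cannot.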

Reliability, security, availability, extensibility, manageability, and portability are all quality attributes that can be considered in an architecture evaluation and review. Keep in mind the scope and objectives of the evaluation, to keep the list manageable and useful to your project.

A true investigative approach, however, takes time to ask, "What criteria have been excluded, and why?" Are there political agendas at stake that selectively ignore aspects of the architecture? Have software and other technologies been mandated that constrain the architecture's ability to meet its objectives? While some of these scenarios cannot be avoided in the real business world, it is always appropriate for the architect and the reviewers to acknowledge any limitations of the architecture, even if they cannot be removed.

Who?

Who participates in a software architecture evaluation and review? The objective of the selection process is to ensure that people with the right skills and relevance to the project are assigned to support the effort effectively, without creating a crowd that is too large to be efficient. Ideally, there should be active representation from three constituencies: an evaluation team, project stakeholders, and project practitioners.

The evaluation team conducts the actual evaluation and documents all findings. In large organizations, an evaluation team often comprises practitioners who rotate through the team in between other projects. Staffing the evaluation team with practitioners from the target project should be avoided, if possible, to maintain the highest degree of objectivity. For very small projects, however, self-assessments and peer reviews are completely acceptable. It is critical that members of the evaluation team have respect and credibility as architects, so that their conclusions will carry weight with the project representatives and stakeholders.

Stakeholders are the people who have specific architectural concerns and a vested interest in the resulting system. Most of the architectural requirements were specified by these stakeholders, so their participation in the evaluation is critical.

System architects and component designers are the key project representatives and are responsible for communicating the architecture and presenting their motivations for design decisions. Other project representatives to include are project and program managers, developers, system administrators, and component vendors.

The follow-up step for an investigative approach is to ask, "Who is missing from the participant list?" Which stakeholders or project representatives were intentionally excluded? Occasionally, practitioners and stakeholders are excluded because of past experiences. Perhaps they were not supportive of a previous evaluation effort—not dedicating enough time, not taking the evaluation as seriously as they should have, or exhibiting defensive or contentious behavior. Part of the evaluation process is coaching the participants. If someone is important to an evaluation for the knowledge that they have or the requirements that they represent, it is worth the effort to try to influence their behavior, so that they can contribute to the process.

When?

When should an architecture evaluation and review take place? If only one evaluation can be performed, it should take place as early in the life cycle as is reasonable. Generally, you want to conduct the evaluation when the architecture is specified, but before anything has been implemented. The goal is to identify any areas of concern as early as possible, while they are still relatively easy and cheap to correct.

That being said, an evaluation and review can be conducted at any stage in the life cycle. For projects using an iterative development approach, evaluation can take place within each iteration—whenever architectural decisions have been made. Evaluations also can be conducted on legacy systems, to assess their ability to support future business objectives.

Your investigative instincts should be getting sharper by now. How can we take the "when" question a step further? Beware of stakeholders or project representatives balking at the timing of an evaluation. The reasons could be completely valid; maybe they are unavailable, or they truly feel that the timing is inappropriate. Digging a little deeper might reveal project issues. The architecture team might be struggling. They might not see the evaluation as their chance to get valuable input and advice. Stakeholders might not be ready and willing to negotiate any conflicting requirements. Take the time to uncover the true reasons behind any postponement attempts. You might find a critical risk hidden behind that reluctance.

How?

How is an architecture evaluation and review performed? Prior to the review, you should gather inputs that describe the architecture and explain the rationale behind the architectural decisions that were made. Typical inputs include the architecturally significant requirements, an architectural description or software architecture document, an architectural-decisions document, and an architectural proof of concept.

The primary activity of the evaluation-and-review process is the assessment of the architecture. A proven technique involves the use of scenarios, which allow the quality attributes of the architecture to be evaluated in specific contexts. Walking through the steps of a scenario provides you with the opportunity to describe how an architecture will respond to specific demands that are placed upon it. If you want to assess how easily a system that is built upon the candidate architecture could be modified, you could create a scenario that describes a set of specific changes to implement in the system. You then could analyze the architecture, looking for modifiability tactics such as semantic coherence and generalized modules. For small-scale evaluations not requiring such a detailed technique, a simple questionnaire or checklist could suffice.
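As a sketch of what a scenario-driven walkthrough might look like in practice, consider the following Python outline. The structure and the crude scoring heuristic are illustrative assumptions only; they are not taken from ATAM, SAAM, or any other published method.

from dataclasses import dataclass
from typing import List

@dataclass
class Scenario:
    quality_attribute: str        # e.g., "modifiability"
    stimulus: str                 # the specific demand placed on the system
    steps: List[str]              # the walkthrough: what changes, and where
    affected_modules: List[str]   # parts of the architecture the change touches

def assess(scenario: Scenario) -> str:
    # Crude heuristic: the fewer modules a change scenario touches,
    # the better the architecture localizes that kind of change.
    return "low risk" if len(scenario.affected_modules) <= 2 else "needs review"

change = Scenario(
    quality_attribute="modifiability",
    stimulus="Replace the relational store with a document database",
    steps=["Identify the persistence interfaces",
           "Swap the repository implementations"],
    affected_modules=["persistence"],
)
print(assess(change))  # "low risk": the change is confined to one module

A real walkthrough replaces the heuristic with the judgment of the evaluation team, but even this skeleton shows why scenarios work: they force the discussion down from quality attributes in the abstract to specific, traceable demands on the architecture.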

The final step of the evaluation-and-review process is to document the findings, and communicate them to the project team and stakeholders. When architectural concerns or deficiencies are exposed, it is critical to provide recommendations for improvement that are actionable. The whole point of the investigative approach is to uncover issues that otherwise might have been overlooked. If recommendations are too generic to be implemented, the evaluation cannot contribute much to the success of the project.

Where?

After the review—where do you go from here? When the evaluation report is complete, you typically are given an opportunity to respond to the findings and recommendations. The report then is forwarded to the stakeholders for use in planning the next steps for the project. Sometimes, an evaluation will identify the need for trade-offs. For example, if the architecture cannot support a specific performance requirement, stakeholders must determine if the benefit of strengthening the architecture to achieve that requirement is worth the cost. Following an evaluation, the architectural decisions should be updated, requirements refined and prioritized, and the project adjusted as necessary.

While each evaluation produces different results, the goal is always the same: to produce a better architecture. For you, the architect: Consider an evaluation of your work as a way to produce improved specifications by tapping into the experiences of veteran architects. See each evaluation as a valuable learning opportunity. Your projects will benefit, your organization will benefit, and so, too, will your career.

Critical-Thinking Questions

· A validated architecture does not guarantee the quality of the resulting system. How can downstream design decisions undermine the architecture's ability to meet its quality objectives?

· How can the introduction of evaluations help your organization adopt a standard method of architectural description?

Further Study

· Clements, Paul, Rick Kazman, and Mark Klein. Evaluating Software Architectures: Methods and Case Studies. Boston, MA: Addison-Wesley, 2002.

· Kazman, Rick, Mark Klein, and Paul Clements. ATAM: Method for Architecture Evaluation. Pittsburgh, PA: Carnegie Mellon University, Software Engineering Institute (CMU/SEI), 2000.

· Kazman, Rick, Len Bass, Gregory Abowd, and Mike Webb. "SAAM: A Method for Analyzing the Properties of Software Architectures" (Proceedings, ICSE 16). Long Beach, CA: IEEE Computer Society, 1994.

About the author

Denise Cook is a method architect and content author with IBM Rational, contributing to the definition of IBM's software-development methods, including the Rational Unified Process (RUP). Before joining the Rational team, Denise worked as a lead consulting architect for IBM and Andersen Consulting. She has 17 years of experience in the field of software engineering.

 

This article was published in Skyscrapr, an online resource provided by Microsoft. To learn more about architecture and the architectural perspective, please visit skyscrapr.net.

 
