
The Customer: The Missing Link


Richard F. Weldon, Jr.

April 2007

Summary: Understanding business processes requires working with the customer to understand their business fundamentals. (6 printed pages)




Several years ago, I was assigned as an architect/developer to build a replacement for an existing tracking system that managed the configuration of technical manuals for the company's main product. When we started our analysis sessions, we found out that the main user of the system was out on medical leave and was not expected to return for about three weeks. The organization provided an individual to act as a subject-matter expert (SME) to help with the analysis sessions. Our SME was not an actual user of the system, but was one step removed. By that I mean that he provided information and markups to the main system user, who entered them into the system. He participated in validating the output of the system and approved the publishing list before the manual was published and sent to the customer.

Because this project had a short time frame, we were encouraged to forge ahead with the resident SME and check in with the system user when he returned. Therefore, we pressed ahead with the analysis and prototype.

We spent a couple of days trying to understand the system from a business perspective. We created a simple data and process model based on the "visible" parts of the system familiar to our SME. These aspects of the system seemed pretty straightforward—storage, retrieval, sorting, and printing—so, we moved on to the active prototyping stage. By the end of the third week, when the system user had returned to work, we had a working menu structure with the main screens and a working database. There was still a lot of work to do, but we felt ready to have a "show and tell," now that our system user was back.

We invited our SME, the system user, their manager, and the IT manager. The system user listened attentively while we presented the analysis information, and seemed intrigued while we walked through the menus and screens. Then, the system user asked a few short and direct questions. As we started asking for feedback, the system user looked us straight in the eye and said, "You've done a lot of work, and it looks great. But why did you do it like this? It's not how the current system works, and it won't support the process." Then, he went through a quick list of things that the prototype did not do and that the system had to do. It turned out that this system was like an iceberg: Only 10 percent was visible, the easy 10 percent. The other 90 percent consisted of complicated, twisted rules that governed the publishing of each page of a massive technical manual.

We left the meeting somewhat confused and frustrated, but with a new list of things that needed to be included in the final system. It was clear that much of our prototype would have to be scrapped and that we would be pretty much starting over—and not only with the construction aspect. Some very serious analysis would be required to ferret out the detailed business rules that made up the very core of the processing and provided the real value of the system.

What We Learned

So, what went wrong? As we looked back at the effort, it was clear that we had made four key mistakes.

Make Sure You Have Access to the Right People

There is no substitute for access to the actual users of the existing system. Individuals who use the existing system every day not only know the discrete steps they must perform to complete the business processes, but they also have expectations for what the new system must do. There are processes executed by the system user that a secondary user might never see or interact with—which, in this system, turned out to be the very core of the functionality. In this case, the tribal or hidden knowledge was so massive and complicated that it took months to understand and document it.

It is important to include secondary users, like the SME who helped us, because, even though they are one step removed from the actual user, they provide an important perspective that adds detail to the inputs and outputs of the process. They also become an important link in the validation of the process and the new system, because they interact with system users. But the fact is that, because secondary users have casual or "fill-in" interactions with the system, they do not have the complete picture and often lack knowledge of the hidden business rules or processes that allow the system to perform all of its primary tasks.

The final piece of the customer puzzle that was missing was the individual with the authority to make decisions and approve changes or alterations to existing business rules and processes. Often, the system user and SME are implementers of their specific tasks, without the authority to make improvements to the process. All too often, they will outline a business rule with the caveat "I don't know why, but that's the way we've always done it" or "Seems kind of the long way around the tree to me, but it's always been that way." While the decision maker might not be able to attend every analysis session, regularly scheduled review times to rule on process changes and approve progress are essential. Rework is on the horizon when the decision maker is kept completely out of the analysis loop until all the analysis is done.

When assembling your SMEs, make sure that the group includes one or more actual system users, one or more secondary users, and access to the person who can make decisions about the system.

Identify Solid, Measurable Objectives (SMART)

The only objective we received from the customer camp was to "replace the existing system." This objective was too ambiguous to measure and could only be validated by the system user. What should have happened was a meeting of the decision maker, system user(s), and secondary user(s) to develop the strategic direction of the effort by defining SMART objectives. SMART is an acronym for a strategic, measurable, agreed-upon, realistic, and testable objective.


Objectives must be at a high enough level to provide direction for the entire effort. They should be decomposable to tactical actions and link to existing business directions or define new ones. For example, "The system will eliminate manual updates to the publishing list" defines the strategic portion of an objective, but it is just the first step.


To show that the objective was met, there must be a real target defined as a measure of success. The target is usually expressed as a numeric percentage that the system will accomplish. For example, we would update the strategic portion of our objective as follows to add the measurable component: "The system will eliminate 90 percent of the manual updates to the publishing list." The conundrum with this kind of measure is that the system users must establish a baseline of the current percentage of manual updates being performed to know if the objective is being met. Often, they do not have a clear picture of the current performance, so a baseline must be established by measuring the current system.


The key stakeholders, decision makers, system users, and secondary users all must agree that these objectives articulate the correct strategic direction for the project and be willing to sign off on them as soon as they are documented. It does not mean that they cannot or will not change later in the project. What it does mean is that changes are under a change-management process that evaluates the impact of the change before it is approved. Changes to strategic objectives have a far greater impact on the project than changes to the tactical actions that come from the objectives, and they must be managed.


The objective must be realistic and obtainable. There are times when a strategic objective looks realistic when writing it; but further, deeper analysis of the processes affected or required to meet the objective reveals that it cannot be met within the current scope or budget, or cannot be met by system automation at all. In those cases, the decision-making team (key stakeholders, decision makers, system users, and secondary system users) must revisit the objective and, using the change-management process, alter the objective so that it is realistic and obtainable, or else delete the objective altogether.


The final piece of the objective is a strategy to validate that the objective was met. In our example of "The system will eliminate 90 percent of the manual updates to the publishing list," the strategy to validate this objective would be written as: "Currently, manual updates are required on 55 percent of our publishing lists. It is expected that 90 days after the new system is implemented, manual updates will be required on no more than 10 percent of the publishing lists. The new system will provide the system users a facility to log all manual updates and to which publishing list they apply. Monthly reports will be run, showing the percentage of publishing lists that required manual updates and the types of manual updates applied." This becomes part of the objective statement.

This objective tells us that we must understand what constitutes a publishing list and how one is constructed. We must provide a mechanism to allow the system user to log any manual updates and link those log entries to the publishing list. We must provide a report to be run on demand by the system user (currently, expected to be monthly) that shows a list of publishing lists, any manual updates required, and the percentage of publishing lists that required manual updates. The expectation is that, after 90 days, the user will be familiar with how the system works, and most of the user errors will be eliminated. Any manual updates that are still required will be areas that the system did not cover. In the plan for system evolution, the system owners might want to consider enhancements to the system that eliminate the most common manual updates shown on the monthly report.
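The logging-and-reporting mechanism just described can be sketched in a few lines of code. This is a minimal illustration only; the class and method names (`UpdateLog`, `monthly_report`) and the publishing-list identifiers are hypothetical, not taken from the actual system.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class UpdateLog:
    """Hypothetical log of manual updates, each linked to a publishing list."""
    entries: list = field(default_factory=list)  # (publishing_list_id, update_type)

    def log_update(self, publishing_list_id: str, update_type: str) -> None:
        # The system user records each manual update against its publishing list.
        self.entries.append((publishing_list_id, update_type))

    def monthly_report(self, all_list_ids: list) -> dict:
        """Percentage of publishing lists that required manual updates,
        plus a tally of update types, per the objective's validation strategy."""
        updated = {list_id for list_id, _ in self.entries}
        types = defaultdict(int)
        for _, update_type in self.entries:
            types[update_type] += 1
        pct = 100.0 * len(updated) / len(all_list_ids) if all_list_ids else 0.0
        return {"percent_updated": pct, "update_types": dict(types)}


# Example: two of ten publishing lists needed manual updates this month.
log = UpdateLog()
log.log_update("PL-001", "page renumber")
log.log_update("PL-001", "effectivity fix")
log.log_update("PL-002", "page renumber")
report = log.monthly_report([f"PL-{i:03d}" for i in range(1, 11)])
```

A report like this is what lets the stakeholders compare the measured percentage against the 10-percent target after the 90-day settling period, and it highlights the most common update types as candidates for future enhancement.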

Most systems will define 3 to 10 SMART objectives. These become the starting point for the analysis of the business requirements for the system that will be built.

Link Business Use Cases and Business Requirements to Concrete Objectives

Good objectives are met by determining the value that the system must provide to meet the targets. One of the best ways to understand the value that must be provided is to work with the SMEs to identify the business use cases. From the sample objective, we can define business use cases to provide the publishing list and to provide reports to management. The use-case model might look like Figure 1.


Figure 1. The use-case model for the automated publishing system

We identified a use case to provide the publishing list to the publishing-system user, a use case to gather information about the manual updates, and a use case to provide the manual-update statistics to both the publishing-system user and publishing-system management. These three use cases and two actors are derived directly from the strategic objective. An architect would work with the publishing-system user and publishing-system management, usually in a joint-requirement planning session, to define the specific value of each use case and the process for providing that value. Objectives help manage the scope and provide the "what" for SMART targets (measurable) and assessment of the system meeting the targets (testable).

Subsequent use-case analysis will provide a detailed process for each use case, usually in the form of an activity diagram and business requirements tied to the use-case detail. These process and specific requirements lay the foundation for the system that will be built and become the contract between the stakeholders and the system developers.

Remember: Each business use case must link back to an objective, and the detail must define how the objective will be met—including supporting processes, activities, and measures.
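This traceability rule lends itself to a simple mechanical check. The sketch below is illustrative only; the objective and use-case names are hypothetical stand-ins, not the project's actual artifacts.

```python
# Hypothetical traceability check: every business use case must link back
# to a strategic objective that actually exists in the objectives document.

objectives = {
    "OBJ-1": "Eliminate 90 percent of manual updates to the publishing list",
}

use_cases = {
    "Provide Publishing List": "OBJ-1",
    "Log Manual Updates": "OBJ-1",
    "Report Manual-Update Statistics": "OBJ-1",
}

def unlinked_use_cases(use_cases: dict, objectives: dict) -> list:
    """Return use cases whose objective link is missing or dangling."""
    return [name for name, obj_id in use_cases.items()
            if obj_id not in objectives]
```

Running a check like this whenever the use-case model changes catches orphaned use cases early, before they quietly expand the project's scope beyond what the stakeholders signed off on.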

Clear Statement of Work Between IT, the SMEs, and the Customer

This defines what will be created, which resources are required, and the ground rules that the project will follow.

I find that the simpler the statement of work (SOW) is, the better the communication tool it will be. I like to have the following sections in my SOW.

Definition of the Deliverables of the Project or Effort

I find that a clear statement of what will be produced keeps everyone on the same page. For example, in this case, I would include the following in my list:

  • Strategic objectives document
  • Business use-case model and definition
  • Business object model (business entities)
  • System use-case model
  • System object model
  • Code
  • Test plan and test cases
  • Implementation plan
  • Project plan and schedule

As supporting information, I would include links to descriptions of each deliverable.

Resources Required

I include resources by name, if I know them, and the role that they will perform on the project. If the individual has not been identified, I include the role that is needed. A sample list might include:

  • System owner (decision maker).
  • Subject-matter experts (system user(s) and secondary system user(s)).
  • Project architect.
  • Project manager.
  • Project analyst (system, business and functional).
  • Developer(s).
  • Test lead and testers.

Ground Rules

Included in the ground rules will be descriptions of the techniques that will be used (UML, use cases, joint requirements-planning sessions, and so on) and general ground rules, such as the code of conduct and status-meeting cycles.

I like to keep the main statement of work simple and straightforward. Usually, I include the estimates for the effort and milestones in the project plan, which is created by the project manager and approved by the system owner.


In the case of the system to track the pages of the technical manuals, the objectives and business use cases were defined, validated, and approved. Everyone realized the real size and scope of the project. We managed to both construct a good tactical plan that met the objectives of the business and deliver the system. Even though the effort virtually started over, the resulting system was in use for 10 years.

Further Study

  • Kruchten, Philippe. The Rational Unified Process: An Introduction (3rd Edition). Boston, MA: Addison-Wesley Professional, 2003.
  • Weldon, Jr., Richard F. "Software-Architecture Practices in Requirements Management in a Package/COTS Environment." Rational User Conference 2003.


About the author

Richard F. Weldon, Jr., has been an IT Practitioner for 26 years and has extensive experience as a developer, software architect, and software-development coach and methodologist. He has led software-development process-improvement efforts and twice has spoken at the IBM Rational User Conference.

This article was published in Skyscrapr, an online resource provided by Microsoft. To learn more about architecture and the architectural perspective, please visit skyscrapr.net.

© 2014 Microsoft. All rights reserved.