
Measuring Success with Software Factories and Visual Studio Team System

 

Marcel de Vries
Info Support

Jack Greenfield
Microsoft Corporation

November 2006

Applies to:
   Microsoft Visual Studio Team System
   Microsoft SQL Server 2005 Reporting Services
   Software Factories

Summary: This white paper discusses how software factories and Microsoft Visual Studio Team System can be used together to improve quality, predictability, and productivity. Using Visual Studio Team System data-warehouse and reporting capabilities, the software-factory builder can reliably determine which aspects of product development need improvement and how to modify the software factory to improve them.

This white paper concludes that greater quality, predictability, and productivity can be achieved with a software-factory approach, rather than with traditional one-off development. The concepts and working methods are targeted at an audience of systems integrators and enterprise customers who develop custom software. (23 printed pages)

Contents

Introduction
Changing the Way We Build Software
Measuring Quality and Productivity
Applying Visual Studio Team System
Using Measurement Constructs (ISO 15939)
Putting It All Together
Conclusion
References

Introduction

Building software today is hard. Systems get more complex and larger every day. We face rapidly changing technology, while trying to keep pace with the demands of business customers who want us to write more software better and faster. Is it really possible to be more productive, while producing better-quality software? Can greater productivity be sustained across maintenance and upgrades, without degrading quality or significantly rewriting the code? Systems with millions of lines of code might not be possible to rewrite, especially if the business wants the changes quickly. In this white paper, we will explain how you can be more productive and produce better software by taking a software-factory approach to software development.

Changing the Way We Build Software

For several decades, the software industry has created software systems to support the needs of its customers. Despite all that experience, however, quality and productivity are not improving quickly. We realize about the same level of productivity today as we did 10 years ago, according to an annual survey by the Standish Group. Currently, 54 percent of all software-development projects are delivered over budget, 66 percent are considered unsuccessful by their customers, and 90 percent are not delivered on time! The most disturbing aspect of the survey, however, is that these figures have not improved much over the last decade.

Ask a group of developers or project managers whether they think they are successful in building software today. They almost always laugh a little, and then admit that they, too, fall victim to these kinds of results. It's sad to see that we tend to think that, because building software is such a creative process, we have to live with poor project delivery.

How Software Is Built Today

Poor productivity often results from not having the right requirements or all of the requirements. Productivity will degrade seriously, not only if we are building the wrong system, but also if the scope of the project is creeping—causing it to exceed its initial volume of functionality. We also find it difficult to achieve the right level of testing, to ensure the level of quality that is expected. Failed maintenance and upgrade releases are often related to the inability to predict the effect of code changes on product quality. How do we ensure that a change to one part of the system has not broken another part?

A lot of these problems arise because we learn too little from the projects we have done. Ask development teams if they learn from their mistakes, and it's remarkable to see how few of them actively harvest learnings and apply them to future projects. Few teams regularly reuse solutions created in the past, or keep track of the things that went well and the things that went wrong. As a result, there is not enough knowledge transfer between projects; knowledge remains in the heads of the developers. Lessons already learned are relearned by new developers. Developers find it hard to leave existing projects, because they know so much about them, and find it hard to join new projects, because not much is written down.

Because most projects fail to deliver on time and within budget, we can see that we also have a predictability problem. Ask project managers how they do their planning and scheduling, and it's even more remarkable to see how many stick with old habits, saying things such as, "I ask my best programmer for an estimate on how much this feature will cost, then I multiply that estimate by two; my programmer tends to be optimistic." Often, this kind of "budget" is then reduced, either to deliver a competitive bid, make the numbers work out, or satisfy expectations. These old habits are hard to change.

But let's be honest. Project managers are almost destined to rely on expert guesses, because they do not have reliable metrics to use instead. What they need is the ability to capture metrics from software-development projects in a reliable way—for example, in a data warehouse. From those metrics, they could learn how their development teams operate, and use that knowledge to construct better estimates. Using historical data to calibrate to organizational or project performance improves estimation accuracy, but most estimates are made without it.

With poor predictability, it's hard to control projects. When project managers make a decision, how can they know what effect it will have on schedule and costs? You simply cannot use poor estimates to predict the impact of the decision making. It's like shooting with your eyes closed, and hoping you'll hit the target.

Because of these problems, we produce too little software in too much time. We deliver software of poor and unpredictable quality. We have difficulty keeping development under control. It's hard for practitioners to change projects. Every hour over budget costs extra money, and every defect found in testing—or, even worse, in production—costs still more money, as does every bad decision made along the way. Building software today is very expensive.

Using Software Factories

Can we improve? Yes. We can build software on time, within budget, and with adequate quality. First, however, there must be an organizational awareness that the current approach to building software is grossly inefficient. Without awareness of existing problems, there will be no drive to improve. To start building software systems predictably, we must make a cultural change. We must make it easier for practitioners to know what to do, when to do it, why to do it, and how to do it, and we must automate more of the rote and/or menial aspects of their work. These goals can be achieved by formalizing selected aspects of the development process. What are those aspects? How can we formalize them without sacrificing agility?

Creativity and Formality

The key is to formalize the fine-grained activities that create and modify work products, instead of trying to formalize the entire development process. Coarse-grained workflows can then evolve to suit the project requirements and circumstances, provided that invariants are maintained to ensure that the results are valid. You might work on writing multiple classes at the same time, for example, switching between them as needed, provided that all of the dependencies among them are satisfied when you compile. This flexibility preserves agility and prevents queuing around a single activity. When you formalize fine-grained activities, instead of trying to formalize the entire development process, you gain quality, predictability, and productivity without sacrificing agility. You learn where formality is needed and where it is not needed, making it easier to strike the right balance between agility and formality. By applying formality only where and when it is needed, you allow the development process to evolve in a very organic and bottom-up way, as experience is gained. This approach also makes it easier to capture knowledge and transfer it between projects and practitioners, as we shall see later.

Creativity is needed for solving problems, but not for performing rote and/or menial activities. It's sad but perhaps not surprising that a large part of the day-to-day work of the typical developer consists of rote and/or menial activities. The key to gaining productivity and predictability without sacrificing agility is to encourage and support creativity where it is needed, and to formalize where it is not needed. The more we formalize with patterns, practices, and tools, the more gratuitous variation we drive out of the process. Driving out gratuitous variation makes it easier to measure software-development projects, learn from them, and use those measurements to improve future projects. Formalizing the rote and/or menial activities enables greater creativity, by reducing the amount of time and energy spent on activities that do not require creativity.

Software Factories

What we are talking about is industrializing software development, applying techniques long proven in other industries to our own industry, in the hope of making things better for our customers and ourselves. This industrialization is achieved by creating software factories that make software development more productive and more predictable at the same time.

A software factory is a packaged set of integrated processes, tools, and other assets that are used to accelerate life-cycle tasks for a specific type of software component, application, or system. Acceleration is achieved by giving practitioners guidance that helps them know what to do, when to do it, why to do it, and how to do it. We do this by providing process guidance with just enough process formalization, components that can be rapidly assembled and configured, or frameworks that can be rapidly completed, and by providing specialized tools that fully or partially automate the rote and/or menial activities.

One of the main goals we want to achieve with a software factory is to learn from the solutions that we created in the past for commonly encountered problems or requirements, and to apply those learnings to future projects. To do that, we need a way to describe those reusable solutions, and a way to organize them around the areas of interest or concern in which the problems or requirements they address are typically encountered. Organizing in this way helps to narrow the set of problems or requirements in focus at any one time, making it easier to identify reusable solutions that might apply. These areas of interest can be at a high level of abstraction, such as defining architectural layers, or at a low level of abstraction, such as defining method signatures for C# classes or interfaces.

Schemas and Templates

Software factories use the terminology of IEEE 1471, Recommended Practice for Architectural Description of Software-Intensive Systems, which calls an area of interest a viewpoint. Viewpoints can be defined for a variety of reasons, such as describing different parts of a product, at different levels of abstraction, in different phases of the software-development life cycle. Viewpoints nest, so a Data Access Layer viewpoint might contain a Data Access Library viewpoint, a Logical Database Design viewpoint, a Physical Database Design viewpoint, and a Data Security viewpoint.

Work products that are produced or consumed from a given viewpoint make up a view. A view is defined by its viewpoint in much the same way that an object is defined by its class. A work product might be a file, part of a file, multiple files, or parts of multiple files. In a Data Access Library viewpoint, for example, a work product might be a file containing a data-access class for a database table. A view defined by the Data Access Library viewpoint might be a project containing a set of data-access classes for all of the tables in a given database. Viewpoints can be crosscutting, as well as modular. The work products for a Data Security viewpoint, for example, might be preambles in the insert, replace, and update methods of data-access classes.

Viewpoints tend to drive the creation of project types and tools. For example, a Data Access Library viewpoint might map to a class library project type based on a project template with certain properties, containing item templates with certain properties for data-access classes. When we create projects of that type, we are creating views based on that viewpoint. The contents of the views are the files in the project folder. These are the work products defined by the viewpoint.

At higher levels of abstraction, viewpoints tend to drive the development of tools. For example, a Business Process viewpoint might be manifested by a business-process modeling tool. The tool exposes views based on that viewpoint in the form of models, which are the work products defined by the viewpoint. It also supports the activities defined by the viewpoint with menu commands and other gestures, such as a drag-and-drop operation from the toolbox onto the diagram to create a new message type.

For each viewpoint, we need a name and a description. We also need to know the work products produced or consumed, the steps involved in creating or modifying those work products, and the assets used to perform those steps. In other words, a viewpoint should tell us what to build, how to build it, and what we can use to build it (patterns, tools, templates, and so forth). Viewpoints are the building blocks of software factories. They formalize the fine-grained activities that create and modify work products.

Every software factory defines the set of interrelated viewpoints required to build its products. This set of interrelated viewpoints is called a schema. You can think of a factory schema as a table of contents that helps you discover how the software factory is organized, so that you can use the assets it provides to build the products it targets. A factory schema is a lot like a database schema, which helps you discover how a database is organized, so that you can navigate, query, and manipulate the data that it contains. Instead of describing the organization of a database, however, it describes the organization of a software factory.
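
To make this structure concrete, the following C# sketch models a viewpoint and a factory schema as simple data structures. The types and property names are illustrative only; they are not part of any software-factory or Visual Studio API.

    using System.Collections.Generic;

    // Illustrative sketch only: these types are hypothetical, not part of any
    // software-factory or Visual Studio Team System API.
    public class Viewpoint
    {
        public string Name { get; set; }          // for example, "Data Access Library"
        public string Description { get; set; }
        public List<string> WorkProducts { get; } = new List<string>();   // what to build
        public List<string> Activities { get; } = new List<string>();     // steps that create or modify work products
        public List<string> Assets { get; } = new List<string>();         // patterns, templates, libraries, tools
        public List<Viewpoint> Children { get; } = new List<Viewpoint>(); // viewpoints nest
    }

    public class FactorySchema
    {
        // The "table of contents" of the factory: its set of interrelated viewpoints.
        public string ProductFamily { get; set; }
        public List<Viewpoint> Viewpoints { get; } = new List<Viewpoint>();
    }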

To give a little example of a schema, let us look at a factory that creates Enterprise Administrative Applications based on a Service-Oriented Architecture (SOA). For this factory, you might define the following viewpoints:

  • Front-End Applications. Describes the creation of Web or Windows applications that support data entry on the user's desktop. It tells the user how to create Web or Windows forms using a forms designer and data-entry controls from a library you developed. The activities for this viewpoint are to create Web or Windows forms. The work products are the forms. The assets are the forms designer and the library of data-entry controls.
  • Process Services. Describes the creation of Web services responsible for managing business processes. The Web services are always constructed in the same way from a pattern, and have a service contract described by a C# interface using typed objects generated from an XSD schema. The activity for this viewpoint is creating Web services. The Web services are the work products. The assets are the pattern and the typed object generator.
  • Platform Services. Describes the creation of Web services responsible not for business data, but only for services generic to all systems, such as printing, auditing, authorization, and so forth. This viewpoint provides generic services available for reuse, and tells the user how to evaluate and customize each service. The activities for this viewpoint are evaluating and customizing the services. The work products are the customized services. The assets are the generic services.
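
As a usage sketch of the model above, the three viewpoints just listed could be captured as follows. The strings are shorthand for the descriptions in the bullets; nothing here is prescribed by the software-factory approach itself.

    var factory = new FactorySchema { ProductFamily = "Enterprise Administrative Applications (SOA)" };

    var frontEnd = new Viewpoint { Name = "Front-End Applications" };
    frontEnd.WorkProducts.Add("Web or Windows forms");
    frontEnd.Activities.Add("Create Web or Windows forms");
    frontEnd.Assets.Add("Forms designer");
    frontEnd.Assets.Add("Data-entry control library");

    var processServices = new Viewpoint { Name = "Process Services" };
    processServices.WorkProducts.Add("Web services that manage business processes");
    processServices.Activities.Add("Create Web services");
    processServices.Assets.Add("Construction pattern");
    processServices.Assets.Add("Typed object generator (from XSD)");

    var platformServices = new Viewpoint { Name = "Platform Services" };
    platformServices.WorkProducts.Add("Customized generic services");
    platformServices.Activities.Add("Evaluate and customize generic services");
    platformServices.Assets.Add("Generic services (printing, auditing, authorization)");

    factory.Viewpoints.Add(frontEnd);
    factory.Viewpoints.Add(processServices);
    factory.Viewpoints.Add(platformServices);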

The last piece of software-factory nomenclature to understand is the factory template, which is an installable package containing the assets supplied by the software factory. To use a factory, a practitioner must install the factory template on a workstation.

Factories are usually developed bottom-up. After you have identified a target product family, as you build systems, you can start discovering and describing work products and activities; organizing them into viewpoints; developing assets to support the activities; organizing the viewpoints into a factory schema; and packaging the assets into a factory template.

Feature Configuration

One of the keys to building effective factories is defining work products, activities, and assets that can be put to use in many different solutions. For example, the Data Access Library viewpoint can provide a library that helps you access a SQL database. You might find that you do not always use the same RDBMS for every project, and that your library must therefore accommodate other RDBMS. The work products used with the library and the descriptions of the activities performed with the library might also have to adapt accordingly. The more variability a viewpoint accepts, the more flexibility you have for applying it to multiple solutions, but also the more work you must perform to configure it correctly. Accommodating variability introduces complexity. You must find the right balance between too much variability and too little variability, to make your software factory effective. The more generic you make the factory, the less productivity and predictability you gain.

A good way to determine how much variability should be accommodated is to analyze the features of the solutions you expect to build. Using a technique called commonality variability (C/V) analysis, you separate common (or mandatory) features that must appear in the same way in every solution from variable (or optional) features that might appear only in some solutions, or that differ in how they appear from one solution to another. Describing C/V analysis in detail is beyond the scope of this white paper, but there are many papers and books on the subject for the interested reader.

In the factory that creates Enterprise Administrative Applications based on an SOA, Data Access is a mandatory feature (as every solution will perform Data Access), but Web Front End is an optional feature (as some of the solutions will have Web Front Ends, and others will have Windows Front Ends or even no front end at all).

In a factory, you can use feature models to describe the features that can appear in a member of the product line, to separate the common features from the variable ones, and to indicate how the variable features can appear. (Feature models are described in detail by Czarnecki and Eisenecker. For more information, see the References section of this white paper.) Feature models can also define the way decisions about variable features influence each other, such as stating that one variable feature requires another, or that one feature is incompatible with another. These decisions are made for a given application by configuring the feature model. Configuration is a simple process that involves specifying which of the variable features described by the model will appear in the application, and how each of them will appear. A simple feature model for the previously described factory is shown in Figure 1.


Figure 1. Feature-model example
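
The following C# sketch shows one minimal way such a feature model might be represented in code. The types are our own illustration, not a standard feature-modeling API, and real feature-modeling notations (such as those of Czarnecki and Eisenecker) are considerably richer.

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical, simplified feature model for illustration only.
    public class Feature
    {
        public string Name { get; set; }
        public bool Mandatory { get; set; }                           // common features are mandatory
        public List<string> Requires { get; } = new List<string>();   // one feature requires another
        public List<string> Excludes { get; } = new List<string>();   // one feature is incompatible with another
    }

    public class FeatureModel
    {
        public List<Feature> Features { get; } = new List<Feature>();

        // A configuration selects which variable features appear in one application.
        public bool IsValidConfiguration(ISet<string> selected)
        {
            foreach (var feature in Features)
            {
                if (feature.Mandatory && !selected.Contains(feature.Name)) return false;
                if (!selected.Contains(feature.Name)) continue;
                if (feature.Requires.Any(r => !selected.Contains(r))) return false;
                if (feature.Excludes.Any(x => selected.Contains(x))) return false;
            }
            return true;
        }
    }

    // Example: Data Access is mandatory; the front-end features are optional.
    // var model = new FeatureModel();
    // model.Features.Add(new Feature { Name = "Data Access", Mandatory = true });
    // model.Features.Add(new Feature { Name = "Web Front End" });
    // model.Features.Add(new Feature { Name = "Windows Front End" });
    // bool ok = model.IsValidConfiguration(new HashSet<string> { "Data Access", "Web Front End" });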

While feature models are often used for user-visible features exposed as requirements, they can also be used for design, implementation, testing, and deployment features visible only to developers. Some powerful scenarios are enabled by linking these features, so that configuring user-visible features configures developer-visible features. For example, linking user-visible Front End features to developer-visible solution-layout features might let the factory generate a default solution layout based on the type of front end chosen by the user.

Feature modeling is just one of many well-known mechanisms that can be used to describe variability. Others include forms, tables, wizards, designers, templates, patterns, scripts, and code. Variability mechanisms can be used alone, and in various combinations, to specify and implement variable features in a software factory.

Keys to Success

What are the key success factors in transitioning to a software-factory approach? You need the following abilities:

  • Find the commonalities and variabilities in your products.
  • Measure how your product-development process performs today, in terms of productivity and quality.
  • Define and improve a process that supports product development efficiently.
  • Provide a transparent model that helps everyone understand the productivity and quality being achieved, and use that model to drive cultural transformation.
  • Plan across more than one project at a time.
  • Quickly and cheaply develop reusable assets that encapsulate knowledge and make it easy to reuse, especially custom tools.
  • Identify specific domains and target them with custom tools and processes, instead of trying to apply general-purpose tools and processes to all domains.

Measuring Quality and Productivity

Now that we know a bit about software factories, let us look more closely at measuring quality and productivity in the context of a software factory.

As it turns out, the factory schema provides a useful mechanism for organizing metrics. Because each viewpoint targets a specific aspect of the software-development process, you can use viewpoints to define targeted measures of productivity and quality. Using those measures, you can gather data for specific aspects of the software-development process. By analyzing the data, you can then determine which viewpoints need to improve, how to improve them, and what you can gain by improving them.

To implement this approach, you need a way to express product size, time and budget spent, and product quality. These measures can be used to quantify predictability, productivity, and quality for each viewpoint. They can also be used to evaluate the end products produced by your factory. By measuring each viewpoint, as well as overall factory performance, you can determine how each viewpoint affects overall factory performance, and therefore how much to invest in supporting the activities in a given viewpoint with reusable assets. For example, you might provide simple guidelines for viewpoints that don't significantly affect overall efficiency, and sophisticated Domain-Specific Language (DSL)–based designers for viewpoints that do.

This approach is similar to the way in which large enterprise organizations optimize their business processes. They define the required skills, processes, and tools to produce the work products for a specific business goal. From there, they measure the effort it takes to fulfill the process, and then they analyze where they can improve. They call it Business-Process Monitoring, but it's basically what we are doing when we optimize a software factory. Clearly, measurement is a critical factor in establishing a baseline of the current performance of your factory and in identifying the right investments to make in software-factory development. This process helps you get the best return on investment in terms of predictability, productivity, and quality. It helps you compare the results to the goals initially set before you started factory development.

Using Function Points to Express Product Size

One of the aspects of software development we probably want to improve is productivity. To quantify productivity, you need a metric that expresses the volume of software product built in a span of time. When we are able to predict the size of the system and measure product-size growth during development, we can better predict the time required to complete the project and measure productivity in terms of hours spent per unit of product size. By measuring the actual growth and size, we are able to identify differences between the actual and planned values, and to start analyzing and managing the differences when they become apparent.

At this point, you might be wondering how we can predict product size and growth with enough accuracy to make this kind of measurement and analysis useful. It certainly does not seem possible if we are developing arbitrary applications one project at a time. If we are using a software factory, however, we have two advantages that significantly improve predictability. First, we are developing a member of a specific family of products with known characteristics, not just an arbitrary application. Because a factory allows us to describe a product family and its salient features, and more importantly to refine that description as experience is gained over the course of multiple projects, we know much more about an application being developed using a factory than we do about an arbitrary application. Second, we are developing the application by applying prescriptive guidance supplied by the factory. We therefore perform many development tasks in largely the same way from one application to the next, using some of the same patterns, templates, libraries, and tools. If we standardize the way in which we do some things, a factory tends to remove gratuitous variation from the development process, making it much more likely that product size and growth will follow similar patterns from one application to the next. If we are using a software factory, measuring these values and identifying, analyzing, and managing the differences between their planned and actual values can be extremely useful.

There are already many estimation methods that can help you determine the size of your system. To express size and productivity objectively, however, you need a standardized method of quantification. One such method is Functional Size Measurement, as defined in the ISO/IEC 24570 standard. (ISO/IEC 24570:2004 specifies a method to measure the functional size of software, gives guidelines on how to determine the components of the functional size of software, specifies how to calculate the functional size as a result of the method, and gives guidelines for applying the method.) This ISO standard uses function points to express the size of a software system based on functional specifications. These function points can be considered a "gross metric" for determining the size of a system and estimating effort and schedule. During development, this metric can be used to determine whether the project requires more or less work relative to other similar projects.

With function points, you define the size of the system in terms of business functionality. This functionality is determined from the early requirements you have gathered, and scored using the specification. Function-point analysis leverages the knowledge of building database-oriented applications, and can be applied whenever you build a system that manipulates data in a database. Function points are calculated from the estimated number of tables your application will have and the number of data-manipulating functions, such as data-retrieval and data-update functions. From this, you can calculate a number of function points that expresses the size of your product. After you have estimated your product size, you can learn how much time it takes to implement one function point, or even use historical data already available to predict how much time it should take to implement a function point. A software factory can influence the time spent to implement a function point (productivity), the number of defects per function point (quality), and the predictability of your estimations.

Looking more closely at predictability, we have seen how a factory allows us to separate the common features of all members of a product family from the variable features that are present in only some members, or that have different sizes or different characteristics in different members. Instead of gathering requirements for a given product from scratch, we can therefore assume it has the common features shared by all family members, and focus on specifying only its unique or variable requirements.

Returning to function-point analysis in the context of a factory, we find that we can start with a fixed minimum product size that we will always have, because our product family always contains certain fixed parts. This fixed minimum product size is a measure of the common features of the product family. We can also define computable sizes for many of the variable features that might or might not be added to our basic product profile. From this information, we can estimate what a certain configuration of features will cost. That information, in turn, can help us decide which features to build. In other words, function-point analysis based on feature configuration gives us a way to make informed trade-offs regarding cost and functionality up front.
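
As a minimal sketch of that arithmetic, assuming illustrative size numbers rather than calibrated counts: the estimate is a fixed base size for the common features, plus a size contribution for each variable feature selected in the configuration.

    using System.Collections.Generic;
    using System.Linq;

    // Illustrative only: the base size and per-feature sizes are hypothetical numbers.
    public static class SizeEstimator
    {
        public static int EstimateFunctionPoints(
            int baseSizeFp,                                   // common features shared by all family members
            IDictionary<string, int> variableFeatureSizesFp,  // computable sizes for variable features
            IEnumerable<string> selectedFeatures)             // the feature configuration for this product
        {
            return baseSizeFp + selectedFeatures.Sum(feature => variableFeatureSizesFp[feature]);
        }
    }

    // Usage: 300 fp of common functionality, plus a Web front end (150 fp) and
    // reporting (50 fp) selected for this member of the product family: 500 fp in total.
    // int size = SizeEstimator.EstimateFunctionPoints(
    //     300,
    //     new Dictionary<string, int> { { "Web Front End", 150 }, { "Reporting", 50 } },
    //     new[] { "Web Front End", "Reporting" });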

Let us say, for example, that we applied function-point analysis and determined that the system we are going to build has an estimated size of 500 function points. As we build this system, we find that it takes 6,500 hours to complete. From that, we can express our productivity as 13 hours (h) per function point (fp).

If we also keep track of the defects found in the product during development, user-acceptance test, and production, we can express those numbers as quality metrics. Say that we found 500 bugs during development, 50 during acceptance testing, and 5 after going into production. We could express this as 1 defect/fp during development, 0.1 defect/fp at acceptance test, and 0.01 defect/fp in production.

Now, say that many of these defects can be traced back to the Front-End Applications viewpoint. From that, we learn that this viewpoint has a high contribution to the overall number of defects, and we can focus our attention on what might need improvement within this viewpoint. From this kind of analysis, we can determine which viewpoints to improve and how to improve them, to reduce the number of defects the next time the factory is used. The great thing about quantifying the number of defects against a metric like function points is that you can now set goals for the improvements you want to achieve with your investments; for example, reducing defects per function point by 20 percent for the Front-End Applications viewpoint. Performing defect and function-point analysis on a per-viewpoint basis gives us a powerful tool for improving our product-development process, because it helps us determine where the bottlenecks lie and, therefore, where to invest and how to invest to obtain better results.

The base metrics just explained can inform your next project, or perhaps even the current one, if there is more development to do and it touches the viewpoints for which you have gathered metrics. As an example, say that after gathering requirements and applying function-point analysis, you know the new system is going to be 750 fp. You can now predict that implementing those requirements should result in about 9,750 hours of work, and that you should find about 75 bugs during user-acceptance tests.
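
The bookkeeping behind these figures is simple; the following sketch merely restates the numbers from the example above in code.

    // Worked example using the figures from the text above.
    double sizeFp = 500;                       // estimated product size
    double hoursSpent = 6500;                  // actual effort
    double hoursPerFp = hoursSpent / sizeFp;   // 13 h/fp

    double devDefectsPerFp = 500 / sizeFp;         // 1 defect/fp during development
    double acceptanceDefectsPerFp = 50 / sizeFp;   // 0.1 defect/fp at acceptance test
    double productionDefectsPerFp = 5 / sizeFp;    // 0.01 defect/fp in production

    // Prediction for the next project, estimated at 750 fp:
    double nextSizeFp = 750;
    double predictedHours = nextSizeFp * hoursPerFp;                      // about 9,750 hours
    double predictedAcceptanceBugs = nextSizeFp * acceptanceDefectsPerFp; // about 75 bugs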

The accuracy of these predictions depends highly on the level of metrics you have gathered and on the degree to which future system-development projects resemble the ones for which you have gathered metrics. Each viewpoint in a software factory accommodates a certain amount of variability in the development of the system. The amount of variability, in turn, determines the kinds of assets the viewpoint can offer to the factory user, and therefore the level of consistency from one project to the next for that aspect of the product. For example, in the first version of your factory, you might have only a few viewpoints that describe the architecture of your system and that provide only guidelines to support the developers in fleshing out the implementation. The developers using this factory will have to write a lot of code by hand. After building multiple systems using this factory, however, you might see that the construction of the user-interface portion of those systems tends to vary a lot, as the individual developers interpret the guidelines quite differently. From that observation, you could conclude that the user-interface viewpoint accommodates a lot of variability. When you have many viewpoints in your factory with high variability, your measurements and predictions will be less accurate than when you have very limited variability in the viewpoints.

Now, at this point, we might ask whether or not the level of variability in a given viewpoint is really necessary. If not, we should be able to increase the accuracy of our measurements and predictions by removing the excess or gratuitous variability. For example, let us look at the user-interface viewpoint again; but this time, we provide a set of library components supporting the guidelines, and a graphical tool that configures user navigation paths. By providing these assets, we are formalizing some aspects of the user-interface development process that were previously loosely defined. This formalization reduces the amount of variability accommodated by the user-interface viewpoint. Systems developed using this factory will exhibit greater uniformity in user-interface construction, making our measures and predictions more accurate. Because the magnitude of error in estimates decreases as the variability accommodated by the factory decreases, predictability can be improved as a factory evolves by removing excess or gratuitous variability. In practice, productivity and quality also tend to improve with predictability, because reducing variability reduces the amount of time required to build a given feature and the number of defects introduced during its construction. At this point, a quick word of caution is in order: Following Occam's razor, variability should be reduced as much as possible, but never to the point of making the project take longer or cost more by over-constraining the developers.

When you start using function points, you can initially use historical data from other organizations, as found in the literature, for your first estimations. Historical data is useful, because it accounts for organizational influences, both recognized and unrecognized. The same idea applies to the use of historical data within the software factory. Individual projects developed using a software factory will have a lot in common. Even if you do not have historical data from past projects, you can collect data from your current project and use it as a basis for estimating the remainder of your project. Your goal should be to switch from using organizational data or industry-average data to factory data and project data, as quickly as possible (McConnell, 2006).

Using Measurements to Improve a Factory

By baselining or actually calibrating the productivity and quality parameters measured in hours per function point and defects per function point, you can analyze the project data and identify activities that might take a lot of time, or viewpoints that have a high defect contribution. After calibrating, you can start changing the way in which your factory is organized, and improve it in a variety of ways, such as the skills it requires, the process it defines, or the reusable assets it provides. It is crucial to identify the areas that need improvement, so that your investments are well-placed. However, that should be relatively easy, now that you have a way to determine how much each viewpoint contributes to predictability, productivity, and quality. After you have a baseline in place with initial data, you can run a continuous loop that analyzes the performance of each viewpoint, uses that information to determine what to improve, makes the improvements, and then repeats the process.

This virtuous cycle can be used to target a variety of measures. To improve productivity, for example, you can identify the least-efficient viewpoint in terms of hours/function point, and improve it by providing more or better guidance or training; by improving the templates used to construct initial cuts of the work products; by providing specialized tools that automate work product construction and modification; and so on. A key part of this process is estimating the cost of making a given improvement, estimating the gain in productivity likely to result from making the improvement, and estimating whether the results justify the investment. After implementing the improvement and incorporating it into the factory, you can measure whether it met the goals you set, in terms of the reduction in hours/function point. Figure 2 illustrates this process.


Figure 2. Iteration loop for factory development
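
A minimal sketch of the selection step in this loop follows, assuming per-viewpoint metrics have already been collected from the data warehouse. The types and the return-on-investment check are illustrative, not a prescribed formula.

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical per-viewpoint metrics gathered from the data warehouse.
    public class ViewpointMetrics
    {
        public string Viewpoint { get; set; }
        public double HoursPerFunctionPoint { get; set; }
        public double DefectsPerFunctionPoint { get; set; }
    }

    public static class FactoryImprovement
    {
        // The viewpoint where better guidance, templates, or tools are most
        // likely to pay off in productivity.
        public static ViewpointMetrics LeastEfficient(IEnumerable<ViewpointMetrics> metrics) =>
            metrics.OrderByDescending(m => m.HoursPerFunctionPoint).First();

        // Crude check: does the expected saving over the coming projects justify
        // the cost of making the improvement?
        public static bool WorthInvesting(double improvementCostHours,
                                          double expectedHoursPerFpReduction,
                                          double functionPointsInComingProjects) =>
            expectedHoursPerFpReduction * functionPointsInComingProjects > improvementCostHours;
    }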

Applying Visual Studio Team System

Now that we have learned how to define a factory and use measurements and analysis to refine it iteratively, consider how to enable our product-development team to use the factory to create the required work products. This enablement starts with a development environment that supports the whole product life cycle from birth to discontinuation, such as Visual Studio Team System. Using Team System is a key to enabling your product-development teams to benefit from the approach described earlier.

Team System consists of two main parts. The first is the set of Visual Studio Team Editions (Team Edition for Architects, Developers, and Testers) installed on the development machine; these are part of the Visual Studio IDE and provide tools for specific roles in your development team. The second is Team Foundation Server (TFS), which hosts core life-cycle aspects of Team System, such as Version Control, Work-Item Tracking, Team Build, Team Portal, and the Data Warehouse that contains data about the projects that use TFS.

Implementing a Factory with Team System

Currently, Team System does not understand software factories. However, because Team System is so configurable and extensible, we can manually set it up to support a software factory by mapping various parts of the factory schema onto various configuration elements or extension points.

Remember that a software factory contains a schema that describes its organization. As we have seen, the factory schema defines a set of interrelated viewpoints, and each viewpoint describes related work products, activities, and assets for users in a specific role. We can use this information to configure Team System for developing applications.

A viewpoint can be mapped to a concept that Team System calls an area in one or more iterations. The role associated with a viewpoint can be mapped to one or more Team System project roles. In practice, multiple viewpoint roles will probably be mapped to a single Team System project role. The activities defined by a viewpoint can be added as work items in those areas at project creation, and directly assigned to the appropriate role. They can also be documented by customizing the process guidance, and custom work-item types can be created to track them and link them to work products. Some of the work products can be described using custom work-item types. Content assets, such as guidelines, patterns, and templates, can be added to the project portal document libraries. Executable assets, such as tools and class libraries, can be placed in the version-control system. To measure and improve the performance of our factory, we can add metrics to the Team Foundation Data Warehouse.

The keys to configuring Team System are the project-creation wizard and the process template. The project-creation wizard is a tool for creating projects in TFS. It uses a file selected by the user called a process template to configure the server for the project. The template contains several sections, each describing the way in which a specific part of the server will be configured. With the process template, for example, you can define work-item types, customize version control, define areas and iterations, define roles, assign the appropriate rights to each role, set up the project portal, and do many other things to customize the development environment and the development process.

Let us look at how to use the project-creation wizard and process template to configure Team System to support a software factory.

Defining Work Items

Team System uses work items to track the work that must be done to create a given product. Work items describe the work that must be done, and they identify the party accountable for that work at a given point in time. Work items can be of different types designed to describe different kinds of work. For example, a bug can be described by a work item of type Defect that contains information pertinent to fixing a bug, such as the description of the bug, reproduction steps, estimated time to analyze or fix the bug, and so on. Work-item types are created or modified by changing the XML definitions loaded into the server and used at project-creation time. They can also be modified after project setup.

Some types of work items defined by the MSF for CMMI Process Improvement, such as Bug and Requirement, describe work products. One type of work item, Task, describes the activities performed to take a work product from one state to another. Both types can be put to use effectively in a factory, because they very closely match the concepts used to define the factory. Specifically, we can book work items for the work products defined by a viewpoint, and tasks for the associated activities. Using these work items, we can gather information on how much time is spent on each activity, and we can learn what impact the activity has on factory productivity.

One of the issues you will encounter when performing this mapping is that work items currently do not nest. The inability to nest work items makes it hard to describe the work products of aggregate or composite viewpoints, such as the Data Access Layer viewpoint described above. You will also find that many work products are not explicitly described by work items in a typical team project. For example, instead of creating a work item to describe a data-access library, we might create a work item to describe the construction of a data-access library, and then link the work item to the source files for the library in the configuration management system. Another issue is that Team System tasks do not carry pre- and post-conditions like activities in a factory, so the criteria for moving them into or out of scope are not documented, and scheduling decisions must be made manually.

Defining Areas and Iterations

Work items can be linked to a so-called area of your project and to an iteration. Areas provide a way to book the work on a specific part of the solution that is of interest when you want to run reports on the accumulated data in the data warehouse. Areas in Team System closely match the concept of viewpoints in a software factory, as both represent areas of interest or concern.

When you register the work that is done on specific areas of interest, you can find out which areas contain the most bugs or consume the most time. When you map areas of interest in work-item tracking to your factory viewpoints, you can use these metrics to provide the productivity and quality measures for specific viewpoints.

To obtain the correct information from work items, you must define the areas and iterations properly when you start product development. A factory simplifies this task by defining the viewpoints of interest for the type of product being developed. All you have to do to set up your areas correctly is define an area for every viewpoint. You might then need to map the viewpoint names to area names that will be familiar to your development team, so that they can readily identify the area to which a given work item belongs.
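
A small sketch of that mapping, assuming the Viewpoint type sketched earlier in this paper: it simply turns the nested viewpoints of the factory schema into area path strings. The actual areas would still be created on the server through the process template or the Team Foundation Server administration tools.

    using System.Collections.Generic;

    // Illustrative only: builds area path strings such as
    // "Data Access Layer/Data Access Library" from nested viewpoints.
    public static class AreaMapper
    {
        public static IEnumerable<string> ToAreaPaths(Viewpoint viewpoint, string parentPath = "")
        {
            string path = string.IsNullOrEmpty(parentPath)
                ? viewpoint.Name
                : parentPath + "/" + viewpoint.Name;
            yield return path;

            foreach (var child in viewpoint.Children)
                foreach (var childPath in ToAreaPaths(child, path))
                    yield return childPath;
        }
    }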

One very good starting point in defining viewpoints for a factory is a set of common viewpoints that tend to appear in many factories. Good examples of common viewpoints appear in the schema for the Business-Collaboration Software Factory (see the References section at the end of this white paper). Two of those common viewpoints that prove particularly useful in configuring Team System are System Engineering and Project Engineering. In the System Engineering area, you should make a subtree containing the architectural viewpoints that describe salient parts of your system. This will help you identify which parts of the system have the most significant impact on productivity (time spent) and quality (number of defects). The Project Engineering area is also interesting, because it can help you find anomalies in the way in which you have formalized activities in your project, and it can help you decide whether or not to improve the process definition at certain points. Figure 3 shows an example of areas and iterations that reflects the schema for a simple factory that builds a service-oriented administrative application with multiple front ends.


Figure 3. Example area definition reflecting viewpoints

The areas you define for work-item tracking will evolve, along with the factory. As the factory matures, the set of viewpoints that make up its schema will change, as will the set of viewpoints you want to measure. The area tree can become pretty deep if you try to incorporate every viewpoint defined by your factory. It is very important that you do not explode the tree into many different levels. Keep in mind that it needs to be very simple, so that team members can easily identify the areas to which work items should be linked. The more deeply nested the tree, the harder it becomes to find the right area for a given work item. If it becomes too hard, developers will simply book work items near the root of the hierarchy, defeating the purpose of creating a deeply nested tree.

Defining Roles

The roles you create within Team System should reflect all the roles identified by the viewpoints in the factory schema. The roles do not have to be exactly the same as those identified by the viewpoints, but each role in your project should map to one or more roles identified by viewpoints. Because roles identified by viewpoints are generally more fine-grained than roles defined for a project, one project role generally implements multiple related viewpoint roles.

Configuring Product Features

By modifying the wizard and the process template, you can enable configuration beyond what the default wizard and the process templates that ship with TFS provide. In fact, you can create custom wizard pages that let the user configure some of the variable features defined by the factory. Returning to the previous example of a building block for data access configured to use different database products, you might want to ask the user which database product to use, and then place a version of the building block preconfigured for that product in version control as a starting point for the project. While software factories generally require configuration throughout the development process, customizing the project-creation wizard and the process-template format is a good starting point. Figure 4 shows how the feature model shown in Figure 1 can be configured using a custom wizard page.

The project-creation wizard is a great point of extensibility that supports the software-factory practice of identifying assets with commonality. You can develop these assets separately, define the variability they must accommodate to be reusable, and use a wizard page to ask the user how the variable aspects should be set for a particular project. The wizard can then customize the asset, based on that input, to the needs of the project. Although commonality and variability in software factories go beyond the preconfiguration of these assets, this is a good starting point for customization. After a selected asset is configured, the factory might also provide an additional configuration tool to change selected configurations after the initial setup of the project.
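
For example, a wizard page that asks which database product to use might drive a selection like the following sketch. The asset paths and product names are hypothetical, and the step that actually copies the chosen building block into version control is not shown.

    using System;
    using System.Collections.Generic;

    // Hypothetical mapping from the user's database choice to a preconfigured
    // version of the data-access building block stored with the factory assets.
    public static class DataAccessAssetSelector
    {
        private static readonly Dictionary<string, string> Preconfigured =
            new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
            {
                { "SQL Server", "Assets/DataAccess/SqlServer" },
                { "Oracle",     "Assets/DataAccess/Oracle" }
            };

        public static string SelectAssetPath(string databaseProduct)
        {
            if (Preconfigured.TryGetValue(databaseProduct, out string path))
                return path;   // this folder is placed in version control as the project's starting point
            throw new ArgumentException("No preconfigured building block for " + databaseProduct);
        }
    }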

In a factory, you would use the previously shown feature models to describe the possible features you want to use, the way in which certain decisions on a feature can influence other features, and the way in which the factory instance is created. As you saw in the example in Figure 1, you can create a feature model to resemble a set of features for a factory targeted to the development of an administrative application using an SOA.

Figure 4 shows how this model can be reflected in a custom wizard page that allows you to preconfigure the factory.


Figure 4. Project wizard custom page

There are at least two additional ways to support configuration in team-project setup, besides the project-creation wizard. You can customize the version-control repository to anticipate major configuration decisions that will be taken during solution development, so that it is easy to save the state of the solution in configuration management before the decisions are taken, and to restore it later if you have to change a decision. This technique is called backtracking. You can also define work-item types that capture configuration decisions, and then add instances of those types to the work-item tracking database as the decisions are made, to capture the outcomes, as well as to schedule the resulting work.

Customizing the Project Portal

Team System has a project portal that can be used by the development team to share project-relevant information. The portal is also the entry point to the process guidance for a specific process template, as well as to reusable assets, such as templates and guidelines. The reusable assets to be uploaded to the portal are supplied by the process template. You can also change the content displayed in the process-guidance Web site. This customization is performed using an InfoPath document. The InfoPath document is compiled to create a new Web site that can be incorporated into the process template. After uploading the new process template, you can create team projects that use your customized process-guidance Web site.

Adding Measurements to the Data Warehouse

The Team System data warehouse keeps track of all kinds of information about the development of the solution. One section of the data warehouse holds information about work items, which is interesting from a factory perspective, as described earlier. Other sections hold information about tests, daily builds, and other Team System features. The data warehouse can be extended in two ways to support measurement.

First, you can change the fields kept in the warehouse for a specific work-item type by modifying the work-item type definition, either by changing the fields it contains or by adding fields as new facts or dimensions in the warehouse. When a field is marked as reportable in the work-item type definition, it is added dynamically to the data warehouse. Of course, if you want to show reports on these additional fields, you will also have to create reports for the data and upload them to the reporting server, to make them accessible to other team members.

Second, you can incorporate data generated by custom tools. If your factory provides custom tools that generate data, and you want to use the data in the data warehouse, you can add a custom data-warehouse adapter to TFS, as shown in Figure 5.


Figure 5. Team Foundation Server data-warehouse architecture

For example, to measure the size of each solution in terms of the number of lines of code, you would build a custom tool that counts the lines of code in each file, and a custom data-warehouse adapter. You would also add a step to your daily build that runs your custom tool over the sources in the current solution, and places the result in a file. The custom data-warehouse adapter would then pick up the information from the file and make calls to the data-warehouse object model provided by Team System to add the information to the data warehouse. Custom data can be viewed using custom reports.
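
A minimal sketch of the counting-tool half of that example follows; the custom warehouse adapter itself would use the data-warehouse object model provided by Team System and is not shown. The file layout and output format here are assumptions.

    using System;
    using System.IO;
    using System.Linq;

    // Counts non-blank lines in the C# sources under a solution folder and writes
    // the total to a file for a custom data-warehouse adapter to pick up.
    public static class LineCounter
    {
        public static void Main(string[] args)
        {
            string solutionDir = args[0];
            string outputFile = args[1];

            int totalLines = Directory
                .EnumerateFiles(solutionDir, "*.cs", SearchOption.AllDirectories)
                .SelectMany(File.ReadLines)
                .Count(line => line.Trim().Length > 0);

            File.WriteAllText(outputFile, totalLines.ToString());
            Console.WriteLine("Counted {0} lines of code.", totalLines);
        }
    }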

Using Measurement Constructs (ISO 15939)

So far, we have looked at how to define a factory, refine a factory using measurement and analysis, and configure Team System to support a factory. Before we can put all these insights together to build and refine software factories with Team System, we must know one more thing: how to collect the right information.

What we need are formal definitions of the relationships between the things we are measuring and the information we need to support refinement. Those definitions are called measurement constructs. Measurement constructs are combinations of base measures, derived measures, and indicators. A base measure captures information about a single attribute of some software entity using a specified measurement method, and it is functionally independent of all other measures. A derived measure is defined as a function of two or more base and/or derived measures, and it captures information about more than one attribute. An indicator is a measure that provides an estimate or evaluation by applying an analysis model to one or more base and/or derived measures, to address specified information needs. Indicators are the basis for measurement analysis and decision making. A measurement construct describes an information need, relevant entities and attributes, base and derived measures, indicators, and the data-collection procedure. Additional rules, models, and decision criteria can be added to the base measures, derived measures, and indicators. Figure 6 illustrates the structure of a measurement construct (McGarry, 2002).


Figure 6. Structure of measurement construct
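
As a concrete sketch of this structure, the software-size-growth example developed in the rest of this section can be expressed as follows. The class and the 20 percent decision criterion simply restate the example guidelines given below; they are illustrative, not a normative ISO 15939 artifact.

    using System;

    // Sketch of one measurement construct: software-size growth.
    public class SoftwareSizeGrowth
    {
        public double PlannedSizeFp { get; set; }   // base measure: planned size in function points
        public double ActualSizeFp { get; set; }    // base measure: actual size in function points

        // Derived measure: software-size growth ratio.
        public double GrowthRatio => ActualSizeFp / PlannedSizeFp;

        // Indicator with its decision criterion:
        // "Investigate when the software-size growth ratio has a variance of greater than 20 percent."
        public bool NeedsInvestigation => Math.Abs(GrowthRatio - 1.0) > 0.20;
    }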

Key terms on software measures and measurement methods have been defined in ISO/IEC 15939 on the basis of the ISO international vocabulary of metrology. The terms used in this white paper are derived from ISO 15939 and PSM (Practical Software Measurement).

Defining a Measurement Construct

To define a measurement construct that we can add to our TFS data warehouse, we will use the following steps:

  • Define and Categorize Information Needs

    To ensure that we measure the information we need, we must clearly understand our information needs and how they relate to the information we measure. Experience shows that most information needs in software development can be grouped into one of the seven categories defined by ISO 15939: schedule and progress, resources and cost, product size and stability, product quality, process performance, technology effectiveness, and customer satisfaction. An example of an information need in the category of product size and stability might be, "Evaluate the size of a software product to appraise the original budget estimate."

    These information needs can be used to measure the properties of a specific viewpoint in a software factory. They must be prioritized to ensure that the measurement program focuses on the needs with the greatest potential impact on the objectives we have defined. As described earlier, our primary objective is usually to identify the viewpoints whose improvement will yield the best return on our investments. Because viewpoints can nest, we can often roll up measurements to higher-level viewpoints. For example, if we had a User Interface viewpoint containing viewpoints like Web Part Development and User Authorization, we might roll up the customer-satisfaction measurements from specific Web parts to the user-interface level.

  • Define Entities and Attributes

    A measurable attribute is a distinguishable property of a software entity. The entities relevant to the information need ("Evaluate the size of a software product to appraise the original budget estimate," for example) might be a development plan or schedule, and a baselined set of source files. The attributes might be function points planned for completion each period, source lines of code, and a language-expressiveness table for the programming languages used.

  • Define Base Measures and Derived Measures

    Base measures are functionally independent of all other measures and capture information about a single attribute. Specifying the range and/or type of values that a base measure can take on helps to verify the quality of the data collected. In our example, we have two base measures: the estimated size of the software product and the actual size. The scale for both base measures will range from zero to infinity. A derived measure captures information about more than one attribute. These terms are illustrated in Figure 7.

    Figure 7. Base and derived measures: software-size growth

  • Specify Indicators

    Indicators are the basis for measurement analysis and decision making. They are measurement values presented to users. To use an indicator correctly, its users must understand the relationship between the measure on which it is based and the trends it reveals. The measurement construct should therefore provide the following information for each indicator:

    • Guidelines for analyzing the information. For our example, we might provide analysis guidelines like this: "Increasing software-size growth ratio indicates increasing risk to achieving cost and schedule budgets."
    • Guidelines for making decisions based on the information. For our example, we might provide decision-making guidelines like this: "Investigate when the software-size growth ratio has a variance of greater than 20 percent."
    • An illustration of interpreting the indicator. For our example, we might provide an illustration like Figure 8 and describe the following: "The indicator seems to suggest that the project production rate is ahead of schedule. However, after further investigation, it turns out that the actual size of one item was higher than planned, due to missing requirements that were not identified until initial testing. Resource allocations, schedules, budgets, and test schedules and plans are affected by this unexpected growth."

    Figure 8. Graphical representation of the indicator: software growth, planned vs. actual

  • Define Data-Collection Procedure

    Now that we know how to relate the base measures to the information needs, we must define the data-collection procedure. The data-collection procedure specifies the frequency of data collection, the responsible individual, the phase or activity in which the data will be collected, verification and validation rules, the tools used for data collection, and the repository for the collected data.
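
Pulling these steps together for the software-size growth example, the following sketch (illustrative only; the sizes, names, and the 20 percent threshold come from the example above, not from any Team System API) computes the derived measure, applies the indicator's decision criterion, and records the data-collection procedure as simple metadata:

    # Base measures (each captures a single attribute, collected independently)
    planned_size_fp = 500.0   # planned size in function points, from the development plan
    actual_size_fp = 640.0    # actual size in function points, from the baselined sources

    # Derived measure: software-size growth ratio
    growth_ratio = actual_size_fp / planned_size_fp

    # Indicator: "Investigate when the software-size growth ratio has a
    # variance of greater than 20 percent."
    variance = abs(growth_ratio - 1.0)
    if variance > 0.20:
        print(f"Growth ratio {growth_ratio:.2f}: investigate impact on cost and schedule budgets.")
    else:
        print(f"Growth ratio {growth_ratio:.2f}: within acceptable variance.")

    # Data-collection procedure, recorded alongside the measures
    collection_procedure = {
        "frequency": "end of each iteration",
        "responsible": "project manager",
        "verification": "planned and actual sizes must be greater than zero",
        "repository": "Team System data warehouse",
    }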

Using a Measurement Construct

To use a measurement construct successfully, we must address two important issues: influencing indicators and avoiding dysfunctional measurement.

Influencing Indicators

To use indicators successfully for measurement analysis, decision making, and changing processes, you have to be sure that their users know what they represent, how to interpret them, and what can be changed to influence their outcomes. For our example, the user must understand that to make the Actual Size in Function Points meet the Planned Size in Function Points, we must produce more function points in the same amount of time (that is, we must be more productive). The user must also understand how to increase productivity.

Avoiding Dysfunctional Measurement

Decision makers must also know how to avoid dysfunctional measurement. The goal of measurement is to improve performance by making changes, such as performing different activities, applying different assets, and so on. It is important to ensure that the changes make sense. You do not want people making counterproductive changes to achieve an expected outcome in some measure. In our example, the pressure to meet the Planned Size in Function Points might be so high that people would start to pad the implementation with additional lines of code. Identifying and describing risks is one of the keys to successful measurement. A best practice is to avoid measuring individuals. The more significance is attached to any quantitative social indicator, the more likely it is to distort and corrupt the processes it is intended to help improve.

Putting It All Together

We are now ready to combine everything we have learned and discuss how to define and use measurement constructs to improve a software factory supported by Team System.

Adding Measurement Constructs to the Data Warehouse

As described, each measurement construct needs to define at least the information needs, the entities and attributes, the base measures and derived measures, the indicators, and a data-collection procedure. To map this to the Team System Data Warehouse, we must determine how to obtain the required information, by either modifying work-item type definitions to add fields and mark them as facts or dimensions, or building a custom tool and custom data-warehouse adapter that collects data produced by the tool. We also must determine how to display the indicators, usually by creating custom Microsoft SQL Server 2005 Reporting Services reports.
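
As an illustration of the custom-tool option, the following sketch (illustrative only; the paths, file names, and output layout are ours, and the warehouse adapter that would consume the output is not shown) collects one base measure, source lines of code, from a baselined set of source files:

    import csv
    from datetime import date
    from pathlib import Path

    def count_source_lines(root: Path, pattern: str = "*.cs") -> int:
        """Count non-blank lines in all files matching pattern under root."""
        total = 0
        for source_file in root.rglob(pattern):
            with open(source_file, encoding="utf-8", errors="ignore") as f:
                total += sum(1 for line in f if line.strip())
        return total

    if __name__ == "__main__":
        # Write the base measure to a file that a custom warehouse adapter could pick up.
        size = count_source_lines(Path("./src"))
        with open("size-measure.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["date", "base_measure", "value"])
            writer.writerow([date.today().isoformat(), "source_lines_of_code", size])

A custom data-warehouse adapter could then load such output into the warehouse, and a custom Reporting Services report could present the corresponding indicator.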

Iterative Improvement

When you have mapped your factory onto Team System, you can start using it to build solutions. It will guide your team in building the solutions, and will provide you with information based on the measurement constructs you have defined and implemented. From there, you can start analyzing the measurements and using the indicators to identify opportunities for improvement. With this information, you can decide which viewpoints can be improved, how much you can gain by improving them, and how much to invest in improving them. Finally, you can make the improvements you have chosen, usually by adding guidance and providing better assets to support the creation of work products. As you build solutions using these improvements, you can again use the measurements and indicators to determine how the gains you realized compare with the gains you expected, and calibrate your investment planning accordingly.
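
As a rough illustration of that investment decision (the viewpoint names and figures are invented for this example and are not produced by Team System), the expected gains and improvement costs per viewpoint can be compared directly:

    # Hypothetical viewpoints with an estimated gain per release (hours saved)
    # and an estimated one-time improvement cost (hours of factory work).
    viewpoints = {
        "Web Part Development": {"expected_gain": 120, "improvement_cost": 40},
        "User Authorization":   {"expected_gain": 30,  "improvement_cost": 60},
        "Data Access":          {"expected_gain": 200, "improvement_cost": 90},
    }

    # Rank the viewpoints by expected return on the improvement investment.
    ranked = sorted(viewpoints.items(),
                    key=lambda item: item[1]["expected_gain"] / item[1]["improvement_cost"],
                    reverse=True)

    for name, figures in ranked:
        ratio = figures["expected_gain"] / figures["improvement_cost"]
        print(f"{name}: expected gain/cost ratio {ratio:.1f}")

In practice, the expected gains come from the indicators gathered by the factory's measurement constructs, and the ranking is revisited as new measurements arrive.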

Conclusion

This white paper is motivated by a desire to change the grossly inefficient way in which we build software today with "one-off" or project-at-a-time development. Our customers see that we struggle to deliver projects on time, within budget, and with the expected features. We can help ourselves and our industry as a whole by capturing the knowledge we gain from experience and transferring it to other projects using software factories. We have seen how to define a factory and how to measure its performance in terms of productivity and quality.

By quantifying the sizes of the products we build, measuring the time spent building them, and recording the number of defects found, we can describe the performance of our factories. The factory schema is mapped onto Team System using its customization and extensibility points. We can set up Team System by placing the assets identified by the factory schema in the version-control repository or on the Team Foundation portal. We can use the portal to provide process guidance for activities described by the factory schema. We can use the project-creation wizard to arrange the initial setup of our factory, and we can use feature modeling to define the forms we add to the wizard. A large portion of the initial project setup is driven by the process templates, which we can modify to support our factories.

By defining measurement constructs and implementing them in the Team System Data Warehouse, we can gather metrics that describe software-factory performance in terms of productivity and quality. Over time, we can use these metrics to improve our factories continually, gaining not only productivity and quality, but also predictability, by removing excess or gratuitous variability.

The end result of implementing software factories with Visual Studio Team System is more successful projects and greater customer satisfaction.

References

Austin, Robert D. Measuring and Managing Performance in Organizations. New York, NY: Dorset House Publishing Co., Inc., 1996.

Greenfield, Jack, and Keith Short. Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools. Indianapolis, IN: Wiley Publishing, Inc., 2004.

McConnell, Steve. Software Estimation: Demystifying the Black Art. Redmond, WA: Microsoft Press, 2006.

McGarry, John, et al. Practical Software Measurement: Objective Information for Decision Makers. Boston, MA: Addison-Wesley Professional, 2002.

 

About the authors

Marcel de Vries is an IT architect at Info Support in the Netherlands, as well as a Visual Studio Team System MVP. Marcel is the lead architect for the Endeavour software factory, targeted at the creation of service-oriented enterprise administrative applications used by many large enterprise customers of Info Support. Marcel is a well-known speaker at events in the Netherlands, such as Developer Days, and at Tech-Ed Europe. He also works part-time as a trainer for the Info Support knowledge center. You can read his blog at http://blogs.infosupport.com/marcelv.

Jack Greenfield is an architect for enterprise frameworks and tools at Microsoft Corporation. He was previously chief architect, Practitioner Desktop Group, at Rational Software Corporation, and founder and CTO of InLine Software Corporation. At NeXT Computer, he developed the Enterprise Objects Framework, now a part of WebObjects from Apple Computer, Inc. A well-known speaker and writer, he is coauthor (with Keith Short, Steve Cook, and Stuart Kent) of the bestselling, award-winning book Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools. Jack has also contributed to UML, J2EE, and related OMG and JSP specifications. He holds a B.S. in Physics from George Mason University.
