UML or DSL: Which Bear Is Best?

Len Fenster
Microsoft Corp.

Brooke Hamilton
Microsoft Corp.


March 2010


Summary: This article describes the scenarios in which UML or DSLs should be used, and how each can be effectively integrated with the other.



The release of Microsoft Visual Studio 2010 Ultimate marks the first time that architects will have a set of UML and DSL modeling tools in the same development environment. While the concepts of UML and DSL modeling have been around for a long time, this is the first tool release that effectively combines them in one product and enables rich integration among multiple models.

Yet, it was not long after the new UML capabilities were announced that a debate ensued over which modeling tool is superior. This debate, however, is perhaps as meaningful as a humorous scene from the US television program The Office:

Jim Halpert: “Question. What kind of bear is best?”

Dwight Schrute: “That’s a ridiculous question.”

JH: “False. Black bear.”

DS: “That’s debatable. There are basically two schools of thought...”

JH: “Fact. Bears eat beets. Bears. Beets. Battlestar Galactica.”

DS: “Bears do not... What is going on?! What are you doing?!”

A debate about which bear is “best” is meaningless. Along the same lines, this article will show that there is no “best” between UML and DSLs. Just as the polar bear and the black bear are each best-suited to their particular environments, so, too, do UML and DSLs each have unique strengths for particular problem spaces.

This article will not try to state which tool is “best”; instead, it will describe the scenarios in which UML or DSLs should be used, and how each can be effectively integrated with the other.


Where Is UML at Microsoft?

Just about every software architect and developer has at least some familiarity with the Unified Modeling Language (UML). Created by Rumbaugh, Booch, and Jacobson as a means to hasten the adoption of object-oriented technologies, UML 1.1 was proposed to and accepted by the OMG in 1997. Since that time, UML has evolved into its current form, version 2.2. Yet, for the past 12 to 13 years, developers and architects who work within the Microsoft suite of tools were resigned to calling upon Microsoft Visio or third-party software to try to reap the rewards that the uniformity of UML promised. The lack of UML tooling and support in Microsoft’s main development environment, Visual Studio, has been a void that many architects and developers have long wished were filled.

Instead, Microsoft provided a rich authoring environment for graphical domain-specific languages when it released the Domain-Specific Language (DSL) Tools capability with Visual Studio 2005. The Visual Studio 2010 Ultimate release adds—among other things—the ability to have DSL diagrams interact with each other and with UML diagrams. It also adds UML 2.x–compliant (or “logical”) class, component, activity, sequence, and use-case diagrams.

Keen observers might be quick to point out that this is not a complete list of UML 2.x diagrams. UML 2.2 defines 14 types of diagrams, 7 of which are a type of structure diagram (such as the class and component diagrams) and 7 of which are a type of behavior diagram (such as the activity, sequence, and use-case diagrams). However, the included diagrams cover the most used features of UML, and the underlying modeling framework allows for the dynamic addition of more diagrams with a later release, service pack, or power tool.


Which Modeling Technique Should I Use?

Using Visual Studio, architects have been creating custom visual designers that are specific to particular domains—and generating code and other artifacts from them—since DSL Tools was introduced. Until now, however, if a custom designer was needed to help model a particular domain, there was not much choice; DSL Tools was the only way to go. Even architects who just wanted a state diagram with a code generator had to create a custom DSL, and some customers were reinventing UML-like designers by using DSL Tools.

Now, however, with the introduction of the UML diagrams—and the flexibility to not only extend the design surface for them, but also to generate artifacts from them—should we infer that Microsoft will no longer be encouraging development of custom DSLs? Should focus be moved from developing custom DSLs to extending the UML diagrams that ship with Visual Studio? After all, the great thing about the introduction of these new UML features is that it opens up new possibilities; it allows for the creation of models and artifacts that were, at best, nontrivial to create in the past. But with these new possibilities comes the need to make a choice. When should we extend the capabilities that are provided with the UML designers, and when should we look to create entirely new DSLs?

For architects, Table 1 describes the essential differences between the two approaches.

UML:

  • Cost of initial implementation is lower:
    • Five standard UML diagrams are included in the box.
    • Profiles must be authored.
  • UML diagrams interoperate in known ways (for example, class diagrams and sequence diagrams).
  • All valid UML notations are allowed, even if they do not apply to the domain that is being modeled.

DSL:

  • Cost of initial implementation is higher:
    • A DSL language (meta-model and notation) must be determined and evolved.
    • A designer (graphical, forms-based, or textual) must be implemented with the toolkit.
    • Interoperability between DSLs must be discovered and implemented.
  • Language is constrained to the domain that is being modeled.

Table 1. Comparison of UML and DSL for modeling applications, from point of view of architects


If architects want to specify the usage of their modeling tools by development teams, the comparison is different. This might happen if the architect has defined a standard architecture that is to be followed by development teams and the architect wants them to create models of each instance of the architecture. For example, an architect might define a pattern for creating Web services, and then give development teams a set of modeling tools for creating each individual Web service. Tables 1 and 2 highlight important distinctions between the two approaches. When we consider UML, we know that it:

  • Has been a standard since 1997. With more than a decade of broad use, UML is a more standard (but less specific) way to communicate ideas than a DSL.
  • Was not created to satisfy the needs of a particular development language or platform. UML can describe object-oriented concepts just as easily for a system that is written in Java and runs on Linux as for one that is written in C# and runs on Windows.
  • Has implementation costs that are lower than DSLs at first, because the UML tools are included in Visual Studio, while DSLs must first be developed.
  • Can be used to create approximate descriptions of real systems when the domain in question is not well understood. As such, it is often used for documentation.
UML:

  • Models have standardized notation, but rely on profiles, stereotypes, and comments to add domain-specific information.
  • A variety of off-the-shelf tools is available.
  • Design communication and documentation are often the goal.
  • Code-stub generation is commonplace.
  • Over time, cost of use is higher.

DSL:

  • Models have domain-specific notation.
  • DSLs are custom-built and tailored to their domain.
  • Forward-engineering of working software is usually the goal.
  • Platform-specific code generation is commonplace.
  • Over time, cost of use is lower.

Table 2. Comparison of UML and DSL for modeling applications, from point of view of developers


DSLs, on the other hand, have some advantages over UML. For example:

  • They do not contain unnecessary aspects of what they are modeling. If you look at a UML model, you might find many diagrams—and many aspects of each diagram—that have not been used for that particular model. DSLs tend to be much more focused on the details of the domain in question and use the terminology of that domain.
  • The long-term cost of using a DSL can be much lower than with UML, because DSLs are created to fit a specific domain, as opposed to the work that a user has to do to apply general-purpose UML to a specific purpose.



UML and DSL are both useful modeling techniques. However, it is important to understand which scenarios make sense for which technique.

Scenario 1: Using UML to Model a Problem Domain

The sweet spot for UML is modeling problem domains. In other words, it is great for defining objects, their relationships, and their interactions. These models can remain platform-independent, or platform-specific information can be applied to them via UML profiles. This scenario, however, is certainly not a new concept for most architects and developers, who are used to seeing (and ignoring) UML models that are used for documentation.

The fresh aspect of UML for modeling domains is that Visual Studio 2010 now puts UML models in the same solution as the code that implements the models. Consider the difference between a model that has been pasted into a document and a model that lives with the code and defines the structure—the objects and their relationships—of that code. It is true that previous UML tools (for example, UML designers in Visio) allowed for the generation of code stubs, but the key difference is that we can now connect the models directly to the code. This allows changes in the model to be immediately reflected in the code. Consequently, this changes a model from a documentation annoyance to a useful abstraction that can be used for productive discussions between architects and developers, and UML becomes a forward-engineering tool instead of only a sketching surface.

However, for forward-engineering to work with UML, we have to extend the model from the pure UML language specification to make it more specific to the desired implementation. Choices are available here. We can:

  • Make assumptions in the code generators that translate the nonspecific notation of UML into specific platform code.
  • Apply a platform-specific profile, so that we can mark up UML diagrams with information about how we want the code to be generated.
  • Create additional platform-specific models that instruct the code generators on how to apply the model to rendered code.

The simplest approach is to make assumptions in the code generator, and in the simplest cases, this will work fine. However, it falls apart as the models get more complex, because you might need to specify platform-specific attributes that the generator cannot infer on its own. Applying a profile is a simple, inexpensive way to add more platform granularity to your UML models.
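To make the difference between the first two options concrete, here is a minimal sketch. All of the type and property names are illustrative—this is not the Visual Studio UML API. A model element carries the properties that a platform-specific profile adds (such as Is Partial and Package Visibility), and the generator reads them instead of hard-coding its own assumptions. The sketch is written in Java, but the idea is language-neutral.

```java
// Hypothetical, simplified model element: a UML class plus the
// properties a C#-style profile might add. Names are invented
// for illustration.
class ModelClass {
    String name;
    boolean isPartial;          // from the profile's stereotype
    String packageVisibility;   // e.g., "public" or "internal"

    ModelClass(String name, boolean isPartial, String packageVisibility) {
        this.name = name;
        this.isPartial = isPartial;
        this.packageVisibility = packageVisibility;
    }
}

public class CodeGenerator {
    // Emit C# source text for one class, resolving the decisions
    // that plain UML leaves open from the profile data rather than
    // from assumptions baked into the generator.
    static String generate(ModelClass c) {
        String partial = c.isPartial ? "partial " : "";
        return c.packageVisibility + " " + partial + "class " + c.name + "\n{\n}";
    }

    public static void main(String[] args) {
        ModelClass policy = new ModelClass("Policy", true, "public");
        System.out.println(generate(policy));
    }
}
```

Without the profile, the generator would have to guess visibility and partial-ness for every class; with it, those choices travel with the model.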

Figure 1 illustrates an example of this—a UML class diagram with a C# profile. Of special note is how the C# stereotype extends the UML with the Is Partial and Package Visibility properties, which helps us to forward-engineer.

Figure 1. UML class diagram with C# profile


The third approach—creating additional platform-specific models—makes sense when you need to keep the UML strictly platform-independent. This is not a concern for most Windows development, but is often needed for embedded systems or software that is expected to have a long lifespan and will have to run on many different platforms.

Scenario 2: Using a DSL to Model Variability in a Well-Known Problem Domain

We use frameworks every day; and, often, the code that we write against those frameworks is repetitive, with only minor variations. This is the sweet spot for DSLs: abstracting the variability in boilerplate code, and exposing that variability to the developer through simple configuration in designers. A good example of this type of DSL is the Microsoft Entity Framework. You can either write code directly against the framework or use the DSL designer that is built into Visual Studio. The designers are linked to code generators that inject the developer’s configuration into boilerplate code to configure the APIs.
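As a sketch of what “abstracting the variability” means: suppose the only thing that varies per entity is its name and key type. A DSL captures just that variability, and a generator stamps out the repetitive framework code. Everything below is illustrative—it is not the Entity Framework designer or its generated output.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// The "DSL" here is reduced to its essence: a map of entity name
// to key type. The generator expands each entry into the same
// boilerplate shape, varying only those two values.
public class RepositoryGenerator {
    static String generate(Map<String, String> entities) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : entities.entrySet()) {
            sb.append("class ").append(e.getKey()).append("Repository {\n")
              .append("    ").append(e.getKey()).append(" findByKey(")
              .append(e.getValue()).append(" key) { /* framework call */ return null; }\n")
              .append("}\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> model = new LinkedHashMap<>();
        model.put("Customer", "int");   // configured in the designer
        model.put("Order", "long");
        System.out.println(generate(model));
    }
}
```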

Scenario 3: Using a DSL to Configure a Domain that Is Modeled in UML

This scenario is more complex than the previous two, but is a more powerful and productive use of UML and DSLs together. Some problem domains that you model in UML could be executed in a variety of ways by using additional code at run time. For example, you could use UML to describe a domain or framework for pricing insurance policies. The domain might require configuration data at run time, or it could be an API that is used by multiple insurance programs that need to price policies. In addition, you decide to provide tooling in the form of a DSL that makes configuring or programming against your insurance-pricing domain easier. (A more detailed example of this scenario appears at the end of this article.)

Another way to look at this scenario is that it combines Scenario 1 and Scenario 2, because you can create a framework by modeling it in UML, and then create a DSL to improve the experience and productivity of working with that framework. (See Figure 2.)

Figure 2. UML and DSL within the same system


An important note is that the people who author the framework and DSL are usually not the same as the people who are using the DSL. In the insurance-pricing example, one group would likely be responsible for the pricing API and DSL, and other groups would use the DSL for creating pricing applications. The group that is using the DSL would not need to interact with the UML models, because the domain concepts that they need will be represented in the DSL.

Scenario 4: Using UML as a DSL

In fact, UML can sometimes be used as a DSL. For example, the Web Service Software Factory Modeling Edition (also known as the Service Factory) that was created by the Microsoft patterns & practices team provides a set of commonly used DSLs with which many people are familiar. For those who are not familiar with it, the Service Factory provides a modeling environment that makes it easier for architects to model Web services in a consistent way, independent of a particular implementation (for example, ASMX or WCF). The Service Factory then lets you configure specific implementation details (for example, so that it can generate code that is specific to whether the implementation is ASMX or WCF).

Figure 3 shows the service-contract model DSL within the Service Factory. The shapes in the DSLs of the Service Factory describe the logical components of a Web service and generate multiple instances of classes and interfaces that complete the desired technology implementation (WCF or ASMX).

Figure 3. Service Factory service-contract model example


Interestingly, this model looks very much like a UML class diagram—which raises the question, “Could the new UML capabilities within Visual Studio 2010 be extended to provide the same capabilities as this DSL?” The answer is, “Yes.” Figure 4 illustrates how we have extended the UML class diagram with a custom profile, to provide some of the same functionality as that which is provided with the Service Factory.

Figure 4. Extended UML class diagram as service-contract model DSL


So, we know that we can create DSL-like capabilities by extending the UML models in Visual Studio 2010. However, should we? This is definitely the tougher question to answer; and, unfortunately, like all difficult questions, the answer is, “It depends.”

Recall from Table 1 that DSLs typically have a higher cost associated with the creation of initial implementations. One advantage of using UML like a DSL is that UML modeling can serve to prototype a DSL, so extending the UML models might provide a lower-cost alternative for you. Later, when it is clear what elements must go into a DSL, a DSL can be created by harvesting the knowledge that is gained while using the UML model. In other words, a general-purpose UML model that is applied to a specific purpose can be used as the basis for creating a DSL at a later time.

If you do not need to expend a lot of effort to mold the UML to fit your modeling needs, you might even be able to get away with not having to create a DSL at all. Tread carefully here, however. We all know that what starts out as a “little utility program” to serve only a small purpose often grows into something much larger. The point is that when the cost of extending the UML models to fit your needs meets or exceeds the cost of creating the same capabilities by using a DSL, you should switch to the DSL.

Although the initial costs of using UML are lower than creating a DSL, there are some other points that you should know:

  • Users of the models have to understand how the UML elements map to the domain concepts—making the models less clear in how they describe their domain.
  • Users of the models also might not be aware of which parts of the UML apply to the domain and which are unnecessary.
  • Code generators are more complex to write, because they have to traverse the standard UML model to get to the profile elements. For example, in our conversion of the service-contract DSL, we must search for classes that match specific stereotypes to obtain the specific instance that we want, instead of just using the domain model that a DSL provides.
  • With UML, code generators bear the primary responsibility for validating the model by throwing exceptions back to the user when they create models that are invalid for the domain. Model validation should be the responsibility of the model itself. In a simple DSL, the relationships that are defined take care of a lot of this for you. In a more complex DSL, you can create validation rules that run in the model to help the user transition from invalid to valid states.
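The last point can be sketched in a few lines: in a DSL, validation logic lives in the model itself and reports domain errors to the user before any generator runs, rather than surfacing as exceptions mid-transform. The types and rules below are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// A model element that knows its own domain invariants. The designer
// can run validate() continuously and guide the user from invalid to
// valid states; the code generator never sees a broken model.
public class ServiceContractModel {
    String name;
    List<String> operations = new ArrayList<>();

    List<String> validate() {
        List<String> errors = new ArrayList<>();
        if (name == null || name.isEmpty())
            errors.add("A service contract must have a name.");
        if (operations.isEmpty())
            errors.add(name + ": a contract needs at least one operation.");
        return errors;
    }

    public static void main(String[] args) {
        ServiceContractModel m = new ServiceContractModel();
        m.name = "PricingService";
        System.out.println(m.validate()); // one error: no operations yet
    }
}
```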

Practical Example of DSL and UML Working Together

Recently, the authors of this article have been working with the ASPEN Program (Advanced Software Productivity Environments) at Raytheon Company, an American defense contractor, to implement a software factory for creating message-exchange services. A message-exchange service is a component that receives external messages, transforms the messages into internal message formats, and then publishes the messages to internal receivers. For the purposes of this article, let us simplify the example a little, so that you can focus more on understanding how UML and DSL can work together and less on the specific problem domain.

Raytheon creates message-exchange services with many of its systems; until now, however, each service was hand-coded to deal with different message formats, transport protocols, and platforms. Because the fundamental design of the services is common to all of them, there is an opportunity to create an abstraction, if we can remove each message-exchange service’s dependency on a particular platform and a particular set of messages.

The development process started with analysis of the execution domain, using platform-independent UML models. The behavior of the model was implemented in action language. (Action language is an implementation of UML standard action semantics for specifying behavior in models.) Platform-specific models were used to map UML and action-language concepts to the target language. A set of code generators was used to transform the models into both Java and C++ code. More target platforms will be added when the need arises.

An executable message-exchange service requires configuration information to specify which messages are being mapped and which transport protocols to use. The configuration is supplied in XML. To facilitate the creation and consistency of this XML configuration, the factory authors created DSLs to represent it. One DSL was created to import or create messages; another was written for message mapping, and to specify transport protocols and message publish/receive information. Developers do not interact with the UML models when they are using the factory.
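A configuration file for such a service might look roughly like the following. This fragment is purely illustrative; the element and attribute names are invented here to show the shape of such a file, not Raytheon’s actual schema.

```xml
<!-- Illustrative sketch only: a message-exchange service names its
     transport, maps an external message format to an internal one,
     and declares where the result is published. -->
<messageExchangeService name="TrackUpdates">
  <transport protocol="tcp" port="9000" />
  <mapping source="ExternalTrackMsg" target="InternalTrack">
    <field from="lat" to="latitude" />
    <field from="lon" to="longitude" />
  </mapping>
  <publish topic="tracks/updates" />
</messageExchangeService>
```

The DSLs exist precisely so that developers edit diagrams instead of hand-authoring files like this one.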

The following steps summarize the process that is used to create the factory:

  1. Platform-independent models were created by using UML and action semantics.
  2. Platform-specific models and code generators were used to generate executable code on chosen platforms.
  3. DSLs were created to allow a developer to describe the desired message-handler configuration.

From the perspective of the developer who is using the factory to create a message-exchange service:

  1. Create a new instance of the message-exchange service factory.
  2. Use a set of DSLs for configuring messages, message mapping, and transport protocols.
  3. Execute a build, which triggers the generation of configuration files and the packaging of an executable message-exchange service.

According to Peter DeRosa, program manager for Raytheon’s ASPEN effort, “Working with Microsoft and the Visual Studio team, ASPEN has set aside the ideological modeling debates in pursuit of concrete production solutions that incorporate the benefits of both approaches to deliver high-quality solutions and dramatically lower life-cycle costs. Building on our existing strength with UML-based software-production techniques, we are additionally applying DSLs to extend the same rigor and results to domains and viewpoints that are not easily represented with UML.”



Although the black bear is a generalist animal and can adapt to numerous habitats, it would not do as well as the polar bear in the Arctic. The polar bear is specialized to thrive in the Arctic. Each bear has its own unique set of strengths that are specially purposed for its needs. Such is the case with DSLs and UML. UML is a generalist; it can be used for various purposes, from describing system requirements to modeling a set of object-oriented domains and classes. DSLs have the advantage of being very specific to their purpose—much more specific than the general-purpose UML.

This article has tried to provide evidence as to why there is no “best” modeling choice between UML and DSL, as each toolset has its unique strengths. It has also illustrated how Visual Studio 2010 Ultimate can help combine these modeling techniques to create an even more powerful modeling environment. Platform-independent UML models, UML with platform-specific profiles, and DSLs can all exchange data in order to provide a complete model of a system.

While UML and DSL are both models that allow us to raise the level of abstraction, each lets us model different aspects of an application.

There is no “best.” And, if someone tries to debate you over the subject, you should just reply with:

“Fact. Bears eat beets. Bears. Beets. Battlestar Galactica.”



The authors would like to thank Mike Cramer, Peter DeRosa, Terri Potts, Jezz Santos, John Slaby, and Christof Sprenger for their help in shaping and refining this article.


About the Authors

Len Fenster ( ) is the lead solution architect for .NET Development for Microsoft Consulting Service’s U.S. East Region. During his 13 years at Microsoft, he has focused on helping enterprises create robust applications that are based on Microsoft technology. Most recently, Len has worked with the Microsoft patterns & practices team on Microsoft Enterprise Library, and the Visual Studio team on an integration solution between Microsoft Office Project Server and Visual Studio Team System Team Foundation Server. Even before his career with Microsoft, he led a global team of developers and architects that built distributed applications that are based on Microsoft technologies. Since the advent of .NET, Len has served as a solution architect for Microsoft Consulting Services and has leveraged his considerable experience to help enterprises incorporate .NET into their own technology strategies and solution-development life cycles.

Len is the author of several technical articles, as well as Effective Use of Enterprise Library: Building Blocks for Creating Enterprise Applications and Services (Addison-Wesley Professional, 2006). He speaks regularly to companies and at architecture forums about how to architect solutions that are based on .NET and incorporate this solution development into an overall SDLC.


Brooke Hamilton ( ) is a senior consultant for the Civilian Federal Services Group of Microsoft Consulting Services. He has over 15 years of experience designing and implementing systems for several industry sectors, including petroleum, financial services, nonprofit, healthcare, insurance, and government. Brooke specializes in raising abstraction levels through model-driven development and using models to connect business customers to their software. His current project involves implementing software factories and lean development practices for Raytheon.


From Problem to Solution: The Continuum Between Requirements and Design

Christopher Brandt

Requirements do not describe a problem that is to be solved; instead, they specify constraints on the design of a solution. A solution is one answer to the question, “How do we do this?”, where this refers to the problem that is to be solved. Normally, there is more than one solution to a problem, which means that requirements cannot be captured until the nature of the solution is known. However, the nature of the solution cannot be known without understanding the problem. Therefore, identification of the problem must be the first step when we are faced with a new challenge—be it a new development project or an undesigned aspect of a solution. Next, the nature of the solution can be determined; then, requirements can be specified, so that finally the solution can be designed.

Starting with requirements before understanding the problem will bias the form of the solution, which kills creativity and innovation by forcing the solution in a particular direction. When this happens, other possibilities cannot be investigated. This is a mistake that can be made, regardless of the process that is being used. Unfortunately, describing the root problem in an unbiased, abstract statement is not an easy task; it requires all members of the team to step back from their own biases about the solution’s form. Each person must challenge the constraints that are implied by the statement of the problem that is being crafted. This is done by simply asking questions about what is really needed.

The endgame is a set of statements—each of which has just enough constraints to describe a problem, but not enough to bias the solution unnecessarily. From here, the best form of the solution can be determined by the right people.

From this point on, the requirements and design can be advanced in sync—with the requirements feeding the design and the design bringing out more questions about the solution and its requirements. A development process can be viewed as a knowledge-transfer process. The product owner transfers knowledge of the problem to a design team. In response, the design team transfers knowledge of the solution back to the product owner. Each iteration is a cycle of knowledge transfer, where the entire design team advances its understanding of the solution and how it got there. The product owner and designers must have a good working relationship, because they are all designing the solution.

A full version of this article is online and available at

Christopher Brandt ( ) is the Systems Architect at Moneris Solutions. He has been working on loyalty-transaction processing for 11 years.

Architecture Modeling: Necessity, Connectivity, and Simplicity

Neelesh Wadke and Mayank Talwar

Simply defined, software architecture is a blueprint of the complete system, depicting the subsystems and/or components, along with their coordinated interactions. An architecture model should not be just a relic that is created during the design phase and then loses sync with the implemented system. The ever-changing present demands continuous synchronization of the requirements, design, and implementation. It is essential that this happen at every stage of the software-development life cycle (SDLC), starting from requirements gathering and interaction with various stakeholders.

With many intelligent and sophisticated development environments being released for better management of software development, the industry has realized the need to provide architects with more powerful tools. It is essential that an architecture model connect all of the interdependent SDLC phases effectively and act as a focal point in application life-cycle management (ALM). The Ultimate edition of Microsoft Visual Studio Team System 2010 can make software-architecture modeling simpler, more structured, and more reliable.

In Visual Studio Team System 2010, the requirements of a software system can be well documented by using the newly supported UML diagrams. Various diagram entities can be linked to work item(s) in Team Foundation Server (TFS) and further tracked to proper closure. This helps in achieving requirement traceability throughout the life cycle. The UML use-case diagram provides a feature to add links to relevant artifacts. The UML component diagram allows you to create components/subcomponents with appropriate dependencies and expose both Provided and Required interfaces. The UML class diagram can be used to further design these interfaces and classes. The UML sequence diagram can be generated by using the entities from the use-case, component, and class diagrams.

In addition to all of these diagrams, Visual Studio Team System 2010 also supports activity diagrams. Using the layer diagram, the components of an architecture can be categorized and grouped into application layers. By using reflection and analyzing the call stack, the layer diagram can intelligently identify the dependencies between these layers. Furthermore, one can also use the layer diagram to validate the architecture and ensure that the dependencies are not violated by any calls that go against the proposed design.

Model Explorer helps an architect view all of the modeling projects and entities that are present in a solution. The Generate Dependency Graph feature of the Visual Studio Team System 2010 Ultimate edition is a boon to the community, as it allows checking of the intensity of the dependencies between classes, namespaces, and assemblies.

Thus, innovation and technology together have played a vital role in bringing sophistication and simplicity to modeling.

Neelesh Wadke ( ) is a Principal with the Education and Research group of Infosys Technologies, Ltd. He has worked in the field of software education for almost 10 years and has over 14 years of overall experience.

Mayank Talwar ( ) is an Associate with the Education and Research group of Infosys Technologies, Ltd. He holds the MCTS certification in Microsoft Team Foundation Server Configuration and Development.