Model-Driven Development (Part 1)
Harry Pierson: Hello. This is Harry Pierson, and welcome to ARCast. ARCast is a panel-style discussion podcast featuring panel members from inside and outside Microsoft, as they discuss and debate architectural topics. Each month, we will feature a new topic and new panelists, and we will post their ongoing discussions weekly. For more information, please visit the Architecture Resource Center at microsoft.com/architecture.
This month, we are discussing modeling. Here is the topic we sent to our panelists. There is a great deal of buzz these days about model-driven development techniques, such as software factories, domain-specific languages, language workbenches, UML, model-driven architecture, etc. Similar attempts in the past have failed, namely the CASE tools of the 80s. What is different now? Is model-driven development likely to gain widespread adoption? Will it make building software faster, better, and cheaper?
Our panelists this month are:
Martin Danner, founder of Arrowrock Corporation and architect MVP for Microsoft;
Jack Greenfield, architect with the Enterprise Frameworks and Tools Group at Microsoft. Jack was formerly a chief architect at Rational Software, and he is the coauthor of the book Software Factories;
Steven Kelly, chief technical officer of MetaCase and cofounder of the DSM Forum;
Mauro Regio, architect with the Microsoft Architecture Strategy Team; and
Bran Selic, IBM Distinguished Engineer at IBM Rational, and adjunct professor of computer science at Carleton University. Bran is also the co-chair of the OMG team responsible for the UML 2.0 standard. Now, here are our five panelists' opening remarks.
Martin Danner: Hi, this is Martin Danner with Arrowrock Corporation of Boise, Idaho. I am an in-the-trenches developer and analyst with over 20 years of experience developing applications of all sizes. I am really excited about the recent advances in model-driven software development. This technology promises to produce order-of-magnitude improvements in both productivity and quality. At the same time, I am more than a little concerned that this might be déjà vu all over again. I was there when the CASE tools hit the scene with great fanfare in the eighties. I was a system analyst at the time, working for a large aerospace company. They put a lot of money and effort into the development of CASE technology, but for some reason it just never took off. Perhaps the models were too difficult to create and maintain. Perhaps the resulting code required extensive modifications to work properly. Or maybe it was just that the culture was not ready for the paradigm shift.
In any event, I would like to know from my fellow panelists what they have done to examine the mistakes of the past, and what they are going to do to avoid those mistakes in the future. What is your strategy to gain mainstream adoption of your technology in the real world of software development? As I monitor the online chatter among folks creating this new model-driven development technology, I am concerned that the discussions center mostly on academic concepts, with little or no concern as to whether the typical IS shop is ready or able to adopt it. I often wonder if this new technology will require special skills that don't currently exist in the IS workforce. Will developers treat models as a new kind of development language, or will model-driven development be limited to only a few who are willing and able to master it? Will these models be first-class artifacts throughout the entire life cycle of an application, or will the fancy code generators all be abandoned as soon as the developers start tweaking the code directly? I look forward to hearing what the other panelists have to say. I will measure their comments against my yardstick of practical experience, and then report back to you my humble opinion of their strengths and weaknesses, as well as their prospects for success in the real world.
Jack Greenfield: Hi, my name is Jack Greenfield. I've spent most of my career building either enterprise applications or model-driven frameworks and tools for enterprise-application developers. I am currently an architect with Enterprise Frameworks and Tools at Microsoft. Before coming here, I was the chief architect for Rational XDE. During the dot-com era, I was founder and CTO of Inline Software, a company that developed model-driven frameworks and tools for enterprise developers. Before that, I was the chief architect for an enterprise-application frameworks and tools project at Fannie Mae, which was Fortune 1 at the time. I was also one of the leads for the Enterprise Objects Framework at NeXT.
In my mind, the primary factor that will determine the success or failure of model-driven development in the industry today is pragmatism. There is a stark contrast in terms of pragmatism between the two leading approaches, which are software factories from Microsoft and MDA from IBM and the OMG. With MDA, you have two domains—or viewpoints, as we call them: the platform-independent model and the platform-specific model. These two viewpoints are both based on UML, which is a general-purpose modeling language, and they are related by transformation, since PSMs are generated from PIMs.
With software factories, by contrast, you have an arbitrary number of viewpoints, such as user interaction, business workflow, or logical database design, to name a few. In fact, you can define as many viewpoints as necessary, to describe the business requirements in the software under development. Each viewpoint is potentially based on a DSL, but is tailor-made to address a set of unique concerns for that viewpoint. Viewpoints can be related by nesting and by a variety of operations, such as trace, validation, analysis, refactoring, weaving, or optimization, in addition to transformation.
Why the other operations? Because in the real world, we don't always know enough about a pair of domains to fully automate the relationship between them with transformation. Sometimes, it makes more sense to edit two models independently, for example, and then use a validation tool to make sure they are mutually consistent. A viewpoint is not always based on a DSL, because software factories can use code, SQL, HTML, and a variety of other formats and tools to describe a domain, in addition to or instead of models. Why use these other formats and tools? Because in the real world, we don't always know enough about a domain to model it effectively. In a software factory, we can use other forms of guidance, like patterns, code samples, templates, wizards, and frameworks, to name a few.
As we learn more about key abstractions in the domain by working with these less-automated forms of guidance, we can gradually move to models. So, what is different this time from the CASE tools in the 1980s? I think it is precisely the pragmatic bottom-up approach we are taking with software factories. Unlike MDA, which optimistically assumes—like CASE did—that most or all of the software can be generated from models, software factories blend modeling with other software-development practices to meet the needs of developers in the real world. There isn't time to talk about this topic in-depth here, so we will address it in the panel session. Also, we have written about it extensively in our book called Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools. Thank you.
Steven Kelly: Hi, I'm Steven Kelly, and I work for MetaCase as CTO and lead developer of the MetaEdit+ domain-specific modeling tools. The last time we heard a message promising massive productivity increases was with CASE tools and fourth-generation languages like PowerBuilder. They failed to bring about the expected revolution, because they tried to impose three things on the users: a way of working, a way of modeling, and a way of coding. Since the tool vendor's way rarely fit with the users' existing practices, there was a major disconnect. Of all the problems, perhaps the difference between the generated code and the kind of code the users had handwritten was the worst. Because the vendors had to make one tool work for as many people as possible, the code generated couldn't be tuned to the specific needs of all its users. It also had no chance to take advantage of their existing investments in code and frameworks.
This meant that the resulting code was bulky, ugly, inefficient, and unfamiliar. There were, however, a few success stories of CASE tool use. The key feature in the successes was that there was no need to go in and edit the generated code by hand. If you have to edit the resulting code by hand, you end up having to maintain the same system in two different formats. And all our experience teaches us that this is a nightmare that no round-trip promises can save us from.
To be able to generate full production code from models, you need to have both the modeling language and the code generator be domain-specific. And, when I say "domain" here, I'm talking about a very narrow range of applicability—say, a single range of products of a single company. That way, the modeling language contains concepts that are instantly familiar to developers, since they are the problem-domain concepts they already work with. Similarly, the code generators can generate just the kind of code the best developers there write by hand. Time and again, it's been proven in practice that such an environment is the best way to build software, leading to increases in productivity of 500 to 1,000 percent in places like NASA, the U.S. Air Force, Lucent, and Nokia.
The big question, then, is: What's the cost of building your own modeling language, modeling tools to support it, and your own code generator? For a long time, that cost was measurable in tens of man-years—way too high to make domain-specific modeling practicable for all but the largest projects. Now, however, there are tools like our own MetaEdit+ and Microsoft's DSL tools that make building such support much faster. We have dozens of successful projects, ranging in size from just three developers up to several hundred. However, even dozens of industrial success stories don't make something a mainstream success. It will be interesting to hear the other participants' opinions on when and how the market will turn, and when we'll see the next big leap in software-development productivity.
Mauro Regio: Hello, my name is Mauro Regio, and I'm a Solution Architect with the Architecture Strategy Team at Microsoft. My current area of focus is on large-scale enterprise-integration projects.
In discussing software factories and modeling, many of our customers ask if they should seriously invest in them. In answering these questions, I like to offer my reflections on a couple of key factors: the role of XML and metadata, and what makes software factories so powerful. XML technology is pervading the modeling world. XML makes model metadata open and self-describing, flexible, and extensible, and that is what makes models genuinely useful and effective. Openness and extensibility are two key values from a technical standpoint and, even more, from a business standpoint. An enterprise decision to use a certain model, in any business or technical domain, is a strategic decision.
From a technical perspective, it's also a risk-mitigation factor, because I can extend the model. I won't be stuck in the future if, for any reason, the model I initially chose turns out not to be powerful enough to express the semantics of the concepts I'm modeling. This means that key business and technical decision makers will feel much more comfortable than in the past in using model-driven techniques to develop software systems. Then, we need to consider that model-driven development and software factories are not just about models. Yes, they are about sophisticated, open, and extensible models. But they also mean the ability to use various models to capture different aspects of the problem we are dealing with, and of the solutions to that problem.
They mean the ability to transform these models from the problem space into the solution space at a given abstraction level, possibly through a series of transformations; to use these models together with descriptive development guidelines; and to inject these models, transformations, and guidelines into a developer environment, tailoring it to the development of a specific solution, rather than having to start with yet another "Hello World" application.
So, will model-driven development make building software faster, better, and cheaper? I don't like making marketing-like statements, and I don't have any conclusive quantitative data to prove how much faster or cheaper. And, essentially, I don't think that is the point, not at all. However, I believe that model-driven development, based on open and semantically self-described models and integrating methodology and guidelines into the tools used to architect, design, and develop software systems, will radically change the effectiveness of the development process. Thank you.
Bran Selic: Hello, my name is Bran Selic, and I work for IBM. I'm an IBM Distinguished Engineer, and my areas of responsibility involve modeling languages, the definition of standards related to modeling languages, and tools that support model-driven development. I've got a lot of experience—close to 40 years—in the software industry. And I've been doing model-driven development for the past 18 years, I guess, although it wasn't called that when I started.
Modeling is something that has been used throughout history as a technique for dealing with complexity in engineering. And, therefore, it is quite appropriate for use with software, simply because software is one of the most complex things that we've ever attempted to build. Some of these systems reach levels of complexity that are perhaps comparable to biological systems. And we definitely need help. To me, the essence of model-driven development is about two things. One is abstraction—that is, raising the level of abstraction of our software, both in terms of how we think about the problem, and then how we specify our solutions. And, second, a very important thing that often gets forgotten is the introduction of more and more automation into software development—specifically, using computer-based tools to help us do development.
What has changed over the last few years—in fact, I'd say over the last decade—is that we have reached a new level of maturity in the technologies involved. The modeling languages have gotten a lot more sophisticated; although, to be honest, I think we still need a much more comprehensive theory of modeling-language design. But we're certainly farther along than we've ever been before. And the other things that have matured to a significant extent are modeling tools. I don't believe that CASE actually failed.
I think CASE tools were simply a step, an early step, in the right direction towards more automation and higher levels of development. And I've certainly seen, in my experience—in the last 7 to 10 years—some significant projects that have used model-driven development, and have greatly increased productivity and quality of the software produced. So, I'm quite an optimist when it comes to the future of model-driven development. There is something standing in the way. Perhaps the biggest impediment is the culture change that accompanies something like this.
Harry: Thanks for listening. I'm the host of ARCast, Harry Pierson. Don't forget: The conversation continues next week, so tune in.