
Model-Driven Development (Part 2)

 

February 2007

Click ARCast: Model-Driven Development to listen to this ARCast.

Harry Pierson:   Hello, this is Harry Pierson, and welcome to ARCast. ARCast is a panel discussion series featuring panelists from inside and outside Microsoft as they discuss and debate architectural topics. Each month, we feature a new topic and new panelists, and we will post the ongoing discussion weekly. For more information, please visit the Architecture Resource Center at microsoft.com/architecture.

This week, we continue the discussion of model-driven development. The discussion ranges from organizational and individual role changes to the importance of expertise in the development of modeling languages.

Martin Danner:   Hi, this is Martin Danner with Arrowrock Corporation in Boise, Idaho. I was impressed with the comments of each panelist on last week's ARCast. They all had distinct observations on the past, present, and future of model-driven development. I really liked Jack Greenfield's pragmatic approach to model-driven development. In particular, I liked the way that software factories combine modeling with guidance in the form of patterns, code samples, templates, and wizards. This blended approach seems to be at the center of the success of software factories. As Jack pointed out, we do not always know enough about the domain to model it effectively. However, we do have bits and pieces of knowledge that we can reuse and rearrange into new designs and implementations. I think of these bits and pieces of knowledge as guidance components, as building blocks. Just like LEGO bricks, you can assemble these blocks in different ways to come up with millions of unique structures.

Another feature of software factories that I like is that they are factored to encompass multiple viewpoints. Clearly, software for mobile phones is completely different from, say, banking software. Each domain's concepts are different from one another. Although both contain ones and zeros at the executable level, at the conceptual level they are very different domains. It's exciting to think that tools have matured to the point that we can now model each domain using a custom toolbox, with all of the elements tailored to best express the concepts involved. Because, after all, the conceptual level is where application developers do most of their work. Steven Kelly pointed out that both the modeling tool and the code generator that transforms the model into code need to be domain-specific. He offered an interesting definition of a domain, which I found useful: It describes a domain as a narrow range of applicability, such as a single range of products from a single company.

Certainly, the people creating solutions for such a domain have developed a terminology for describing the concepts within that domain. Why not use that terminology as the basis for a modeling language for the domain? Then, they can go to work with specialized modeling tools that allow them to create solutions much faster using familiar concepts, and then convert those models into code that looks and feels like the code they themselves would have written. Now, that sounds like progress to me.

I agree with Mauro Regio's comment that models should be XML-based. XML makes it possible to perform various operations on models, such as validation and transformation, using the rich set of tools that are already available. What's more, it opens up the possibility of automatically generating models from other resources. This raises some intriguing possibilities in areas such as reverse engineering, converting and merging models, and round-tripping between models and the artifacts that they generate.
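The appeal of XML-based models is that ordinary XML tooling can validate and transform them. The following is a minimal sketch of that idea; the model format, element names, and generated output are all hypothetical illustrations, not anything prescribed by the panelists or by software factories.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML model describing domain entities.
MODEL_XML = """
<entities>
  <entity name="Customer">
    <property name="Id" type="int"/>
    <property name="Name" type="str"/>
  </entity>
</entities>
"""

def validate(root):
    # Minimal structural validation: every entity and property must be named.
    errors = []
    for entity in root.findall("entity"):
        if not entity.get("name"):
            errors.append("entity missing 'name' attribute")
        for prop in entity.findall("property"):
            if not prop.get("name") or not prop.get("type"):
                errors.append("property missing 'name' or 'type' attribute")
    return errors

def generate(root):
    # Transform the model into Python class skeletons (one form of
    # the model-to-code transformation discussed above).
    lines = []
    for entity in root.findall("entity"):
        lines.append(f"class {entity.get('name')}:")
        for prop in entity.findall("property"):
            lines.append(f"    {prop.get('name')}: {prop.get('type')}")
    return "\n".join(lines)

root = ET.fromstring(MODEL_XML)
assert validate(root) == []
print(generate(root))
```

In practice, a real software factory would use schema validation and templating engines rather than hand-rolled loops, but the shape is the same: validate the model, then transform it into artifacts.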

Mauro also pointed out that the modeling tools must be extensible, so they can be adapted as domain knowledge increases and as the domain itself evolves and changes. This sort of flexibility is unprecedented, and the lack of it may have been a major reason why CASE tools failed to meet expectations. I did take exception to one of Mauro's comments, though. He said that "faster, better, cheaper" is not the point. Sure, it might be a catchy bit of marketing speak, but I think the underlying premise is spot-on. In my opinion, the whole motivation for model-driven development is to simultaneously increase development productivity, improve the quality of the software produced, and drive down the enormous cost of software development. And I would like to add one other reason for model-driven development: coping with ever-increasing complexity.

As Brian Selic pointed out, software is one of the most complicated things we have ever attempted to build, in some cases rivaling biological systems in complexity, and it's getting more complex all the time. Without models, how are we going to cope with this ever-increasing complexity? How are we going to fit bigger and bigger ideas into our brains? I think Brian has exactly the right idea when he says that abstraction and automation are essential to dealing with complexity. Model-driven development embraces abstraction by allowing us to ignore implementation details and focus on the concepts. Model-driven development also embraces automation by allowing us to automatically transform conceptual models into executable implementations. And the new crop of modeling tools gives us an unprecedented degree of control over both the modeling environment and the transformation to implementation.

So, the question still remains: How will existing IT organizations adopt this new approach to software development? To me, it implies a new kind of organization. Domain-specific models and guidance automation are new tools that architects and developers can add to their existing bag of tricks. For them, these represent an evolutionary improvement in the tools that they're already used to working with. As such, I think adoption will be primarily a training issue, as it is with any new technology. The more compelling and fundamental change imposed by model-driven development is the need for organizations to create their own domain-specific languages, patterns, templates, and other forms of guidance. This implies a whole new job category, perhaps best described as factory builders. These factory builders are the people who produce the domain-specific modeling tools, patterns, templates, and so forth. That requires a special skill set, a new skill set, and I don't think that skill set exists in today's IT organizations. It will require highly experienced developers working in a completely new way to produce new kinds of deliverables. This amounts to a significant change in both the structure and culture of IT organizations.

As Brian pointed out, this culture change could well be the biggest impediment to the success of model-driven development. So, what is it going to take to produce this paradigm shift? Perhaps that's not a fair question to pose to the panel, as their expertise is primarily technical. However, I think it's a question that must be answered if model-driven development is to enjoy mainstream adoption and be considered ready for prime time.

Jack Greenfield:   Hi, this is Jack Greenfield with week two. I would like to respond to Steven Kelly's observation in week one that CASE tools failed to fulfill their promises because they forced three things on their users: a way of working, a way of modeling, and a way of coding. Let's look at each of these in turn. Most methodologies currently on the market have the same problem that CASE did. Namely, they prescribe a one-size-fits-all approach, regardless of the type of problem being solved. So, you get the same advice regardless of, say, whether you are building eBay or software for an antilock braking system running on an embedded controller in an automobile. Of course, these two are very different domains. For eBay, we have large numbers of concurrent users to manage, and we have to expose financial transactions to the threats of the Internet; whereas, with the antilock braking system, we have limited resources, we have to work in near real time, and we have safety-critical issues to deal with.

Software factories, by contrast, don't prescribe a specific methodology. Instead, you can think of software factories as a meta-methodology. The authors of a software factory define the process, architecture, guidelines, and tools followed by its users. These components can be tuned and tailored for the specific problem domain. Also, with CASE, since models were the dominant artifact and code took a back seat, existing tools such as CM systems, debuggers, and defect-tracking systems were much less useful than they had been. With software factories, by contrast, we manage models as ordinary source artifacts. Therefore, most of the existing tools can be used with models and the metadata they carry, just as easily as they can be used with code. Now, it's true that we can enhance these tools using the metadata captured by models, as described in the book. And we have already started doing that with Visual Studio Team System, for example. As time goes on, we will see more and more use of metadata. However, in the meantime, we can use the tools as they are currently defined. This is a key property of software factories.

Now, let's talk about modeling. As you know, with software factories, you can define your own domain-specific languages, instead of accepting a set of languages prescribed by a committee that presumes to know exactly what your problems are and how you are going to solve them using models. More importantly, software factories let you identify the domains of interest for your problem domain, your architecture, and your process. In other words, defining domain-specific languages for a particular family of systems in a software factory is not just bubbles versus squares. It's about choosing what parts of the system to model, at what level of abstraction, what aspects of each part need to be captured, and how they should be captured.
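To make the idea of a focused, family-specific language concrete, here is a minimal sketch of a tiny textual DSL for a hypothetical product family of state machines. The syntax, the vending-machine domain, and all names are invented for illustration; real software-factory DSLs would be defined with dedicated language tooling.

```python
# Hypothetical mini-DSL for a family of state machines:
# each line declares a transition in the form "state --event--> state".
DSL_SOURCE = """
idle --coin--> ready
ready --select--> dispensing
dispensing --done--> idle
"""

def parse(source):
    # Build a transition table from the DSL text. The table is the "model":
    # it captures only the aspects this domain cares about.
    table = {}
    for line in source.strip().splitlines():
        src, rest = line.split("--", 1)
        event, dst = rest.split("-->")
        table[(src.strip(), event.strip())] = dst.strip()
    return table

def step(table, state, event):
    # Interpret the model; unknown events leave the state unchanged.
    return table.get((state, event), state)

table = parse(DSL_SOURCE)
state = "idle"
for event in ["coin", "select", "done"]:
    state = step(table, state, event)
print(state)  # back to "idle" after one full cycle
```

The point of the sketch is the one Jack makes above: the language's author chooses what to capture (states and events, nothing else) and at what level of abstraction, rather than inheriting a general-purpose notation.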

Now, at this point, some people might ask whether it is a good idea to allow software-factory developers to define their own modeling languages. After all, the UML was designed by experts, and it still has a lot of issues, as we know; issues for which it is regularly criticized. However, it's interesting to note that UML was also designed by a committee and, of course, you know the old joke about the elephant, the giraffe, and the rhinoceros, which were all horses designed by committee. Rather than have a committee design a general-purpose modeling language to solve all possible problems, we think it makes sense to have software-factory developers define individual, focused languages to solve very specific problems. Unlike the committee, which can't possibly know in advance what problems need to be solved, software-factory authors can be experts in their problem domains and in the architectures used to support software that addresses those problems, and they can therefore define languages that are precisely tuned for the tasks at hand.

Finally, let's talk about the way of coding. Martin asked in week one whether models will be first-class artifacts throughout the software life cycle. I think they can be and should be, especially if we don't round-trip, and there are many effective ways of separating generated and handwritten code, as described in an article published on MSDN.

For me, the bigger issue, at least for the time being, is whether code will remain a first-class artifact throughout the software life cycle, or whether the model-driven development technology that we bring to market now will make the same mistake as CASE: trying to replace code entirely with models. That may be possible on some distant day in the future; today, it's just not practical. For this reason, we want to treat code as a first-class citizen. Models are first-class citizens, too, but they are not dominant; all of the types of artifacts required to build software in the real world today have a place in the software factory.

When the day finally does come that we can generate all of the code for a system from models, as we currently do from programming languages with compilers, we will see that no one tool or language will be able to meet the needs of all systems. Instead, we will find that multiple modeling languages and multiple tools are required. Why is that? Well, because unlike with a programming language, where we are abstracting a single physical or virtual machine, with models we are abstracting a variety of different forms of platform, including servers, operating-system components, and middleware elements, and we are mapping those abstractions onto a wide variety of problem-domain concepts. This concludes my comments for week two. Talk to you next week.

Steven Kelly:   Hi, this is Steven Kelly from MetaCase. Jack Greenfield is right to draw a distinction between MDA and domain-specific modeling. Industrial experience of DSM shows it is consistently much faster than current practices, including current UML-based implementations of MDA. DSM has been happening for well over a decade. There are a number of well-documented, objectively analyzed real-world cases. I guess Mauro Regio wasn't aware of this. If you want to see some, take a look at www.dsmforum.org. The industrial cases there consistently show productivity increasing by a factor of 5 to 10, so 500 to 1,000 percent. In contrast, even a flagship MDA study, sponsored by an MDA tool company in a laboratory scenario, only showed an increase of 35 percent. Even MDA hardliners like Grady Booch, along with Brian Selic and other IBM luminaries, recognize this, as they said in an article in the MDA Journal last year: The full value of MDA is only achieved when the modeling concepts map directly to the domain concepts, rather than to computer-technology concepts; in other words, when the modeling language used is domain-specific, rather than being UML. They are absolutely correct in that, and also correct in admitting that such a vision is still a long way off from the current MDA tools.

I have to disagree with Jack when he says the model-driven development world can be categorized into MDA and software factories, and that these are the two leading approaches. He is right to draw a clear line between MDA and the rest, but I don't see that many people in the rest would accept being labeled as software factories. For one thing, it's not necessarily the most appropriate name, being all too easily misunderstood, because it suggests a false analogy between programmers and a factory's assembly-line workers, and it also, disconcertingly, shares its abbreviation with "science fiction."

Humor aside, the software factories approach is intriguing, and it tries to include pretty much everything that people commonly agree to be good practice in code-based software development. So, patterns, components, frameworks, aspects, services: These are all great things, and we should be using them. But they are not the real keys of model-driven development. For model-driven development to work, I think everyone agrees that we need domain-specific modeling languages and domain-specific code generators. These two are included in Microsoft's vision and, indeed, from my point of view, they form the key differentiating factor between how most developers are already programming and the new vision in the software factories book. For my money, the best term for the new age of software development is something like domain-specific modeling with full code generation, but the abbreviation DSM will do. DSM also shares its abbreviation with a rather more appropriate counterpart: DSM cars which, according to the DSM enthusiasts' Web site, are relatively easy to modify for big performance gains with basic bolt-on parts. Now, certainly, if you have good tools, then getting good performance out of DSM should be relatively easy, and I personally side with those who think the tools to create modeling languages should be as easy to use as bolting things together.

Creating modeling languages is hard, as Martin Danner said, and something that few people are expert at. Having tools that make the process easy is the key success factor, allowing you to concentrate on what you want in a modeling language, rather than having to code the modeling tools for that language.

The increase in productivity with a DSM language correlates strongly with how well that modeling language is built to fit its domain. Perhaps even more importantly, allowing the modeling-language designer to work on a high-level view of the modeling language, with the tool doing any necessary hard work, allows the modeling language to evolve freely later. Nobody gets a modeling language right the first time, and nobody works in a domain that is completely static. If getting modeling-tool support for a new modeling language requires a significant amount of coding, you soon reach the point where the amount of tool-code change that a new modeling-language feature would require makes that change impractical. That's the point at which the modeling language begins to go the way of the dinosaurs.

For DSM to be successful, it must be carried out with tools that support the modeling language's evolution. To date, I have seen over 20 tools for building DSM support, and not a single one has been able to support evolution in its first generation. It seems that a vendor's first-generation tool goes through a few versions and is then thrown away, and the vendor has to build a second-generation tool from scratch to enable modeling-language evolution. That has been our experience, too, with MetaEdit+. Its predecessor was built and scrapped after three versions in the late '80s and early '90s. It's rather like the problem of getting Microsoft Word to allow evolution in its document format. Only with Word 97, the fourth version, did Microsoft manage to make a tool whose document format allows you to just keep on updating, without needing a special kind of manual conversion or special file converters. So, as Brian Selic has said, the key success factors when using a DSM language are the level of abstraction and automation, and these are also the key factors when building a DSM language and its tool support. So, if we like the idea of DSM, whether you are a modeler, a modeling-language designer, or a vendor, you'd better start eating the dog food.

Brian Selic:   Hi, this is Brian Selic of IBM. I enjoyed listening to the opening statements of my co-panelists, and I am really glad to hear that we all share optimism about model-driven development and its potential.

One thing that Jack Greenfield mentioned that particularly struck me is the need to approach this problem from a pragmatic angle, so what I would like to do next is focus on the pragmatics of model-driven development. I think it's useful to define model-driven development as a style of development in which the model becomes the dominant artifact in the whole development process. I somewhat disagree with Jack in that I think there are more than just two forms and degrees of model-driven development. It varies across many different ways of doing it and many different forms, from simple round-trip engineering to the full-fledged use of modeling languages as programming languages.

The key thing, of course, is that as we increase the level of abstraction and automation, we get higher and higher benefits in terms of productivity and quality. However, there are certain pragmatic issues that get in the way of simply leaping from traditional program-based development to model-driven development in its full force. For example, the tools are not as mature as we want them to be; and not just the tools, but the modeling languages. Languages such as UML have been exposed to a great deal of criticism, even though they were designed by experts and various software gurus, so designing domain-specific languages is not an easy thing. And we have to be quite careful, when we start off with something that enables people to design their own languages easily, not to raise too much hope. Look at programming languages: There is no great profusion of domain-specific programming languages, and it's quite interesting to understand why. Again, the pragmatics come into it.

Our languages need tools that go beyond just a code generator. They need utilities, repositories, code generators, trace-management tools, debugging tools, analysis tools, testing tools, documentation tools, and so on. It took us a long time to develop these tools to sufficient quality for programming languages, and it's going to be even harder to develop tools like that for modeling languages. So, we have to be careful here not to repeat the CASE-tool mistake of creating expectations that we cannot meet because the technology is immature. We must not ignore the pragmatics.

Harry Pierson:   Thanks for listening. I am the host of ARCast, Harry Pierson, and don't forget: The conversation continues next week, so tune in.
