The Working Programmer - Multiparadigmatic .NET, Part 1
By Ted Neward | September 2010
Over the years, many of us in the .NET community have heard of Microsoft’s “personas” for the Visual Studio environment: Einstein (the genius), Elvis (the rock star), and Mort (the “average” developer). As useful as these personas might be for Microsoft in trying to figure out precisely for whom they’re building Visual Studio and the Microsoft .NET platform, I’ve found them to be less helpful. In fact, I’ve come to realize that for the vast majority of the .NET ecosystem, developers mostly fall into one of two basic (and highly stereotypical) camps:
The C++ Developer. This is the developer who learned all the “good” object-oriented practices and seeks to build Rich Domain Models at every opportunity, even when writing a batch file. If this developer is old enough to have actually written C++ professionally, chances are she focused on building frameworks and reusable abstractions, to the point that she probably never shipped anything. Such developers can be identified by their pretentious attitude, and are often found quoting “Patterns” at people in a vain effort to “educate those poor huddled masses who just don’t understand what the Quality Without A Name is.”
The VB Developer. This is the developer who heard all of the hoopla about objects, templates, procedures, and everything else that’s been tossed around over the years (decades?), and has firmly and resolutely adopted an “I’ll do anything so long as the code ships” attitude. In fact, this developer is so focused on shipping code that when presented with a request to add a new button to an existing form, he’ll rewrite the entire thing from scratch. These developers are noted for their trademark “don’t tell me how it works, just tell me what to do” attitude, and are often found ripping code off of Google and pasting it into their programs (sometimes randomly) until it works.
Before the deluge of hate mail begins, let’s point out the obvious: these are gross stereotypes, and I’m certainly not pointing fingers at anyone or implying that either group is better than the other. (I come from the first group, but I’ll be the first to take up arms against anyone who wants to suggest that that crowd is somehow superior.)
I note the existence of these two groups largely because this next set of columns is going to be aimed more specifically at the latter—the VB Developer—who hasn’t spent a lot of time thinking about software design. But, perhaps surprisingly, what we cover should interest the first group as well, because with the release of Visual Studio 2010 and the .NET Framework 4, things have gotten a lot more complicated, at least in the language space. If working programmers are going to have any chance at designing software (or extending software already designed) in the coming decade without turning it into a giant puddle of silicon goo, it will be because they get a good grounding in multi-paradigm design, or what I have come to call multiparadigmatic programming. (Yeah, it’s a pretentious title. My C++ roots are showing—sue me.)
The term multi-paradigm design (and the concept, if it can be said to have a single author) originated in the book “Multi-Paradigm Design for C++” by James O. Coplien (Addison-Wesley Professional, 1998). Based on the last part of the title, it’s relatively easy to guess which of the two camps the book initially targeted. However, the language turned out to be almost beside the point; Coplien’s point still resonates more than a decade later:
One hidden danger … is that the term “object-oriented” has become a synonym for “good.” … In today’s market, you can find the “object” label attached to every paradigm imaginable. That leads to hybrid design environments that build on an attachment to the past, usually supported by a desire to build on investments in old methods and technologies. And most of these environments are just called “object-oriented.” One liability of such combined methods is that they obscure some of the central principles of object-oriented design, substituting principles from other methods in their place. These confused combinations of design techniques can lead to architectural disaster. … Maintenance becomes difficult, the overall structure weakens, and it takes tremendous energy to keep the system viable.
But pure objects aren’t the answer, either. Perfectly good paradigms have been marginalized by object-oriented hype. The tone in contemporary development shops is that no one would be caught dead using procedural decomposition, even for a batch sorting routine—objects must somehow be crafted into the solution. This leads to designs in which square pegs have been forced into round holes.
The history of computer science is littered with idealistic solutions to problems, starting almost from its inception: repeated attempts to create a single, overarching view or approach to solving problems gave us first assemblers, then compilers, and along the way spawned an entire cottage industry in “Your language sucks, and here’s why.” The more pragmatic practitioners shrugged and said, “When all you have is a hammer, everything looks like a nail.” As languages grew more complex, somehow we lost sight of the fact that a tool could, in fact, be used for multiple purposes.
For me, enlightenment on this subject hit while listening to Anders Hejlsberg talk about C# 3.0 at the Dynamic Languages Summit in Redmond a few years ago. He pointed out that C# was incorporating some ideas from other languages, and said something to the effect of “Languages are losing their classifications. No longer can we say that a language is just an object-oriented language, or just a dynamic language, because so many of them borrow from lots of different ideas.”
His commentary echoed Coplien’s words from a decade prior: “C++ goes further [beyond paradigms that preceded it, like modularity, abstract data types, procedures and data structures] to support procedural, modular, object-based, object-oriented, and generic programming on an equal footing.”
C# goes even further, incorporating functional concepts in the 3.0 release and dynamic concepts in the 4.0 release. Visual Basic had dynamic capabilities long ago (though they were commonly scorned), and thanks to Microsoft’s goal of “language parity” between it and C#, it supports the same features. Without these additional language paradigms lurking beneath the surface, solutions like LINQ would be far more difficult, forcing the developer to rely on other mechanisms (such as code generation, which is itself an interesting aspect of metaprogramming—more on that later) to capture the core commonality of a working system in ways that “inherit from a base class” couldn’t.
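The functional machinery beneath LINQ is easy to see for yourself. A minimal sketch (using only the standard query operators) showing that query syntax is just sugar over higher-order functions that take lambdas as arguments:

```csharp
using System;
using System.Linq;

class FunctionalUnderLinq
{
    static void Main()
    {
        int[] orders = { 250, 75, 410, 30 };

        // What the developer writes in query syntax ...
        var large = from o in orders
                    where o > 100
                    select o * 2;

        // ... the compiler rewrites into calls to higher-order functions,
        // passing the lambdas as first-class values:
        var sameThing = orders.Where(o => o > 100).Select(o => o * 2);

        Console.WriteLine(string.Join(",", large));     // prints 500,820
        Console.WriteLine(string.Join(",", sameThing)); // prints 500,820
    }
}
```

Take away the ability to pass functions around as values and the only way to plug “o > 100” into Where is an interface implementation or generated code—which is exactly the pre-3.0 world.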
This is the core of what we’re going to explore: the object-oriented zealots of the world have insisted for years that inheritance represents the best approach to reusing code. (This is the reason classes aren’t marked “sealed” [C#] or “NotInheritable” [Visual Basic] by default.) And yet, when we inherit from the Form class in Windows Forms or the Page class in ASP.NET, we don’t actually override the base class methods to respond to events; instead, we provide delegates for the base class to invoke. Why not just override? Why take the hit (small though it might be) of the delegate invocation? Why did ASP.NET code-behind go from an inheritance-based model to a partial-classes-based model? Why have extension methods at all?
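To make the contrast concrete, here is a minimal sketch of the two variation styles side by side. The Button and LoggingButton types are toy stand-ins invented for illustration, not the real Windows Forms classes:

```csharp
using System;

// A toy control (not the real WinForms Button) offering both reuse styles.
class Button
{
    public event EventHandler Click;            // delegate-based variation

    public void PerformClick()
    {
        OnClick(EventArgs.Empty);
    }

    protected virtual void OnClick(EventArgs e) // inheritance-based variation
    {
        Click?.Invoke(this, e);
    }
}

class LoggingButton : Button
{
    // Inheritance: one new type per variation, fixed at compile time.
    protected override void OnClick(EventArgs e)
    {
        Console.WriteLine("override ran");
        base.OnClick(e);
    }
}

class Demo
{
    static void Main()
    {
        var b = new LoggingButton();

        // Delegates: any number of handlers, attached (and detached)
        // at run time, with no new type required.
        b.Click += (s, e) => Console.WriteLine("handler 1 ran");
        b.Click += (s, e) => Console.WriteLine("handler 2 ran");
        b.PerformClick();
    }
}
```

The override buys you exactly one variation and demands a new class to get it; the delegate buys you as many as you like, composed at run time. That trade-off, not object-oriented dogma, is what decided the design.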
The answers to each of these questions can be (and have been!) explained in object-oriented terminology, but the underlying reason remains the same: not everything can easily be represented in classic object design constructs. Trying to find the common parts of code and bringing them together into a single first-class construct (which object-oriented programming would hold is a class, and structural programming would deem is a data structure, and functional programming would maintain is a function, and so on) is the goal of software design, and the more ways we can vary the parts of the code that need variation, the more we can write systems that weather the harsh winters of customers calling with “Just one tiny little thing I forgot to tell you ...”
As an exercise, consider this: The .NET Framework 2.0 introduced generics (parameterized types). Why? From a design perspective, what purpose do they serve? (And for the record, answers of “It lets us have type-safe collections” are missing the point—Windows Communication Foundation uses generics extensively, clearly in ways that aren’t just about type-safe collections.)
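As a hint, think of generics as a way to vary the *types* in an algorithm while keeping its logic common. The Reliable.Invoke helper below is hypothetical—a name I made up for illustration, not part of any real library—but it sketches generics capturing variability that has nothing to do with collections:

```csharp
using System;

// Generics as a variability mechanism: the retry logic is the common part;
// the operation's argument and result types are the varying parts.
// Reliable.Invoke is a hypothetical helper, invented for this sketch.
static class Reliable
{
    public static TResult Invoke<TArg, TResult>(
        Func<TArg, TResult> operation, TArg arg, int attempts)
    {
        for (int i = 1; ; i++)
        {
            try { return operation(arg); }
            catch { if (i >= attempts) throw; } // swallow and retry
        }
    }
}

class Demo
{
    static int calls;

    static string Flaky(int n)
    {
        // Fails twice, then succeeds -- a stand-in for a transient fault.
        if (++calls < 3) throw new InvalidOperationException("transient");
        return "ok:" + n;
    }

    static void Main()
    {
        // The same retry logic is reused across any argument/result types --
        // no common base class, no casting, no code generation.
        Console.WriteLine(Reliable.Invoke(Flaky, 42, attempts: 5)); // prints ok:42
    }
}
```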
We’ll get into that in the next piece.
Not Dead (or Done) Yet!
Clearly there’s much, much more yet to be said on this subject—each of the paradigms present in the .NET Framework deserves some exploration and explanation, complete with code, if this first part is going to make any sense whatsoever to the working developer. Those subsequent parts will be coming, so hang in there. By the time we’re done, I hope (and think) you’ll have a whole slew of better design tools for building out good (and by that I mean well-abstracted, maintainable, extensible and usable) software.
But for now, focus on looking at the current designs you’re working with, and see if you can identify the parts of the design that use some of the high-level concepts of each of those different paradigms—obviously, the object parts will be relatively easy to spot, so concentrate on some of the others. What parts of your codebase (or the .NET Framework) are procedural in nature, or metaprogrammatic?
By the way, if there’s a particular topic you’d like to see explored, don’t hesitate to drop me a note, and I’ll see about trying to schedule it in once we’re done with this particular series. In a very real way, it’s your column, after all.
Ted Neward is a principal with Neward & Associates, an independent firm specializing in enterprise .NET Framework and Java platform systems. He has written more than 100 articles, is a C# MVP, INETA speaker and the author or coauthor of a dozen books, including the forthcoming “Professional F# 2.0” (Wrox). He consults and mentors regularly. Reach him at email@example.com and read his blog at blogs.tedneward.com.