November 2010

Volume 25 Number 11

The Working Programmer - Multiparadigmatic .NET, Part 3: Procedural Programming

By Ted Neward | November 2010

Last month, the idea of software design as an exercise in commonality and variability stood at the center of the discussion (see msdn.microsoft.com/magazine/gg232770). It left us with the notion that languages such as C# and Visual Basic offer different paradigms for representing these commonality/variability concepts along different dimensions, and that the heart of multiparadigmatic design lies in pairing the demands of the domain with the facilities of the language.

This month, we begin by examining one of the older facilities of programming languages, “procedural programming,” sometimes also known as “structured programming,” though the two are subtly different. Although commonly dismissed as “old school” and therefore outdated and useless in modern software design, the procedural design paradigm still shows up in a surprising number of places.

Proceeding It, Old School

For those of us who weren’t alive when structured programming emerged as a new term, its core tenet was to put some definition (structure) around the code being written—at a practical level, this meant “single entry points” and “single exit points” to the blocks of code being written in assembly at the time. The goal here was pretty simple, in retrospect: put some higher-level abstractions around the repetitive bits of code that were floating around.

But frequently, these commands (procedures) needed some variation to them if they were to be at all useful, and parameters—input passed to the procedure to vary its execution—were included as part of the approach, first informally (“pass the character you want to display in the AX register”), then formally (as parameters to functions, as in C/C++/C#/Java/Visual Basic and the like). Procedures often calculate some kind of returned value, sometimes derived from the input passed in, sometimes simply to indicate success or failure (such as in the case of writing data to a file or database); these are also specified and handled by the compiler.

However, all of this is a remedial topic for most readers. What the multiparadigm approach asks of us isn’t to rehash history, but to look at it again through the lens of commonality analysis. What, specifically, is being generalized in the procedural approach, and how do we introduce variability? Once the variability is identified, what kind of variability is it—positive or negative?

With commonality/variability glasses on, the procedural paradigm yields up interesting secrets: Commonality is gathered into procedures, which are essentially named blocks of code that can be invoked from any given context. (Procedural languages rely heavily on “scope” to isolate work inside the procedure away from the surrounding context.) One way to introduce variability within the procedure is by way of parameters (indicating how to process the remainder of the parameters, for example), which can have either positive or negative variability, depending on how the procedure itself is written. If the source to that procedure is unavailable or shouldn’t be modified for some reason, variability can still be had by creating a new procedure that either calls to the old procedure, or not, depending on whether positive or negative variability is desired.
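This wrapping tactic is easy to sketch. In this minimal illustration (written in Python only because the mechanism itself is language-neutral; log_message and the other names are invented for the example), the original procedure stays untouched while new procedures introduce positive or negative variability around it:

```python
# A "closed" procedure we can't (or shouldn't) modify.
def log_message(text):
    return "LOG: " + text

# Positive variability: wrap the original and add behavior around it.
def log_message_with_timestamp(text, timestamp):
    return "[" + timestamp + "] " + log_message(text)

# Negative variability: selectively decline to invoke the original.
def log_message_unless_debug(text):
    if text.startswith("DEBUG"):
        return ""  # suppress the original's behavior entirely
    return log_message(text)
```

Note that neither wrapper required any cooperation from the original procedure; the variability lives entirely in the new ones.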

Hello Procedures

In essence, the procedure provides common behavior, which can vary based on input. And, ironically enough, the first instance of the procedural paradigm lies in the very first example most Microsoft .NET Framework programmers ever see:

Sub Main()
    Console.WriteLine("{0}, {1}!", "Hello", "world")
End Sub

In the WriteLine implementation, developers pass a format string describing not only what to print out but how to print it, including formatter commands contained within the replacement markers, like so:

Sub Main()
    Console.WriteLine("Hello, world, it's {0:hh} o'clock!", Date.Now)
End Sub

The implementation of WriteLine is an interesting case study, in that it differs somewhat from its ancient predecessor, printf from the C standard library. Recall that printf took a similar kind of format string using different formatting markers and wrote directly to the console (the STDOUT stream). If a programmer wanted to write formatted output to a file, or to a string, different variations of printf had to be invoked: fprintf in the case of file output, or sprintf in the case of a string. But the actual formatting of the output was common, and often C runtime libraries took advantage of this fact by creating a single generic formatting function before sending the results to the final destination—a perfect example of commonality. However, this formatting behavior was considered “closed” to the average C developer and couldn’t be extended. The .NET Framework takes one step beyond that, offering developers the chance to create new formatting markers by passing responsibility off to the objects passed in to WriteLine after the format string. If the object implements the IFormattable interface, it’s given the responsibility for figuring out the formatting marker and returning an appropriately formatted string for processing.
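This extension point isn't unique to the .NET Framework, and a short sketch shows its shape. Here, Python's __format__ protocol stands in for IFormattable (the Temperature class and its "C"/"F" markers are invented for illustration): the formatting machinery hands the marker to the object, which returns an appropriately formatted string:

```python
class Temperature:
    """An object that interprets its own formatting markers,
    much as an IFormattable implementation does in .NET."""
    def __init__(self, celsius):
        self.celsius = celsius

    def __format__(self, spec):
        # The format machinery passes the marker ("C" or "F") to the
        # object, which decides for itself what string to return.
        if spec == "F":
            return "%.1fF" % (self.celsius * 9 / 5 + 32)
        return "%.1fC" % self.celsius

t = Temperature(100)
print("{0:C} is {0:F}".format(t))  # prints "100.0C is 212.0F"
```

The formatting core stays common; the per-type variability is delegated to the object being formatted.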

Variability could also hide in other places in the procedural approach. When sorting values, the qsort (a Quicksort implementation) procedure needed help to know how to compare two elements to determine which was greater or less than the other. To require developers to write their own wrappers around qsort—the traditional variability mechanism when the original was untouchable—would’ve been too awkward and difficult. Fortunately, the procedural paradigm offered a different approach, an early variation of what would later become known as Inversion of Control: The C developer passed in a pointer to a function, which qsort invoked as part of its operation. This, in essence a variation of the parameters-as-variability approach, offered open-ended variability, in that any procedure (so long as it met the parameter and return type expectations) could be used. Although somewhat rare at first, over time this idiom became more and more commonplace, usually under the general label of “callbacks”; by the time Windows 3.0 was released, it was an accepted core practice, one necessary for writing Windows programs.
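The qsort idiom is easy to see in miniature. In this sketch (Python standing in for C; by_length is an invented comparator), the sort routine owns the algorithm and calls back into the supplied procedure to vary its behavior:

```python
from functools import cmp_to_key

def by_length(a, b):
    """A qsort-style comparator: returns negative, zero or positive,
    depending on which element should sort first."""
    return len(a) - len(b)

words = ["procedural", "is", "alive"]
# The sort routine repeatedly calls back into by_length; any procedure
# matching the comparator's signature could be passed instead.
words.sort(key=cmp_to_key(by_length))
print(words)  # prints ['is', 'alive', 'procedural']
```

Swapping in a different comparator changes the sort's behavior without touching the sorting algorithm itself.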

Hello Services

Most interestingly, the place where the procedural paradigm has achieved the most widespread success (if we blithely ignore the unbelievable success and ubiquity of the C standard library, of course) is in the service-oriented realm. (Here, I use the term “service” to mean a wider collection of software, rather than the traditional narrow view of just WS-* or SOAP/Web Services Description Language [WSDL]-based services; REST-based implementations as well as Atom/RSS implementations fit much the same definition.)

According to past literature appearing on msdn.com, such as “Principles of Service-Oriented Design” (msdn.microsoft.com/library/bb972954), services obey four basic tenets:

  • Boundaries are explicit.
  • Services are autonomous.
  • Services share schema and contract, not class.
  • Service compatibility is based on policy.

These tenets, perhaps without intending to do so, reinforce the nature of services as belonging to the procedural paradigm of design more than to the object-oriented one. “Boundaries are explicit” reinforces the notion that the service is an entity separate and distinct from the system invoking it; this view is reinforced by the notion that “services are autonomous” and therefore distinct from one another, ideally even at an infrastructure-management level. “Services share schema and contract, not class” speaks to the notion that services are defined in terms of the parameters sent to them, expressed as XML (or JSON) constructs, not specific runtime types from a particular programming language or platform. Finally, “Service compatibility is based on policy” suggests that services must be compatible based on policy declarations, which provide more of a context around the invocation—this is something that the procedural paradigm historically has assumed from the surrounding environment, and as such, it isn’t necessary to define explicitly.

Developers may be quick to point out that in classic WSDL-based services, it’s more difficult to create variability because the service is tied to the schema definition of the input type. But this is true only for the most basic (or code-generated) of services—input and result types can be (and frequently are) reused across service definitions. In fact, if the notion of service is expanded to include REST-based systems, then the service can accept any number of different kinds of input types—essentially the parameters to the procedure are taking on an open-ended and interpretive role not generally seen in traditional statically typed procedures—and behave differently, bringing the variability within that service squarely to the fore once again. Some of that behavior will, of course, need to be validation logic, because the service’s URL (its name) won’t always be appropriate for every kind of data that can be thrown at it.

When services are seen through the lens of a messaging system, such as BizTalk, ServiceBus or some other Enterprise Service Bus, the procedural aspect still holds, though now the entire variability rests with the messages being passed around, because the messages carry the entirety of the call context—not even the name of the procedure to invoke is present. This also implies that the variability mechanism by which we wrap another procedure in a new one—either introducing or restricting variability in doing so—is no longer present, because we typically don’t control how messages are passed around the bus.

Succeeding with Proceeding

The procedural paradigm demonstrates some of the earliest commonality/variability dimensions:

  • Name and behavior. Names convey meaning. We can use commonality of name to group items (such as procedures/methods) that have the same meaning. In fact, “modern” languages let us capture this relationship more formally by allowing different methods to share the same name, so long as they vary in the number and/or types of their parameters; this is method overloading. C++, C# and Visual Basic can also take advantage of appropriately named methods by letting developers define methods whose names are already well understood from algebra; this is operator overloading. F# takes this even further by allowing developers to create new operators.
  • Algorithm. Algorithms aren’t just mathematical calculations, but rather repeated steps of execution. If the entire system (rather than individual layers) is seen in a top-down form, interesting process/code fragments—use cases, in fact—begin to emerge that form families. After these steps (procedures) have been identified, families can form around the variability based on how the algorithm/procedure operates on different kinds of data/parameters. In C#, F# and Visual Basic, these algorithms can be varied by placing them in base classes, then varied by inheriting the base class and replacing the base’s behavior; this is method overriding. Algorithmic behavior can also be customized by leaving part of that behavior unspecified and passed in; this is using delegates as Inversion of Control or callbacks.
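The two mechanisms in that last bullet can be sketched side by side. In this illustration (Python standing in for C# or Visual Basic; Report, CsvReport and render_report are invented names), the first variation overrides a base method, while the second passes the varying step in as a callback (a delegate, in .NET terms):

```python
class Report:
    """Base algorithm with one step left open to subclasses."""
    def render(self, items):
        return self.header() + ", ".join(items)

    def header(self):
        return "REPORT: "

class CsvReport(Report):
    def header(self):  # method overriding: replace the base's behavior
        return "csv,"

def render_report(items, header_fn):
    """The same variability via a callback rather than inheritance."""
    return header_fn() + ", ".join(items)

print(CsvReport().render(["a", "b"]))              # prints "csv,a, b"
print(render_report(["a", "b"], lambda: "XML: "))  # prints "XML: a, b"
```

Both produce a family of related algorithms; they differ only in where the variable step is declared.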

One final note before we wrap up this piece. The procedural paradigm may not line up one-to-one with the service-oriented world; in fact, many service-oriented architecture evangelists and proponents will reject even the smallest association to the procedural paradigm, for fear that such an association will somehow take the shine off of their vested interest. Politics aside, the classic service—be it a RESTful one or a SOAP/WSDL-based one—bears a striking resemblance to the classic procedural paradigm. As a result, using the same commonality analysis during service design helps create an acceptable level of granularity, though designers must take care to ensure that the (assumed) traversal of the network to execute the service at the service host’s location won’t be blithely ignored. In particular, naïve implementations of services using the procedural paradigm might attempt to use the “pass a callback” approach to variability, and while this isn’t entirely a terrible idea, it could represent a major bottleneck or performance problem.

To this day, the procedural paradigm still appears throughout a great deal of what we do, but it’s been lurking under the surface, hiding from developers under an assumed name. Our next subject, object orientation, has no such excuse—it’s the perky, outgoing, “Hey, come look at me!” younger sister to its moody, melodramatic and often-ignored procedural older sibling. In next month’s piece, we’ll start analyzing the commonality/variability dimensions of objects, and some of what we find may prove surprising.

In the meantime, as an intellectual exercise, cast your gaze around the various tools you use and identify which of them use fundamentally procedural tactics. (Hint: Two of them are tools you use every day while writing software: the compiler and MSBuild, the build system hidden away behind the Build button in Visual Studio.)

And, as always, happy coding!


Ted Neward is a principal with Neward & Associates, an independent firm specializing in enterprise Microsoft .NET Framework and Java platform systems. He’s written more than 100 articles, is a C# MVP and INETA speaker, and has authored and coauthored a dozen books, including “Professional F# 2.0” (Wrox, 2010). He also consults and mentors regularly. Reach him at ted@tedneward.com with questions or consulting requests, and read his blog at blogs.tedneward.com.

Thanks to the following technical expert for reviewing this article: Anthony Green