A Brief History of Services
As of December 2011, this topic has been archived. As a result, it is no longer actively maintained. For more information, see Archived Content. For information, recommendations, and guidance regarding the current version of Internet Explorer, see Internet Explorer Developer Center.
March 13, 2000
It is often said that in order to see where you are going, it is important to see where you have been. Let's try applying this concept to the Web, and see whether we can figure out where things might be going.
We can start off by looking back—way back—to before the Web existed, when personal computers were used for running stand-alone text-mode applications. While I could talk about my experiences with the IBM 360s, PDP 11/70s, and HP 3000s, I think that is just a tad too far back. Let's instead return to the days of MS-DOS—say, 1982. Back then, you would write a document in a word-processing application. To connect to a "bulletin board" system, you would use a program dedicated to that purpose. If you wanted to manage a home recipe collection, you would probably have yet another application designed specifically for that task. But what if you wanted to include a recipe in an "article" you were writing, and then post it to the bulletin board system you were using? This would usually involve a rather cumbersome process of running each application in turn, and figuring out how to export data from one application into the next.
By about 1984, the notion of interoperability started sinking in with application developers, and they devised methods for sharing data files between applications. Sometimes this meant simply using bare-bones text files, which usually resulted not only in the loss of any specific formatting, but also in a mangling of the document—making it necessary for the user to manually re-format the information at each step of the way.
When the Windows operating system came along for the PC in 1985, it brought two exciting features that are often overlooked today. First, it provided an environment in which specifically designed applications were able to run simultaneously. Second, it provided a system-wide "clipboard," which allowed users to simply select, copy, and paste information from one application to another. A special feature of the clipboard was the ability to add several different versions of the same data to the clipboard, thus allowing the target application to select the format with which it was most compatible. This made it possible to move data from one word processor to another, preserving as much formatting as possible, without the user needing to pick and choose which of the dozens of different formats would work the best.
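That format-negotiation idea is easy to see in miniature. The sketch below (the format names and the tiny `paste` helper are invented for illustration; the real Windows clipboard uses its own identifiers and API) shows a source posting the same data in several formats, and each target picking the richest one it understands:

```python
# Toy sketch of clipboard format negotiation. The source application puts
# the same data on the clipboard in several formats; the target application
# walks its own preference list and takes the best format it supports.
# Format names and the API shape are hypothetical, not the Win32 clipboard.

clipboard = {
    "rich_text": r"{\rtf1 Hello, \b world\b0 }",
    "plain_text": "Hello, world",
}

def paste(preferred_formats):
    """Return the first format from the target's preference list that the
    clipboard actually holds, along with its data."""
    for fmt in preferred_formats:
        if fmt in clipboard:
            return fmt, clipboard[fmt]
    return None, None

# A word processor prefers rich text; a bare text editor only knows plain text.
print(paste(["rich_text", "plain_text"]))
print(paste(["plain_text"]))
```

The key point is that the target, not the source, decides which representation to consume—which is exactly what spared users from picking among dozens of export formats by hand.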
Over time, the interoperability between applications became more and more robust. With the early versions of OLE, applications could actually display their functionality within another application. For example, you could view an Excel chart or diagram from within a word-processor document. With OLE 2.0, these embedded applications could actually share information back and forth, making the interoperability virtually seamless. It is now possible for one application to be in full communication with, and control of, several different applications, creating a much more expansive solution than any one application could provide by itself.
Keep in mind this notion of how the increased interoperability of applications has greatly expanded the role computers play in our lives, as we now shift our focus to the Web.
As long as Web sites provide the full end-to-end functionality that their users demand, there is no need for interoperability. But any developer who thinks that a single Web site can provide everything their users might need is only fooling himself. At the very core of the Web is the notion that it derives the bulk of its benefit through connectivity with other Web sites.
From One Page Island to Another
Web pages started out as islands of information. Links provided an extremely easy way to move from one island to another, but virtually everything relating to the original Web site was lost once you arrived at your destination. It was similar to the way MS-DOS allowed you to execute several different applications on the same machine, but the full context and interface of the second application replaced the first. Since those earlier days, the functionality of Web sites has increased rather dramatically to include dictionaries, order processing, e-mail, and now even Web-based word processing.
The most common form of "interoperability" between Web sites today is the frame. A Web site that you placed an order with might want to provide some mechanism for showing you the tracking information for your package, and the easiest way to do this is to link to the package carrier's Web site, passing it the appropriate tracking information. By inserting this tracking page into one of their site's frames, they produce the appearance of its being part of their site, but it is still an external page over which they have little control. What about a deeper level of interaction? What about providing a more intimate way of merging the functionality of multiple applications into a single, seamless solution?
Web servers could create a hard-coded communication channel between themselves to share information. But this often would require some sort of low-level understanding between the participating sites as to the format and processing of the transactions. Another method would be for one Web server to pretend to be a Web browser and to perform a normal page request of another server; then through "screen scraping" (the semi-automated process of reading the "output" of one program, recognizing the fields with important information, then reformatting this information out in another manner), that first server could provide a customized re-interpretation of the requested page. As you can see, these offerings have their shortcomings.
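To see why screen scraping is so fragile, here is a minimal sketch of the technique. The carrier's page layout, the field labels, and the tracking number are all invented for illustration; a real scraper would break the moment the carrier redesigned its page:

```python
import re

# A hypothetical carrier tracking page, as a server pretending to be a
# browser might receive it. The layout and values are invented.
SAMPLE_PAGE = """
<html><body>
<b>Tracking number:</b> 1Z999AA10123456784<br>
<b>Status:</b> In transit<br>
<b>Estimated delivery:</b> 2000-03-17<br>
</body></html>
"""

def scrape_tracking(html):
    """Screen scraping: recognize the labeled fields in the raw HTML
    and re-extract them as structured data."""
    fields = {}
    for label, key in [("Tracking number", "tracking_number"),
                       ("Status", "status"),
                       ("Estimated delivery", "estimated_delivery")]:
        match = re.search(rf"<b>{label}:</b>\s*(.+?)<br>", html)
        if match:
            fields[key] = match.group(1).strip()
    return fields

print(scrape_tracking(SAMPLE_PAGE))
```

Everything here hinges on the exact markup of someone else's page—the shortcoming the text alludes to.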
A third way to provide interoperability between Web sites is to think of each Web site not as the form of the information, but as its function. In this scenario, a package-tracking service wouldn't be just a set of Web pages allowing users to enter their data and then view the tracking information. Instead, at the core of that site would be a service that takes package information as input and returns tracking information as output. It's a way of rethinking the form of many of today's Web sites and seeing them as applications. These applications, the Web Services, aren't displayed to the user as a nicely formatted Web page; instead, they consist of a series of data values that are intended to be processed by another application.
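The "function, not form" idea can be sketched in a few lines. In this illustration (the function names, carrier data, and HTML rendering are all hypothetical), the service proper returns plain data, and a presentation layer—one of possibly many—is built on top of it:

```python
# Sketch of a Web Service separated from its presentation. The service
# takes package information as input and returns tracking data as output;
# rendering that data as a Web page is a separate, replaceable layer.
# All names and values here are invented for illustration.

def track_package(tracking_number):
    """The service proper: package info in, tracking data out."""
    # A real service would query the carrier's systems here;
    # we return canned data to keep the sketch self-contained.
    return {"tracking_number": tracking_number,
            "status": "In transit",
            "estimated_delivery": "2000-03-17"}

def render_html(result):
    """One possible 'form': a Web page built on top of the service."""
    rows = "".join(f"<li>{key}: {value}</li>" for key, value in result.items())
    return f"<ul>{rows}</ul>"

data = track_package("1Z999AA10123456784")
print(render_html(data))  # the human-facing form
print(data)               # the machine-facing function, for other applications
```

Because `track_package` returns data rather than markup, another site could call it and fold the result into its own pages—no frames, no screen scraping.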
The Missing Link
The Web Services model is sort of like the missing link between traditional applications and Web sites. You can approach the problem from either side and arrive at the same destination. Both the application developer and the Web designer think about the service that they are really providing the user, and separate that service's function from the graphical form it takes on the screen. This is not to imply that it is easier to provide this functionality; often it is more difficult. The end result, however, is a more flexible solution that can better adapt to the different ways your Web site might need to present itself, and that can allow other sites or applications to enhance your functionality without your needing to update your code.
By looking at how traditional applications evolved over time to meet users' needs, we should be able to see how these same users will place the same sorts of demands on Web sites. So it shouldn't be much of a surprise that exposing flexible, interoperable Web Services is where we are focusing our attention now.
Robert Hess hosts the MSDN Show.