A .NET Primer

As of December 2011, this topic has been archived and is no longer actively maintained.

Robert Hess
Microsoft Corporation

December 11, 2000


Join Me in the Way-Back Machine
Back to the Future
A Standard Protocol
Enter .NET
Properties, Methods, and Events, Oh, My!

You'd probably have to be living under a rock not to have heard something about Microsoft's new .NET strategy. Depending on whom you talk to, or what you read, it is either a grandly architected vision that will finally bring together the entire distributed infrastructure of the Internet, or just another name for Web applications. At the moment, one thing appears clear about .NET: there is a lot of confusion as to what it is and what it means to Web and application developers.

While I don't claim to be an expert on .NET, I think I've been involved enough in its design and development to provide some useful information to help folks grasp what its scope, purpose, and objectives are. So while what follows may not provide developer-level information about how to design an application that uses .NET, it might give you some guidance on how to describe it to your colleagues and family.

Join Me in the Way-Back Machine

First, we have to go back in time a little to look at how new application models evolve into the features of an operating system. Imagine for a moment that you are back in the beginning of the 1980s. The IBM Personal Computer has become one more option to consider when trying to select a computer to buy. Its operating system is MS-DOS, and like virtually all operating systems of the day, it is strictly a command-line, text-based operating system.

While most applications for these early computers kept to the safety of a text-based environment, some developers took up the challenges of inventing their own private, graphical environments. The user would boot up a computer into its text-based operating system, then launch the application, which would switch over to graphics mode. Some applications were simplistic, providing only minimal capabilities; others were relatively rich and extensive.

It was a lot of work to design and implement a set of Graphical User Interface (GUI) libraries for each application. Plus, there was little commonality among the various implementations. To fully realize the importance and capabilities of a GUI, it needed to be implemented by the underlying operating system. Toward this end, Apple, Microsoft, and others developed graphical operating systems that not only supported the development of GUI applications, but also greatly expanded what these applications could do. More than just libraries of subroutines for drawing menus and windows, such graphical operating systems also provided significant services that application developers could use to streamline their development. Things such as a device-independent printer model, or even a system clipboard, were valuable resources for developers.

I hope it's pretty clear where I'm going with this.

Back to the Future

Let's now jump ahead to the current time. Think of the Internet as playing the same role the personal computer did in the early '80s. Think of a Web site as being an application, and if the Web site is just a hyperlinked text document, then that's the same as a text-mode application. If the Web site provides interactive services of some sort, then it is the same as a GUI application. In an interactive service, the user provides information to the Web site, then within the context of the site that information is passed on to application logic on the Web server that processes it and returns a result. Examples of this would be a shopping cart, the language translation services supplied by AltaVista, or the package-tracking services supported by UPS.

Think about the shopping cart example. Almost all shopping carts support entering credit card information. While in some cases this is simply stored as part of the transaction record, a good shopping cart will verify the credit card information with the issuing company in real time and alert users if they might have entered the number incorrectly. What happens if the Web site wants to support a new credit card? The developers of that site have to contact the credit card company, find out what support (if any) it has for electronic authorization, and then write customized code for however that company chose to implement it.
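Part of that "entered the number incorrectly" check doesn't even require contacting the issuer: most card numbers carry a built-in checksum (the Luhn algorithm) that catches single-digit typos. Here is a minimal sketch in Python, a modern stand-in for whatever script or C++ a site of the day would have used:

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(c) for c in card_number if c.isdigit()]
    total = 0
    # Walk from the rightmost digit; double every second digit,
    # subtracting 9 whenever the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A mistyped digit breaks the checksum.
print(luhn_valid("4111111111111111"))  # → True  (a well-known test number)
print(luhn_valid("4111111111111112"))  # → False
```

This only catches typos, of course; confirming that the account exists and has funds still requires talking to the card company.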

What about the translation services? If you were writing an e-mail application, don't you think it would be useful to allow your users to instantly translate e-mail? Because AltaVista exposes a Web site that does this, you could silently launch a hidden browser control, pass it the necessary information, look at what comes back, locate where the translated message appears on the page, then extract it and present it to the user. This is similar to how some graphical applications had to extract and use information generated by a text-mode application or a terminal connection to a mainframe computer. The technique is known as screen scraping; it is complicated and fairly fragile, because any change in the page layout can break it, and it can even be ethically questionable under certain circumstances.
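To see why scraping is so fragile, here is a sketch in Python. The page markup and the `textarea` element it searches for are invented for illustration, which is exactly the problem: the code depends on details of someone else's page that the other site never promised to keep stable.

```python
import re

# Hypothetical result page; a real scraper would fetch this over HTTP first.
SAMPLE_RESULT_PAGE = """
<html><body>
  <form action="/translate">...</form>
  <textarea name="q">Voici une ligne de texte.</textarea>
</body></html>
"""

def scrape_translation(html: str) -> str:
    # Fragile by design: tied to the exact markup of the remote page.
    match = re.search(r'<textarea name="q">(.*?)</textarea>', html, re.DOTALL)
    if match is None:
        raise ValueError("page layout changed -- scraper is broken")
    return match.group(1).strip()

print(scrape_translation(SAMPLE_RESULT_PAGE))  # → Voici une ligne de texte.
```

The day the site renames that element or reorders the page, the `ValueError` branch fires, and your e-mail application's translation feature silently dies.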

For the package-tracking services, imagine if your site allowed users to look at the status of products they had ordered from you. It would make sense for you to provide them with the UPS tracking numbers, and perhaps even format them as links to the UPS site, so that they could get a full report on the progress of the packages as they wound their way across the country. But wouldn't it make more sense for your site to interact with the UPS site and provide that information right there on your site? This way, the user interface would be the same as the rest of your site, plus users wouldn't risk getting lost in the UPS site without knowing how to get back to your site.

To provide an integrated experience to the user, each of these examples requires you either to find out from the other company exactly how it exposes its services programmatically so that you can develop customized interfaces to them, or, if the company exposes the information only through a Web page, to write screen-scraping code to extract it.

Obviously, the code for either of these approaches would be fairly large, very involved, and a potential source of numerous bugs and glitches. This approach is also essentially the same one developers in the '80s were forced into with text-mode applications when they wanted to interact with devices that the operating system didn't directly support.

A Standard Protocol

A better approach would be some form of common method for interacting with these remote services: a way not only to connect to a company's services, but also to find out which services it offers. Such are the goals of XML, SOAP, and UDDI. XML (Extensible Markup Language) is a way to describe information so that it is easy to programmatically extract the pieces you need. SOAP (Simple Object Access Protocol) is an XML-based protocol specifically designed to carry requests to, and responses from, the interfaces that remote services provide. And UDDI (Universal Description, Discovery, and Integration) is an industry initiative to provide a standard way of discovering the availability of services and resources on the Internet. If AltaVista's translation services provided a SOAP interface, you'd be able to post a standard request for services to it, and it would send back information telling you which interfaces it provided and how to structure a request to one of them.
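To make that concrete, here is a sketch in Python of what building such a request envelope might look like. The `Translate` method name and the `urn:example:translation` namespace are invented for illustration; AltaVista offered no such interface.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "urn:example:translation"   # hypothetical service namespace

def build_translate_request(text: str, to_lang: str) -> str:
    """Build a minimal SOAP 1.1 envelope for a hypothetical Translate call."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{SVC_NS}}}Translate")
    ET.SubElement(call, f"{{{SVC_NS}}}text").text = text
    ET.SubElement(call, f"{{{SVC_NS}}}toLanguage").text = to_lang
    return ET.tostring(envelope, encoding="unicode")

print(build_translate_request("Here is a sample line of text", "French"))
```

The point of the standard is that the envelope's shape, not the page layout of some Web site, is the contract between caller and service.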

Such standard interfaces go a long way toward making it easier to connect to, and utilize, the services another Web site might want to provide. However, because the underlying operating system doesn't currently support XML/SOAP directly, you still have to jump through some hoops to use these interfaces. You could write your own HTTP-based XML parsing engine, or use someone else's, then interact with the XML Document Object Model to construct and examine the data being passed back and forth. While not trivial, it is definitely easier than resorting to screen scraping.
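Here is a rough sketch of those hoops in Python, using a canned response string in place of a live HTTP round-trip; the envelope contents are invented for illustration. Even with a parser and a Document Object Model in hand, you are still walking a tree by hand for every call.

```python
from xml.dom.minidom import parseString

# Canned text standing in for what the service would send back over HTTP.
RESPONSE = """<?xml version="1.0"?>
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
  <Body>
    <TranslateResponse xmlns="urn:example:translation">
      <result>Voici une ligne de texte.</result>
    </TranslateResponse>
  </Body>
</Envelope>"""

def extract_result(xml_text: str) -> str:
    # Walking the DOM by hand: workable, but far more ceremony
    # than a plain method call on a local object.
    doc = parseString(xml_text)
    node = doc.getElementsByTagName("result")[0]
    return node.firstChild.data

print(extract_result(RESPONSE))  # → Voici une ligne de texte.
```

Easier than screen scraping, certainly, but still plumbing that every application has to write for itself.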

Enter .NET

A better solution is for the XML/SOAP/UDDI plumbing to be supported by the operating system itself and exposed to your code as fairly normal function calls or object interactions. For example, something like the following would be a great way for an application to call into AltaVista's translation services.

   avTrans = new AVTranslation ("Here is a sample line of text", "English");
   strResult = avTrans.translateTo ("French");

This is part of what .NET provides. It not only allows applications to use straightforward code to access and utilize remote Web-based services, but also allows Web servers to just as easily expose their services to external applications. This does not mean that the server providing its services, or the client applications and Web sites utilizing those services, have to be written using .NET. All it means is that the .NET platform will make it a lot easier to develop these applications.

Properties, Methods, and Events, Oh, My!

The design of .NET, however, doesn't stop at the Internet interface level. The same programming model used to access remote services will also be used to utilize local services and internal business logic. In other words, the programming paradigm will be the same throughout the construction and design of the application.

The intent is to create a pervasive, component-oriented programming model. It doesn't matter whether the application is talking to a remote translation service, the local file system, or a dialog box. All of these support the same general, component-oriented interface: properties, methods, and events. The programming language support of .NET allows easy and consistent access to any other component, regardless of the programming language it was developed in. And all programming languages are equal citizens. A Visual Basic application can call a method in a component written in C++, and can catch events from an application written in COBOL. At no time is any of these participants aware that another is written in a different language.
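The three-part model can be sketched as follows, with Python standing in for the language-neutral runtime; the component and its members are invented for illustration, not taken from any .NET library:

```python
class TranslationComponent:
    """A toy component exposing the three faces of the model:
    a property, a method, and an event."""

    def __init__(self):
        self.source_language = "English"     # a property
        self._listeners = []

    def on_translated(self, callback):       # event subscription
        self._listeners.append(callback)

    def translate(self, text, to_language):  # a method
        result = f"[{to_language}] {text}"   # placeholder for real work
        for listener in self._listeners:     # fire the event
            listener(result)
        return result

component = TranslationComponent()
component.on_translated(lambda r: print("event:", r))
print(component.translate("Hello", "French"))
```

The caller sets properties, invokes methods, and subscribes to events; nothing in that surface reveals what language the component itself is written in, which is the whole point.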

There are, of course, many more layers to .NET and to how applications can take advantage of the services and infrastructure it provides. Imagine if you had only ever used an old paper teletype terminal attached to a mainframe, and then somebody tried to explain to you what a graphical user interface was. How quickly would you grasp the use and importance of scrollbars, combo boxes, menu bars, or palette management? .NET is likewise ushering in a new way to think about the development of applications. If you look at the individual features provided, you might not see any one thing that is new, but when you take the entire collection as a cohesive platform and see how all the pieces fit together, the potential is quite exciting, at least to me.

By no means should this brief discussion of .NET be seen as the definitive description of all that it entails. I shall definitely be diving deeper into the features and capabilities of .NET in future articles.

Robert Hess hosts the MSDN Show.