Run Anywhere, with Anything
As of December 2011, this topic has been archived and is no longer actively maintained.
October 11, 1999
Reading through the technology press these days, you will see a lot of space dedicated to highlighting various bugs and problems with software and systems. It seems to me that there are a lot more of these reports now than there used to be. But it's not that software is worse; more likely, just the opposite is true. I feel that the quality of software today is far better than it was when I first got started in programming. In fact, this whole "Y2K" issue is a direct outgrowth of the poor programming practices of those "good old days." Today's programs are better written, less error-prone, and more robust than anybody in those days could have imagined. But their quality is also the source of all the scrutiny.
As computer programs do a better job of performing more tasks, people begin to use them more in their everyday lives. Features on users' "wish lists" are implemented faster than they can think up new wishes. As the wish list shrinks, suddenly the items on the complaint list become more noticeable. At the same time, the very act of satisfying the wish list of a diverse user community results in ever more complicated applications. And if you think the user options of these applications can be complex, just imagine what the source code looks like!
People are always ready to point fingers at an application that doesn't do something correctly, or at hardware that doesn't work as expected, or at system functionality that gets in the way. The truth of the matter is that there are simply too many cooks in the kitchen. The result is a form of chaos that can be difficult for the individual players to anticipate. As a simple illustration, let me relate a personal experience.
I was running a computer lab many years ago. We wanted to install external SCSI drives onto some systems, so we ordered a bunch of new-to-the-market SCSI cards from a well-known company. When they came in, we started testing them. To our surprise, we found that we couldn't access the external drives. We checked and rechecked all the settings, terminators, and cables. We talked with the SCSI card company and the drive manufacturer. Everything was set up correctly, but still it didn't work. Further investigation brought the problem to light. The SCSI cards we bought were good—in fact, too good. They were designed to exactly meet all of the tolerances of the SCSI specifications. Unfortunately, the cables we had purchased, which were the standard SCSI cables you could buy in any computer store, and which worked with all of our "lesser" SCSI cards, were not able to meet the tight tolerances of this better SCSI card. As soon as we switched the cables out for more expensive ones, the problems went away. We heard afterwards that the SCSI card manufacturer decided to loosen some of the tolerances at which their cards operated, so that others would not run into this problem.
Here is an example of one company that made an extremely good product, and another company that made a product that had been tested against all (currently) available hardware. Yet, when the two products eventually met, failure was the result. Why did the cable company produce a lesser-grade SCSI cable? Because people always want to pay less for a product, and if the company could produce an "as good" cable at a lower price, then that is what people would buy. And if their tests, on all existing hardware, showed that their cable was just as good as the more expensive ones, where was the problem?
Remember that this problem was simply an issue between a PC card and a cable. Nothing too fancy going on there, and yet failure occurred. Imagine the possibilities when you start adding additional layers to the problem. Motherboards, device drivers, applications, operating systems, input devices, networked systems—a large, complex collection of individual components from such a tremendous number of companies that the number of different permutations is mind numbing.
In the above example of the SCSI card, whose fault was it that the configuration didn't work? Was it the cable manufacturer for making a cable that worked with all "existing" SCSI cards, but wasn't quite good enough to work with all "potential" SCSI cards? Perhaps; or was it the SCSI card manufacturer for manufacturing a strictly compliant card, which might not work with marginal cables? Well, no—but perhaps it was the card company's failure to perform proper testing that prevented this issue from being clearly identified at the outset.
This leads us to the fine line you walk when developing robust applications. Sometimes you have to choose between compatibility and reliability. The only way that the SCSI card company could make their card "compatible" with existing hardware configurations was to relax some of the tolerances to which they were trying so hard to adhere. While this was a hardware example, many such examples exist in the software world as well.
It is your duty as a solution developer to be diligent about building reliability into your products. Reliability refers to the ability of a solution to continue to run, even after being seriously compromised by various foreseeable situations that might otherwise have caused it to fail. Notice that I say "foreseeable," and not "unforeseen." I make this distinction because in order for a program to recover properly from a situation, it has to execute code that was specifically written to deal with that situation. Thus, somebody, at some time, had to have foreseen the possible problem and ensured that the appropriate code was available to handle it.
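This distinction between foreseen and unforeseen failures can be sketched in code. The following Python fragment is purely illustrative (the configuration format, the default values, and the function name are my own assumptions, not anything from the column): it recovers only from the two failure modes its author anticipated, a missing file and malformed contents, and deliberately lets anything else propagate, because no recovery code was written for it.

```python
import json

# Illustrative defaults for a hypothetical application.
DEFAULTS = {"retries": 3, "timeout_s": 30}

def load_config(path):
    """Load settings from a JSON file, recovering from two
    *foreseeable* failures: the file does not exist, or its
    contents are not valid JSON. Any other error is, by this
    article's definition, unforeseen and is allowed to propagate."""
    try:
        with open(path) as f:
            data = json.load(f)
    except FileNotFoundError:
        # Foreseen: first run, no config file yet -- use defaults.
        return dict(DEFAULTS)
    except json.JSONDecodeError:
        # Foreseen: a hand-edited or truncated file -- use defaults.
        return dict(DEFAULTS)
    # Missing individual keys are also a foreseen, recoverable case:
    # merge the file's values over the defaults.
    merged = dict(DEFAULTS)
    merged.update(data)
    return merged
```

The point is not the specific mechanics, but that each `except` clause exists only because someone foresaw that exact failure; a disk-full error or a permissions problem here would still crash the program, precisely because nobody wrote code for it.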
Providing this level of reliability requires not only a lot of work and planning, but it also requires a lot of knowledge about the situations in which your solution will be deployed—including the hardware that it might encounter, as well as the users who might be using it. It can often be extremely difficult for techno-savvy program managers and designers to comprehend how an everyday user might approach their programs. It can be an eye-opening experience to watch a random user install and use the program you developed and "thought" was easy to use.
Entire books can be (and have been) written about writing solid code and robust, reliable solutions. The devil, as they say, is in the details. The details you need to pay attention to are those that govern how to play your role properly within a chaotic and complex environment. The more you understand every aspect of your solution's interaction with the operating system, hardware, device drivers, and users, the more control you will have over how well your application can expect the unexpected.
Robert Hess is an evangelist in Microsoft's Developer Relations Group. Fortunately for all of us, his opinions are his own.