From the December 2001 issue of MSDN Magazine.


Multiple Entry Points, Optimizing JScript
Edited by Nancy Michell
While most discussions of usability concern what sites are doing wrong and how to put it right, there are sites that are doing a good job, and you can adapt some of their ideas for your own projects.
      Finding ways to get more visitors to your site is an ongoing challenge. Sometimes the answer is to provide or promote uses that would not necessarily be associated with your kind of site—uses that go beyond click and buy. Why? Well, it's no mystery why department stores don't have clocks on their walls. They don't want customers to be reminded of the time; they want them to stay as long as possible, knowing that as time goes on, customers will be more likely to buy something.
      The same is true on the Internet. If you can keep users there long enough, they may just find something they want to buy. Of course, you must engage the customer, not simply keep him on your site by confusing him or asking him to "wait on long lines." As I mentioned last month, there are lots of great uses on the Internet. Some emerged on their own, while others were built into the design.
      Online booksellers like Amazon.com and BarnesAndNoble.com have responded to a number of realizations about how visitors use their sites. For example, as is obvious when you look at these sites, a tremendous data store makes up the business. Because customers need access to that database to place orders, they usually have free access to it and sometimes find novel uses for it. For example, eBay began as, and still is, a site where people can sell their collections, or begin to build them. But because of the tremendous number of buyers and sellers, eBay naturally became the place to look if you wanted to know the likely value of an object you own. That's a use that may not have been anticipated at first, but it evolved over time.
      Back to the booksellers. Users certainly do come by and start to snoop around. But unlike some linearly constructed online stores, paths all over the bookseller sites are interconnected. There are many, many ways to find a book. Among other ways, you can search for it by name, author, or keyword. You can view lists of people who share your literary interests. You can view recommendations based on how you have rated other books. The point is that different users, or even the same user at different times, will find the point of entry that fits their particular needs at that time.
      Are these alternate routes absolutely necessary? Well, you could simply provide a blank page with a search box so users could search for a particular book. But by allowing users to look at what other users are reading, and realizing that these are not just any other users, but people who share the same interests, you offer something that a simple list of products, no matter how well categorized, could never provide. The idea—and it's not very different from traditional marketing practices—is to expose the customer to as many products as possible. And the goal is to make those products not just numerous, but meaningful and well-targeted. This is something that brick and mortar operations simply can't do. It would be great for business if, when a customer walked into a department store, all the products rearranged themselves in front of the customer in his priority order. However, that can't happen in a physical space or in a print catalog—only online.
      Of course, allowing visitors to write reviews, as the booksellers do, serves the writer as well as the reader. It establishes community, gets the writer published, and when "reviews" of the reviews are permitted, may keep the writer coming back. All in all, the strategies used by online booksellers keep customers clicking all over the place while always having something new and exciting to explore. If you follow their example, you may see your sales soar.
      Never underestimate the value of multiple entry points. You should think about how users may want to search your site. The career sites, for instance, allow searches by company, job title, industry, keywords, and even salary. Does this mean that if you're an art poster retailer you should allow searching by size, predominant color, or even frame style in addition to artist and genre? Well, why not? If you put these criteria into your meta tags, the search can be executed on them, presenting the user with a much more tailored result. Considering the low cost of this approach, it's worth it.
      As always, speed is important. As I mentioned last month, users have no time to waste. In light of that, here's some information from our faithful internal answer men (and women) on how to optimize any JScript® code you may have on your site. The following includes excerpts from a lively discussion on performance issues from a group of Microsoft developers.


Q Do you have any tips on the fastest way to index into tables?

A Sure. Here are a few suggestions you can follow.
  1. Simple, but cool:
          s = myTable.rows[0].cells[0].innerText;
    
    
    is consistently faster than:
         s = myTable.rows(0).cells(0).innerText;
    
    
    Notice the [ ] instead of the ( ). This one is super easy to implement, so there is no reason not to use it.
  2. More of an FYI:
          s = myTable.rows[0].cells[0].innerText;
    
    
    is faster than:
          s = myTable.firstChild.firstChild.innerText;
    
    
    which is faster than:
          s = myTable.children[0].children[0].innerText;
    
    
    All told, the fastest method (the first) turned out to be about 16 percent faster in informal tests than the original style.
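      If you want to check these numbers against your own pages, a quick timing loop is enough. Here is a minimal sketch; the function name, table reference, and iteration count are our own illustration, not part of the original informal tests.

         function timeIndexing(myTable) {
             // Crude stopwatch: run each indexing style many times and
             // compare elapsed milliseconds.
             var i, s;
             var iterations = 10000;   // tune until the times are measurable

             var start = new Date();
             for (i = 0; i < iterations; i++) {
                 s = myTable.rows[0].cells[0].innerText;   // [ ] indexing
             }
             var bracketTime = new Date() - start;

             start = new Date();
             for (i = 0; i < iterations; i++) {
                 s = myTable.rows(0).cells(0).innerText;   // ( ) indexing
             }
             var parenTime = new Date() - start;

             alert("[ ]: " + bracketTime + "ms    ( ): " + parenTime + "ms");
         }
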
Q Do you have any ideas for optimizing JScript in general?

A Here are a few tips that should help:
  1. Use while instead of for. Using a while loop instead of a for loop shows a small but significant performance gain, whether you are looping 10,000 times or 200,000 times. (This tip and the next are illustrated in the sketch after this list.)
  2. Always declare variables in script. This is particularly true for local variables within HTC behaviors, where a massive search for the variable will commence if it is referenced but hasn't already been declared. This tip got strong support from our group of developers who note that not only are lookups on declared variables faster, but the code is also better organized, easier to read, and less prone to bugs.
  3. Concatenate small strings where you can. If you have a lot of string concatenation to do, then concatenate small strings whenever possible rather than working with larger strings.
          Everyone agreed that this is reasonable advice, but some caveats are in order. String concatenation is expensive, although it's more of a server-side concern. Usually string concatenation problems crop up when you do something like this:
         s = s + a + b + c + d + e + f + g;
       s = s + h + j + k + l;
       foo.myprop = s;
    
    
    First, if s is already large, then this is inefficient because JScript has no optimizations for handling string concatenation. It's better to do this
         s = s + (a + b + c + d + e + f + g);
    
    
    assuming that strings a through g are small compared to s.
          Better still is to never do the concatenation in the first place if the strings are large. In ASP you often see this:
         s = s + a + b + c + d + e + f + g;
       s = s + h + j + k + l;
       Response.Write(s);
    
    
    How does it compare to the following?
         Response.Write(s);
       Response.Write(a + b + c + d + e + f + g);
       Response.Write(h + j + k + l);
    
    
    Well, Response.Write is already an optimized string buffer; however, there is some expense associated with calling it many times, so you need to experiment to determine the correct balance. This experimentation should be done under full load conditions. String concatenation hits the heap, and under load that imposes a contention cost, which in turn imposes a context switch cost.
          In the end, there is no "one size fits all" rule for optimizing string concatenations. It is insanely complicated. Remember, though, JScript is not C. In C you can read a line of code and have some idea of what the runtime cost of that code will be. Not so in JScript. There are simply too many possible factors for armchair analysis.
  4. Don't expect short variable names to boost performance. If you're looking for performance, this is not the answer. The working set cost of var Foo compared to var FooBarBazBlahBlehFredABC is unbelievably minuscule.
          The lookup cost is also trivial. It's done by hash code and a string compare. Hashing and comparing a string is an operation measured in microseconds. Furthermore, if you go early-bound, then the lookup is done by the compiler at compile time and the length of the variable name is utterly irrelevant at runtime—variables are resolved by hardcoded table lookups.
          If you go late-bound, then the total overhead cost of the late-bound call will be hundreds of times the portion of that cost spent hashing and comparing the string. Either way, the length of the string is irrelevant.
          It would be more reasonable to avoid long names to avoid spelling mistakes than it would be to justify short names on the basis of performance.
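      To make the first two tips concrete, here is a minimal sketch of a routine that walks a table with a while loop and declares every variable it uses. The function name, the table, and the column it totals are purely illustrative.

         function sumColumn(myTable, columnIndex) {
             // Tip 2: declare everything with var so JScript never has to go
             // hunting through enclosing scopes for an undeclared name.
             var total = 0;
             var rows = myTable.rows;
             var count = rows.length;
             var i = 0;

             // Tip 1: a while loop in place of the equivalent for loop.
             while (i < count) {
                 total += parseInt(rows[i].cells[columnIndex].innerText, 10);
                 i++;
             }
             return total;
         }
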
      If you're really having performance problems, however, there's a good chance that these types of solutions aren't going to solve them. Understanding the milieu that you are working in and making smart design decisions at the get-go will have a much larger impact on performance.
      Often, perf tuning through tips and tricks to optimize individual lines of code doesn't work. The way to write fast code is to:
  • Write correct code. Correct and slow is much better than fast and wrong.
  • Write extremely clear code so that when you go to optimize it, you can understand it.
  • Have crystal clear, user-based performance goals: "This page will load in 5 seconds at 28.8"; "this mouseover script will appear to be seamless to the user on a 300MHz machine"; and so forth.
  • If your goals are already reached, go home. If not, measure. Then measure again. You must find the hot spot. You must find the slowest thing. Spending days researching which JScript functions are millionths of a second faster than others is only useful if those functions are the hot spot.
  • Now experiment. Measure some more. Keep going until it is fast enough. If you can't make it fast enough, throw away the tools and start over with better tools.
      Some developers may wonder if they should even bother to optimize their code when faster and faster processors are always just around the corner. But that idea generated some hot debate among this group of developers.
      Why is it that some software developers believe that Moore's Law will save their code no matter how bad it is? The consensus here was that such positions could be the death of the software industry, and that hardware is never the answer. Even the fastest machine in the world can be put into a loop.

Q Is JScript always the best choice for fast code on a Web site?

A If you really need tight and fast code, then no. JScript is an unoptimized, bytecode-interpreted, late-bound language. It is, in many cases, thousands of times slower than C. You could write a custom ActiveX®-based object if you must have tight, fast code.
      On the other hand, ActiveX controls are often not an option. Many users dislike them; in many cases they can be slower (and certainly bigger) than JavaScript, especially when you need to interact repeatedly with the DOM; and many network administrators and users disable them for security reasons. Additionally, they do not work in the Macintosh or Unix flavors of Microsoft® Internet Explorer or in Netscape browsers.
      How to optimize, though, certainly depends on the task at hand. Take, for example, the handling of complex data binding. One key purpose of the DOM and DHTML was to allow the client to grab data, disconnect from the server, and manipulate the data on the client. JScript is a means to this manipulation. When coding ASP pages that pull data from SQL Server™, shaving time off every transaction makes a big difference when you have a huge number of records. Every course on algorithms at every university teaches that lower algorithmic complexity is desirable, even if computers can handle much larger workloads. Sometimes a page displays thousands of records and has options that loop through every table row on the client side in order to hide a specific column. There is an extremely noticeable delay that's directly related to how many records there are. Optimizations, whether they are tied to the browser or the scripting engine, are quite valuable, and their effect is magnified as the record count grows. After all, usability studies have shown that if a page is even a second slower (client-side, server-side, whatever), customers go elsewhere.
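      As a rough illustration of the client-side work just described, here is a sketch of a routine that hides one column of a large data-bound table by visiting every row. The names are hypothetical, but the shape of the loop shows why the delay grows in direct proportion to the number of records.

         function hideColumn(myTable, columnIndex) {
             // Every row gets touched once, so with thousands of records this
             // loop is exactly what the user ends up waiting on.
             var rows = myTable.rows;
             var count = rows.length;
             var i = 0;
             while (i < count) {
                 rows[i].cells[columnIndex].style.display = "none";
                 i++;
             }
         }
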
      Remember, it's a good idea to write client-side code with an eye toward user experience first, then reliability (lack of bugs), speed, and then clarity. Granted, the code should be developer maintainable, but user experience is certainly paramount.

Got a question? Send questions and comments to webqa@microsoft.com.
Thanks to the following Microsoft developers for their technical expertise: Tantek Celik, Aaron Elder, Mark Ingalls, Eric Lippert, David Lovell, and Barry Weiss.