The need to compile MSIL into native code raises several questions. For example, should the common language runtime convert all of the file's MSIL code to CPU instructions at load time? The advantage of doing this is that the program runs very fast. However, the disadvantage is that compiling the MSIL requires time, and this would significantly prolong the application's initialization time. In addition, very rarely does a user cause an application to execute all of its code. If the common language runtime compiles all of the MSIL to CPU instructions, it is likely that a lot of time and memory would be wasted.

The consensus is that it would be much more efficient to have the common language runtime compile the MSIL instructions as functions are being called. When the common language runtime loads a class type, it connects stub code to each method. When a method is called, the stub code directs program execution to the component of the common language runtime engine that is responsible for compiling the method's MSIL into native code. Since the MSIL is being compiled just-in-time (JIT), this component of the runtime is frequently referred to as a JIT compiler or JITter. Once the JIT compiler has compiled the MSIL, the method's stub is replaced with the address of the compiled code. Whenever this method is called in the future, the native code will just execute and the JIT compiler will not have to be involved in the process. As you can imagine, this boosts performance considerably.

The common language runtime is slated to ship with two JIT compilers, the normal compiler and an economy compiler. The normal compiler examines a method's MSIL and optimizes the resulting native code just like the back end of a normal, unmanaged C/C++ compiler. The economy JIT compiler is typically used on machines where the cost of using memory and CPU cycles is high (such as many Windows CE-powered devices).
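The stub-then-patch dispatch described above can be illustrated outside the runtime. The following C sketch is an analogy, not the runtime's actual implementation: a "method" is reached through a function pointer that initially targets a stub; the stub does the expensive one-time work (standing in for JIT compilation), patches the pointer to the "native" code, and forwards the call. All names here are invented for illustration.

```c
#include <stdio.h>

static int add_native(int a, int b);
static int add_stub(int a, int b);

/* Per-method slot: starts out pointing at the stub, just as the
   runtime initially connects stub code to each method. */
static int (*add_method)(int, int) = add_stub;

/* The "native code" the JITter would produce for the method. */
static int add_native(int a, int b) {
    return a + b;
}

/* The stub: runs only on the first call. */
static int add_stub(int a, int b) {
    printf("stub: compiling method to native code...\n"); /* stands in for the JITter */
    add_method = add_native;   /* patch the slot with the compiled code's address */
    return add_method(a, b);   /* forward the first call */
}
```

After the first call through `add_method`, every later call goes straight to `add_native` with no stub overhead, which is why a method's JIT cost is paid only once.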
The economy compiler simply replaces each MSIL instruction with its native code counterpart. As you can imagine, the economy compiler compiles code much faster than the normal compiler; however, the native code produced by the economy compiler is significantly less efficient. Even so, the economy compiler will still produce code that executes much faster than interpreted code. When the common language runtime is ported to a new CPU platform, Microsoft first creates the economy JITter for that platform. Since the economy JITter is relatively easy to implement, this allows .NET applications to run on the new CPU platform in a very short period of time. Once the economy JITter is complete, Microsoft then focuses its efforts on the normal JITter to improve application performance.

When Microsoft ships the .NET common language runtime, the normal JITter will be the default on most machines. However, on machines with small footprints, like Windows CE-powered devices, the economy JITter will be the default because it requires less memory to run. In addition, the economy JITter supports code pitching. Code pitching is the ability of the common language runtime to discard a method's native code block, freeing up memory used by methods that haven't been executed in a while. Of course, when the common language runtime pitches a block of code, it replaces the method with a stub so that the JITter can regenerate the native code the next time the method is called.

For those of you who are used to developing in low-level languages like C or C++, you're probably thinking about the performance ramifications of all this. After all, unmanaged code is compiled for a specific CPU platform, and when invoked the code can simply execute. In this managed environment, compiling the code is accomplished in two phases. First, the compiler passes over the source code, doing as much work as possible in producing MSIL.
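Code pitching can be sketched with the same stub analogy. In this hypothetical C illustration (the names and counter are invented, not the runtime's real data structures), pitching a method just resets its slot back to the stub, so the next call regenerates the "native code":

```c
/* The "native code" for the method. */
static int square_native(int x) { return x * x; }

static int square_stub(int x);

/* Per-method slot, initially the stub. */
static int (*square_method)(int) = square_stub;

/* Track how many times the method was "JIT compiled". */
static int compile_count = 0;

static int square_stub(int x) {
    compile_count++;               /* stands in for recompiling the MSIL */
    square_method = square_native; /* install the "native code" */
    return square_method(x);
}

/* Pitch the method: discard its "native code" and restore the stub,
   freeing the memory the compiled code occupied. */
static void pitch_square(void) {
    square_method = square_stub;
}
```

Calling the method, pitching it, and calling again causes the "compile" step to run a second time, exactly the round trip the article describes.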
But then, in order to actually execute the code, the MSIL itself must be compiled into native CPU instructions at runtime, requiring that more memory and additional CPU time be allocated to do the work. Believe me, since I approached the runtime from a C/C++ background myself, I was quite skeptical and concerned about this additional overhead.

The truth is, managed code does not execute as fast and does have more overhead than unmanaged code. However, Microsoft has done a lot of performance work to address these issues. In addition, I've spoken to many developers at Microsoft who truly believe that in the future managed code will actually offer better performance than unmanaged code. Here's why: when the JITter compiles the MSIL code at runtime, it knows more about the execution environment than the compiler that produced the MSIL ever could. For example, the JITter can detect that the host CPU is a Pentium III and generate CPU instructions that take advantage of any performance enhancements that Intel has made to the Pentium III over the Pentium II or Pentium. In addition, the JITter may generate code that uses CPU registers that a compiler would normally avoid using. Or, a JITter may be able to detect that a variable always contains a specific value and can generate small, fast code that works solely because the JITter can make a runtime assumption. Furthermore, managed heap memory allocations are significantly faster than allocating memory via the Win32 HeapAlloc function. I will address this issue more fully in a future article.

Microsoft plans to offer a tool, tentatively called PreJit.exe, that can compile an entire assembly to native code and save the result on disk. When the assembly is loaded the next time, this saved version is loaded and the application starts up faster as a result. Because this tool takes advantage of information about other assemblies that have already been loaded when PreJit is run, it is best used in a warm-up mode.
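The idea that a JITter can bind code to the actual host machine can be sketched in C. This hypothetical example (the feature probe and both implementations are invented stand-ins) keeps a generic version of a routine alongside a version that assumes a CPU feature, and a run-time binding step picks whichever the current machine supports:

```c
#include <stdbool.h>

/* Invented stand-in for real CPU feature detection; a JITter would
   query the actual host processor here. */
static bool has_fast_multiply(void) {
    return true;  /* pretend the feature probe succeeded */
}

/* Generic fallback: avoids the "feature" by using repeated adds. */
static int scale_generic(int x) { return x + x + x + x; }

/* Specialized version: assumes the CPU multiplies quickly. */
static int scale_fast(int x)    { return x * 4; }

/* The "method" slot, bound once per host at run time. */
static int (*scale)(int);

static void bind_for_host(void) {
    scale = has_fast_multiply() ? scale_fast : scale_generic;
}
```

A compiler that runs before execution must commit to one version for all machines; binding at run time is what lets the JITter exploit whatever the host actually offers.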
You turn PreJit on, and as assemblies are loaded, they are compiled and saved. After your application has run for a sufficient length of time, you turn off PreJit, and from then on the application will start up more quickly. Microsoft uses a variation on this same technique to make the key assemblies (such as the base framework) that will ship with .NET start faster.
From the September 2000 issue of MSDN Magazine.