Scalability

Servers are becoming more powerful each year, not by making individual cores faster but by adding more cores. As of this writing, most entry-level server computers include at least four cores, and that number will only increase over time. If your application performs CPU-intensive computations, you will want to parallelize it so that it can make the best use of whatever cores it has available.

The .NET Framework provides built-in support for threads in the System.Threading namespace. The easiest way to get multithreading support is to use managed asynchronous APIs in the .NET Framework that make use of the I/O and worker thread pools supplied by the Framework itself. For finer-grained control, or if you have long-running tasks that you want to run on a separate thread, you can also create your own threads manually. And of course, if you have data that is shared across threads or even across processes, Windows Server 2008 and the .NET Framework work together to provide many synchronization options, from kernel objects such as the semaphore down to simple interlocked operations on integers.
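As a minimal sketch of these pieces working together, the following code queues work items on the Framework-managed worker thread pool, uses an interlocked increment to update a shared counter safely, and signals a kernel event when the last item finishes. The class and method names here are illustrative, not part of any Framework API.

```csharp
using System;
using System.Threading;

static class ThreadPoolSample
{
    // Queues 'count' items on the worker thread pool and waits for all of them.
    // The shared counter is updated with Interlocked to avoid a race between
    // pool threads; the ManualResetEvent is a kernel synchronization object.
    public static int RunWorkItems(int count)
    {
        int completed = 0;
        using (ManualResetEvent done = new ManualResetEvent(false))
        {
            for (int i = 0; i < count; i++)
            {
                ThreadPool.QueueUserWorkItem(state =>
                {
                    // ... CPU-intensive or I/O work would go here ...
                    if (Interlocked.Increment(ref completed) == count)
                        done.Set();          // the last item signals completion
                });
            }
            done.WaitOne();                  // block until every item has run
        }
        return completed;
    }

    static void Main()
    {
        Console.WriteLine("Completed: " + RunWorkItems(10));
    }
}
```

For long-running work that should not tie up a pool thread, the same pattern applies with a manually created `Thread` in place of `ThreadPool.QueueUserWorkItem`.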

If you need even finer-grained control over threads, Windows Server 2008 provides the Thread Ordering Service, which lets you group threads and configure the order in which they execute during a specified period. The service ensures that each thread in the group runs once during that period.

Note
The Thread Ordering Service requires native code.

Many computers running Windows Server 2008 can be clustered into a single supercomputer managed by Microsoft Windows® HPC Server 2008. Academic researchers and commercial developers alike often have problems that can only be solved with massive amounts of computing power. It’s good to know that your skills as a developer of managed code on Windows Server 2008 can scale up to the level of a supercomputer.

Version 4 of the .NET Framework significantly improves its thread pool and paves the way for the next generation of parallel computing support in managed code. This new initiative, called the Parallel FX Library (PFX), includes classes that significantly simplify the code you need to write to take advantage of multiple cores when you have work that can be performed in parallel. Assuming your code is parallelizable (for example, iterations do not have side effects on shared state), for loops can easily be distributed across multiple cores, and LINQ queries can be parallelized simply by adding an extra query operator.
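The two patterns mentioned above can be sketched as follows: `Parallel.For` distributes loop iterations across the available cores, and `AsParallel()` is the single extra query operator that turns a LINQ query into a parallel (PLINQ) query. The method names `ComputeSquares` and `SumOfEvens` are invented for this example.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

static class ParallelSample
{
    // Each iteration writes a distinct array index, so the loop body has no
    // shared mutable state (no side effects) and is safe to parallelize.
    public static long[] ComputeSquares(int n)
    {
        long[] squares = new long[n];
        Parallel.For(0, n, i => { squares[i] = (long)i * i; });
        return squares;
    }

    // AsParallel() converts the sequence into a PLINQ query; the rest of the
    // query is written exactly as it would be for ordinary LINQ to Objects.
    public static long SumOfEvens(int max)
    {
        return Enumerable.Range(1, max)
                         .AsParallel()
                         .Where(x => x % 2 == 0)
                         .Sum(x => (long)x);
    }

    static void Main()
    {
        Console.WriteLine(ComputeSquares(1000)[999]);   // 998001
        Console.WriteLine(SumOfEvens(100));             // 2550
    }
}
```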

The Microsoft project code-named “Velocity” provides a highly scalable in-memory application cache for all kinds of data. By using a cache, you can significantly improve application performance by avoiding unnecessary calls to the data source. A distributed cache enables your application to match increasing demand with increasing throughput by using a cache cluster that automatically manages the complexities of load balancing. When you use “Velocity,” you can retrieve data by using keys or other identifiers, called “tags.” “Velocity” supports optimistic and pessimistic concurrency models, high availability, and a variety of cache configurations. “Velocity” includes an ASP.NET session provider object that enables you to store ASP.NET session objects in the distributed cache without having to write to databases, which increases the performance and scalability of ASP.NET applications.
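As a sketch of how the session provider is wired up, the web.config fragment below switches ASP.NET session state to a custom provider backed by the distributed cache. The general `<sessionState mode="Custom">` shape is standard ASP.NET; the provider type and cache name shown are assumptions based on the “Velocity” prerelease and may differ between releases.

```xml
<!-- web.config fragment (sketch): the type and cacheName values are
     assumptions from the "Velocity" prerelease and may differ. -->
<sessionState mode="Custom" customProvider="SessionStoreProvider">
  <providers>
    <add name="SessionStoreProvider"
         type="Microsoft.Data.Caching.DataCacheSessionStoreProvider"
         cacheName="session" />
  </providers>
</sessionState>
```

With this in place, session reads and writes go to the cache cluster instead of a database, so any web server in a farm can serve any request.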
