This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
Improving .NET Application Performance and Scalability
J.D. Meier, Srinath Vasireddy, Ashish Babbar, and Alex Mackman
Home Page for Improving .NET Application Performance and Scalability
Send feedback to Scale@microsoft.com
Summary: Improving .NET Application Performance and Scalability provides an approach to engineering applications for performance and scalability. This chapter introduces the guide, outlines its structure, and shows you how to apply the guidance to your specific scenario.
Why We Wrote This Guide
Scope of This Guide
Features of This Guide
How to Use This Guide
Organization of This Guide
Approach Used in This Guide
Framework for Performance
Feedback and Support
The Team Who Brought You This Guide
Contributors and Reviewers
Tell Us About Your Success
This guide provides a principle-based approach for engineering performance and scalability throughout your application life cycle.
The guidance is task-based and presented in parts that correspond to life cycles, tasks, and roles. It is designed to be used as a reference or be read from beginning to end, and is divided into five parts:
- Part I, "Introduction to Engineering for Performance," outlines how to apply performance considerations throughout your application life cycle.
- Part II, "Designing for Performance," gives you an approach for architecting and designing for performance, using performance modeling. The design guidelines in this part include a set of guiding principles and technology-agnostic practices.
- Part III, "Application Performance and Scalability," provides deep platform knowledge across the Microsoft® .NET Framework technologies.
- Part IV, "Database Server Performance and Scalability," presents a consolidation of the most important techniques for improving database performance.
- Part V, "Measuring, Testing, and Tuning," provides a process, tools, and techniques for evaluating performance and scalability.
We wrote this guide to accomplish the following:
- To provide guidance on how to approach performance
- To help integrate performance engineering throughout your application life cycle
- To explain performance considerations and tradeoffs
- To provide deep performance-related technical guidance on the .NET Framework
This guide covers recommendations from Microsoft on how to build .NET applications that meet your performance needs. It promotes a life cycle-based approach to performance and provides guidance that applies to all roles involved in the life cycle, including architects, designers, developers, testers, and administrators. The overall scope of the guide is shown in Figure 1.
Figure 1: The scope of the guide
The guidance is organized by categories, principles, roles, and stages of the life cycle:
- The goal of the guide is to help you build applications that meet their performance objectives; that is, to build applications that are fast and responsive enough, and are able to accommodate specific workloads. The main performance objectives are response time, throughput, resource utilization (CPU, memory, disk I/O, and network I/O), and workload.
- Measuring lets you see whether your application is trending toward or away from the performance objectives. The measuring, testing, and tuning chapters show you how to monitor performance by capturing metrics, and how to tune performance through appropriate configuration and setup.
- Performance modeling provides a structured and repeatable approach to meeting your performance objectives.
- The guide provides a set of architecture and design guidelines, including a series of proven principles, practices, and patterns that can help improve performance.
- The guide also promotes a performance and scalability frame that enables you to organize and prioritize performance issues.
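As a rough illustration of how such a frame lets you organize and prioritize issues, the sketch below buckets review findings by category and orders categories by their most severe finding. This is a hypothetical Python sketch; the finding descriptions, category names, and priority values are illustrative and not the guide's actual frame.

```python
from collections import defaultdict

# Hypothetical performance findings from a design review, each tagged
# with an illustrative frame category and a severity (1 = highest).
findings = [
    {"issue": "Chatty calls between web and data tiers", "category": "Communication", "severity": 1},
    {"issue": "Per-request parsing of static config",     "category": "Caching",       "severity": 2},
    {"issue": "Lock held across a network call",          "category": "Concurrency",   "severity": 1},
]

# Group findings by frame category.
by_category = defaultdict(list)
for f in findings:
    by_category[f["category"]].append(f)

# Report categories ordered by their most severe finding.
for category in sorted(by_category, key=lambda c: min(f["severity"] for f in by_category[c])):
    print(category)
    for f in sorted(by_category[category], key=lambda f: f["severity"]):
        print(f"  [P{f['severity']}] {f['issue']}")
```

The point is organizational: once issues are tagged against a small, fixed set of categories, prioritization and review coverage become mechanical.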
Technologies in Scope
While many of the principles and design guidelines provided in this guide are technology-agnostic, the guide focuses on applications built with the .NET Framework and deployed on the Microsoft Windows® 2000 Server family of operating systems. Where appropriate, new features provided by Windows Server 2003 are highlighted. Table 1 shows the products and technologies that this guidance is based on.
Table 1: Primary Technologies Addressed by This Guide
|Area|Products and Technologies|
|---|---|
|Platforms|.NET Framework 1.1; Windows 2000 Server family (Windows Server 2003 features are also highlighted)|
|Web Servers|Microsoft Internet Information Services (IIS) 5.0 (included with Windows 2000 Server); IIS 6.0 (where relevant)|
|Database Servers|Microsoft SQL Server 2000|
|.NET Framework Technologies|Common language runtime (CLR), ASP.NET, Enterprise Services, Extensible Markup Language (XML) Web Services, Remoting, ADO.NET|
A great deal of work has gone into maximizing the value of this guidance. It provides the following features:
- Framework for performance. The guide provides a schema that organizes performance into logical units to help integrate performance throughout your application life cycle.
- Life cycle approach. The guide provides end-to-end guidance on managing performance, throughout your application life cycle, to reduce risk and lower total cost of ownership. It also provides information for designing, building, and maintaining applications.
- Roles. Information is segmented by roles, including architects, developers, testers, and administrators, to make it more relevant and actionable.
- Performance and scalability frame. The guide uses a frame to organize performance into a handful of prioritized categories, where your choices heavily affect performance and scalability success. The frame is based on reviewing hundreds of applications.
- Principles and practices. These serve as the foundation for the guide and provide a stable basis for recommendations. They also reflect successful approaches used in the field.
- Processes and methodologies. These provide steps for performance modeling, testing, and tuning. For simplification and tangible results, the life cycle is decomposed into activities with inputs, outputs, and steps. You can use the steps as a baseline or to help you evolve your own process.
- Modular. Each chapter within the guide is designed to be read independently. You do not need to read the guide from beginning to end to get the benefits. Use the parts you need.
- Holistic. The guide is designed with the end in mind. If you do read the guide from beginning to end, it is organized to fit together. The guide, in its entirety, is better than the sum of its parts.
- Job aids. The guide provides an architecture and design review to help you evaluate the performance implications of your architecture and design choices early in the life cycle. A code review helps you spot implementation issues. Checklists that capture the key review elements are provided.
- How Tos. The guide provides a set of step-by-step procedures to help you implement key solutions from the guide.
- Subject matter expertise. The guide exposes insight from various experts throughout Microsoft and from customers in the field.
- Validation. The guidance is validated internally through testing. Also, extensive reviews have been performed by product, field, and product support teams. Externally, the guidance is validated through community participation and extensive customer feedback cycles.
- What to do, why, how. Each section in the guide presents a set of recommendations. At the start of each section, the guidelines are summarized using bold, bulleted lists. This gives you a snapshot view of the recommendations. Then, each recommendation is expanded upon to tell you what to do, why, and how:
- What to do. This gives you the recommendation.
- Why. This gives you the rationale for the recommendation, helps you understand the issues, and explains any trade-offs you may need to consider.
- How. This gives you the implementation details to make the recommendation actionable.
- Performance Best Practices at a Glance. Provides fast answers to common questions and problems.
- Fast Track. Takes a fast path through the essentials of the framework used by the guide to help you quickly implement the guidance in your organization.
This guide is valuable for anyone who cares about application performance objectives. It is designed to be used by technologists from many different disciplines, including architects, developers, testers, performance analysts, and administrators. The guidance is task-based, and is presented in parts that correspond to the various stages of the application life cycle and to the people and roles involved during the life cycle.
You can read this guide from beginning to end, or you can read only the relevant parts or chapters. You can adopt the guide in its entirety for your organization or you can use critical components to address your highest-priority needs. If you need to move quickly, use the fast track. If you have more time and want to deliberately introduce a performance culture, you can work the guidance into your application development life cycle and processes and use it as a training tool.
Ways to Use the Guide
There are many ways to use this comprehensive guidance. The following are some ideas:
- Use it as a reference. Use the guide as a reference and learn the performance dos and don'ts of the .NET Framework.
- Use it as a mentor. Use the guide as your mentor for learning how to build software that meets its performance objectives. The guide encapsulates the lessons learned and experience from many subject matter experts.
- Incorporate performance into your application life cycle. Adopt the approach and practices that work for you and incorporate them into your application life cycle.
- Use it when you design applications. Design applications using principles and best practices. Benefit from lessons learned.
- Perform architecture and design reviews. Use the question-driven approach to evaluate architecture and design choices from a performance and scalability perspective. Use the questions as a starting point, modify them to suit your needs, and expand them as you learn more.
- Perform code reviews. Use the code review chapter as a starting point to improve your development practices.
- Establish and evaluate your coding guidelines. Many of the technical dos and don'ts depend on context. Evolve your own guidelines using the technical guidance as input but mold it to suit your needs.
- Create training. Create training from the concepts and techniques used throughout the guide, as well as technical insight across the .NET Framework technologies.
Applying the Guidance to Your Role
This guide applies to the following roles:
- Architects and lead developers can use the principles and best-practice design guidelines in Part II, "Designing for Performance," to help architect and design systems capable of meeting performance objectives. They can also use the performance modeling process to help assess design choices before committing to a solution.
- Developers can use the in-depth technical guidance in Part III, "Application Performance and Scalability," to help design and implement efficient code.
- Testers can use the processes described in Part V, "Measuring, Testing, and Tuning," to load, stress, and capacity test applications.
- Administrators can use the tuning process and techniques described in Part V, "Measuring, Testing, and Tuning," to tune performance with the appropriate application, platform, and system configuration.
- Performance analysts can use the deep technical information on the .NET Framework technologies to understand performance characteristics and to determine the cost of various technologies. This helps them analyze how applications that fail to meet their performance objectives can be improved.
Applying the Guidance to Your Life Cycle
Regardless of your chosen development process or methodology, Figure 2 shows how the guidance applies to the broad categories associated with an application life cycle.
Figure 2: Life cycle mapping
Note that development methodologies tend to be characterized as either linear ("waterfall" approaches) or iterative ("spiral" approaches). Figure 2 does not signify one or the other but simply shows the typical functions that are performed and how the guidance maps to those functions.
The guide is arranged in parts, chapters, and sections, as shown in Figure 3. Parts map to the application life cycle (plan, build, deploy, and maintain). Chapters are task-based. Guidelines and lessons learned are aggregated, summarized using bulleted lists, and presented using a "what to do," "why," and "how" formula for fast comprehension. Special features such as Performance Best Practices at a Glance, Fast Track, Checklists, and How Tos help you comprehend and apply the guidance faster and easier.
Figure 3: Parts of the guide
Performance Best Practices at a Glance
The "Performance Best Practices at a Glance" section provides a problem index for the guide, highlighting key areas of concern and where to go for more detail.
Fast Track
The "Fast Track" section in the front of the guide helps you implement the recommendations and guidance quickly and easily.
This guide is divided into five parts:
- Part I, "Introduction to Engineering for Performance"
- Part II, "Designing for Performance"
- Part III, "Application Performance and Scalability"
- Part IV, "Database Server Performance and Scalability"
- Part V, "Measuring, Testing, and Tuning"
Part I, "Introduction to Engineering for Performance"
This part shows you how to apply performance considerations throughout your application life cycle and introduces fundamental performance and scalability concepts and terminology. Part I includes one chapter.
Part II, "Designing for Performance"
Performance modeling helps you assess your design choices before committing to a solution. By considering your performance objectives, workload, and metrics for your scenarios up front, you reduce risk. Use the design guidelines chapter to learn the practices, principles, patterns, and anti-patterns that help you make informed choices. Part II includes three chapters:
- Chapter 2 — Performance Modeling
- Chapter 3 — Design Guidelines for Application Performance
- Chapter 4 — Architecture and Design Review of a .NET Application for Performance and Scalability
Part III, "Application Performance and Scalability"
This part provides deep platform knowledge across the .NET Framework technologies. Use these chapters to learn about the key performance and scalability considerations for the various .NET technologies, and to improve the efficiency of your code in these areas. Part III includes nine chapters:
- Chapter 5 — Improving Managed Code Performance
- Chapter 6 — Improving ASP.NET Performance
- Chapter 7 — Improving Interop Performance
- Chapter 8 — Improving Enterprise Services Performance
- Chapter 9 — Improving XML Performance
- Chapter 10 — Improving Web Services Performance
- Chapter 11 — Improving Remoting Performance
- Chapter 12 — Improving ADO.NET Performance
- Chapter 13 — Code Review: .NET Application Performance
Part IV, "Database Server Performance and Scalability"
This part shows you how to improve SQL Server performance. Part IV includes one chapter.
Part V, "Measuring, Testing, and Tuning"
This part shows you which metrics to capture so as to monitor specific performance aspects. It also explains how to load, stress, and capacity test your applications, and how you can tune performance with appropriate application, platform, and system configuration. Part V includes three chapters:
- Chapter 15 — Measuring .NET Application Performance
- Chapter 16 — Testing .NET Application Performance
- Chapter 17 — Tuning .NET Application Performance
Checklists
The Checklists section of the guide contains printable, task-based checklists. They are quick reference sheets to help you turn information into action. This section includes the following checklists:
- Checklist: ADO.NET Performance
- Checklist: Architecture and Design Review for Performance and Scalability
- Checklist: ASP.NET Performance
- Checklist: Enterprise Services Performance
- Checklist: Interop Performance
- Checklist: Managed Code Performance
- Checklist: Remoting Performance
- Checklist: SQL Server Performance
- Checklist: Web Services Performance
- Checklist: XML Performance
How Tos
This section contains How To content that provides step-by-step procedures for key tasks. This section includes the following How To procedures:
- How To: Improve Serialization Performance
- How To: Monitor the ASP.NET Thread Pool Using Custom Counters
- How To: Optimize SQL Indexes
- How To: Optimize SQL Queries
- How To: Page Records in .NET Applications
- How To: Perform Capacity Planning for .NET Applications
- How To: Scale .NET Applications
- How To: Submit and Poll for Long-Running Tasks
- How To: Time Managed Code Using QueryPerformanceCounter and QueryPerformanceFrequency
- How To: Use ACT to Test Performance and Scalability
- How To: Use ACT to Test Web Services Performance
- How To: Use Custom Performance Counters from ASP.NET
- How To: Use CLR Profiler
- How To: Use EIF
- How To: Use SQL Profiler
How do you produce software that consistently meets its performance objectives? The approach used in this guide is as follows:
- Give performance due consideration up front.
- Set objectives and measure.
- Know the cost.
Give Performance Due Consideration Up Front
Identify if and where performance matters and consider what your application's performance objectives are. Plan and act accordingly. The simple act of considering performance up front will help you make more thoughtful decisions when designing your application.
Set Objectives and Measure
Performance objectives usually include response time, throughput, resource utilization, and workload. If the software you produce does not meet all of its goals, including performance, you have failed.
Without objectives, you do not know what good performance looks like. You could easily spend far too much or too little effort improving performance. You could make poor design trade-offs by adding unnecessary complexity or you could oversimplify where a more complex approach was warranted. You could attempt to handle exotic security or reliability issues, which create an unsupportable performance burden, or you might decline to handle issues that properly belong in your system. In short, you will find yourself in a poor position to make good engineering decisions.
With well-defined performance objectives for your key scenarios, you know where to focus and you know when you are finished. Rather than reacting to performance issues, you drive performance throughout your application life cycle. Metrics are the tools used to measure your scenarios and match them against your objectives. Example metrics include response time, resource cost, latency, and throughput. The objective is the value that is acceptable. You match the value of the metrics to your objectives to see if your application is meeting, exceeding, or not meeting its performance goals.
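The idea of matching measured metric values against objectives can be sketched in code. This is a language-agnostic illustration in Python; the scenario names, metrics, and threshold values are hypothetical.

```python
# Hypothetical performance objectives for two key scenarios.
# Each objective pairs a metric with the maximum acceptable value.
objectives = {
    "login":       {"response_time_ms": 2000, "cpu_percent": 75},
    "place_order": {"response_time_ms": 3000, "cpu_percent": 80},
}

# Measured values captured during a load test (illustrative numbers).
measured = {
    "login":       {"response_time_ms": 1450, "cpu_percent": 62},
    "place_order": {"response_time_ms": 3400, "cpu_percent": 71},
}

def evaluate(objectives, measured):
    """Return (scenario, metric, measured value, objective) for each miss."""
    misses = []
    for scenario, goals in objectives.items():
        for metric, limit in goals.items():
            value = measured[scenario][metric]
            if value > limit:
                misses.append((scenario, metric, value, limit))
    return misses

for scenario, metric, value, limit in evaluate(objectives, measured):
    print(f"{scenario}: {metric} = {value} exceeds objective {limit}")
```

With objectives expressed this concretely, "are we done?" becomes a mechanical check rather than a judgment call made late in the project.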
Know the Cost
When you engineer solutions, you need to know the cost of your materials. You know the cost by measuring under the appropriate workload. If the technology, API, or library will not meet your performance objectives, do not use it. Getting the best performance from your platform is often intrinsically tied to your knowledge of the platform. While this guide provides a great deal of platform knowledge, it is no replacement for measuring and determining the actual cost for your specific scenarios.
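Measuring under an appropriate workload can be as simple as timing a candidate operation before adopting it. The following is a minimal, hypothetical sketch in Python; the string-building operations stand in for whatever technology, API, or library you are evaluating.

```python
import time

def measure_cost(operation, iterations=10_000):
    """Time an operation over many iterations and return the average
    cost per call in microseconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        operation()
    elapsed = time.perf_counter() - start
    return elapsed / iterations * 1_000_000

# Example: compare two candidate ways of building a string.
cost_genexp = measure_cost(lambda: ",".join(str(i) for i in range(100)))
cost_map    = measure_cost(lambda: ",".join(map(str, range(100))))
print(f"join/genexp: {cost_genexp:.1f} us, join/map: {cost_map:.1f} us")
```

The absolute numbers matter less than the habit: measure the cost of your materials in your scenario, and let the measurements, not assumptions, drive the choice.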
This guide brings together people, process, and technology to create a framework for repeatedly achieving performance and scalability objectives throughout your software life cycle. This framework is shown in Figure 4.
Figure 4: The principle-based framework for the guide
The main elements of the framework are the following:
- Categories. Performance recommendations and guidelines have been organized and prioritized into categories for ease of consumption.
- Principles. Performance, like many other aspects of software engineering, lends itself to a principle-based approach, where core principles are applied, regardless of the implementation technology or application scenario. The recommendations throughout the guide are founded on solid principles that have been proved over time.
- Roles. The guide is designed to provide advice and recommendations applicable to the various roles associated with a product development life cycle, including architects and lead developers, developers, testers, and performance analysts.
- Life cycle. Different parts of the guide map to the various stages of the product development life cycle.
- Performance modeling. Performance modeling provides a structured and repeatable approach to modeling the performance of your software. Performance cannot be added to an application as an afterthought, and performance should be given its due consideration early in the development life cycle. Performance modeling and measuring should continue throughout the life cycle.
We have made every effort to ensure the accuracy of this guide.
Feedback on the Guide
If you have comments on this guide, send an e-mail message to Scale@microsoft.com. We are particularly interested in feedback regarding the following:
- Technical issues specific to recommendations
- Usefulness and usability issues
- Writing and editing issues
Technical support for the Microsoft products and technologies referenced in this guidance is provided by Microsoft Product Support Services (PSS). For product support information, please visit the Microsoft Product Support Web site at: http://support.microsoft.com.
Community and Newsgroup Support
MSDN Newsgroups: http://www.microsoft.com/communities/newsgroups/default.mspx
Table 2: Newsgroups
This guide was produced by the following .NET development specialists:
- J.D. Meier, Microsoft, Program Manager, patterns & practices
- Srinath Vasireddy, Microsoft, Program Manager, patterns & practices
- Ashish Babbar, Infosys Technologies Ltd.
- Alex Mackman, Content Master Ltd., Founding member and Principal Technologist
Many thanks to the following contributors and reviewers:
- Special thanks to key contributors: Anandha Murukan; Andy Eunson; Balan Jayaraman, Infosys Technologies Ltd; Christopher Brumme (CLR and COM interop); Connie U. Smith, Ph.D.; Curtis Krumel (SQL Server); David G. Brown (SQL Server); Denny Dayton; Don Willits ("Uber man"); Edward Jezierski; Ilia Fortunov; Jim O'Brien, Content Master Ltd; John Allen (ASP.NET); Matt Odhner (ACT); Prabhaker Potharaju (SQL Server); Rico Mariani (Performance Modeling, CLR, Code Review, Measuring); Ray Escamilla (Tuning); Scott Barber (Performance Modeling and Testing); Sharon Bjeletich (SQL Server)
- Special thanks to key reviewers: Adam Nathan (Interop); Brad Abrams; Brandon Bohling, Intel Corporation; Carlos Farre, Solutions IQ; Chuck Delouis, Veritas Software (SQL Server); Cosmin Radu (Interop); Eddie Lau (ACE); Eric Morris (ACE); Erik Olsen (ASP.NET); Gerardo Bermudez (CLR, Performance Modeling); Gregor Noriskin; Ken Perilman; Jan Gray; John Hopkins (ACE); Joshua Lee; K.M. Lee (ACE TEAM); Mark Fussell (XML); Matt Tavis (Remoting); Nico Jansen (ACE Team); Pablo Castro (ADO.NET and SQL); Patrick Dussud (CLR); Riyaz Pishori (Enterprise Services); Richard Turner (Enterprise Services); Sonja Keserovic (Interop); Thomas Marquardt (ASP.NET); Tim Walton; Tom McDonald; Wade Mascia (ASP.NET threading, Web services, and Enterprise Services); Yasser Shohoud (Web services)
- Thanks to external reviewers: Ajay Mungara, Intel Corporation; Bill Draven, Intel Corporation; Emil Lerch, Intel Corporation; Carlos Santos (Managed Code); Chris Mullins, Kiefer Consulting; Christopher Bowen, Monster.com; Chuck Cooper; Dan Sullivan; Dave Levine, Rockwell Software; Daniel Cazzulino, Lagash Systems SA; Diego Gonzalez, Lagash Systems SA (XML); Franco Ceruti; Fredrik Normen "N2", Barium AB (extensive review); Grant Fritchey; Greg Buskirk; Greg Kiefer, Kiefer Consulting; Ingo Rammer, IngoRammer.com; James Duff, Vertigo Software; Jason Masterman, Barracuda .NET (Remoting); Jeff Fiegel, Acres Gaming; Jeff Sukow, Rockwell Software; John Lam; John Vliet, Intel Corporation; Juval Lowy (COM interop); Kelly Summerlin, TetraData; Mats Lanner, Open Text Corporation; Matt Davey; Matthew Brealey; Mitch Denny, Monash.NET; Morten Abrahamsen (Performance and Transactions); Nick Wienholt, dotnetperformance.com; Norm Smith (Data Access and Performance Modeling); Pascal Tellier, prairieFyre Software Inc.; Paul Ballard, Rochester Consulting Partnership, Inc.; Per Larsen (Managed Code Performance); Scott Allen (Design Guidelines); Philippe Harry Leopold Frederix (Belgium); Scott Stanfield, Vertigo Software; Ted Pattison, Barracuda .NET (COM Interop); Thiru Thangarathinam; Tim Weaver, Monster.com; Vivek Chauhan (NIIT); Wat Hughes, Creative Data (SQL Server)
- Microsoft Consulting Services and Product Support Services (PSS): Dan Grady; David Madrian; Eddie Clodfelter; Hugh Wade; Jackie Richards; Jacquelyn Schmidt; Jaime Rodriguez; James Dosch; Jeff Pflum; Jim Scurlock; Julian Gonzalez (Web services); Kenny Jones; Linnea Bennett; Matt Neerincx; Michael Parkes; Michael Royster; Michael Stuart; Nam Su Kang; Neil Leslie; Nobuyuki Akama; Pat Altimore; Paul Fallon; Scott Slater; Tom Sears; Tony Bray
- Microsoft Product Group: Alexei Vopilov (Web services); Amrish Kumar; Arvindra Sehmi; Bill Evans; Brian Spanton; Keith Ballinger (WSE); Scot Gellock (Web services); Brian Grunkemeyer (CLR); Chris Eck; David Fields (NT); David Guimbellot; David Mortenson (CLR); Dax Hawkins; Dhananjay Mahajan (Enterprise Services); Dino Chiesa; Dmitry Robsman; Doug Rothaus (ADO.NET); Eddie Liu; Elena Kharitidi (Web services); Fabio Yeon; Harris Syed (Enterprise Services); Jason Zander; Jeffrey Cooperstein; Jim Radigan; Joe Long (Web services vs. ES vs. Remoting); Joshua Allen; Larry Buerk; Lubor Kollar (SQL Server); Maoni Stephens; Michael Coulson; Michael Fanning; Michael Murray (FxCop); Omri Gazitt; Patrick Ng (FX DEV); Peter Carlin (SQL Server); Rebecca Dias (WSE); Rick Vicik; Robin Maffeo (CLR Thread pool); Vance Morrison; Walter Stiers; Yann Christensen
- Thanks to our patterns & practices members for technical feedback and input: Jason Hogg (ADO.NET and XML); Naveen Yajaman; Sandy Khaund; Scott Densmore; Tom Hollander; Wojtek Kozaczynski
- Thanks to our test team: (Infosys Technologies Ltd): Austin Ajit Samuel Angel; Dhanyah T.S.K; Lakshmi; Prashant Bansode; Ramesh Revenipati; Ramprasad Gopalakrishnan; Ramprasad Ramamurthy; Terrence J. Cyril
- Thanks to our editors for helping to ensure a quality experience for the reader: Sharon Smith; Tina Burden McGrayne, Entirenet; Susan Filkins, Entirenet; Tyson Nevil, Entirenet
- Thanks to our product manager: Ron Jacobs
- Finally, thanks to: Alex Lowe; Chris Sells; Jay Nanduri; Nitin Agrawal; Pat Filoteo; Patrick Conlan (SQL Server); Rajasi Saha; Sanjeev Garg (Satyam Computer Services); Todd Kutzke
If this guide helps you, we would like to know. Tell us by writing a short summary of the problems you faced and how this guide helped you out. Submit your summary to MyStory@Microsoft.com.
In this introduction, you were shown the structure of the guide and the basic approach used by it to engineer for performance and scalability. You were also shown how to apply the guidance to your role or to specific phases of your product development life cycle.